Initial 3.0.0 rebase

parent d05eaba8dd
commit 58305efc05

.gitignore (vendored) | 1
@@ -0,0 +1 @@
+/qemu-3.0.0.tar.xz

0001-Initial-redhat-build.patch (new file) | 455
@@ -0,0 +1,455 @@
From f03d3b79bc1908b0b6e257ee7aaa6567ecb91e38 Mon Sep 17 00:00:00 2001
From: Miroslav Rezanina <mrezanin@redhat.com>
Date: Mon, 11 Sep 2017 07:11:00 +0200
Subject: Initial redhat build

This patch introduces redhat build structure in redhat subdirectory.
In addition, several issues are fixed in QEMU tree:

- Change of app name for sasl_server_init in VNC code from qemu to qemu-kvm
  - As we use qemu-kvm as name in all places, this is updated to be consistent
- Man page renamed from qemu to qemu-kvm
  - man page is installed using make install so we have to fix it in qemu tree
- Use "/share/qemu-kvm" as SHARE_SUFFIX
  - We reconfigured our share to qemu-kvm to be consistent with used name
- Added .gitpublish configuration file
  - Support for git publish has to be stored in repository root

Rebase changes (3.0.0):
- python detection changed
- added --disable-debug-mutex

Merged patches (3.0.0):
- 9997a46 Fix annocheck issues
- 35230f9 redhat: remove extra % in rhel_rhev_conflicts macro
- c747d3f redhat: syncronizing specfile
- e6abfc4 rpm: Add nvme VFIO driver to rw whitelist
- 7043465 rpm: Whitelist copy-on-read block driver
- f9a897c rpm: add throttle driver to rw whitelist
- b9ea80f redhat: replacing %pkname by %name
- eeeea85 redhat: Remove unused ApplyPatch macro
- b42c578 redhat: removing disable code for libcacard
- cee6bd5 redhat: improve packaging layout with modularization of the block layer
- 0cb4c60 redhat: Introducing qemu-kvm-core package
- 1ff4106 Add qemu-keymap to qemu-kvm-common
- 47838a5 redhat: Make gitpublish profile the default one
- a82f87b redhat: s390x: add hpage=1 to kvm.conf
- 3d52169 Enabling vhost_user
- 57aa228 spec: Enable Native Ceph support on all architectures
- 5f9ea03 Thu Jun 21 2018 Danilo C. L. de Paula <ddepaula@redhat.com> - 2.12.0-13.el8
- ed4d62a spec: Fix ambiguous 'python' interpreter name
- 74b3e6c qemu-ga: blacklisting guest-exec and guest-exec-status RPCs
- 2fd2cf7 redhat: rewrap "build_configure.sh" cmdline for the "rh-env-prep" target
- f48dc7f redhat: remove the VTD, LIVE_BLOCK_OPS, and RHV options in local builds too
- ccdf46b redhat: fix the "rh-env-prep" target's dependency on the SRPM_NAME macro
- f258fbf redhat: remove dead code related to s390 (not s390x)
- d186100 redhat: sync compiler flags from the spec file to "rh-env-prep"
- 727aa86 redhat: sync guest agent enablement and tcmalloc usage from spec to local
- b5d47e2 redhat: fix up Python 3 dependency for building QEMU
- 70c64dd redhat: fix up Python dependency for SRPM generation
- 96aca9f redhat: disable glusterfs dependency/support temporarily
- e9aff9d block/vxhs: modularize VXHS via g_module
- ecf40bf Defining a shebang for python scripts
- 55e3177 redhat: changing the prefix and blurb scheme to support rhel8-like handling
- 571e4ac Removing "rh-srpm-rhel" make target
- 9db09ef redhat: enforce python3 usage
- 56cda0b spec: Re-add dependency to seavgabios and ipxe for ppc64 architectures
- c780848 Drop build_configure.sh and Makefile.local files
- cca9118 Fix subject line in .gitpublish
- 9745e27 redhat: Update build configuration
- 193830c redhat: Disable vhost crypto
- 9dc30cb redhat: Make rh-local actually work in a RHEL-8 environment
- 99011c9 redhat: enable opengl, add build and runtime deps
- 7290e3f redhat: Improve python check
---
 .gitpublish | 61 +-
 Makefile | 3 +-
 block/Makefile.objs | 2 +-
 block/vxhs.c | 119 ++-
 configure | 33 +-
 os-posix.c | 2 +-
 redhat/.gitignore | 5 +
 redhat/85-kvm.preset | 5 +
 redhat/95-kvm-memlock.conf | 10 +
 redhat/99-qemu-guest-agent.rules | 2 +
 redhat/Makefile | 82 ++
 redhat/Makefile.common | 47 ++
 redhat/bridge.conf | 1 +
 redhat/ksm.service | 13 +
 redhat/ksm.sysconfig | 4 +
 redhat/ksmctl.c | 77 ++
 redhat/ksmtuned | 139 ++++
 redhat/ksmtuned.conf | 21 +
 redhat/ksmtuned.service | 12 +
 redhat/kvm-s390x.conf | 19 +
 redhat/kvm-setup | 40 +
 redhat/kvm-setup.service | 14 +
 redhat/kvm-x86.conf | 12 +
 redhat/kvm.conf | 3 +
 redhat/kvm.modules | 18 +
 redhat/qemu-ga.sysconfig | 19 +
 redhat/qemu-guest-agent.service | 20 +
 redhat/qemu-kvm.spec.template | 1531 ++++++++++++++++++++++++++++++++++++
 redhat/qemu-pr-helper.service | 15 +
 redhat/qemu-pr-helper.socket | 9 +
 redhat/rpmbuild/BUILD/.gitignore | 2 +
 redhat/rpmbuild/RPMS/.gitignore | 2 +
 redhat/rpmbuild/SOURCES/.gitignore | 2 +
 redhat/rpmbuild/SPECS/.gitignore | 2 +
 redhat/rpmbuild/SRPMS/.gitignore | 2 +
 redhat/scripts/frh.py | 24 +
 redhat/scripts/git-backport-diff | 327 ++++++++
 redhat/scripts/git-compile-check | 215 +++++
 redhat/scripts/process-patches.sh | 92 +++
 redhat/scripts/tarball_checksum.sh | 3 +
 redhat/vhost.conf | 3 +
 ui/vnc.c | 2 +-
 42 files changed, 2921 insertions(+), 93 deletions(-)
 create mode 100644 redhat/.gitignore
 create mode 100644 redhat/85-kvm.preset
 create mode 100644 redhat/95-kvm-memlock.conf
 create mode 100644 redhat/99-qemu-guest-agent.rules
 create mode 100644 redhat/Makefile
 create mode 100644 redhat/Makefile.common
 create mode 100644 redhat/bridge.conf
 create mode 100644 redhat/ksm.service
 create mode 100644 redhat/ksm.sysconfig
 create mode 100644 redhat/ksmctl.c
 create mode 100644 redhat/ksmtuned
 create mode 100644 redhat/ksmtuned.conf
 create mode 100644 redhat/ksmtuned.service
 create mode 100644 redhat/kvm-s390x.conf
 create mode 100644 redhat/kvm-setup
 create mode 100644 redhat/kvm-setup.service
 create mode 100644 redhat/kvm-x86.conf
 create mode 100644 redhat/kvm.conf
 create mode 100644 redhat/kvm.modules
 create mode 100644 redhat/qemu-ga.sysconfig
 create mode 100644 redhat/qemu-guest-agent.service
 create mode 100644 redhat/qemu-kvm.spec.template
 create mode 100644 redhat/qemu-pr-helper.service
 create mode 100644 redhat/qemu-pr-helper.socket
 create mode 100644 redhat/rpmbuild/BUILD/.gitignore
 create mode 100644 redhat/rpmbuild/RPMS/.gitignore
 create mode 100644 redhat/rpmbuild/SOURCES/.gitignore
 create mode 100644 redhat/rpmbuild/SPECS/.gitignore
 create mode 100644 redhat/rpmbuild/SRPMS/.gitignore
 create mode 100755 redhat/scripts/frh.py
 create mode 100755 redhat/scripts/git-backport-diff
 create mode 100755 redhat/scripts/git-compile-check
 create mode 100755 redhat/scripts/process-patches.sh
 create mode 100755 redhat/scripts/tarball_checksum.sh
 create mode 100644 redhat/vhost.conf

diff --git a/Makefile b/Makefile
index 2da686b..eb4c57a 100644
--- a/Makefile
+++ b/Makefile
@@ -501,6 +501,7 @@ CAP_CFLAGS += -DCAPSTONE_HAS_ARM
 CAP_CFLAGS += -DCAPSTONE_HAS_ARM64
 CAP_CFLAGS += -DCAPSTONE_HAS_POWERPC
 CAP_CFLAGS += -DCAPSTONE_HAS_X86
+CAP_CFLAGS += -Wp,-D_GLIBCXX_ASSERTIONS
 
 subdir-capstone: .git-submodule-status
 	$(call quiet-command,$(MAKE) -C $(SRC_PATH)/capstone CAPSTONE_SHARED=no BUILDDIR="$(BUILD_DIR)/capstone" CC="$(CC)" AR="$(AR)" LD="$(LD)" RANLIB="$(RANLIB)" CFLAGS="$(CAP_CFLAGS)" $(SUBDIR_MAKEFLAGS) $(BUILD_DIR)/capstone/$(LIBCAPSTONE))
@@ -819,7 +820,7 @@ install-doc: $(DOCS)
 	$(INSTALL_DATA) docs/interop/qemu-qmp-ref.txt "$(DESTDIR)$(qemu_docdir)"
 ifdef CONFIG_POSIX
 	$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man1"
-	$(INSTALL_DATA) qemu.1 "$(DESTDIR)$(mandir)/man1"
+	$(INSTALL_DATA) qemu.1 "$(DESTDIR)$(mandir)/man1/qemu-kvm.1"
 	$(INSTALL_DIR) "$(DESTDIR)$(mandir)/man7"
 	$(INSTALL_DATA) docs/interop/qemu-qmp-ref.7 "$(DESTDIR)$(mandir)/man7"
 	$(INSTALL_DATA) docs/qemu-block-drivers.7 "$(DESTDIR)$(mandir)/man7"
diff --git a/block/Makefile.objs b/block/Makefile.objs
index c8337bf..cd1e309 100644
--- a/block/Makefile.objs
+++ b/block/Makefile.objs
@@ -21,7 +21,7 @@ block-obj-$(CONFIG_LIBNFS) += nfs.o
 block-obj-$(CONFIG_CURL) += curl.o
 block-obj-$(CONFIG_RBD) += rbd.o
 block-obj-$(CONFIG_GLUSTERFS) += gluster.o
-block-obj-$(CONFIG_VXHS) += vxhs.o
+#block-obj-$(CONFIG_VXHS) += vxhs.o
 block-obj-$(CONFIG_LIBSSH2) += ssh.o
 block-obj-y += accounting.o dirty-bitmap.o
 block-obj-y += write-threshold.o
diff --git a/block/vxhs.c b/block/vxhs.c
index 0cb0a00..9164b3e 100644
--- a/block/vxhs.c
+++ b/block/vxhs.c
@@ -9,7 +9,8 @@
  */
 
 #include "qemu/osdep.h"
-#include <qnio/qnio_api.h>
+#include "block/vxhs_shim.h"
+#include <gmodule.h>
 #include <sys/param.h>
 #include "block/block_int.h"
 #include "block/qdict.h"
@@ -59,6 +60,97 @@ typedef struct BDRVVXHSState {
     char *tlscredsid; /* tlscredsid */
 } BDRVVXHSState;
 
+#define LIBVXHS_FULL_PATHNAME "/usr/lib64/qemu/libvxhs.so.1"
+static bool libvxhs_loaded;
+static GModule *libvxhs_handle;
+
+static LibVXHSFuncs libvxhs;
+
+typedef struct LibVXHSSymbols {
+    const char *name;
+    gpointer *addr;
+} LibVXHSSymbols;
+
+static LibVXHSSymbols libvxhs_symbols[] = {
+    {"iio_init", (gpointer *) &libvxhs.iio_init},
+    {"iio_fini", (gpointer *) &libvxhs.iio_fini},
+    {"iio_min_version", (gpointer *) &libvxhs.iio_min_version},
+    {"iio_max_version", (gpointer *) &libvxhs.iio_max_version},
+    {"iio_open", (gpointer *) &libvxhs.iio_open},
+    {"iio_close", (gpointer *) &libvxhs.iio_close},
+    {"iio_writev", (gpointer *) &libvxhs.iio_writev},
+    {"iio_readv", (gpointer *) &libvxhs.iio_readv},
+    {"iio_ioctl", (gpointer *) &libvxhs.iio_ioctl},
+    {NULL}
+};
+
+static void bdrv_vxhs_set_funcs(GModule *handle, Error **errp)
+{
+    int i = 0;
+    while (libvxhs_symbols[i].name) {
+        const char *name = libvxhs_symbols[i].name;
+        if (!g_module_symbol(handle, name, libvxhs_symbols[i].addr)) {
+            error_setg(errp, "%s could not be loaded from libvxhs: %s",
+                       name, g_module_error());
+            return;
+        }
+        ++i;
+    }
+}
+
+static void bdrv_vxhs_load_libs(Error **errp)
+{
+    Error *local_err = NULL;
+    int32_t ver;
+
+    if (libvxhs_loaded) {
+        return;
+    }
+
+    if (!g_module_supported()) {
+        error_setg(errp, "modules are not supported on this platform: %s",
+                   g_module_error());
+        return;
+    }
+
+    libvxhs_handle = g_module_open(LIBVXHS_FULL_PATHNAME,
+                                   G_MODULE_BIND_LAZY | G_MODULE_BIND_LOCAL);
+    if (!libvxhs_handle) {
+        error_setg(errp, "The VXHS library from Veritas might not be installed "
+                   "correctly (%s)", g_module_error());
+        return;
+    }
+
+    g_module_make_resident(libvxhs_handle);
+
+    bdrv_vxhs_set_funcs(libvxhs_handle, &local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        return;
+    }
+
+    /* Now check to see if the libvxhs we are using here is supported
+     * by the loaded version */
+
+    ver = (*libvxhs.iio_min_version)();
+    if (ver > QNIO_VERSION) {
+        error_setg(errp, "Trying to use libvxhs version %"PRId32" API, but "
+                   "only %"PRId32" or newer is supported by %s",
+                   QNIO_VERSION, ver, LIBVXHS_FULL_PATHNAME);
+        return;
+    }
+
+    ver = (*libvxhs.iio_max_version)();
+    if (ver < QNIO_VERSION) {
+        error_setg(errp, "Trying to use libvxhs version %"PRId32" API, but "
+                   "only %"PRId32" or earlier is supported by %s",
+                   QNIO_VERSION, ver, LIBVXHS_FULL_PATHNAME);
+        return;
+    }
+
+    libvxhs_loaded = true;
+}
+
 static void vxhs_complete_aio_bh(void *opaque)
 {
     VXHSAIOCB *acb = opaque;
@@ -226,7 +318,7 @@ static void vxhs_refresh_limits(BlockDriverState *bs, Error **errp)
 static int vxhs_init_and_ref(void)
 {
     if (vxhs_ref++ == 0) {
-        if (iio_init(QNIO_VERSION, vxhs_iio_callback)) {
+        if ((*libvxhs.iio_init)(QNIO_VERSION, vxhs_iio_callback)) {
             return -ENODEV;
         }
     }
@@ -236,7 +328,7 @@ static int vxhs_init_and_ref(void)
 static void vxhs_unref(void)
 {
     if (--vxhs_ref == 0) {
-        iio_fini();
+        (*libvxhs.iio_fini)();
     }
 }
 
@@ -306,8 +398,17 @@ static int vxhs_open(BlockDriverState *bs, QDict *options,
     char *client_key = NULL;
     char *client_cert = NULL;
 
+    bdrv_vxhs_load_libs(&local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        /* on error, cannot cleanup because the iio_fini() function
+         * is not loaded */
+        return -EINVAL;
+    }
+
     ret = vxhs_init_and_ref();
     if (ret < 0) {
+        error_setg(&local_err, "libvxhs iio_init() failed");
         ret = -EINVAL;
         goto out;
     }
@@ -392,8 +493,8 @@ static int vxhs_open(BlockDriverState *bs, QDict *options,
     /*
     * Open qnio channel to storage agent if not opened before
     */
-    dev_handlep = iio_open(of_vsa_addr, s->vdisk_guid, 0,
-                           cacert, client_key, client_cert);
+    dev_handlep = (*libvxhs.iio_open)(of_vsa_addr, s->vdisk_guid, 0,
+                                      cacert, client_key, client_cert);
     if (dev_handlep == NULL) {
         trace_vxhs_open_iio_open(of_vsa_addr);
         ret = -ENODEV;
@@ -453,11 +554,11 @@ static BlockAIOCB *vxhs_aio_rw(BlockDriverState *bs, uint64_t offset,
 
     switch (iodir) {
     case VDISK_AIO_WRITE:
-        ret = iio_writev(dev_handle, acb, qiov->iov, qiov->niov,
+        ret = (*libvxhs.iio_writev)(dev_handle, acb, qiov->iov, qiov->niov,
                          offset, size, iio_flags);
         break;
    case VDISK_AIO_READ:
-        ret = iio_readv(dev_handle, acb, qiov->iov, qiov->niov,
+        ret = (*libvxhs.iio_readv)(dev_handle, acb, qiov->iov, qiov->niov,
                         offset, size, iio_flags);
         break;
     default:
@@ -506,7 +607,7 @@ static void vxhs_close(BlockDriverState *bs)
     * Close vDisk device
     */
     if (s->vdisk_hostinfo.dev_handle) {
-        iio_close(s->vdisk_hostinfo.dev_handle);
+        (*libvxhs.iio_close)(s->vdisk_hostinfo.dev_handle);
         s->vdisk_hostinfo.dev_handle = NULL;
     }
 
@@ -528,7 +629,7 @@ static int64_t vxhs_get_vdisk_stat(BDRVVXHSState *s)
     int ret = 0;
     void *dev_handle = s->vdisk_hostinfo.dev_handle;
 
-    ret = iio_ioctl(dev_handle, IOR_VDISK_STAT, &vdisk_size, 0);
+    ret = (*libvxhs.iio_ioctl)(dev_handle, IOR_VDISK_STAT, &vdisk_size, 0);
     if (ret < 0) {
         trace_vxhs_get_vdisk_stat_err(s->vdisk_guid, ret, errno);
         return -EIO;
diff --git a/configure b/configure
index 2a7796e..0314d53 100755
--- a/configure
+++ b/configure
@@ -3460,7 +3460,7 @@ fi
 
 glib_req_ver=2.40
 glib_modules=gthread-2.0
-if test "$modules" = yes; then
+if test "$modules" = yes -o "$vxhs" = yes; then
     glib_modules="$glib_modules gmodule-export-2.0"
 fi
 
@@ -5435,33 +5435,6 @@ if compile_prog "" "" ; then
 fi
 
 ##########################################
-# Veritas HyperScale block driver VxHS
-# Check if libvxhs is installed
-
-if test "$vxhs" != "no" ; then
-  cat > $TMPC <<EOF
-#include <stdint.h>
-#include <qnio/qnio_api.h>
-
-void *vxhs_callback;
-
-int main(void) {
-    iio_init(QNIO_VERSION, vxhs_callback);
-    return 0;
-}
-EOF
-  vxhs_libs="-lvxhs -lssl"
-  if compile_prog "" "$vxhs_libs" ; then
-    vxhs=yes
-  else
-    if test "$vxhs" = "yes" ; then
-      feature_not_found "vxhs block device" "Install libvxhs See github"
-    fi
-    vxhs=no
-  fi
-fi
-
-##########################################
 # check for _Static_assert()
 
 have_static_assert=no
@@ -6759,8 +6732,8 @@ if test "$pthread_setname_np" = "yes" ; then
 fi
 
 if test "$vxhs" = "yes" ; then
-  echo "CONFIG_VXHS=y" >> $config_host_mak
-  echo "VXHS_LIBS=$vxhs_libs" >> $config_host_mak
+  echo "CONFIG_VXHS=m" >> $config_host_mak
+  echo "VXHS_LIBS= -lssl" >> $config_host_mak
 fi
 
 if test "$tcg_interpreter" = "yes"; then
diff --git a/os-posix.c b/os-posix.c
index 9ce6f74..c4cfd0d 100644
--- a/os-posix.c
+++ b/os-posix.c
@@ -82,7 +82,7 @@ void os_setup_signal_handling(void)
 /* Find a likely location for support files using the location of the binary.
    For installed binaries this will be "$bindir/../share/qemu".  When
    running from the build tree this will be "$bindir/../pc-bios". */
-#define SHARE_SUFFIX "/share/qemu"
+#define SHARE_SUFFIX "/share/qemu-kvm"
 #define BUILD_SUFFIX "/pc-bios"
 char *os_find_datadir(void)
 {
diff --git a/ui/vnc.c b/ui/vnc.c
index 3596932..050c421 100644
--- a/ui/vnc.c
+++ b/ui/vnc.c
@@ -4054,7 +4054,7 @@ void vnc_display_open(const char *id, Error **errp)
     trace_vnc_auth_init(vd, 1, vd->ws_auth, vd->ws_subauth);
 
 #ifdef CONFIG_VNC_SASL
-    if ((saslErr = sasl_server_init(NULL, "qemu")) != SASL_OK) {
+    if ((saslErr = sasl_server_init(NULL, "qemu-kvm")) != SASL_OK) {
         error_setg(errp, "Failed to initialize SASL auth: %s",
                    sasl_errstring(saslErr, NULL, NULL));
         goto fail;
--
1.8.3.1

0002-Enable-disable-devices-for-RHEL-7.patch (new file) | 1094
(file diff suppressed because it is too large)

0003-Add-RHEL-machine-types.patch (new file) | 3017
(file diff suppressed because it is too large)

0004-Use-kvm-by-default.patch (new file) | 32
@@ -0,0 +1,32 @@
From 5a441b820faa4e6e9e6fc80cccc813a3c333b6c2 Mon Sep 17 00:00:00 2001
From: Miroslav Rezanina <mrezanin@redhat.com>
Date: Thu, 18 Dec 2014 06:27:49 +0100
Subject: Use kvm by default

Bugzilla: 906185

RHEL uses kvm accelerator by default, if available.

Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
 accel/accel.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/accel/accel.c b/accel/accel.c
index 966b2d8..e8ca7bb 100644
--- a/accel/accel.c
+++ b/accel/accel.c
@@ -79,8 +79,8 @@ void configure_accelerator(MachineState *ms)
 
     accel = qemu_opt_get(qemu_get_machine_opts(), "accel");
     if (accel == NULL) {
-        /* Use the default "accelerator", tcg */
-        accel = "tcg";
+        /* RHEL uses kvm as the default accelerator, fallback to tcg */
+        accel = "kvm:tcg";
     }
 
     accel_list = g_strsplit(accel, ":", 0);
--
1.8.3.1

0005-vfio-cap-number-of-devices-that-can-be-assigned.patch (new file) | 65
@@ -0,0 +1,65 @@
From 0c57186334ab4ef7f04de604a8f13b39ad6578c8 Mon Sep 17 00:00:00 2001
From: Bandan Das <bsd@redhat.com>
Date: Tue, 3 Dec 2013 20:05:13 +0100
Subject: vfio: cap number of devices that can be assigned

RH-Author: Bandan Das <bsd@redhat.com>
Message-id: <1386101113-31560-3-git-send-email-bsd@redhat.com>
Patchwork-id: 55984
O-Subject: [PATCH RHEL7 qemu-kvm v2 2/2] vfio: cap number of devices that can be assigned
Bugzilla: 678368
RH-Acked-by: Alex Williamson <alex.williamson@redhat.com>
RH-Acked-by: Marcelo Tosatti <mtosatti@redhat.com>
RH-Acked-by: Michael S. Tsirkin <mst@redhat.com>

Go through all groups to get count of total number of devices
active to enforce limit

Reasoning from Alex for the limit(32) - Assuming 3 slots per
device, with 125 slots (number of memory slots for RHEL 7),
we can support almost 40 devices and still have few slots left
for other uses. Stepping down a bit, the number 32 arbitrarily
matches the number of slots on a PCI bus and is also a nice power
of two.

Signed-off-by: Bandan Das <bsd@redhat.com>
---
 hw/vfio/pci.c | 15 ++++++++++++++-
 1 file changed, 14 insertions(+), 1 deletion(-)

diff --git a/hw/vfio/pci.c b/hw/vfio/pci.c
index 6cbb8fa..59b3c0f 100644
--- a/hw/vfio/pci.c
+++ b/hw/vfio/pci.c
@@ -36,6 +36,7 @@
 #include "qapi/error.h"
 
 #define MSIX_CAP_LENGTH 12
+#define MAX_DEV_ASSIGN_CMDLINE 32
 
 static void vfio_disable_interrupts(VFIOPCIDevice *vdev);
 static void vfio_mmap_set_enabled(VFIOPCIDevice *vdev, bool enabled);
@@ -2809,7 +2810,19 @@ static void vfio_realize(PCIDevice *pdev, Error **errp)
     ssize_t len;
     struct stat st;
     int groupid;
-    int i, ret;
+    int ret, i = 0;
+
+    QLIST_FOREACH(group, &vfio_group_list, next) {
+        QLIST_FOREACH(vbasedev_iter, &group->device_list, next) {
+            i++;
+        }
+    }
+
+    if (i >= MAX_DEV_ASSIGN_CMDLINE) {
+        error_setg(errp, "Maximum supported vfio devices (%d) "
+                   "already attached", MAX_DEV_ASSIGN_CMDLINE);
+        return;
+    }
 
     if (!vdev->vbasedev.sysfsdev) {
         if (!(~vdev->host.domain || ~vdev->host.bus ||
--
1.8.3.1

0006-Add-support-statement-to-help-output.patch (new file) | 55
@@ -0,0 +1,55 @@
From c2858d09461c6f69553e8b9d69804f243c2d08bb Mon Sep 17 00:00:00 2001
From: Eduardo Habkost <ehabkost@redhat.com>
Date: Wed, 4 Dec 2013 18:53:17 +0100
Subject: Add support statement to -help output

RH-Author: Eduardo Habkost <ehabkost@redhat.com>
Message-id: <1386183197-27761-1-git-send-email-ehabkost@redhat.com>
Patchwork-id: 55994
O-Subject: [qemu-kvm RHEL7 PATCH] Add support statement to -help output
Bugzilla: 972773
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: knoel@redhat.com
RH-Acked-by: Paolo Bonzini <pbonzini@redhat.com>

Add support statement to -help output, reporting direct qemu-kvm usage
as unsupported by Red Hat, and advising users to use libvirt instead.

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
 vl.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/vl.c b/vl.c
index 4f96203..43c4b78 100644
--- a/vl.c
+++ b/vl.c
@@ -1876,9 +1876,17 @@ static void version(void)
            QEMU_COPYRIGHT "\n");
 }
 
+static void print_rh_warning(void)
+{
+    printf("\nWARNING: Direct use of qemu-kvm from the command line is not supported by Red Hat.\n"
+           "WARNING: Use libvirt as the stable management interface.\n"
+           "WARNING: Some command line options listed here may not be available in future releases.\n\n");
+}
+
 static void help(int exitcode)
 {
     version();
+    print_rh_warning();
     printf("usage: %s [options] [disk_image]\n\n"
            "'disk_image' is a raw hard disk image for IDE hard disk 0\n\n",
            error_get_progname());
@@ -1895,6 +1903,7 @@ static void help(int exitcode)
            "\n"
            QEMU_HELP_BOTTOM "\n");
 
+    print_rh_warning();
     exit(exitcode);
 }
 
--
1.8.3.1

0007-globally-limit-the-maximum-number-of-CPUs.patch (new file) | 89
@@ -0,0 +1,89 @@
From 36dda20ae7312b1db0b4060bb2420ab18e5f5483 Mon Sep 17 00:00:00 2001
From: Andrew Jones <drjones@redhat.com>
Date: Tue, 21 Jan 2014 10:46:52 +0100
Subject: globally limit the maximum number of CPUs

We now globally limit the number of VCPUs.
Especially, there is no way one can specify more than
max_cpus VCPUs for a VM.

This allows us to restore the ppc max_cpus limitation to the upstream
default and minimize the ppc hack in kvm-all.c.

Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
Signed-off-by: Danilo Cesar Lemes de Paula <ddepaula@redhat.com>
---
 accel/kvm/kvm-all.c | 12 ++++++++++++
 vl.c | 18 ++++++++++++++++++
 2 files changed, 30 insertions(+)

diff --git a/accel/kvm/kvm-all.c b/accel/kvm/kvm-all.c
index eb7db92..c2e7095 100644
--- a/accel/kvm/kvm-all.c
+++ b/accel/kvm/kvm-all.c
@@ -1586,6 +1586,18 @@ static int kvm_init(MachineState *ms)
     soft_vcpus_limit = kvm_recommended_vcpus(s);
     hard_vcpus_limit = kvm_max_vcpus(s);
 
+#ifdef HOST_PPC64
+    /*
+     * On POWER, the kernel advertises a soft limit based on the
+     * number of CPU threads on the host.  We want to allow exceeding
+     * this for testing purposes, so we don't want to set hard limit
+     * to soft limit as on x86.
+     */
+#else
+    /* RHEL doesn't support nr_vcpus > soft_vcpus_limit */
+    hard_vcpus_limit = soft_vcpus_limit;
+#endif
+
     while (nc->name) {
         if (nc->num > soft_vcpus_limit) {
             warn_report("Number of %s cpus requested (%d) exceeds "
diff --git a/vl.c b/vl.c
index 43c4b78..b50dbe4 100644
--- a/vl.c
+++ b/vl.c
@@ -133,6 +133,8 @@ int main(int argc, char **argv)
 
 #define MAX_VIRTIO_CONSOLES 1
 
+#define RHEL_MAX_CPUS 384
+
 static const char *data_dir[16];
 static int data_dir_idx;
 const char *bios_name = NULL;
@@ -1430,6 +1432,20 @@ MachineClass *find_default_machine(void)
     return mc;
 }
 
+/* Maximum number of CPUs limited for Red Hat Enterprise Linux */
+static void limit_max_cpus_in_machines(void)
+{
+    GSList *el, *machines = object_class_get_list(TYPE_MACHINE, false);
+
+    for (el = machines; el; el = el->next) {
+        MachineClass *mc = el->data;
+
+        if (mc->max_cpus > RHEL_MAX_CPUS) {
+            mc->max_cpus = RHEL_MAX_CPUS;
+        }
+    }
+}
+
 MachineInfoList *qmp_query_machines(Error **errp)
 {
     GSList *el, *machines = object_class_get_list(TYPE_MACHINE, false);
@@ -3993,6 +4009,8 @@ int main(int argc, char **argv, char **envp)
                          "mutually exclusive");
             exit(EXIT_FAILURE);
         }
+        /* Maximum number of CPUs limited for Red Hat Enterprise Linux */
+        limit_max_cpus_in_machines();
 
         machine_class = select_machine();
 
--
1.8.3.1

0008-Add-support-for-simpletrace.patch (new file) | 104
@@ -0,0 +1,104 @@
From 84763026a2e71d7b9f7fc9249ba25771724c272d Mon Sep 17 00:00:00 2001
From: Miroslav Rezanina <mrezanin@redhat.com>
Date: Thu, 8 Oct 2015 09:50:17 +0200
Subject: Add support for simpletrace

As simpletrace is upstream, we just need to properly handle it during rpmbuild.

Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
 .gitignore | 2 ++
 Makefile | 4 +++
 README.systemtap | 43 +++++++++++++++++++++++++++++++++
 redhat/qemu-kvm.spec.template | 29 ++++++++++++++++++++--
 scripts/systemtap/conf.d/qemu_kvm.conf | 4 +++
 scripts/systemtap/script.d/qemu_kvm.stp | 1 +
 6 files changed, 81 insertions(+), 2 deletions(-)
 create mode 100644 README.systemtap
 create mode 100644 scripts/systemtap/conf.d/qemu_kvm.conf
 create mode 100644 scripts/systemtap/script.d/qemu_kvm.stp

diff --git a/Makefile b/Makefile
index eb4c57a..6b6d3f6 100644
--- a/Makefile
+++ b/Makefile
@@ -880,6 +880,10 @@ endif
 		$(INSTALL_DATA) $(SRC_PATH)/pc-bios/keymaps/$$x "$(DESTDIR)$(qemu_datadir)/keymaps"; \
 	done
 	$(INSTALL_DATA) $(BUILD_DIR)/trace-events-all "$(DESTDIR)$(qemu_datadir)/trace-events-all"
+	$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)/systemtap/script.d"
+	$(INSTALL_DATA) $(SRC_PATH)/scripts/systemtap/script.d/qemu_kvm.stp "$(DESTDIR)$(qemu_datadir)/systemtap/script.d/"
+	$(INSTALL_DIR) "$(DESTDIR)$(qemu_datadir)/systemtap/conf.d"
+	$(INSTALL_DATA) $(SRC_PATH)/scripts/systemtap/conf.d/qemu_kvm.conf "$(DESTDIR)$(qemu_datadir)/systemtap/conf.d/"
 	for d in $(TARGET_DIRS); do \
 	$(MAKE) $(SUBDIR_MAKEFLAGS) TARGET_DIR=$$d/ -C $$d $@ || exit 1 ; \
 	done
diff --git a/README.systemtap b/README.systemtap
new file mode 100644
index 0000000..ad913fc
--- /dev/null
+++ b/README.systemtap
@@ -0,0 +1,43 @@
+QEMU tracing using systemtap-initscript
+---------------------------------------
+
+You can capture QEMU trace data all the time using systemtap-initscript.  This
+uses SystemTap's flight recorder mode to trace all running guests to a
+fixed-size buffer on the host.  Old trace entries are overwritten by new
+entries when the buffer size wraps.
+
+1. Install the systemtap-initscript package:
+  # yum install systemtap-initscript
+
+2. Install the systemtap scripts and the conf file:
+  # cp /usr/share/qemu-kvm/systemtap/script.d/qemu_kvm.stp /etc/systemtap/script.d/
+  # cp /usr/share/qemu-kvm/systemtap/conf.d/qemu_kvm.conf /etc/systemtap/conf.d/
+
+The set of trace events to enable is given in qemu_kvm.stp.  This SystemTap
+script can be customized to add or remove trace events provided in
+/usr/share/systemtap/tapset/qemu-kvm-simpletrace.stp.
+
+SystemTap customizations can be made to qemu_kvm.conf to control the flight
+recorder buffer size and whether to store traces in memory only or disk too.
+See stap(1) for option documentation.
+
+3. Start the systemtap service.
+  # service systemtap start qemu_kvm
+
+4. Make the service start at boot time.
+  # chkconfig systemtap on
+
+5. Confirm that the service works.
+  # service systemtap status qemu_kvm
+  qemu_kvm is running...
+
+When you want to inspect the trace buffer, perform the following steps:
+
+1. Dump the trace buffer.
+  # staprun -A qemu_kvm >/tmp/trace.log
+
+2. Start the systemtap service because the preceding step stops the service.
+  # service systemtap start qemu_kvm
+
+3. Translate the trace record to readable format.
+  # /usr/share/qemu-kvm/simpletrace.py --no-header /usr/share/qemu-kvm/trace-events /tmp/trace.log
diff --git a/scripts/systemtap/conf.d/qemu_kvm.conf b/scripts/systemtap/conf.d/qemu_kvm.conf
new file mode 100644
index 0000000..372d816
--- /dev/null
+++ b/scripts/systemtap/conf.d/qemu_kvm.conf
@@ -0,0 +1,4 @@
+# Force load uprobes (see BZ#1118352)
+stap -e 'probe process("/usr/libexec/qemu-kvm").function("main") { printf("") }' -c true
+
+qemu_kvm_OPT="-s4"  # per-CPU buffer size, in megabytes
diff --git a/scripts/systemtap/script.d/qemu_kvm.stp b/scripts/systemtap/script.d/qemu_kvm.stp
new file mode 100644
index 0000000..c04abf9
--- /dev/null
+++ b/scripts/systemtap/script.d/qemu_kvm.stp
@@ -0,0 +1 @@
+probe qemu.kvm.simpletrace.handle_qmp_command,qemu.kvm.simpletrace.monitor_protocol_*,qemu.kvm.simpletrace.migrate_set_state {}
--
1.8.3.1

1040
0009-Use-qemu-kvm-in-documentation-instead-of-qemu-system.patch
Normal file
File diff suppressed because it is too large
82
0010-usb-xhci-Fix-PCI-capability-order.patch
Normal file
@@ -0,0 +1,82 @@
From 268966c530da2d8e07e2c9034a82acd01335e2c2 Mon Sep 17 00:00:00 2001
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Date: Fri, 5 May 2017 19:06:14 +0200
Subject: usb-xhci: Fix PCI capability order

RH-Author: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-id: <20170505190614.15987-2-dgilbert@redhat.com>
Patchwork-id: 75038
O-Subject: [RHEL-7.4 qemu-kvm-rhev PATCH 1/1] usb-xhci: Fix PCI capability order
Bugzilla: 1447874
RH-Acked-by: Laszlo Ersek <lersek@redhat.com>
RH-Acked-by: Michael S. Tsirkin <mst@redhat.com>
RH-Acked-by: Gerd Hoffmann <kraxel@redhat.com>
RH-Acked-by: Juan Quintela <quintela@redhat.com>

From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>

Upstream commit 1108b2f8a9 in 2.7.0 changed the order
of the PCI capability chain in the XHCI pci device in the case
where the device has the PCIe endpoint capability (i.e. only
older machine types, pc-i440fx-2.0 upstream, pc-i440fx-rhel7.0.0
apparently for us).

Changing the order breaks migration compatibility; fixing this
upstream would mean breaking the same case going from 2.7.0->current
that currently works 2.7.0->2.9.0 - so upstream it's a choice
of two breakages.

Since we never released 2.7.0/2.8.0 we can fix this downstream.

This reverts the order so that we create the capabilities in the
order:
PCIe
MSI
MSI-X

The symptom is:
qemu-kvm: get_pci_config_device: Bad config data: i=0x71 read: a0 device: 0 cmask: ff wmask: 0 w1cmask:0
qemu-kvm: Failed to load PCIDevice:config
qemu-kvm: Failed to load xhci:parent_obj
qemu-kvm: error while loading state for instance 0x0 of device '0000:00:0d.0/xhci'
qemu-kvm: load of migration failed: Invalid argument

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
---
hw/usb/hcd-xhci.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/hw/usb/hcd-xhci.c b/hw/usb/hcd-xhci.c
index ca19474..45fcce3 100644
--- a/hw/usb/hcd-xhci.c
+++ b/hw/usb/hcd-xhci.c
@@ -3373,6 +3373,12 @@ static void usb_xhci_realize(struct PCIDevice *dev, Error **errp)
xhci->max_pstreams_mask = 0;
}

+ if (pci_bus_is_express(pci_get_bus(dev)) ||
+ xhci_get_flag(xhci, XHCI_FLAG_FORCE_PCIE_ENDCAP)) {
+ ret = pcie_endpoint_cap_init(dev, 0xa0);
+ assert(ret > 0);
+ }
+
if (xhci->msi != ON_OFF_AUTO_OFF) {
ret = msi_init(dev, 0x70, xhci->numintrs, true, false, &err);
/* Any error other than -ENOTSUP(board's MSI support is broken)
@@ -3421,12 +3427,6 @@ static void usb_xhci_realize(struct PCIDevice *dev, Error **errp)
PCI_BASE_ADDRESS_SPACE_MEMORY|PCI_BASE_ADDRESS_MEM_TYPE_64,
&xhci->mem);

- if (pci_bus_is_express(pci_get_bus(dev)) ||
- xhci_get_flag(xhci, XHCI_FLAG_FORCE_PCIE_ENDCAP)) {
- ret = pcie_endpoint_cap_init(dev, 0xa0);
- assert(ret > 0);
- }
-
if (xhci->msix != ON_OFF_AUTO_OFF) {
/* TODO check for errors, and should fail when msix=on */
msix_init(dev, xhci->numintrs,
--
1.8.3.1
@@ -0,0 +1,66 @@
From 126cb3f3717b266f27dc7c657da833779f9f3b54 Mon Sep 17 00:00:00 2001
From: Fam Zheng <famz@redhat.com>
Date: Wed, 14 Jun 2017 15:37:01 +0200
Subject: virtio-scsi: Reject scsi-cd if data plane enabled [RHEL only]

RH-Author: Fam Zheng <famz@redhat.com>
Message-id: <20170614153701.14757-1-famz@redhat.com>
Patchwork-id: 75613
O-Subject: [RHV-7.4 qemu-kvm-rhev PATCH v3] virtio-scsi: Reject scsi-cd if data plane enabled [RHEL only]
Bugzilla: 1378816
RH-Acked-by: Paolo Bonzini <pbonzini@redhat.com>
RH-Acked-by: Stefan Hajnoczi <stefanha@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>

We need a fix for RHEL 7.4 and 7.3.z, but unfortunately upstream isn't
ready. If it were, the changes will be too invasive. To have an idea:

https://lists.gnu.org/archive/html/qemu-devel/2017-05/msg05400.html

is an incomplete attempt to fix part of the issue, and the remaining
work unfortunately involve even more complex changes.

As a band-aid, this partially reverts the effect of ef8875b
(virtio-scsi: Remove op blocker for dataplane, since v2.7). We cannot
simply revert that commit as a whole because we already shipped it in
qemu-kvm-rhev 7.3, since when, block jobs has been possible. We should
only block what has been broken. Also, faithfully reverting the above
commit means adding back the removed op blocker, but that is not enough,
because it still crashes when inserting media into an initially empty
scsi-cd.

All in all, scsi-cd on virtio-scsi-dataplane has basically been unusable
unless the scsi-cd never enters an empty state, so, disable it
altogether. Otherwise it would be much more difficult to avoid
crashing.

Signed-off-by: Fam Zheng <famz@redhat.com>
Signed-off-by: Miroslav Rezanina <mrezanin@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
hw/scsi/virtio-scsi.c | 9 +++++++++
1 file changed, 9 insertions(+)

diff --git a/hw/scsi/virtio-scsi.c b/hw/scsi/virtio-scsi.c
index 5a3057d..52a3c1d 100644
--- a/hw/scsi/virtio-scsi.c
+++ b/hw/scsi/virtio-scsi.c
@@ -790,6 +790,15 @@ static void virtio_scsi_hotplug(HotplugHandler *hotplug_dev, DeviceState *dev,
VirtIOSCSI *s = VIRTIO_SCSI(vdev);
SCSIDevice *sd = SCSI_DEVICE(dev);

+ /* XXX: Remove this check once block backend is capable of handling
+ * AioContext change upon eject/insert.
+ * s->ctx is NULL if ioeventfd is off, s->ctx is qemu_get_aio_context() if
+ * data plane is not used, both cases are safe for scsi-cd. */
+ if (s->ctx && s->ctx != qemu_get_aio_context() &&
+ object_dynamic_cast(OBJECT(dev), "scsi-cd")) {
+ error_setg(errp, "scsi-cd is not supported by data plane");
+ return;
+ }
if (s->ctx && !s->dataplane_fenced) {
if (blk_op_is_blocked(sd->conf.blk, BLOCK_OP_TYPE_DATAPLANE, errp)) {
return;
--
1.8.3.1
72
0012-linux-headers-asm-s390-kvm.h-header-sync.patch
Normal file
@@ -0,0 +1,72 @@
From 811173cac3e80b6235de885b7b2ec4f9be3b4e31 Mon Sep 17 00:00:00 2001
From: Thomas Huth <thuth@redhat.com>
Date: Thu, 9 Aug 2018 10:15:08 +0000
Subject: linux-headers: asm-s390/kvm.h header sync

RH-Author: Thomas Huth <thuth@redhat.com>
Message-id: <1533813309-9643-2-git-send-email-thuth@redhat.com>
Patchwork-id: 81688
O-Subject: [RHEL-8.0 qemu-kvm PATCH 1/2] linux-headers: asm-s390/kvm.h header sync
Bugzilla: 1612938
RH-Acked-by: David Hildenbrand <david@redhat.com>
RH-Acked-by: Cornelia Huck <cohuck@redhat.com>
RH-Acked-by: Jens Freimann <jfreiman@redhat.com>

This is a header sync with the linux uapi header. The corresponding
kernel commit id is a3da7b4a3be51f37f434f14e11e60491f098b6ea (in
the kvm/next branch)

Signed-off-by: Thomas Huth <thuth@redhat.com>

Merged patches (3.0.0):
- 57332f1 linux-headers: Update to include KVM_CAP_S390_HPAGE_1M
---
linux-headers/asm-s390/kvm.h | 5 ++++-
linux-headers/linux/kvm.h | 1 +
2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/linux-headers/asm-s390/kvm.h b/linux-headers/asm-s390/kvm.h
index 11def14..1ab9901 100644
--- a/linux-headers/asm-s390/kvm.h
+++ b/linux-headers/asm-s390/kvm.h
@@ -4,7 +4,7 @@
/*
* KVM s390 specific structures and definitions
*
- * Copyright IBM Corp. 2008
+ * Copyright IBM Corp. 2008, 2018
*
* Author(s): Carsten Otte <cotte@de.ibm.com>
* Christian Borntraeger <borntraeger@de.ibm.com>
@@ -225,6 +225,7 @@ struct kvm_guest_debug_arch {
#define KVM_SYNC_FPRS (1UL << 8)
#define KVM_SYNC_GSCB (1UL << 9)
#define KVM_SYNC_BPBC (1UL << 10)
+#define KVM_SYNC_ETOKEN (1UL << 11)
/* length and alignment of the sdnx as a power of two */
#define SDNXC 8
#define SDNXL (1UL << SDNXC)
@@ -258,6 +259,8 @@ struct kvm_sync_regs {
struct {
__u64 reserved1[2];
__u64 gscb[4];
+ __u64 etoken;
+ __u64 etoken_extension;
};
};
};
diff --git a/linux-headers/linux/kvm.h b/linux-headers/linux/kvm.h
index 98f389a..2aae948 100644
--- a/linux-headers/linux/kvm.h
+++ b/linux-headers/linux/kvm.h
@@ -949,6 +949,7 @@ struct kvm_ppc_resize_hpt {
#define KVM_CAP_GET_MSR_FEATURES 153
#define KVM_CAP_HYPERV_EVENTFD 154
#define KVM_CAP_HYPERV_TLBFLUSH 155
+#define KVM_CAP_S390_HPAGE_1M 156

#ifdef KVM_CAP_IRQ_ROUTING

--
1.8.3.1
114
0013-s390x-Enable-KVM-huge-page-backing-support.patch
Normal file
@@ -0,0 +1,114 @@
From fa8eda01f21298e6bc50abb78775390b4bf3f954 Mon Sep 17 00:00:00 2001
From: David Hildenbrand <david@redhat.com>
Date: Mon, 6 Aug 2018 14:18:41 +0100
Subject: s390x: Enable KVM huge page backing support

RH-Author: David Hildenbrand <david@redhat.com>
Message-id: <20180806141842.23963-3-david@redhat.com>
Patchwork-id: 81645
O-Subject: [RHEL-8.0 qemu-kvm PATCH v2 2/3] s390x: Enable KVM huge page backing support
Bugzilla: 1610906
RH-Acked-by: Thomas Huth <thuth@redhat.com>
RH-Acked-by: Cornelia Huck <cohuck@redhat.com>
RH-Acked-by: Paolo Bonzini <pbonzini@redhat.com>

BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1610906
Brew: https://brewweb.engineering.redhat.com/brew/taskinfo?taskID=17624600
Upstream: N/A

Kernel part is in kvm/next, scheduled for 4.19. Patch has been reviewed
upstream but cannot get picked up yet due to the outstanding linux
header sync. Conflict to upstream patch: We have no units.h, therefore
we have to unfold "4*KiB" and "1*MiB".

QEMU has had huge page support for a longer time already, but KVM
memory management under s390x needed some changes to work with huge
backings.

Now that we have support, let's enable it if requested and
available. Otherwise we now properly tell the user if there is no
support and back out instead of failing to run the VM later on.

Signed-off-by: Janosch Frank <frankja@linux.ibm.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
target/s390x/kvm.c | 34 ++++++++++++++++++++++++++++++++--
1 file changed, 32 insertions(+), 2 deletions(-)

diff --git a/target/s390x/kvm.c b/target/s390x/kvm.c
index bbcbeed..c36ff36f 100644
--- a/target/s390x/kvm.c
+++ b/target/s390x/kvm.c
@@ -34,6 +34,7 @@
#include "qapi/error.h"
#include "qemu/error-report.h"
#include "qemu/timer.h"
+#include "qemu/mmap-alloc.h"
#include "sysemu/sysemu.h"
#include "sysemu/hw_accel.h"
#include "hw/hw.h"
@@ -139,6 +140,7 @@ static int cap_mem_op;
static int cap_s390_irq;
static int cap_ri;
static int cap_gs;
+static int cap_hpage_1m;

static int active_cmma;

@@ -220,9 +222,9 @@ static void kvm_s390_enable_cmma(void)
.attr = KVM_S390_VM_MEM_ENABLE_CMMA,
};

- if (mem_path) {
+ if (cap_hpage_1m) {
warn_report("CMM will not be enabled because it is not "
- "compatible with hugetlbfs.");
+ "compatible with huge memory backings.");
return;
}
rc = kvm_vm_ioctl(kvm_state, KVM_SET_DEVICE_ATTR, &attr);
@@ -281,10 +283,38 @@ void kvm_s390_crypto_reset(void)
}
}

+static int kvm_s390_configure_mempath_backing(KVMState *s)
+{
+ size_t path_psize = qemu_mempath_getpagesize(mem_path);
+
+ if (path_psize == 4 * 1024) {
+ return 0;
+ }
+
+ if (path_psize != 1024 * 1024) {
+ error_report("Memory backing with 2G pages was specified, "
+ "but KVM does not support this memory backing");
+ return -EINVAL;
+ }
+
+ if (kvm_vm_enable_cap(s, KVM_CAP_S390_HPAGE_1M, 0)) {
+ error_report("Memory backing with 1M pages was specified, "
+ "but KVM does not support this memory backing");
+ return -EINVAL;
+ }
+
+ cap_hpage_1m = 1;
+ return 0;
+}
+
int kvm_arch_init(MachineState *ms, KVMState *s)
{
MachineClass *mc = MACHINE_GET_CLASS(ms);

+ if (mem_path && kvm_s390_configure_mempath_backing(s)) {
+ return -EINVAL;
+ }
+
mc->default_cpu_type = S390_CPU_TYPE_NAME("host");
cap_sync_regs = kvm_check_extension(s, KVM_CAP_SYNC_REGS);
cap_async_pf = kvm_check_extension(s, KVM_CAP_ASYNC_PF);
--
1.8.3.1
190
0014-s390x-kvm-add-etoken-facility.patch
Normal file
@@ -0,0 +1,190 @@
From 4b36866031e559bc895e64ecb20417323cb03e3d Mon Sep 17 00:00:00 2001
From: Thomas Huth <thuth@redhat.com>
Date: Thu, 9 Aug 2018 10:15:09 +0000
Subject: s390x/kvm: add etoken facility

RH-Author: Thomas Huth <thuth@redhat.com>
Message-id: <1533813309-9643-3-git-send-email-thuth@redhat.com>
Patchwork-id: 81687
O-Subject: [RHEL-8.0 qemu-kvm PATCH 2/2] s390x/kvm: add etoken facility
Bugzilla: 1612938
RH-Acked-by: David Hildenbrand <david@redhat.com>
RH-Acked-by: Cornelia Huck <cohuck@redhat.com>
RH-Acked-by: Jens Freimann <jfreiman@redhat.com>

Provide the etoken facility. We need to handle cpu model, migration and
clear reset.

Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
Acked-by: Janosch Frank <frankja@linux.ibm.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
---
target/s390x/cpu.h | 3 +++
target/s390x/cpu_features.c | 3 ++-
target/s390x/cpu_features_def.h | 3 ++-
target/s390x/gen-features.c | 3 ++-
target/s390x/kvm.c | 11 +++++++++++
target/s390x/machine.c | 20 +++++++++++++++++++-
6 files changed, 39 insertions(+), 4 deletions(-)

diff --git a/target/s390x/cpu.h b/target/s390x/cpu.h
index 2c3dd2d..21b2f21 100644
--- a/target/s390x/cpu.h
+++ b/target/s390x/cpu.h
@@ -2,6 +2,7 @@
* S/390 virtual CPU header
*
* Copyright (c) 2009 Ulrich Hecht
+ * Copyright IBM Corp. 2012, 2018
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -68,6 +69,8 @@ struct CPUS390XState {
uint32_t aregs[16]; /* access registers */
uint8_t riccb[64]; /* runtime instrumentation control */
uint64_t gscb[4]; /* guarded storage control */
+ uint64_t etoken; /* etoken */
+ uint64_t etoken_extension; /* etoken extension */

/* Fields up to this point are not cleared by initial CPU reset */
struct {} start_initial_reset_fields;
diff --git a/target/s390x/cpu_features.c b/target/s390x/cpu_features.c
index 3b9e274..e05e6aa 100644
--- a/target/s390x/cpu_features.c
+++ b/target/s390x/cpu_features.c
@@ -1,7 +1,7 @@
/*
* CPU features/facilities for s390x
*
- * Copyright 2016 IBM Corp.
+ * Copyright IBM Corp. 2016, 2018
*
* Author(s): David Hildenbrand <dahi@linux.vnet.ibm.com>
*
@@ -106,6 +106,7 @@ static const S390FeatDef s390_features[] = {
FEAT_INIT("irbm", S390_FEAT_TYPE_STFL, 145, "Insert-reference-bits-multiple facility"),
FEAT_INIT("msa8-base", S390_FEAT_TYPE_STFL, 146, "Message-security-assist-extension-8 facility (excluding subfunctions)"),
FEAT_INIT("cmmnt", S390_FEAT_TYPE_STFL, 147, "CMM: ESSA-enhancement (no translate) facility"),
+ FEAT_INIT("etoken", S390_FEAT_TYPE_STFL, 156, "Etoken facility"),

/* SCLP SCCB Byte 80 - 98 (bit numbers relative to byte-80) */
FEAT_INIT("gsls", S390_FEAT_TYPE_SCLP_CONF_CHAR, 40, "SIE: Guest-storage-limit-suppression facility"),
diff --git a/target/s390x/cpu_features_def.h b/target/s390x/cpu_features_def.h
index 7c5915c..ac2c947 100644
--- a/target/s390x/cpu_features_def.h
+++ b/target/s390x/cpu_features_def.h
@@ -1,7 +1,7 @@
/*
* CPU features/facilities for s390
*
- * Copyright 2016 IBM Corp.
+ * Copyright IBM Corp. 2016, 2018
*
* Author(s): Michael Mueller <mimu@linux.vnet.ibm.com>
* David Hildenbrand <dahi@linux.vnet.ibm.com>
@@ -93,6 +93,7 @@ typedef enum {
S390_FEAT_INSERT_REFERENCE_BITS_MULT,
S390_FEAT_MSA_EXT_8,
S390_FEAT_CMM_NT,
+ S390_FEAT_ETOKEN,

/* Sclp Conf Char */
S390_FEAT_SIE_GSLS,
diff --git a/target/s390x/gen-features.c b/target/s390x/gen-features.c
index 6626b6f..5af042c 100644
--- a/target/s390x/gen-features.c
+++ b/target/s390x/gen-features.c
@@ -1,7 +1,7 @@
/*
* S390 feature list generator
*
- * Copyright 2016 IBM Corp.
+ * Copyright IBM Corp. 2016, 2018
*
* Author(s): Michael Mueller <mimu@linux.vnet.ibm.com>
* David Hildenbrand <dahi@linux.vnet.ibm.com>
@@ -471,6 +471,7 @@ static uint16_t full_GEN14_GA1[] = {
S390_FEAT_GROUP_MSA_EXT_7,
S390_FEAT_GROUP_MSA_EXT_8,
S390_FEAT_CMM_NT,
+ S390_FEAT_ETOKEN,
S390_FEAT_HPMA2,
S390_FEAT_SIE_KSS,
S390_FEAT_GROUP_MULTIPLE_EPOCH_PTFF,
diff --git a/target/s390x/kvm.c b/target/s390x/kvm.c
index c36ff36f..71d90f2 100644
--- a/target/s390x/kvm.c
+++ b/target/s390x/kvm.c
@@ -523,6 +523,12 @@ int kvm_arch_put_registers(CPUState *cs, int level)
cs->kvm_run->kvm_dirty_regs |= KVM_SYNC_BPBC;
}

+ if (can_sync_regs(cs, KVM_SYNC_ETOKEN)) {
+ cs->kvm_run->s.regs.etoken = env->etoken;
+ cs->kvm_run->s.regs.etoken_extension = env->etoken_extension;
+ cs->kvm_run->kvm_dirty_regs |= KVM_SYNC_ETOKEN;
+ }
+
/* Finally the prefix */
if (can_sync_regs(cs, KVM_SYNC_PREFIX)) {
cs->kvm_run->s.regs.prefix = env->psa;
@@ -637,6 +643,11 @@ int kvm_arch_get_registers(CPUState *cs)
env->bpbc = cs->kvm_run->s.regs.bpbc;
}

+ if (can_sync_regs(cs, KVM_SYNC_ETOKEN)) {
+ env->etoken = cs->kvm_run->s.regs.etoken;
+ env->etoken_extension = cs->kvm_run->s.regs.etoken_extension;
+ }
+
/* pfault parameters */
if (can_sync_regs(cs, KVM_SYNC_PFAULT)) {
env->pfault_token = cs->kvm_run->s.regs.pft;
diff --git a/target/s390x/machine.c b/target/s390x/machine.c
index bd3230d..cb792aa 100644
--- a/target/s390x/machine.c
+++ b/target/s390x/machine.c
@@ -1,7 +1,7 @@
/*
* S390x machine definitions and functions
*
- * Copyright IBM Corp. 2014
+ * Copyright IBM Corp. 2014, 2018
*
* Authors:
* Thomas Huth <thuth@linux.vnet.ibm.com>
@@ -216,6 +216,23 @@ const VMStateDescription vmstate_bpbc = {
}
};

+static bool etoken_needed(void *opaque)
+{
+ return s390_has_feat(S390_FEAT_ETOKEN);
+}
+
+const VMStateDescription vmstate_etoken = {
+ .name = "cpu/etoken",
+ .version_id = 1,
+ .minimum_version_id = 1,
+ .needed = etoken_needed,
+ .fields = (VMStateField[]) {
+ VMSTATE_UINT64(env.etoken, S390CPU),
+ VMSTATE_UINT64(env.etoken_extension, S390CPU),
+ VMSTATE_END_OF_LIST()
+ }
+};
+
const VMStateDescription vmstate_s390_cpu = {
.name = "cpu",
.post_load = cpu_post_load,
@@ -251,6 +268,7 @@ const VMStateDescription vmstate_s390_cpu = {
&vmstate_exval,
&vmstate_gscb,
&vmstate_bpbc,
+ &vmstate_etoken,
NULL
},
};
--
1.8.3.1
@@ -0,0 +1,51 @@
From 79d0599b21b64f8a8107855e844b347d2cc138d9 Mon Sep 17 00:00:00 2001
From: Cornelia Huck <cohuck@redhat.com>
Date: Tue, 7 Aug 2018 09:05:54 +0000
Subject: s390x/cpumodel: default enable bpb and ppa15 for z196 and later

RH-Author: Cornelia Huck <cohuck@redhat.com>
Message-id: <20180807100554.29643-3-cohuck@redhat.com>
Patchwork-id: 81660
O-Subject: [qemu-kvm RHEL8/virt212 PATCH 2/2] s390x/cpumodel: default enable bpb and ppa15 for z196 and later
Bugzilla: 1595718
RH-Acked-by: David Hildenbrand <david@redhat.com>
RH-Acked-by: Thomas Huth <thuth@redhat.com>
RH-Acked-by: Jens Freimann <jfreiman@redhat.com>

Upstream: downstream version of 8727315111 ("s390x/cpumodel: default
enable bpb and ppa15 for z196 and later"); downstream does
not have the upstream machine types, instead we need to
turn off the bits for the RHEL 7.5 machine

Most systems and host kernels provide the necessary building blocks for
bpb and ppa15. We can reverse the logic and default enable those
features, while still allowing to disable it via cpu model.

So let us add bpb and ppa15 to z196 and later default CPU model for the
qemu rhel7.6.0 machine. (like -cpu z13). Older machine types (i.e.
s390-ccw-virtio-rhel7.5.0) will retain the old value and not provide those
bits in the default model.

Signed-off-by: Cornelia Huck <cohuck@redhat.com>
---
hw/s390x/s390-virtio-ccw.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/hw/s390x/s390-virtio-ccw.c b/hw/s390x/s390-virtio-ccw.c
index 0f135c9..cdf4558 100644
--- a/hw/s390x/s390-virtio-ccw.c
+++ b/hw/s390x/s390-virtio-ccw.c
@@ -931,6 +931,10 @@ static void ccw_machine_rhel750_instance_options(MachineState *machine)
/* before 2.12 we emulated the very first z900, and RHEL 7.5 is
based on 2.10 */
s390_set_qemu_cpu_model(0x2064, 7, 1, qemu_cpu_feat);
+
+ /* bpb and ppa15 were only in the full model in RHEL 7.5 */
+ s390_cpudef_featoff_greater(11, 1, S390_FEAT_PPA15);
+ s390_cpudef_featoff_greater(11, 1, S390_FEAT_BPB);
}

static void ccw_machine_rhel750_class_options(MachineClass *mc)
--
1.8.3.1
87
0016-i386-Fix-arch_query_cpu_model_expansion-leak.patch
Normal file
@@ -0,0 +1,87 @@
From 786fb991b644eddb9f52fd04d377cc7a62685d59 Mon Sep 17 00:00:00 2001
From: Markus Armbruster <armbru@redhat.com>
Date: Fri, 31 Aug 2018 13:59:22 +0100
Subject: i386: Fix arch_query_cpu_model_expansion() leak

RH-Author: Markus Armbruster <armbru@redhat.com>
Message-id: <20180831135922.6073-3-armbru@redhat.com>
Patchwork-id: 81980
O-Subject: [qemu-kvm RHEL8/virt212 PATCH 2/2] i386: Fix arch_query_cpu_model_expansion() leak
Bugzilla: 1615717
RH-Acked-by: Eduardo Habkost <ehabkost@redhat.com>
RH-Acked-by: Laszlo Ersek <lersek@redhat.com>
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>

From: Eduardo Habkost <ehabkost@redhat.com>

Reported by Coverity:

Error: RESOURCE_LEAK (CWE-772): [#def439]
qemu-2.12.0/target/i386/cpu.c:3179: alloc_fn: Storage is returned from allocation function "qdict_new".
qemu-2.12.0/qobject/qdict.c:34:5: alloc_fn: Storage is returned from allocation function "g_malloc0".
qemu-2.12.0/qobject/qdict.c:34:5: var_assign: Assigning: "qdict" = "g_malloc0(4120UL)".
qemu-2.12.0/qobject/qdict.c:37:5: return_alloc: Returning allocated memory "qdict".
qemu-2.12.0/target/i386/cpu.c:3179: var_assign: Assigning: "props" = storage returned from "qdict_new()".
qemu-2.12.0/target/i386/cpu.c:3217: leaked_storage: Variable "props" going out of scope leaks the storage it points to.

This was introduced by commit b8097deb359b ("i386: Improve
query-cpu-model-expansion full mode").

The leak is only theoretical: if ret->model->props is set to
props, the qapi_free_CpuModelExpansionInfo() call will free props
too in case of errors. The only way for this to not happen is if
we enter the default branch of the switch statement, which would
never happen because all CpuModelExpansionType values are being
handled.

It's still worth to change this to make the allocation logic
easier to follow and make the Coverity error go away. To make
everything simpler, initialize ret->model and ret->model->props
earlier in the function.

While at it, remove redundant check for !prop because prop is
always initialized at the beginning of the function.

Fixes: b8097deb359bbbd92592b9670adfe9e245b2d0bd
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180816183509.8231-1-ehabkost@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit e38bf612477fca62b205ebd909b1372a7e45a8c0)
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
target/i386/cpu.c | 9 +++------
1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 051018a..71e2808 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -3784,6 +3784,9 @@ arch_query_cpu_model_expansion(CpuModelExpansionType type,
}

props = qdict_new();
+ ret->model = g_new0(CpuModelInfo, 1);
+ ret->model->props = QOBJECT(props);
+ ret->model->has_props = true;

switch (type) {
case CPU_MODEL_EXPANSION_TYPE_STATIC:
@@ -3804,15 +3807,9 @@ arch_query_cpu_model_expansion(CpuModelExpansionType type,
goto out;
}

- if (!props) {
- props = qdict_new();
- }
x86_cpu_to_dict(xc, props);

- ret->model = g_new0(CpuModelInfo, 1);
ret->model->name = g_strdup(base_name);
- ret->model->props = QOBJECT(props);
- ret->model->has_props = true;

out:
object_unref(OBJECT(xc));
--
1.8.3.1
54
0017-i386-Disable-TOPOEXT-by-default-on-cpu-host.patch
Normal file
@@ -0,0 +1,54 @@
From 25abf99ebc7004999e79fa5e5b1370e4dfdaeed2 Mon Sep 17 00:00:00 2001
From: Eduardo Habkost <ehabkost@redhat.com>
Date: Tue, 21 Aug 2018 19:15:41 +0100
Subject: i386: Disable TOPOEXT by default on "-cpu host"

RH-Author: Eduardo Habkost <ehabkost@redhat.com>
Message-id: <20180821191541.31916-2-ehabkost@redhat.com>
Patchwork-id: 81904
O-Subject: [qemu-kvm RHEL8/virt212 PATCH v2 1/1] i386: Disable TOPOEXT by default on "-cpu host"
Bugzilla: 1619804
RH-Acked-by: Miroslav Rezanina <mrezanin@redhat.com>
RH-Acked-by: Paolo Bonzini <pbonzini@redhat.com>
RH-Acked-by: Igor Mammedov <imammedo@redhat.com>

Enabling TOPOEXT is always allowed, but it can't be enabled
blindly by "-cpu host" because it may make guests crash if the
rest of the cache topology information isn't provided or isn't
consistent.

This addresses the bug reported at:
https://bugzilla.redhat.com/show_bug.cgi?id=1613277

Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
Message-Id: <20180809221852.15285-1-ehabkost@redhat.com>
Tested-by: Richard W.M. Jones <rjones@redhat.com>
Reviewed-by: Babu Moger <babu.moger@amd.com>
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
(cherry picked from commit 7210a02c58572b2686a3a8d610c6628f87864aed)
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
target/i386/cpu.c | 6 ++++++
1 file changed, 6 insertions(+)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 71e2808..198d578 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -849,6 +849,12 @@ static FeatureWordInfo feature_word_info[FEATURE_WORDS] = {
},
.cpuid_eax = 0x80000001, .cpuid_reg = R_ECX,
.tcg_features = TCG_EXT3_FEATURES,
+ /*
+ * TOPOEXT is always allowed but can't be enabled blindly by
+ * "-cpu host", as it requires consistent cache topology info
+ * to be provided so it doesn't confuse guests.
+ */
+ .no_autoenable_flags = CPUID_EXT3_TOPOEXT,
},
[FEAT_C000_0001_EDX] = {
.feat_names = {
--
1.8.3.1
@@ -0,0 +1,77 @@
From 49d4861ffc56cb233dacc1abcb2a5ec608e599ab Mon Sep 17 00:00:00 2001
From: Jeffrey Cody <jcody@redhat.com>
Date: Wed, 26 Sep 2018 04:08:14 +0100
Subject: curl: Make sslverify=off disable host as well as peer verification.

RH-Author: Jeffrey Cody <jcody@redhat.com>
Message-id: <543d2f667af465dd809329fcba5175bc974d58d4.1537933576.git.jcody@redhat.com>
Patchwork-id: 82293
O-Subject: [RHEL8/rhel qemu-kvm PATCH 1/1] curl: Make sslverify=off disable host as well as peer verification.
Bugzilla: 1575925
RH-Acked-by: Richard Jones <rjones@redhat.com>
RH-Acked-by: John Snow <jsnow@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>

From: "Richard W.M. Jones" <rjones@redhat.com>

The sslverify setting is supposed to turn off all TLS certificate
checks in libcurl.  However because of the way we use it, it only
turns off peer certificate authenticity checks
(CURLOPT_SSL_VERIFYPEER).  This patch makes it also turn off the check
that the server name in the certificate is the same as the server
you're connecting to (CURLOPT_SSL_VERIFYHOST).

We can use Google's server at 8.8.8.8 which happens to have a bad TLS
certificate to demonstrate this:

$ ./qemu-img create -q -f qcow2 -b 'json: { "file.sslverify": "off", "file.driver": "https", "file.url": "https://8.8.8.8/foo" }' /var/tmp/file.qcow2
qemu-img: /var/tmp/file.qcow2: CURL: Error opening file: SSL: no alternative certificate subject name matches target host name '8.8.8.8'
Could not open backing image to determine size.

With this patch applied, qemu-img connects to the server regardless of
the bad certificate:

$ ./qemu-img create -q -f qcow2 -b 'json: { "file.sslverify": "off", "file.driver": "https", "file.url": "https://8.8.8.8/foo" }' /var/tmp/file.qcow2
qemu-img: /var/tmp/file.qcow2: CURL: Error opening file: The requested URL returned error: 404 Not Found

(The 404 error is expected because 8.8.8.8 is not actually serving a
file called "/foo".)

Of course the default (without sslverify=off) remains to always check
the certificate:

$ ./qemu-img create -q -f qcow2 -b 'json: { "file.driver": "https", "file.url": "https://8.8.8.8/foo" }' /var/tmp/file.qcow2
qemu-img: /var/tmp/file.qcow2: CURL: Error opening file: SSL: no alternative certificate subject name matches target host name '8.8.8.8'
Could not open backing image to determine size.

Further information about the two settings is available here:

https://curl.haxx.se/libcurl/c/CURLOPT_SSL_VERIFYPEER.html
https://curl.haxx.se/libcurl/c/CURLOPT_SSL_VERIFYHOST.html

Signed-off-by: Richard W.M. Jones <rjones@redhat.com>
Message-id: 20180914095622.19698-1-rjones@redhat.com
Signed-off-by: Jeff Cody <jcody@redhat.com>
(cherry picked from commit 637fa44ab80c6b317adf1d117494325a95daad60)
Signed-off-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 block/curl.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/block/curl.c b/block/curl.c
index 229bb84..fabb2b4 100644
--- a/block/curl.c
+++ b/block/curl.c
@@ -483,6 +483,8 @@ static int curl_init_state(BDRVCURLState *s, CURLState *state)
         curl_easy_setopt(state->curl, CURLOPT_URL, s->url);
         curl_easy_setopt(state->curl, CURLOPT_SSL_VERIFYPEER,
                          (long) s->sslverify);
+        curl_easy_setopt(state->curl, CURLOPT_SSL_VERIFYHOST,
+                         s->sslverify ? 2L : 0L);
         if (s->cookie) {
             curl_easy_setopt(state->curl, CURLOPT_COOKIE, s->cookie);
         }
--
1.8.3.1
51
0019-migration-postcopy-Clear-have_listen_thread.patch
Normal file
@@ -0,0 +1,51 @@
From 324493e716a2e5fa60b6b013d5df831b03f2a678 Mon Sep 17 00:00:00 2001
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Date: Mon, 1 Oct 2018 10:54:48 +0100
Subject: migration/postcopy: Clear have_listen_thread

RH-Author: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-id: <20181001105449.41090-2-dgilbert@redhat.com>
Patchwork-id: 82326
O-Subject: [RHEL-8.0 qemu-kvm PATCH 1/2] migration/postcopy: Clear have_listen_thread
Bugzilla: 1608765
RH-Acked-by: Pankaj Gupta <pagupta@redhat.com>
RH-Acked-by: Laszlo Ersek <lersek@redhat.com>
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>

From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>

Clear have_listen_thread when we exit the thread.
The fallout from this was that various things thought there was
an ongoing postcopy after the postcopy had finished.

The case that failed was postcopy->savevm->loadvm.

This corresponds to RH bug https://bugzilla.redhat.com/show_bug.cgi?id=1608765

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20180914170430.54271-2-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
(cherry picked from commit 9cf4bb8730c669c40550e635a9e2b8ee4f1664ca)
Manual merge due to context

Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 migration/savevm.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/migration/savevm.c b/migration/savevm.c
index 7f92567..762c4b2 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -1676,6 +1676,7 @@ static void *postcopy_ram_listen_thread(void *opaque)
     migration_incoming_state_destroy();
     qemu_loadvm_state_cleanup();

+    mis->have_listen_thread = false;
     return NULL;
 }

--
1.8.3.1
52
0020-migration-cleanup-in-error-paths-in-loadvm.patch
Normal file
@@ -0,0 +1,52 @@
From 005c4cb023ffdcb8888c7453d263cab95d5b1b1c Mon Sep 17 00:00:00 2001
From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Date: Mon, 1 Oct 2018 10:54:49 +0100
Subject: migration: cleanup in error paths in loadvm

RH-Author: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-id: <20181001105449.41090-3-dgilbert@redhat.com>
Patchwork-id: 82325
O-Subject: [RHEL-8.0 qemu-kvm PATCH 2/2] migration: cleanup in error paths in loadvm
Bugzilla: 1608765
RH-Acked-by: Pankaj Gupta <pagupta@redhat.com>
RH-Acked-by: Laszlo Ersek <lersek@redhat.com>
RH-Acked-by: Laurent Vivier <lvivier@redhat.com>

From: "Dr. David Alan Gilbert" <dgilbert@redhat.com>

There's a couple of error paths in qemu_loadvm_state
which happen early on but after we've initialised the
load state; that needs to be cleaned up otherwise
we can hit asserts if the state gets reinitialised later.

Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
Message-Id: <20180914170430.54271-3-dgilbert@redhat.com>
Reviewed-by: Peter Xu <peterx@redhat.com>
Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
(cherry picked from commit 096c83b7219c5a2145435afc8be750281e9cb447)
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 migration/savevm.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/migration/savevm.c b/migration/savevm.c
index 762c4b2..27e054d 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -2328,11 +2328,13 @@ int qemu_loadvm_state(QEMUFile *f)
     if (migrate_get_current()->send_configuration) {
         if (qemu_get_byte(f) != QEMU_VM_CONFIGURATION) {
             error_report("Configuration section missing");
+            qemu_loadvm_state_cleanup();
             return -EINVAL;
         }
         ret = vmstate_load_state(f, &vmstate_configuration, &savevm_state, 0);

         if (ret) {
+            qemu_loadvm_state_cleanup();
             return ret;
         }
     }
--
1.8.3.1
372
0021-jobs-change-start-callback-to-run-callback.patch
Normal file
@@ -0,0 +1,372 @@
From 287cb50c08d64773470732be8a6a566bcdde4b75 Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:07 +0100
Subject: jobs: change start callback to run callback

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-2-jsnow@redhat.com>
Patchwork-id: 82261
O-Subject: [RHEL8/rhel qemu-kvm PATCH 01/25] jobs: change start callback to run callback
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Presently we codify the entry point for a job as the "start" callback,
but a more apt name would be "run" to clarify the idea that when this
function returns we consider the job to have "finished," except for
any cleanup which occurs in separate callbacks later.

As part of this clarification, change the signature to include an error
object and a return code. The error ptr is not yet used, and the return
code while captured, will be overwritten by actions in the job_completed
function.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180830015734.19765-2-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit f67432a2019caf05b57a146bf45c1024a5cb608e)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 block/backup.c            |  7 ++++---
 block/commit.c            |  7 ++++---
 block/create.c            |  8 +++++---
 block/mirror.c            | 10 ++++++----
 block/stream.c            |  7 ++++---
 include/qemu/job.h        |  2 +-
 job.c                     |  6 +++---
 tests/test-bdrv-drain.c   |  7 ++++---
 tests/test-blockjob-txn.c | 16 ++++++++--------
 tests/test-blockjob.c     |  7 ++++---
 10 files changed, 43 insertions(+), 34 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 8630d32..5d47781 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -480,9 +480,9 @@ static void backup_incremental_init_copy_bitmap(BackupBlockJob *job)
     bdrv_dirty_iter_free(dbi);
 }

-static void coroutine_fn backup_run(void *opaque)
+static int coroutine_fn backup_run(Job *opaque_job, Error **errp)
 {
-    BackupBlockJob *job = opaque;
+    BackupBlockJob *job = container_of(opaque_job, BackupBlockJob, common.job);
     BackupCompleteData *data;
     BlockDriverState *bs = blk_bs(job->common.blk);
     int64_t offset, nb_clusters;
@@ -587,6 +587,7 @@ static void coroutine_fn backup_run(void *opaque)
     data = g_malloc(sizeof(*data));
     data->ret = ret;
     job_defer_to_main_loop(&job->common.job, backup_complete, data);
+    return ret;
 }

 static const BlockJobDriver backup_job_driver = {
@@ -596,7 +597,7 @@ static const BlockJobDriver backup_job_driver = {
         .free = block_job_free,
         .user_resume = block_job_user_resume,
         .drain = block_job_drain,
-        .start = backup_run,
+        .run = backup_run,
         .commit = backup_commit,
         .abort = backup_abort,
         .clean = backup_clean,
diff --git a/block/commit.c b/block/commit.c
index e1814d9..905a1c5 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -134,9 +134,9 @@ static void commit_complete(Job *job, void *opaque)
     bdrv_unref(top);
 }

-static void coroutine_fn commit_run(void *opaque)
+static int coroutine_fn commit_run(Job *job, Error **errp)
 {
-    CommitBlockJob *s = opaque;
+    CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);
     CommitCompleteData *data;
     int64_t offset;
     uint64_t delay_ns = 0;
@@ -213,6 +213,7 @@ out:
     data = g_malloc(sizeof(*data));
     data->ret = ret;
     job_defer_to_main_loop(&s->common.job, commit_complete, data);
+    return ret;
 }

 static const BlockJobDriver commit_job_driver = {
@@ -222,7 +223,7 @@ static const BlockJobDriver commit_job_driver = {
         .free = block_job_free,
         .user_resume = block_job_user_resume,
         .drain = block_job_drain,
-        .start = commit_run,
+        .run = commit_run,
     },
 };

diff --git a/block/create.c b/block/create.c
index 915cd41..04733c3 100644
--- a/block/create.c
+++ b/block/create.c
@@ -45,9 +45,9 @@ static void blockdev_create_complete(Job *job, void *opaque)
     job_completed(job, s->ret, s->err);
 }

-static void coroutine_fn blockdev_create_run(void *opaque)
+static int coroutine_fn blockdev_create_run(Job *job, Error **errp)
 {
-    BlockdevCreateJob *s = opaque;
+    BlockdevCreateJob *s = container_of(job, BlockdevCreateJob, common);

     job_progress_set_remaining(&s->common, 1);
     s->ret = s->drv->bdrv_co_create(s->opts, &s->err);
@@ -55,12 +55,14 @@ static void coroutine_fn blockdev_create_run(void *opaque)

     qapi_free_BlockdevCreateOptions(s->opts);
     job_defer_to_main_loop(&s->common, blockdev_create_complete, NULL);
+
+    return s->ret;
 }

 static const JobDriver blockdev_create_job_driver = {
     .instance_size = sizeof(BlockdevCreateJob),
     .job_type      = JOB_TYPE_CREATE,
-    .start         = blockdev_create_run,
+    .run           = blockdev_create_run,
 };

 void qmp_blockdev_create(const char *job_id, BlockdevCreateOptions *options,
diff --git a/block/mirror.c b/block/mirror.c
index b48c3f8..b3363e9 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -812,9 +812,9 @@ static int mirror_flush(MirrorBlockJob *s)
     return ret;
 }

-static void coroutine_fn mirror_run(void *opaque)
+static int coroutine_fn mirror_run(Job *job, Error **errp)
 {
-    MirrorBlockJob *s = opaque;
+    MirrorBlockJob *s = container_of(job, MirrorBlockJob, common.job);
     MirrorExitData *data;
     BlockDriverState *bs = s->mirror_top_bs->backing->bs;
     BlockDriverState *target_bs = blk_bs(s->target);
@@ -1041,7 +1041,9 @@ immediate_exit:
     if (need_drain) {
         bdrv_drained_begin(bs);
     }
+
     job_defer_to_main_loop(&s->common.job, mirror_exit, data);
+    return ret;
 }

 static void mirror_complete(Job *job, Error **errp)
@@ -1138,7 +1140,7 @@ static const BlockJobDriver mirror_job_driver = {
         .free = block_job_free,
         .user_resume = block_job_user_resume,
         .drain = block_job_drain,
-        .start = mirror_run,
+        .run = mirror_run,
         .pause = mirror_pause,
         .complete = mirror_complete,
     },
@@ -1154,7 +1156,7 @@ static const BlockJobDriver commit_active_job_driver = {
         .free = block_job_free,
         .user_resume = block_job_user_resume,
         .drain = block_job_drain,
-        .start = mirror_run,
+        .run = mirror_run,
         .pause = mirror_pause,
         .complete = mirror_complete,
     },
diff --git a/block/stream.c b/block/stream.c
index 9264b68..b4b987d 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -97,9 +97,9 @@ out:
     g_free(data);
 }

-static void coroutine_fn stream_run(void *opaque)
+static int coroutine_fn stream_run(Job *job, Error **errp)
 {
-    StreamBlockJob *s = opaque;
+    StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
     StreamCompleteData *data;
     BlockBackend *blk = s->common.blk;
     BlockDriverState *bs = blk_bs(blk);
@@ -206,6 +206,7 @@ out:
     data = g_malloc(sizeof(*data));
     data->ret = ret;
     job_defer_to_main_loop(&s->common.job, stream_complete, data);
+    return ret;
 }

 static const BlockJobDriver stream_job_driver = {
@@ -213,7 +214,7 @@ static const BlockJobDriver stream_job_driver = {
         .instance_size = sizeof(StreamBlockJob),
         .job_type = JOB_TYPE_STREAM,
         .free = block_job_free,
-        .start = stream_run,
+        .run = stream_run,
         .user_resume = block_job_user_resume,
         .drain = block_job_drain,
     },
diff --git a/include/qemu/job.h b/include/qemu/job.h
index 18c9223..9cf463d 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -169,7 +169,7 @@ struct JobDriver {
     JobType job_type;

     /** Mandatory: Entrypoint for the Coroutine. */
-    CoroutineEntry *start;
+    int coroutine_fn (*run)(Job *job, Error **errp);

     /**
      * If the callback is not NULL, it will be invoked when the job transitions
diff --git a/job.c b/job.c
index fa671b4..898260b 100644
--- a/job.c
+++ b/job.c
@@ -544,16 +544,16 @@ static void coroutine_fn job_co_entry(void *opaque)
 {
     Job *job = opaque;

-    assert(job && job->driver && job->driver->start);
+    assert(job && job->driver && job->driver->run);
     job_pause_point(job);
-    job->driver->start(job);
+    job->ret = job->driver->run(job, NULL);
 }


 void job_start(Job *job)
 {
     assert(job && !job_started(job) && job->paused &&
-           job->driver && job->driver->start);
+           job->driver && job->driver->run);
     job->co = qemu_coroutine_create(job_co_entry, job);
     job->pause_count--;
     job->busy = true;
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index 17bb850..a753386 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -757,9 +757,9 @@ static void test_job_completed(Job *job, void *opaque)
     job_completed(job, 0, NULL);
 }

-static void coroutine_fn test_job_start(void *opaque)
+static int coroutine_fn test_job_run(Job *job, Error **errp)
 {
-    TestBlockJob *s = opaque;
+    TestBlockJob *s = container_of(job, TestBlockJob, common.job);

     job_transition_to_ready(&s->common.job);
     while (!s->should_complete) {
@@ -771,6 +771,7 @@ static void coroutine_fn test_job_start(void *opaque)
     }

     job_defer_to_main_loop(&s->common.job, test_job_completed, NULL);
+    return 0;
 }

 static void test_job_complete(Job *job, Error **errp)
@@ -785,7 +786,7 @@ BlockJobDriver test_job_driver = {
         .free = block_job_free,
         .user_resume = block_job_user_resume,
         .drain = block_job_drain,
-        .start = test_job_start,
+        .run = test_job_run,
         .complete = test_job_complete,
     },
 };
diff --git a/tests/test-blockjob-txn.c b/tests/test-blockjob-txn.c
index 58d9b87..3194924 100644
--- a/tests/test-blockjob-txn.c
+++ b/tests/test-blockjob-txn.c
@@ -38,25 +38,25 @@ static void test_block_job_complete(Job *job, void *opaque)
     bdrv_unref(bs);
 }

-static void coroutine_fn test_block_job_run(void *opaque)
+static int coroutine_fn test_block_job_run(Job *job, Error **errp)
 {
-    TestBlockJob *s = opaque;
-    BlockJob *job = &s->common;
+    TestBlockJob *s = container_of(job, TestBlockJob, common.job);

     while (s->iterations--) {
         if (s->use_timer) {
-            job_sleep_ns(&job->job, 0);
+            job_sleep_ns(job, 0);
         } else {
-            job_yield(&job->job);
+            job_yield(job);
         }

-        if (job_is_cancelled(&job->job)) {
+        if (job_is_cancelled(job)) {
             break;
         }
     }

-    job_defer_to_main_loop(&job->job, test_block_job_complete,
+    job_defer_to_main_loop(job, test_block_job_complete,
                            (void *)(intptr_t)s->rc);
+    return s->rc;
 }

 typedef struct {
@@ -80,7 +80,7 @@ static const BlockJobDriver test_block_job_driver = {
         .free = block_job_free,
         .user_resume = block_job_user_resume,
         .drain = block_job_drain,
-        .start = test_block_job_run,
+        .run = test_block_job_run,
     },
 };

diff --git a/tests/test-blockjob.c b/tests/test-blockjob.c
index cb42f06..b0462bf 100644
--- a/tests/test-blockjob.c
+++ b/tests/test-blockjob.c
@@ -176,9 +176,9 @@ static void cancel_job_complete(Job *job, Error **errp)
     s->should_complete = true;
 }

-static void coroutine_fn cancel_job_start(void *opaque)
+static int coroutine_fn cancel_job_run(Job *job, Error **errp)
 {
-    CancelJob *s = opaque;
+    CancelJob *s = container_of(job, CancelJob, common.job);

     while (!s->should_complete) {
         if (job_is_cancelled(&s->common.job)) {
@@ -194,6 +194,7 @@ static void coroutine_fn cancel_job_start(void *opaque)

 defer:
     job_defer_to_main_loop(&s->common.job, cancel_job_completed, s);
+    return 0;
 }

 static const BlockJobDriver test_cancel_driver = {
@@ -202,7 +203,7 @@ static const BlockJobDriver test_cancel_driver = {
         .free = block_job_free,
         .user_resume = block_job_user_resume,
         .drain = block_job_drain,
-        .start = cancel_job_start,
+        .run = cancel_job_run,
         .complete = cancel_job_complete,
     },
 };
--
1.8.3.1
283
0022-jobs-canonize-Error-object.patch
Normal file
@@ -0,0 +1,283 @@
From 9dff1ec5bdde5e8bd8745d2e0697cc6e28c87214 Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Wed, 29 Aug 2018 21:57:27 -0400
Subject: jobs: canonize Error object

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-3-jsnow@redhat.com>
Patchwork-id: 82262
O-Subject: [RHEL8/rhel qemu-kvm PATCH 02/25] jobs: canonize Error object
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Jobs presently use both an Error object in the case of the create job,
and char strings in the case of generic errors elsewhere.

Unify the two paths as just j->err, and remove the extra argument from
job_completed. The integer error code for job_completed is kept for now,
to be removed shortly in a separate patch.

Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20180830015734.19765-3-jsnow@redhat.com
[mreitz: Dropped a superfluous g_strdup()]
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 3d1f8b07a4c241f81949eff507d9f3a8fd73b87b)
Signed-off-by: John Snow <jsnow@redhat.com>
---
 block/backup.c            |  2 +-
 block/commit.c            |  2 +-
 block/create.c            |  5 ++---
 block/mirror.c            |  2 +-
 block/stream.c            |  2 +-
 include/qemu/job.h        | 14 ++++++++------
 job-qmp.c                 |  5 +++--
 job.c                     | 18 ++++++------------
 tests/test-bdrv-drain.c   |  2 +-
 tests/test-blockjob-txn.c |  2 +-
 tests/test-blockjob.c     |  2 +-
 11 files changed, 26 insertions(+), 30 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 5d47781..1e965d5 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -388,7 +388,7 @@ static void backup_complete(Job *job, void *opaque)
 {
     BackupCompleteData *data = opaque;

-    job_completed(job, data->ret, NULL);
+    job_completed(job, data->ret);
     g_free(data);
 }

diff --git a/block/commit.c b/block/commit.c
index 905a1c5..af7579d 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -117,7 +117,7 @@ static void commit_complete(Job *job, void *opaque)
      * bdrv_set_backing_hd() to fail. */
     block_job_remove_all_bdrv(bjob);

-    job_completed(job, ret, NULL);
+    job_completed(job, ret);
     g_free(data);

     /* If bdrv_drop_intermediate() didn't already do that, remove the commit
diff --git a/block/create.c b/block/create.c
index 04733c3..26a385c 100644
--- a/block/create.c
+++ b/block/create.c
@@ -35,14 +35,13 @@ typedef struct BlockdevCreateJob {
     BlockDriver *drv;
     BlockdevCreateOptions *opts;
     int ret;
-    Error *err;
 } BlockdevCreateJob;

 static void blockdev_create_complete(Job *job, void *opaque)
 {
     BlockdevCreateJob *s = container_of(job, BlockdevCreateJob, common);

-    job_completed(job, s->ret, s->err);
+    job_completed(job, s->ret);
 }

 static int coroutine_fn blockdev_create_run(Job *job, Error **errp)
@@ -50,7 +49,7 @@ static int coroutine_fn blockdev_create_run(Job *job, Error **errp)
     BlockdevCreateJob *s = container_of(job, BlockdevCreateJob, common);

     job_progress_set_remaining(&s->common, 1);
-    s->ret = s->drv->bdrv_co_create(s->opts, &s->err);
+    s->ret = s->drv->bdrv_co_create(s->opts, errp);
     job_progress_update(&s->common, 1);

     qapi_free_BlockdevCreateOptions(s->opts);
diff --git a/block/mirror.c b/block/mirror.c
index b3363e9..6637f2b 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -710,7 +710,7 @@ static void mirror_exit(Job *job, void *opaque)
     blk_insert_bs(bjob->blk, mirror_top_bs, &error_abort);

     bs_opaque->job = NULL;
-    job_completed(job, data->ret, NULL);
+    job_completed(job, data->ret);

     g_free(data);
     bdrv_drained_end(src);
diff --git a/block/stream.c b/block/stream.c
index b4b987d..26a7753 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -93,7 +93,7 @@ out:
     }

     g_free(s->backing_file_str);
-    job_completed(job, data->ret, NULL);
+    job_completed(job, data->ret);
     g_free(data);
 }

diff --git a/include/qemu/job.h b/include/qemu/job.h
index 9cf463d..e0e9987 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -124,12 +124,16 @@ typedef struct Job {
     /** Estimated progress_current value at the completion of the job */
     int64_t progress_total;

-    /** Error string for a failed job (NULL if, and only if, job->ret == 0) */
-    char *error;
-
     /** ret code passed to job_completed. */
     int ret;

+    /**
+     * Error object for a failed job.
+     * If job->ret is nonzero and an error object was not set, it will be set
+     * to strerror(-job->ret) during job_completed.
+     */
+    Error *err;
+
     /** The completion function that will be called when the job completes. */
     BlockCompletionFunc *cb;

@@ -484,15 +488,13 @@ void job_transition_to_ready(Job *job);
 /**
  * @job: The job being completed.
  * @ret: The status code.
- * @error: The error message for a failing job (only with @ret < 0). If @ret is
- *         negative, but NULL is given for @error, strerror() is used.
  *
  * Marks @job as completed. If @ret is non-zero, the job transaction it is part
  * of is aborted. If @ret is zero, the job moves into the WAITING state. If it
  * is the last job to complete in its transaction, all jobs in the transaction
  * move from WAITING to PENDING.
  */
-void job_completed(Job *job, int ret, Error *error);
+void job_completed(Job *job, int ret);

 /** Asynchronously complete the specified @job. */
 void job_complete(Job *job, Error **errp);
diff --git a/job-qmp.c b/job-qmp.c
index 410775d..a969b2b 100644
--- a/job-qmp.c
+++ b/job-qmp.c
@@ -146,8 +146,9 @@ static JobInfo *job_query_single(Job *job, Error **errp)
         .status = job->status,
         .current_progress = job->progress_current,
         .total_progress = job->progress_total,
-        .has_error = !!job->error,
-        .error = g_strdup(job->error),
+        .has_error = !!job->err,
+        .error = job->err ? \
+                 g_strdup(error_get_pretty(job->err)) : NULL,
     };

     return info;
diff --git a/job.c b/job.c
index 898260b..276024a 100644
--- a/job.c
+++ b/job.c
@@ -369,7 +369,7 @@ void job_unref(Job *job)

         QLIST_REMOVE(job, job_list);

-        g_free(job->error);
+        error_free(job->err);
         g_free(job->id);
         g_free(job);
     }
@@ -546,7 +546,7 @@ static void coroutine_fn job_co_entry(void *opaque)

     assert(job && job->driver && job->driver->run);
     job_pause_point(job);
-    job->ret = job->driver->run(job, NULL);
+    job->ret = job->driver->run(job, &job->err);
 }


@@ -666,8 +666,8 @@ static void job_update_rc(Job *job)
         job->ret = -ECANCELED;
     }
     if (job->ret) {
-        if (!job->error) {
-            job->error = g_strdup(strerror(-job->ret));
+        if (!job->err) {
+            error_setg(&job->err, "%s", strerror(-job->ret));
         }
         job_state_transition(job, JOB_STATUS_ABORTING);
     }
@@ -865,17 +865,11 @@ static void job_completed_txn_success(Job *job)
     }
 }

-void job_completed(Job *job, int ret, Error *error)
+void job_completed(Job *job, int ret)
 {
     assert(job && job->txn && !job_is_completed(job));

     job->ret = ret;
-    if (error) {
-        assert(job->ret < 0);
-        job->error = g_strdup(error_get_pretty(error));
-        error_free(error);
-    }
-
     job_update_rc(job);
     trace_job_completed(job, ret, job->ret);
     if (job->ret) {
@@ -893,7 +887,7 @@ void job_cancel(Job *job, bool force)
     }
     job_cancel_async(job, force);
     if (!job_started(job)) {
-        job_completed(job, -ECANCELED, NULL);
+        job_completed(job, -ECANCELED);
     } else if (job->deferred_to_main_loop) {
         job_completed_txn_abort(job);
     } else {
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index a753386..00604df 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -754,7 +754,7 @@ typedef struct TestBlockJob {

 static void test_job_completed(Job *job, void *opaque)
 {
-    job_completed(job, 0, NULL);
+    job_completed(job, 0);
 }

 static int coroutine_fn test_job_run(Job *job, Error **errp)
diff --git a/tests/test-blockjob-txn.c b/tests/test-blockjob-txn.c
index 3194924..82cedee 100644
--- a/tests/test-blockjob-txn.c
+++ b/tests/test-blockjob-txn.c
@@ -34,7 +34,7 @@ static void test_block_job_complete(Job *job, void *opaque)
         rc = -ECANCELED;
     }

-    job_completed(job, rc, NULL);
+    job_completed(job, rc);
     bdrv_unref(bs);
 }

diff --git a/tests/test-blockjob.c b/tests/test-blockjob.c
index b0462bf..408a226 100644
--- a/tests/test-blockjob.c
+++ b/tests/test-blockjob.c
@@ -167,7 +167,7 @@ static void cancel_job_completed(Job *job, void *opaque)
 {
     CancelJob *s = opaque;
     s->completed = true;
-    job_completed(job, 0, NULL);
+    job_completed(job, 0);
 }

 static void cancel_job_complete(Job *job, Error **errp)
--
1.8.3.1
108
0023-jobs-add-exit-shim.patch
Normal file
@@ -0,0 +1,108 @@
From 29ae3509885eaa6d24ee82aa4cae47ddeda086db Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:09 +0100
Subject: jobs: add exit shim

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-4-jsnow@redhat.com>
Patchwork-id: 82273
O-Subject: [RHEL8/rhel qemu-kvm PATCH 03/25] jobs: add exit shim
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

All jobs do the same thing when they leave their running loop:
- Store the return code in a structure
- wait to receive this structure in the main thread
- signal job completion via job_completed

Few jobs do anything beyond exactly this. Consolidate this exit
logic for a net reduction in SLOC.

More seriously, when we utilize job_defer_to_main_loop_bh to call
a function that calls job_completed, job_finalize_single will run
in a context where it has recursively taken the aio_context lock,
which can cause hangs if it puts down a reference that causes a flush.

You can observe this in practice by looking at mirror_exit's careful
placement of job_completed and bdrv_unref calls.

If we centralize job exiting, we can signal job completion from outside
of the aio_context, which should allow for job cleanup code to run with
only one lock, which makes cleanup callbacks less tricky to write.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180830015734.19765-4-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 00359a71d45a414ee47d8e423104dc0afd24ec65)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
include/qemu/job.h | 11 +++++++++++
job.c | 18 ++++++++++++++++++
2 files changed, 29 insertions(+)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index e0e9987..1144d67 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -209,6 +209,17 @@ struct JobDriver {
void (*drain)(Job *job);

/**
+ * If the callback is not NULL, exit will be invoked from the main thread
+ * when the job's coroutine has finished, but before transactional
+ * convergence; before @prepare or @abort.
+ *
+ * FIXME TODO: This callback is only temporary to transition remaining jobs
+ * to prepare/commit/abort/clean callbacks and will be removed before 3.1.
+ * is released.
+ */
+ void (*exit)(Job *job);
+
+ /**
* If the callback is not NULL, prepare will be invoked when all the jobs
* belonging to the same transaction complete; or upon this job's completion
* if it is not in a transaction.
diff --git a/job.c b/job.c
index 276024a..abe91af 100644
--- a/job.c
+++ b/job.c
@@ -535,6 +535,18 @@ void job_drain(Job *job)
}
}

+static void job_exit(void *opaque)
+{
+ Job *job = (Job *)opaque;
+ AioContext *aio_context = job->aio_context;
+
+ if (job->driver->exit) {
+ aio_context_acquire(aio_context);
+ job->driver->exit(job);
+ aio_context_release(aio_context);
+ }
+ job_completed(job, job->ret);
+}

/**
* All jobs must allow a pause point before entering their job proper. This
@@ -547,6 +559,12 @@ static void coroutine_fn job_co_entry(void *opaque)
assert(job && job->driver && job->driver->run);
job_pause_point(job);
job->ret = job->driver->run(job, &job->err);
+ if (!job->deferred_to_main_loop) {
+ job->deferred_to_main_loop = true;
+ aio_bh_schedule_oneshot(qemu_get_aio_context(),
+ job_exit,
+ job);
+ }
}


--
1.8.3.1

115
0024-block-commit-utilize-job_exit-shim.patch
Normal file
@@ -0,0 +1,115 @@
From 2207ab7e71d5d3c3806d60b3f483988a62566292 Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:10 +0100
Subject: block/commit: utilize job_exit shim

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-5-jsnow@redhat.com>
Patchwork-id: 82265
O-Subject: [RHEL8/rhel qemu-kvm PATCH 04/25] block/commit: utilize job_exit shim
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Change the manual deferment to commit_complete into the implicit
callback to job_exit, renaming commit_complete to commit_exit.

This conversion does change the timing of when job_completed is
called to after the bdrv_replace_node and bdrv_unref calls, which
could have implications for bjob->blk which will now be put down
after this cleanup.

Kevin highlights that we did not take any permissions for that backend
at job creation time, so it is safe to reorder these operations.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180830015734.19765-5-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit f369b48dc4095861223f9bc4329935599e03b1c5)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
block/commit.c | 22 +++++-----------------
1 file changed, 5 insertions(+), 17 deletions(-)

diff --git a/block/commit.c b/block/commit.c
index af7579d..25b3cb8 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -68,19 +68,13 @@ static int coroutine_fn commit_populate(BlockBackend *bs, BlockBackend *base,
return 0;
}

-typedef struct {
- int ret;
-} CommitCompleteData;
-
-static void commit_complete(Job *job, void *opaque)
+static void commit_exit(Job *job)
{
CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);
BlockJob *bjob = &s->common;
- CommitCompleteData *data = opaque;
BlockDriverState *top = blk_bs(s->top);
BlockDriverState *base = blk_bs(s->base);
BlockDriverState *commit_top_bs = s->commit_top_bs;
- int ret = data->ret;
bool remove_commit_top_bs = false;

/* Make sure commit_top_bs and top stay around until bdrv_replace_node() */
@@ -91,10 +85,10 @@ static void commit_complete(Job *job, void *opaque)
* the normal backing chain can be restored. */
blk_unref(s->base);

- if (!job_is_cancelled(job) && ret == 0) {
+ if (!job_is_cancelled(job) && job->ret == 0) {
/* success */
- ret = bdrv_drop_intermediate(s->commit_top_bs, base,
- s->backing_file_str);
+ job->ret = bdrv_drop_intermediate(s->commit_top_bs, base,
+ s->backing_file_str);
} else {
/* XXX Can (or should) we somehow keep 'consistent read' blocked even
* after the failed/cancelled commit job is gone? If we already wrote
@@ -117,9 +111,6 @@ static void commit_complete(Job *job, void *opaque)
* bdrv_set_backing_hd() to fail. */
block_job_remove_all_bdrv(bjob);

- job_completed(job, ret);
- g_free(data);
-
/* If bdrv_drop_intermediate() didn't already do that, remove the commit
* filter driver from the backing chain. Do this as the final step so that
* the 'consistent read' permission can be granted. */
@@ -137,7 +128,6 @@ static void commit_complete(Job *job, void *opaque)
static int coroutine_fn commit_run(Job *job, Error **errp)
{
CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);
- CommitCompleteData *data;
int64_t offset;
uint64_t delay_ns = 0;
int ret = 0;
@@ -210,9 +200,6 @@ static int coroutine_fn commit_run(Job *job, Error **errp)
out:
qemu_vfree(buf);

- data = g_malloc(sizeof(*data));
- data->ret = ret;
- job_defer_to_main_loop(&s->common.job, commit_complete, data);
return ret;
}

@@ -224,6 +211,7 @@ static const BlockJobDriver commit_job_driver = {
.user_resume = block_job_user_resume,
.drain = block_job_drain,
.run = commit_run,
+ .exit = commit_exit,
},
};

--
1.8.3.1

152
0025-block-mirror-utilize-job_exit-shim.patch
Normal file
@@ -0,0 +1,152 @@
From f96869810df10ac28030a31d8cb1e39825133e94 Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Wed, 29 Aug 2018 21:57:30 -0400
Subject: block/mirror: utilize job_exit shim

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-6-jsnow@redhat.com>
Patchwork-id: 82269
O-Subject: [RHEL8/rhel qemu-kvm PATCH 05/25] block/mirror: utilize job_exit
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Change the manual deferment to mirror_exit into the implicit
callback to job_exit and the mirror_exit callback.

This does change the order of some bdrv_unref calls and job_completed,
but thanks to the new context in which we call .exit, this is safe to
defer the possible flushing of any nodes to the job_finalize_single
cleanup stage.

Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20180830015734.19765-6-jsnow@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 7b508f6b7a38a8d9729772fa6e525da883fb120b)
Signed-off-by: John Snow <jsnow@redhat.com>
---
block/mirror.c | 29 +++++++++++------------------
1 file changed, 11 insertions(+), 18 deletions(-)

diff --git a/block/mirror.c b/block/mirror.c
index 6637f2b..4a9558d 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -607,26 +607,22 @@ static void mirror_wait_for_all_io(MirrorBlockJob *s)
}
}

-typedef struct {
- int ret;
-} MirrorExitData;
-
-static void mirror_exit(Job *job, void *opaque)
+static void mirror_exit(Job *job)
{
MirrorBlockJob *s = container_of(job, MirrorBlockJob, common.job);
BlockJob *bjob = &s->common;
- MirrorExitData *data = opaque;
MirrorBDSOpaque *bs_opaque = s->mirror_top_bs->opaque;
AioContext *replace_aio_context = NULL;
BlockDriverState *src = s->mirror_top_bs->backing->bs;
BlockDriverState *target_bs = blk_bs(s->target);
BlockDriverState *mirror_top_bs = s->mirror_top_bs;
Error *local_err = NULL;
+ int ret = job->ret;

bdrv_release_dirty_bitmap(src, s->dirty_bitmap);

- /* Make sure that the source BDS doesn't go away before we called
- * job_completed(). */
+ /* Make sure that the source BDS doesn't go away during bdrv_replace_node,
+ * before we can call bdrv_drained_end */
bdrv_ref(src);
bdrv_ref(mirror_top_bs);
bdrv_ref(target_bs);
@@ -652,7 +648,7 @@ static void mirror_exit(Job *job, void *opaque)
bdrv_set_backing_hd(target_bs, backing, &local_err);
if (local_err) {
error_report_err(local_err);
- data->ret = -EPERM;
+ ret = -EPERM;
}
}
}
@@ -662,7 +658,7 @@ static void mirror_exit(Job *job, void *opaque)
aio_context_acquire(replace_aio_context);
}

- if (s->should_complete && data->ret == 0) {
+ if (s->should_complete && ret == 0) {
BlockDriverState *to_replace = src;
if (s->to_replace) {
to_replace = s->to_replace;
@@ -679,7 +675,7 @@ static void mirror_exit(Job *job, void *opaque)
bdrv_drained_end(target_bs);
if (local_err) {
error_report_err(local_err);
- data->ret = -EPERM;
+ ret = -EPERM;
}
}
if (s->to_replace) {
@@ -710,12 +706,12 @@ static void mirror_exit(Job *job, void *opaque)
blk_insert_bs(bjob->blk, mirror_top_bs, &error_abort);

bs_opaque->job = NULL;
- job_completed(job, data->ret);

- g_free(data);
bdrv_drained_end(src);
bdrv_unref(mirror_top_bs);
bdrv_unref(src);
+
+ job->ret = ret;
}

static void mirror_throttle(MirrorBlockJob *s)
@@ -815,7 +811,6 @@ static int mirror_flush(MirrorBlockJob *s)
static int coroutine_fn mirror_run(Job *job, Error **errp)
{
MirrorBlockJob *s = container_of(job, MirrorBlockJob, common.job);
- MirrorExitData *data;
BlockDriverState *bs = s->mirror_top_bs->backing->bs;
BlockDriverState *target_bs = blk_bs(s->target);
bool need_drain = true;
@@ -1035,14 +1030,10 @@ immediate_exit:
g_free(s->in_flight_bitmap);
bdrv_dirty_iter_free(s->dbi);

- data = g_malloc(sizeof(*data));
- data->ret = ret;
-
if (need_drain) {
bdrv_drained_begin(bs);
}

- job_defer_to_main_loop(&s->common.job, mirror_exit, data);
return ret;
}

@@ -1141,6 +1132,7 @@ static const BlockJobDriver mirror_job_driver = {
.user_resume = block_job_user_resume,
.drain = block_job_drain,
.run = mirror_run,
+ .exit = mirror_exit,
.pause = mirror_pause,
.complete = mirror_complete,
},
@@ -1157,6 +1149,7 @@ static const BlockJobDriver commit_active_job_driver = {
.user_resume = block_job_user_resume,
.drain = block_job_drain,
.run = mirror_run,
+ .exit = mirror_exit,
.pause = mirror_pause,
.complete = mirror_complete,
},
--
1.8.3.1

307
0026-jobs-utilize-job_exit-shim.patch
Normal file
@@ -0,0 +1,307 @@
From 5947e8781d9dffb069fcc570402f775f80068e63 Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:12 +0100
Subject: jobs: utilize job_exit shim

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-7-jsnow@redhat.com>
Patchwork-id: 82267
O-Subject: [RHEL8/rhel qemu-kvm PATCH 06/25] jobs: utilize job_exit shim
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Utilize the job_exit shim by not calling job_defer_to_main_loop, and
where applicable, converting the deferred callback into the job_exit
callback.

This converts backup, stream, create, and the unit tests all at once.
Most of these jobs do not see any changes to the order in which they
clean up their resources, except the test-blockjob-txn test, which
now puts down its bs before job_completed is called.

This is safe for the same reason the reordering in the mirror job is
safe, because job_completed no longer runs under two locks, making
the unref safe even if it causes a flush.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180830015734.19765-7-jsnow@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit eb23654dbe43b549ea2a9ebff9d8edf544d34a73)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
block/backup.c | 16 ----------------
block/create.c | 14 +++-----------
block/stream.c | 22 +++++++---------------
tests/test-bdrv-drain.c | 6 ------
tests/test-blockjob-txn.c | 11 ++---------
tests/test-blockjob.c | 10 ++++------
6 files changed, 16 insertions(+), 63 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index 1e965d5..a67b7fa 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -380,18 +380,6 @@ static BlockErrorAction backup_error_action(BackupBlockJob *job,
}
}

-typedef struct {
- int ret;
-} BackupCompleteData;
-
-static void backup_complete(Job *job, void *opaque)
-{
- BackupCompleteData *data = opaque;
-
- job_completed(job, data->ret);
- g_free(data);
-}
-
static bool coroutine_fn yield_and_check(BackupBlockJob *job)
{
uint64_t delay_ns;
@@ -483,7 +471,6 @@ static void backup_incremental_init_copy_bitmap(BackupBlockJob *job)
static int coroutine_fn backup_run(Job *opaque_job, Error **errp)
{
BackupBlockJob *job = container_of(opaque_job, BackupBlockJob, common.job);
- BackupCompleteData *data;
BlockDriverState *bs = blk_bs(job->common.blk);
int64_t offset, nb_clusters;
int ret = 0;
@@ -584,9 +571,6 @@ static int coroutine_fn backup_run(Job *opaque_job, Error **errp)
qemu_co_rwlock_unlock(&job->flush_rwlock);
hbitmap_free(job->copy_bitmap);

- data = g_malloc(sizeof(*data));
- data->ret = ret;
- job_defer_to_main_loop(&job->common.job, backup_complete, data);
return ret;
}

diff --git a/block/create.c b/block/create.c
index 26a385c..9534121 100644
--- a/block/create.c
+++ b/block/create.c
@@ -34,28 +34,20 @@ typedef struct BlockdevCreateJob {
Job common;
BlockDriver *drv;
BlockdevCreateOptions *opts;
- int ret;
} BlockdevCreateJob;

-static void blockdev_create_complete(Job *job, void *opaque)
-{
- BlockdevCreateJob *s = container_of(job, BlockdevCreateJob, common);
-
- job_completed(job, s->ret);
-}
-
static int coroutine_fn blockdev_create_run(Job *job, Error **errp)
{
BlockdevCreateJob *s = container_of(job, BlockdevCreateJob, common);
+ int ret;

job_progress_set_remaining(&s->common, 1);
- s->ret = s->drv->bdrv_co_create(s->opts, errp);
+ ret = s->drv->bdrv_co_create(s->opts, errp);
job_progress_update(&s->common, 1);

qapi_free_BlockdevCreateOptions(s->opts);
- job_defer_to_main_loop(&s->common, blockdev_create_complete, NULL);

- return s->ret;
+ return ret;
}

static const JobDriver blockdev_create_job_driver = {
diff --git a/block/stream.c b/block/stream.c
index 26a7753..67e1e72 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -54,20 +54,16 @@ static int coroutine_fn stream_populate(BlockBackend *blk,
return blk_co_preadv(blk, offset, qiov.size, &qiov, BDRV_REQ_COPY_ON_READ);
}

-typedef struct {
- int ret;
-} StreamCompleteData;
-
-static void stream_complete(Job *job, void *opaque)
+static void stream_exit(Job *job)
{
StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
BlockJob *bjob = &s->common;
- StreamCompleteData *data = opaque;
BlockDriverState *bs = blk_bs(bjob->blk);
BlockDriverState *base = s->base;
Error *local_err = NULL;
+ int ret = job->ret;

- if (!job_is_cancelled(job) && bs->backing && data->ret == 0) {
+ if (!job_is_cancelled(job) && bs->backing && ret == 0) {
const char *base_id = NULL, *base_fmt = NULL;
if (base) {
base_id = s->backing_file_str;
@@ -75,11 +71,11 @@ static void stream_complete(Job *job, void *opaque)
base_fmt = base->drv->format_name;
}
}
- data->ret = bdrv_change_backing_file(bs, base_id, base_fmt);
+ ret = bdrv_change_backing_file(bs, base_id, base_fmt);
bdrv_set_backing_hd(bs, base, &local_err);
if (local_err) {
error_report_err(local_err);
- data->ret = -EPERM;
+ ret = -EPERM;
goto out;
}
}
@@ -93,14 +89,12 @@ out:
}

g_free(s->backing_file_str);
- job_completed(job, data->ret);
- g_free(data);
+ job->ret = ret;
}

static int coroutine_fn stream_run(Job *job, Error **errp)
{
StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
- StreamCompleteData *data;
BlockBackend *blk = s->common.blk;
BlockDriverState *bs = blk_bs(blk);
BlockDriverState *base = s->base;
@@ -203,9 +197,6 @@ static int coroutine_fn stream_run(Job *job, Error **errp)

out:
/* Modify backing chain and close BDSes in main loop */
- data = g_malloc(sizeof(*data));
- data->ret = ret;
- job_defer_to_main_loop(&s->common.job, stream_complete, data);
return ret;
}

@@ -215,6 +206,7 @@ static const BlockJobDriver stream_job_driver = {
.job_type = JOB_TYPE_STREAM,
.free = block_job_free,
.run = stream_run,
+ .exit = stream_exit,
.user_resume = block_job_user_resume,
.drain = block_job_drain,
},
diff --git a/tests/test-bdrv-drain.c b/tests/test-bdrv-drain.c
index 00604df..9bcb3c7 100644
--- a/tests/test-bdrv-drain.c
+++ b/tests/test-bdrv-drain.c
@@ -752,11 +752,6 @@ typedef struct TestBlockJob {
bool should_complete;
} TestBlockJob;

-static void test_job_completed(Job *job, void *opaque)
-{
- job_completed(job, 0);
-}
-
static int coroutine_fn test_job_run(Job *job, Error **errp)
{
TestBlockJob *s = container_of(job, TestBlockJob, common.job);
@@ -770,7 +765,6 @@ static int coroutine_fn test_job_run(Job *job, Error **errp)
job_pause_point(&s->common.job);
}

- job_defer_to_main_loop(&s->common.job, test_job_completed, NULL);
return 0;
}

diff --git a/tests/test-blockjob-txn.c b/tests/test-blockjob-txn.c
index 82cedee..ef29f35 100644
--- a/tests/test-blockjob-txn.c
+++ b/tests/test-blockjob-txn.c
@@ -24,17 +24,11 @@ typedef struct {
int *result;
} TestBlockJob;

-static void test_block_job_complete(Job *job, void *opaque)
+static void test_block_job_exit(Job *job)
{
BlockJob *bjob = container_of(job, BlockJob, job);
BlockDriverState *bs = blk_bs(bjob->blk);
- int rc = (intptr_t)opaque;

- if (job_is_cancelled(job)) {
- rc = -ECANCELED;
- }
-
- job_completed(job, rc);
bdrv_unref(bs);
}

@@ -54,8 +48,6 @@ static int coroutine_fn test_block_job_run(Job *job, Error **errp)
}
}

- job_defer_to_main_loop(job, test_block_job_complete,
- (void *)(intptr_t)s->rc);
return s->rc;
}

@@ -81,6 +73,7 @@ static const BlockJobDriver test_block_job_driver = {
.user_resume = block_job_user_resume,
.drain = block_job_drain,
.run = test_block_job_run,
+ .exit = test_block_job_exit,
},
};

diff --git a/tests/test-blockjob.c b/tests/test-blockjob.c
index 408a226..ad4a65b 100644
--- a/tests/test-blockjob.c
+++ b/tests/test-blockjob.c
@@ -163,11 +163,10 @@ typedef struct CancelJob {
bool completed;
} CancelJob;

-static void cancel_job_completed(Job *job, void *opaque)
+static void cancel_job_exit(Job *job)
{
- CancelJob *s = opaque;
+ CancelJob *s = container_of(job, CancelJob, common.job);
s->completed = true;
- job_completed(job, 0);
}

static void cancel_job_complete(Job *job, Error **errp)
@@ -182,7 +181,7 @@ static int coroutine_fn cancel_job_run(Job *job, Error **errp)

while (!s->should_complete) {
if (job_is_cancelled(&s->common.job)) {
- goto defer;
+ return 0;
}

if (!job_is_ready(&s->common.job) && s->should_converge) {
@@ -192,8 +191,6 @@ static int coroutine_fn cancel_job_run(Job *job, Error **errp)
job_sleep_ns(&s->common.job, 100000);
}

- defer:
- job_defer_to_main_loop(&s->common.job, cancel_job_completed, s);
return 0;
}

@@ -204,6 +201,7 @@ static const BlockJobDriver test_cancel_driver = {
.user_resume = block_job_user_resume,
.drain = block_job_drain,
.run = cancel_job_run,
+ .exit = cancel_job_exit,
.complete = cancel_job_complete,
},
};
--
1.8.3.1

165
0027-block-backup-make-function-variables-consistently-na.patch
Normal file
@@ -0,0 +1,165 @@
From 3e86b802541a7230eda88a6bd7f17b411deab9fa Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:13 +0100
Subject: block/backup: make function variables consistently named

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-8-jsnow@redhat.com>
Patchwork-id: 82272
O-Subject: [RHEL8/rhel qemu-kvm PATCH 07/25] block/backup: make function variables consistently named
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Rename opaque_job to job to be consistent with other job implementations.
Rename 'job', the BackupBlockJob object, to 's' to also be consistent.

Suggested-by: Eric Blake <eblake@redhat.com>
Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180830015734.19765-8-jsnow@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 6870277535493fea31761d8d11ec23add2de0fb0)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
block/backup.c | 62 +++++++++++++++++++++++++++++-----------------------------
1 file changed, 31 insertions(+), 31 deletions(-)

diff --git a/block/backup.c b/block/backup.c
index a67b7fa..4d084f6 100644
--- a/block/backup.c
+++ b/block/backup.c
@@ -468,59 +468,59 @@ static void backup_incremental_init_copy_bitmap(BackupBlockJob *job)
bdrv_dirty_iter_free(dbi);
}

-static int coroutine_fn backup_run(Job *opaque_job, Error **errp)
+static int coroutine_fn backup_run(Job *job, Error **errp)
{
- BackupBlockJob *job = container_of(opaque_job, BackupBlockJob, common.job);
- BlockDriverState *bs = blk_bs(job->common.blk);
+ BackupBlockJob *s = container_of(job, BackupBlockJob, common.job);
+ BlockDriverState *bs = blk_bs(s->common.blk);
int64_t offset, nb_clusters;
int ret = 0;

- QLIST_INIT(&job->inflight_reqs);
- qemu_co_rwlock_init(&job->flush_rwlock);
+ QLIST_INIT(&s->inflight_reqs);
+ qemu_co_rwlock_init(&s->flush_rwlock);

- nb_clusters = DIV_ROUND_UP(job->len, job->cluster_size);
- job_progress_set_remaining(&job->common.job, job->len);
+ nb_clusters = DIV_ROUND_UP(s->len, s->cluster_size);
+ job_progress_set_remaining(job, s->len);

- job->copy_bitmap = hbitmap_alloc(nb_clusters, 0);
- if (job->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
- backup_incremental_init_copy_bitmap(job);
+ s->copy_bitmap = hbitmap_alloc(nb_clusters, 0);
+ if (s->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
+ backup_incremental_init_copy_bitmap(s);
} else {
- hbitmap_set(job->copy_bitmap, 0, nb_clusters);
+ hbitmap_set(s->copy_bitmap, 0, nb_clusters);
}


- job->before_write.notify = backup_before_write_notify;
- bdrv_add_before_write_notifier(bs, &job->before_write);
+ s->before_write.notify = backup_before_write_notify;
+ bdrv_add_before_write_notifier(bs, &s->before_write);

- if (job->sync_mode == MIRROR_SYNC_MODE_NONE) {
+ if (s->sync_mode == MIRROR_SYNC_MODE_NONE) {
/* All bits are set in copy_bitmap to allow any cluster to be copied.
* This does not actually require them to be copied. */
- while (!job_is_cancelled(&job->common.job)) {
+ while (!job_is_cancelled(job)) {
/* Yield until the job is cancelled. We just let our before_write
* notify callback service CoW requests. */
- job_yield(&job->common.job);
+ job_yield(job);
}
- } else if (job->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
- ret = backup_run_incremental(job);
+ } else if (s->sync_mode == MIRROR_SYNC_MODE_INCREMENTAL) {
+ ret = backup_run_incremental(s);
} else {
/* Both FULL and TOP SYNC_MODE's require copying.. */
- for (offset = 0; offset < job->len;
- offset += job->cluster_size) {
+ for (offset = 0; offset < s->len;
+ offset += s->cluster_size) {
bool error_is_read;
int alloced = 0;

- if (yield_and_check(job)) {
+ if (yield_and_check(s)) {
break;
}

- if (job->sync_mode == MIRROR_SYNC_MODE_TOP) {
+ if (s->sync_mode == MIRROR_SYNC_MODE_TOP) {
int i;
int64_t n;

/* Check to see if these blocks are already in the
* backing file. */

- for (i = 0; i < job->cluster_size;) {
+ for (i = 0; i < s->cluster_size;) {
/* bdrv_is_allocated() only returns true/false based
* on the first set of sectors it comes across that
* are are all in the same state.
@@ -529,7 +529,7 @@ static int coroutine_fn backup_run(Job *opaque_job, Error **errp)
* needed but at some point that is always the case. */
alloced =
bdrv_is_allocated(bs, offset + i,
- job->cluster_size - i, &n);
+ s->cluster_size - i, &n);
i += n;

if (alloced || n == 0) {
@@ -547,29 +547,29 @@ static int coroutine_fn backup_run(Job *opaque_job, Error **errp)
if (alloced < 0) {
ret = alloced;
} else {
- ret = backup_do_cow(job, offset, job->cluster_size,
+ ret = backup_do_cow(s, offset, s->cluster_size,
&error_is_read, false);
}
if (ret < 0) {
/* Depending on error action, fail now or retry cluster */
BlockErrorAction action =
- backup_error_action(job, error_is_read, -ret);
+ backup_error_action(s, error_is_read, -ret);
if (action == BLOCK_ERROR_ACTION_REPORT) {
break;
} else {
- offset -= job->cluster_size;
+ offset -= s->cluster_size;
continue;
}
}
}
}

- notifier_with_return_remove(&job->before_write);
+ notifier_with_return_remove(&s->before_write);

/* wait until pending backup_do_cow() calls have completed */
- qemu_co_rwlock_wrlock(&job->flush_rwlock);
- qemu_co_rwlock_unlock(&job->flush_rwlock);
- hbitmap_free(job->copy_bitmap);
+ qemu_co_rwlock_wrlock(&s->flush_rwlock);
+ qemu_co_rwlock_unlock(&s->flush_rwlock);
+ hbitmap_free(s->copy_bitmap);

return ret;
}
--
1.8.3.1

153
0028-jobs-remove-ret-argument-to-job_completed-privatize-.patch
Normal file
@@ -0,0 +1,153 @@
From 3141614c15fbcf6aee7af19069380aa6d186656b Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:14 +0100
Subject: jobs: remove ret argument to job_completed; privatize it

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-9-jsnow@redhat.com>
Patchwork-id: 82271
O-Subject: [RHEL8/rhel qemu-kvm PATCH 08/25] jobs: remove ret argument to job_completed; privatize it
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Jobs are now expected to return their retcode on the stack, from the
.run callback, so we can remove that argument.

job_cancel does not need to set -ECANCELED because job_completed will
update the return code itself if the job was canceled.

While we're here, make job_completed static to job.c and remove it from
job.h; move the documentation of return code to the .run() callback and
to the job->ret property, accordingly.

Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20180830015734.19765-9-jsnow@redhat.com
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 404ff28d6ae59fc1c24d631710d4063fc68aed03)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 include/qemu/job.h | 28 +++++++++++++++-------------
 job.c              | 11 ++++++-----
 trace-events       |  2 +-
 3 files changed, 22 insertions(+), 19 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index 1144d67..23395c1 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -124,7 +124,11 @@ typedef struct Job {
     /** Estimated progress_current value at the completion of the job */
     int64_t progress_total;
 
-    /** ret code passed to job_completed. */
+    /**
+     * Return code from @run and/or @prepare callback(s).
+     * Not final until the job has reached the CONCLUDED status.
+     * 0 on success, -errno on failure.
+     */
     int ret;
 
     /**
@@ -172,7 +176,16 @@ struct JobDriver {
     /** Enum describing the operation */
     JobType job_type;
 
-    /** Mandatory: Entrypoint for the Coroutine. */
+    /**
+     * Mandatory: Entrypoint for the Coroutine.
+     *
+     * This callback will be invoked when moving from CREATED to RUNNING.
+     *
+     * If this callback returns nonzero, the job transaction it is part of is
+     * aborted. If it returns zero, the job moves into the WAITING state. If it
+     * is the last job to complete in its transaction, all jobs in the
+     * transaction move from WAITING to PENDING.
+     */
     int coroutine_fn (*run)(Job *job, Error **errp);
 
     /**
@@ -496,17 +509,6 @@ void job_early_fail(Job *job);
 /** Moves the @job from RUNNING to READY */
 void job_transition_to_ready(Job *job);
 
-/**
- * @job: The job being completed.
- * @ret: The status code.
- *
- * Marks @job as completed. If @ret is non-zero, the job transaction it is part
- * of is aborted. If @ret is zero, the job moves into the WAITING state. If it
- * is the last job to complete in its transaction, all jobs in the transaction
- * move from WAITING to PENDING.
- */
-void job_completed(Job *job, int ret);
-
 /** Asynchronously complete the specified @job. */
 void job_complete(Job *job, Error **errp);
 
diff --git a/job.c b/job.c
index abe91af..61e091a 100644
--- a/job.c
+++ b/job.c
@@ -535,6 +535,8 @@ void job_drain(Job *job)
     }
 }
 
+static void job_completed(Job *job);
+
 static void job_exit(void *opaque)
 {
     Job *job = (Job *)opaque;
@@ -545,7 +547,7 @@ static void job_exit(void *opaque)
         job->driver->exit(job);
         aio_context_release(aio_context);
     }
-    job_completed(job, job->ret);
+    job_completed(job);
 }
 
 /**
@@ -883,13 +885,12 @@ static void job_completed_txn_success(Job *job)
     }
 }
 
-void job_completed(Job *job, int ret)
+static void job_completed(Job *job)
 {
     assert(job && job->txn && !job_is_completed(job));
 
-    job->ret = ret;
     job_update_rc(job);
-    trace_job_completed(job, ret, job->ret);
+    trace_job_completed(job, job->ret);
     if (job->ret) {
         job_completed_txn_abort(job);
     } else {
@@ -905,7 +906,7 @@ void job_cancel(Job *job, bool force)
     }
     job_cancel_async(job, force);
     if (!job_started(job)) {
-        job_completed(job, -ECANCELED);
+        job_completed(job);
    } else if (job->deferred_to_main_loop) {
         job_completed_txn_abort(job);
     } else {
diff --git a/trace-events b/trace-events
index c445f54..4fd2cb4 100644
--- a/trace-events
+++ b/trace-events
@@ -107,7 +107,7 @@ gdbstub_err_checksum_incorrect(uint8_t expected, uint8_t got) "got command packe
 # job.c
 job_state_transition(void *job, int ret, const char *legal, const char *s0, const char *s1) "job %p (ret: %d) attempting %s transition (%s-->%s)"
 job_apply_verb(void *job, const char *state, const char *verb, const char *legal) "job %p in state %s; applying verb %s (%s)"
-job_completed(void *job, int ret, int jret) "job %p ret %d corrected ret %d"
+job_completed(void *job, int ret) "job %p ret %d"
 
 # job-qmp.c
 qmp_job_cancel(void *job) "job %p"
-- 
1.8.3.1

119
0029-jobs-remove-job_defer_to_main_loop.patch
Normal file
@@ -0,0 +1,119 @@
From 73694b41a7e96fb364bdfd6fbad89c69dc2d1f73 Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:15 +0100
Subject: jobs: remove job_defer_to_main_loop

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-10-jsnow@redhat.com>
Patchwork-id: 82275
O-Subject: [RHEL8/rhel qemu-kvm PATCH 09/25] jobs: remove job_defer_to_main_loop
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Now that the job infrastructure is handling the job_completed call for
all implemented jobs, we can remove the interface that allowed jobs to
schedule their own completion.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180830015734.19765-10-jsnow@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit e21a1c9831fc80ae3f3c1affdfa43350035d8588)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 include/qemu/job.h | 17 -----------------
 job.c              | 40 ++--------------------------------------
 2 files changed, 2 insertions(+), 55 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index 23395c1..e0cff70 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -568,23 +568,6 @@ void job_finalize(Job *job, Error **errp);
  */
 void job_dismiss(Job **job, Error **errp);
 
-typedef void JobDeferToMainLoopFn(Job *job, void *opaque);
-
-/**
- * @job: The job
- * @fn: The function to run in the main loop
- * @opaque: The opaque value that is passed to @fn
- *
- * This function must be called by the main job coroutine just before it
- * returns. @fn is executed in the main loop with the job AioContext acquired.
- *
- * Block jobs must call bdrv_unref(), bdrv_close(), and anything that uses
- * bdrv_drain_all() in the main loop.
- *
- * The @job AioContext is held while @fn executes.
- */
-void job_defer_to_main_loop(Job *job, JobDeferToMainLoopFn *fn, void *opaque);
-
 /**
  * Synchronously finishes the given @job. If @finish is given, it is called to
  * trigger completion or cancellation of the job.
diff --git a/job.c b/job.c
index 61e091a..e8d7aee 100644
--- a/job.c
+++ b/job.c
@@ -561,12 +561,8 @@ static void coroutine_fn job_co_entry(void *opaque)
     assert(job && job->driver && job->driver->run);
     job_pause_point(job);
     job->ret = job->driver->run(job, &job->err);
-    if (!job->deferred_to_main_loop) {
-        job->deferred_to_main_loop = true;
-        aio_bh_schedule_oneshot(qemu_get_aio_context(),
-                                job_exit,
-                                job);
-    }
+    job->deferred_to_main_loop = true;
+    aio_bh_schedule_oneshot(qemu_get_aio_context(), job_exit, job);
 }
 
 
@@ -969,38 +965,6 @@ void job_complete(Job *job, Error **errp)
     job->driver->complete(job, errp);
 }
 
-
-typedef struct {
-    Job *job;
-    JobDeferToMainLoopFn *fn;
-    void *opaque;
-} JobDeferToMainLoopData;
-
-static void job_defer_to_main_loop_bh(void *opaque)
-{
-    JobDeferToMainLoopData *data = opaque;
-    Job *job = data->job;
-    AioContext *aio_context = job->aio_context;
-
-    aio_context_acquire(aio_context);
-    data->fn(data->job, data->opaque);
-    aio_context_release(aio_context);
-
-    g_free(data);
-}
-
-void job_defer_to_main_loop(Job *job, JobDeferToMainLoopFn *fn, void *opaque)
-{
-    JobDeferToMainLoopData *data = g_malloc(sizeof(*data));
-    data->job = job;
-    data->fn = fn;
-    data->opaque = opaque;
-    job->deferred_to_main_loop = true;
-
-    aio_bh_schedule_oneshot(qemu_get_aio_context(),
-                            job_defer_to_main_loop_bh, data);
-}
-
 int job_finish_sync(Job *job, void (*finish)(Job *, Error **errp), Error **errp)
 {
     Error *local_err = NULL;
-- 
1.8.3.1

110
0030-block-commit-add-block-job-creation-flags.patch
Normal file
@@ -0,0 +1,110 @@
From 8141d5f8ab70551c59fae63373a9562c99c8e00d Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:16 +0100
Subject: block/commit: add block job creation flags

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-11-jsnow@redhat.com>
Patchwork-id: 82264
O-Subject: [RHEL8/rhel qemu-kvm PATCH 10/25] block/commit: add block job creation flags
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Add support for taking and passing forward job creation flags.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 20180906130225.5118-2-jsnow@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 5360782d0827854383097d560715d8d8027ee590)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 block/commit.c            | 5 +++--
 blockdev.c                | 7 ++++---
 include/block/block_int.h | 5 ++++-
 3 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/block/commit.c b/block/commit.c
index 25b3cb8..c737664 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -254,7 +254,8 @@ static BlockDriver bdrv_commit_top = {
 };
 
 void commit_start(const char *job_id, BlockDriverState *bs,
-                  BlockDriverState *base, BlockDriverState *top, int64_t speed,
+                  BlockDriverState *base, BlockDriverState *top,
+                  int creation_flags, int64_t speed,
                   BlockdevOnError on_error, const char *backing_file_str,
                   const char *filter_node_name, Error **errp)
 {
@@ -272,7 +273,7 @@ void commit_start(const char *job_id, BlockDriverState *bs,
     }
 
     s = block_job_create(job_id, &commit_job_driver, NULL, bs, 0, BLK_PERM_ALL,
-                         speed, JOB_DEFAULT, NULL, NULL, errp);
+                         speed, creation_flags, NULL, NULL, errp);
     if (!s) {
         return;
     }
diff --git a/blockdev.c b/blockdev.c
index dcf8c8d..88ad8d9 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3324,6 +3324,7 @@ void qmp_block_commit(bool has_job_id, const char *job_id, const char *device,
      * BlockdevOnError change for blkmirror makes it in
      */
     BlockdevOnError on_error = BLOCKDEV_ON_ERROR_REPORT;
+    int job_flags = JOB_DEFAULT;
 
     if (!has_speed) {
         speed = 0;
@@ -3405,15 +3406,15 @@ void qmp_block_commit(bool has_job_id, const char *job_id, const char *device,
             goto out;
         }
         commit_active_start(has_job_id ? job_id : NULL, bs, base_bs,
-                            JOB_DEFAULT, speed, on_error,
+                            job_flags, speed, on_error,
                             filter_node_name, NULL, NULL, false, &local_err);
     } else {
         BlockDriverState *overlay_bs = bdrv_find_overlay(bs, top_bs);
         if (bdrv_op_is_blocked(overlay_bs, BLOCK_OP_TYPE_COMMIT_TARGET, errp)) {
             goto out;
         }
-        commit_start(has_job_id ? job_id : NULL, bs, base_bs, top_bs, speed,
-                     on_error, has_backing_file ? backing_file : NULL,
+        commit_start(has_job_id ? job_id : NULL, bs, base_bs, top_bs, job_flags,
+                     speed, on_error, has_backing_file ? backing_file : NULL,
                      filter_node_name, &local_err);
     }
     if (local_err != NULL) {
diff --git a/include/block/block_int.h b/include/block/block_int.h
index 903b9c1..ffab0b4 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -980,6 +980,8 @@ void stream_start(const char *job_id, BlockDriverState *bs,
  * @bs: Active block device.
  * @top: Top block device to be committed.
  * @base: Block device that will be written into, and become the new top.
+ * @creation_flags: Flags that control the behavior of the Job lifetime.
+ *                  See @BlockJobCreateFlags
  * @speed: The maximum speed, in bytes per second, or 0 for unlimited.
  * @on_error: The action to take upon error.
  * @backing_file_str: String to use as the backing file in @top's overlay
@@ -990,7 +992,8 @@ void stream_start(const char *job_id, BlockDriverState *bs,
  *
  */
 void commit_start(const char *job_id, BlockDriverState *bs,
-                  BlockDriverState *base, BlockDriverState *top, int64_t speed,
+                  BlockDriverState *base, BlockDriverState *top,
+                  int creation_flags, int64_t speed,
                   BlockdevOnError on_error, const char *backing_file_str,
                   const char *filter_node_name, Error **errp);
 /**
-- 
1.8.3.1

100
0031-block-mirror-add-block-job-creation-flags.patch
Normal file
@@ -0,0 +1,100 @@
From 8ac0fb4e4202e6321d57f1be01f4ca6e51a98687 Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:17 +0100
Subject: block/mirror: add block job creation flags

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-12-jsnow@redhat.com>
Patchwork-id: 82268
O-Subject: [RHEL8/rhel qemu-kvm PATCH 11/25] block/mirror: add block job creation flags
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Add support for taking and passing forward job creation flags.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 20180906130225.5118-3-jsnow@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit a1999b33488daba68a1bcd7c6fdf314ddeacc6a2)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 block/mirror.c            | 5 +++--
 blockdev.c                | 3 ++-
 include/block/block_int.h | 5 ++++-
 3 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/block/mirror.c b/block/mirror.c
index 4a9558d..cd13835 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -1639,7 +1639,8 @@ fail:
 
 void mirror_start(const char *job_id, BlockDriverState *bs,
                   BlockDriverState *target, const char *replaces,
-                  int64_t speed, uint32_t granularity, int64_t buf_size,
+                  int creation_flags, int64_t speed,
+                  uint32_t granularity, int64_t buf_size,
                   MirrorSyncMode mode, BlockMirrorBackingMode backing_mode,
                   BlockdevOnError on_source_error,
                   BlockdevOnError on_target_error,
@@ -1655,7 +1656,7 @@ void mirror_start(const char *job_id, BlockDriverState *bs,
     }
     is_none_mode = mode == MIRROR_SYNC_MODE_NONE;
     base = mode == MIRROR_SYNC_MODE_TOP ? backing_bs(bs) : NULL;
-    mirror_start_job(job_id, bs, JOB_DEFAULT, target, replaces,
+    mirror_start_job(job_id, bs, creation_flags, target, replaces,
                      speed, granularity, buf_size, backing_mode,
                      on_source_error, on_target_error, unmap, NULL, NULL,
                      &mirror_job_driver, is_none_mode, base, false,
diff --git a/blockdev.c b/blockdev.c
index 88ad8d9..d31750b 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3700,6 +3700,7 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
                                    bool has_copy_mode, MirrorCopyMode copy_mode,
                                    Error **errp)
 {
+    int job_flags = JOB_DEFAULT;
 
     if (!has_speed) {
         speed = 0;
@@ -3752,7 +3753,7 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
      * and will allow to check whether the node still exist at mirror completion
      */
     mirror_start(job_id, bs, target,
-                 has_replaces ? replaces : NULL,
+                 has_replaces ? replaces : NULL, job_flags,
                  speed, granularity, buf_size, sync, backing_mode,
                  on_source_error, on_target_error, unmap, filter_node_name,
                  copy_mode, errp);
diff --git a/include/block/block_int.h b/include/block/block_int.h
index ffab0b4..b40f0bf 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -1029,6 +1029,8 @@ void commit_active_start(const char *job_id, BlockDriverState *bs,
  * @target: Block device to write to.
  * @replaces: Block graph node name to replace once the mirror is done. Can
  *            only be used when full mirroring is selected.
+ * @creation_flags: Flags that control the behavior of the Job lifetime.
+ *                  See @BlockJobCreateFlags
  * @speed: The maximum speed, in bytes per second, or 0 for unlimited.
  * @granularity: The chosen granularity for the dirty bitmap.
  * @buf_size: The amount of data that can be in flight at one time.
@@ -1050,7 +1052,8 @@ void commit_active_start(const char *job_id, BlockDriverState *bs,
  */
 void mirror_start(const char *job_id, BlockDriverState *bs,
                   BlockDriverState *target, const char *replaces,
-                  int64_t speed, uint32_t granularity, int64_t buf_size,
+                  int creation_flags, int64_t speed,
+                  uint32_t granularity, int64_t buf_size,
                   MirrorSyncMode mode, BlockMirrorBackingMode backing_mode,
                   BlockdevOnError on_source_error,
                   BlockdevOnError on_target_error,
-- 
1.8.3.1

100
0032-block-stream-add-block-job-creation-flags.patch
Normal file
@@ -0,0 +1,100 @@
From 64569465b360642820193586116aa51ed0b356bd Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:18 +0100
Subject: block/stream: add block job creation flags

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-13-jsnow@redhat.com>
Patchwork-id: 82263
O-Subject: [RHEL8/rhel qemu-kvm PATCH 12/25] block/stream: add block job creation flags
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Add support for taking and passing forward job creation flags.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Reviewed-by: Jeff Cody <jcody@redhat.com>
Message-id: 20180906130225.5118-4-jsnow@redhat.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit cf6320df581e6cbde6a95075266859a8f9ba9d55)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 block/stream.c            | 5 +++--
 blockdev.c                | 3 ++-
 include/block/block_int.h | 5 ++++-
 3 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/block/stream.c b/block/stream.c
index 67e1e72..700eb23 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -214,7 +214,8 @@ static const BlockJobDriver stream_job_driver = {
 
 void stream_start(const char *job_id, BlockDriverState *bs,
                   BlockDriverState *base, const char *backing_file_str,
-                  int64_t speed, BlockdevOnError on_error, Error **errp)
+                  int creation_flags, int64_t speed,
+                  BlockdevOnError on_error, Error **errp)
 {
     StreamBlockJob *s;
     BlockDriverState *iter;
@@ -236,7 +237,7 @@ void stream_start(const char *job_id, BlockDriverState *bs,
                          BLK_PERM_GRAPH_MOD,
                          BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE_UNCHANGED |
                          BLK_PERM_WRITE,
-                         speed, JOB_DEFAULT, NULL, NULL, errp);
+                         speed, creation_flags, NULL, NULL, errp);
     if (!s) {
         goto fail;
     }
diff --git a/blockdev.c b/blockdev.c
index d31750b..c2e6402 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3233,6 +3233,7 @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
     AioContext *aio_context;
     Error *local_err = NULL;
     const char *base_name = NULL;
+    int job_flags = JOB_DEFAULT;
 
     if (!has_on_error) {
         on_error = BLOCKDEV_ON_ERROR_REPORT;
@@ -3295,7 +3296,7 @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
     base_name = has_backing_file ? backing_file : base_name;
 
     stream_start(has_job_id ? job_id : NULL, bs, base_bs, base_name,
-                 has_speed ? speed : 0, on_error, &local_err);
+                 job_flags, has_speed ? speed : 0, on_error, &local_err);
     if (local_err) {
         error_propagate(errp, local_err);
         goto out;
diff --git a/include/block/block_int.h b/include/block/block_int.h
index b40f0bf..4000d2a 100644
--- a/include/block/block_int.h
+++ b/include/block/block_int.h
@@ -958,6 +958,8 @@ int is_windows_drive(const char *filename);
  * flatten the whole backing file chain onto @bs.
  * @backing_file_str: The file name that will be written to @bs as the
  * the new backing file if the job completes. Ignored if @base is %NULL.
+ * @creation_flags: Flags that control the behavior of the Job lifetime.
+ *                  See @BlockJobCreateFlags
  * @speed: The maximum speed, in bytes per second, or 0 for unlimited.
  * @on_error: The action to take upon error.
  * @errp: Error object.
@@ -971,7 +973,8 @@ int is_windows_drive(const char *filename);
  */
 void stream_start(const char *job_id, BlockDriverState *bs,
                   BlockDriverState *base, const char *backing_file_str,
-                  int64_t speed, BlockdevOnError on_error, Error **errp);
+                  int creation_flags, int64_t speed,
+                  BlockdevOnError on_error, Error **errp);
 
 /**
  * commit_start:
-- 
1.8.3.1

180
0033-block-commit-refactor-commit-to-use-job-callbacks.patch
Normal file
@@ -0,0 +1,180 @@
From b0ac95edde586e808a1118c4b04c1608de8b5b6c Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:19 +0100
Subject: block/commit: refactor commit to use job callbacks

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-14-jsnow@redhat.com>
Patchwork-id: 82279
O-Subject: [RHEL8/rhel qemu-kvm PATCH 13/25] block/commit: refactor commit to use job callbacks
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Use the component callbacks; prepare, abort, and clean.

NB: prepare is only called when the job has not yet failed;
and abort can be called after prepare.

complete -> prepare -> abort -> clean
complete -> abort -> clean

During refactor, a potential problem with bdrv_drop_intermediate
was identified, the patched behavior is no worse than the pre-patch
behavior, so leave a FIXME for now to be fixed in a future patch.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180906130225.5118-5-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 22dffcbec62ba918db690ed44beba4bd4e970bb9)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 block/commit.c | 92 ++++++++++++++++++++++++++++++++--------------------------
 1 file changed, 51 insertions(+), 41 deletions(-)

diff --git a/block/commit.c b/block/commit.c
index c737664..b387765 100644
--- a/block/commit.c
+++ b/block/commit.c
@@ -36,6 +36,7 @@ typedef struct CommitBlockJob {
     BlockDriverState *commit_top_bs;
     BlockBackend *top;
     BlockBackend *base;
+    BlockDriverState *base_bs;
     BlockdevOnError on_error;
     int base_flags;
     char *backing_file_str;
@@ -68,61 +69,67 @@ static int coroutine_fn commit_populate(BlockBackend *bs, BlockBackend *base,
     return 0;
 }
 
-static void commit_exit(Job *job)
+static int commit_prepare(Job *job)
 {
     CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);
-    BlockJob *bjob = &s->common;
-    BlockDriverState *top = blk_bs(s->top);
-    BlockDriverState *base = blk_bs(s->base);
-    BlockDriverState *commit_top_bs = s->commit_top_bs;
-    bool remove_commit_top_bs = false;
-
-    /* Make sure commit_top_bs and top stay around until bdrv_replace_node() */
-    bdrv_ref(top);
-    bdrv_ref(commit_top_bs);
 
     /* Remove base node parent that still uses BLK_PERM_WRITE/RESIZE before
      * the normal backing chain can be restored. */
     blk_unref(s->base);
+    s->base = NULL;
+
+    /* FIXME: bdrv_drop_intermediate treats total failures and partial failures
+     * identically. Further work is needed to disambiguate these cases. */
+    return bdrv_drop_intermediate(s->commit_top_bs, s->base_bs,
+                                  s->backing_file_str);
+}
 
-    if (!job_is_cancelled(job) && job->ret == 0) {
-        /* success */
-        job->ret = bdrv_drop_intermediate(s->commit_top_bs, base,
-                                          s->backing_file_str);
-    } else {
-        /* XXX Can (or should) we somehow keep 'consistent read' blocked even
-         * after the failed/cancelled commit job is gone? If we already wrote
-         * something to base, the intermediate images aren't valid any more. */
-        remove_commit_top_bs = true;
+static void commit_abort(Job *job)
+{
+    CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);
+    BlockDriverState *top_bs = blk_bs(s->top);
+
+    /* Make sure commit_top_bs and top stay around until bdrv_replace_node() */
+    bdrv_ref(top_bs);
+    bdrv_ref(s->commit_top_bs);
+
+    if (s->base) {
+        blk_unref(s->base);
     }
 
+    /* free the blockers on the intermediate nodes so that bdrv_replace_nodes
+     * can succeed */
+    block_job_remove_all_bdrv(&s->common);
+
+    /* If bdrv_drop_intermediate() failed (or was not invoked), remove the
+     * commit filter driver from the backing chain now. Do this as the final
+     * step so that the 'consistent read' permission can be granted.
+     *
+     * XXX Can (or should) we somehow keep 'consistent read' blocked even
+     * after the failed/cancelled commit job is gone? If we already wrote
+     * something to base, the intermediate images aren't valid any more. */
+    bdrv_child_try_set_perm(s->commit_top_bs->backing, 0, BLK_PERM_ALL,
+                            &error_abort);
+    bdrv_replace_node(s->commit_top_bs, backing_bs(s->commit_top_bs),
+                      &error_abort);
+
+    bdrv_unref(s->commit_top_bs);
+    bdrv_unref(top_bs);
+}
+
+static void commit_clean(Job *job)
+{
+    CommitBlockJob *s = container_of(job, CommitBlockJob, common.job);
+
     /* restore base open flags here if appropriate (e.g., change the base back
      * to r/o). These reopens do not need to be atomic, since we won't abort
      * even on failure here */
-    if (s->base_flags != bdrv_get_flags(base)) {
-        bdrv_reopen(base, s->base_flags, NULL);
+    if (s->base_flags != bdrv_get_flags(s->base_bs)) {
+        bdrv_reopen(s->base_bs, s->base_flags, NULL);
     }
+
     g_free(s->backing_file_str);
     blk_unref(s->top);
-
-    /* If there is more than one reference to the job (e.g. if called from
-     * job_finish_sync()), job_completed() won't free it and therefore the
-     * blockers on the intermediate nodes remain. This would cause
-     * bdrv_set_backing_hd() to fail. */
-    block_job_remove_all_bdrv(bjob);
-
-    /* If bdrv_drop_intermediate() didn't already do that, remove the commit
-     * filter driver from the backing chain. Do this as the final step so that
-     * the 'consistent read' permission can be granted. */
-    if (remove_commit_top_bs) {
-        bdrv_child_try_set_perm(commit_top_bs->backing, 0, BLK_PERM_ALL,
-                                &error_abort);
-        bdrv_replace_node(commit_top_bs, backing_bs(commit_top_bs),
-                          &error_abort);
-    }
-
-    bdrv_unref(commit_top_bs);
-    bdrv_unref(top);
 }
 
 static int coroutine_fn commit_run(Job *job, Error **errp)
@@ -211,7 +218,9 @@ static const BlockJobDriver commit_job_driver = {
         .user_resume = block_job_user_resume,
|
||||
.drain = block_job_drain,
|
||||
.run = commit_run,
|
||||
- .exit = commit_exit,
|
||||
+ .prepare = commit_prepare,
|
||||
+ .abort = commit_abort,
|
||||
+ .clean = commit_clean
|
||||
},
|
||||
};
|
||||
|
||||
@@ -350,6 +359,7 @@ void commit_start(const char *job_id, BlockDriverState *bs,
|
||||
if (ret < 0) {
|
||||
goto fail;
|
||||
}
|
||||
+ s->base_bs = base;
|
||||
|
||||
/* Required permissions are already taken with block_job_add_bdrv() */
|
||||
s->top = blk_new(0, BLK_PERM_ALL);
|
||||
--
|
||||
1.8.3.1
|
||||
|
45
0034-block-mirror-don-t-install-backing-chain-on-abort.patch
Normal file
@@ -0,0 +1,45 @@
From 7f155f96e9db0be97501f90e482a29d51779f887 Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:20 +0100
Subject: block/mirror: don't install backing chain on abort

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-15-jsnow@redhat.com>
Patchwork-id: 82277
O-Subject: [RHEL8/rhel qemu-kvm PATCH 14/25] block/mirror: don't install backing chain on abort
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

In cases where we abort the block/mirror job, there's no point in
installing the new backing chain before we finish aborting.

Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20180906130225.5118-6-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit c2924ceaa7f1866148e2847c969fc1902a2524fa)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 block/mirror.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/block/mirror.c b/block/mirror.c
index cd13835..19b57b8 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -642,7 +642,7 @@ static void mirror_exit(Job *job)
      * required before it could become a backing file of target_bs. */
     bdrv_child_try_set_perm(mirror_top_bs->backing, 0, BLK_PERM_ALL,
                             &error_abort);
-    if (s->backing_mode == MIRROR_SOURCE_BACKING_CHAIN) {
+    if (ret == 0 && s->backing_mode == MIRROR_SOURCE_BACKING_CHAIN) {
         BlockDriverState *backing = s->is_none_mode ? src : s->base;
         if (backing_bs(target_bs) != backing) {
             bdrv_set_backing_hd(target_bs, backing, &local_err);
--
1.8.3.1

136
0035-block-mirror-conservative-mirror_exit-refactor.patch
Normal file
@@ -0,0 +1,136 @@
From 8b394ff523e607060c80c6b647dbb89a2f73571d Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Thu, 6 Sep 2018 09:02:15 -0400
Subject: block/mirror: conservative mirror_exit refactor

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-16-jsnow@redhat.com>
Patchwork-id: 82270
O-Subject: [RHEL8/rhel qemu-kvm PATCH 15/25] block/mirror: conservative mirror_exit refactor
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

For purposes of minimum code movement, refactor the mirror_exit
callback to use the post-finalization callbacks in a trivial way.

Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20180906130225.5118-7-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
[mreitz: Added comment for the mirror_exit() function]
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 737efc1eda23b904fbe0e66b37715fb0e5c3e58b)
Signed-off-by: John Snow <jsnow@redhat.com>
---
 block/mirror.c | 44 +++++++++++++++++++++++++++++++++-----------
 1 file changed, 33 insertions(+), 11 deletions(-)

diff --git a/block/mirror.c b/block/mirror.c
index 19b57b8..7efba77 100644
--- a/block/mirror.c
+++ b/block/mirror.c
@@ -79,6 +79,7 @@ typedef struct MirrorBlockJob {
     int max_iov;
     bool initial_zeroing_ongoing;
     int in_active_write_counter;
+    bool prepared;
 } MirrorBlockJob;
 
 typedef struct MirrorBDSOpaque {
@@ -607,7 +608,12 @@ static void mirror_wait_for_all_io(MirrorBlockJob *s)
     }
 }
 
-static void mirror_exit(Job *job)
+/**
+ * mirror_exit_common: handle both abort() and prepare() cases.
+ * for .prepare, returns 0 on success and -errno on failure.
+ * for .abort cases, denoted by abort = true, MUST return 0.
+ */
+static int mirror_exit_common(Job *job)
 {
     MirrorBlockJob *s = container_of(job, MirrorBlockJob, common.job);
     BlockJob *bjob = &s->common;
@@ -617,7 +623,13 @@ static void mirror_exit(Job *job)
     BlockDriverState *target_bs = blk_bs(s->target);
     BlockDriverState *mirror_top_bs = s->mirror_top_bs;
     Error *local_err = NULL;
-    int ret = job->ret;
+    bool abort = job->ret < 0;
+    int ret = 0;
+
+    if (s->prepared) {
+        return 0;
+    }
+    s->prepared = true;
 
     bdrv_release_dirty_bitmap(src, s->dirty_bitmap);
 
@@ -642,7 +654,7 @@ static void mirror_exit(Job *job)
      * required before it could become a backing file of target_bs. */
     bdrv_child_try_set_perm(mirror_top_bs->backing, 0, BLK_PERM_ALL,
                             &error_abort);
-    if (ret == 0 && s->backing_mode == MIRROR_SOURCE_BACKING_CHAIN) {
+    if (!abort && s->backing_mode == MIRROR_SOURCE_BACKING_CHAIN) {
         BlockDriverState *backing = s->is_none_mode ? src : s->base;
         if (backing_bs(target_bs) != backing) {
             bdrv_set_backing_hd(target_bs, backing, &local_err);
@@ -658,11 +670,8 @@ static void mirror_exit(Job *job)
         aio_context_acquire(replace_aio_context);
     }
 
-    if (s->should_complete && ret == 0) {
-        BlockDriverState *to_replace = src;
-        if (s->to_replace) {
-            to_replace = s->to_replace;
-        }
+    if (s->should_complete && !abort) {
+        BlockDriverState *to_replace = s->to_replace ?: src;
 
         if (bdrv_get_flags(target_bs) != bdrv_get_flags(to_replace)) {
             bdrv_reopen(target_bs, bdrv_get_flags(to_replace), NULL);
@@ -711,7 +720,18 @@ static void mirror_exit(Job *job)
     bdrv_unref(mirror_top_bs);
     bdrv_unref(src);
 
-    job->ret = ret;
+    return ret;
+}
+
+static int mirror_prepare(Job *job)
+{
+    return mirror_exit_common(job);
+}
+
+static void mirror_abort(Job *job)
+{
+    int ret = mirror_exit_common(job);
+    assert(ret == 0);
 }
 
 static void mirror_throttle(MirrorBlockJob *s)
@@ -1132,7 +1152,8 @@ static const BlockJobDriver mirror_job_driver = {
         .user_resume   = block_job_user_resume,
         .drain         = block_job_drain,
         .run           = mirror_run,
-        .exit          = mirror_exit,
+        .prepare       = mirror_prepare,
+        .abort         = mirror_abort,
         .pause         = mirror_pause,
         .complete      = mirror_complete,
     },
@@ -1149,7 +1170,8 @@ static const BlockJobDriver commit_active_job_driver = {
         .user_resume   = block_job_user_resume,
         .drain         = block_job_drain,
         .run           = mirror_run,
-        .exit          = mirror_exit,
+        .prepare       = mirror_prepare,
+        .abort         = mirror_abort,
         .pause         = mirror_pause,
         .complete      = mirror_complete,
     },
--
1.8.3.1

94
0036-block-stream-refactor-stream-to-use-job-callbacks.patch
Normal file
@@ -0,0 +1,94 @@
From 533c77ee076c0050b4c4deb26fda54c085a994ce Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:22 +0100
Subject: block/stream: refactor stream to use job callbacks

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-17-jsnow@redhat.com>
Patchwork-id: 82280
O-Subject: [RHEL8/rhel qemu-kvm PATCH 16/25] block/stream: refactor stream to use job callbacks
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180906130225.5118-8-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 1b57488acf1beba157bcd8c926e596342bcb5c60)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 block/stream.c | 23 +++++++++++++++--------
 1 file changed, 15 insertions(+), 8 deletions(-)

diff --git a/block/stream.c b/block/stream.c
index 700eb23..81a7ec8 100644
--- a/block/stream.c
+++ b/block/stream.c
@@ -54,16 +54,16 @@ static int coroutine_fn stream_populate(BlockBackend *blk,
     return blk_co_preadv(blk, offset, qiov.size, &qiov, BDRV_REQ_COPY_ON_READ);
 }
 
-static void stream_exit(Job *job)
+static int stream_prepare(Job *job)
 {
     StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
     BlockJob *bjob = &s->common;
     BlockDriverState *bs = blk_bs(bjob->blk);
     BlockDriverState *base = s->base;
     Error *local_err = NULL;
-    int ret = job->ret;
+    int ret = 0;
 
-    if (!job_is_cancelled(job) && bs->backing && ret == 0) {
+    if (bs->backing) {
         const char *base_id = NULL, *base_fmt = NULL;
         if (base) {
             base_id = s->backing_file_str;
@@ -75,12 +75,19 @@ static void stream_exit(Job *job)
         bdrv_set_backing_hd(bs, base, &local_err);
         if (local_err) {
             error_report_err(local_err);
-            ret = -EPERM;
-            goto out;
+            return -EPERM;
         }
     }
 
-out:
+    return ret;
+}
+
+static void stream_clean(Job *job)
+{
+    StreamBlockJob *s = container_of(job, StreamBlockJob, common.job);
+    BlockJob *bjob = &s->common;
+    BlockDriverState *bs = blk_bs(bjob->blk);
+
     /* Reopen the image back in read-only mode if necessary */
     if (s->bs_flags != bdrv_get_flags(bs)) {
         /* Give up write permissions before making it read-only */
@@ -89,7 +96,6 @@ out:
     }
 
     g_free(s->backing_file_str);
-    job->ret = ret;
 }
 
 static int coroutine_fn stream_run(Job *job, Error **errp)
@@ -206,7 +212,8 @@ static const BlockJobDriver stream_job_driver = {
         .job_type      = JOB_TYPE_STREAM,
         .free          = block_job_free,
         .run           = stream_run,
-        .exit          = stream_exit,
+        .prepare       = stream_prepare,
+        .clean         = stream_clean,
         .user_resume   = block_job_user_resume,
         .drain         = block_job_drain,
     },
--
1.8.3.1

233
0037-tests-blockjob-replace-Blockjob-with-Job.patch
Normal file
@@ -0,0 +1,233 @@
From ac945e63cca25c453d472834c64aa3a4192729f9 Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:23 +0100
Subject: tests/blockjob: replace Blockjob with Job

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-18-jsnow@redhat.com>
Patchwork-id: 82281
O-Subject: [RHEL8/rhel qemu-kvm PATCH 17/25] tests/blockjob: replace Blockjob with Job
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

These tests don't actually test blockjobs anymore, they test
generic Job lifetimes. Change the types accordingly.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180906130225.5118-9-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 0cc4643b01a0138543e886db8e3bf8a3f74ff8f9)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 tests/test-blockjob.c | 98 ++++++++++++++++++++++++++-------------------------
 1 file changed, 50 insertions(+), 48 deletions(-)

diff --git a/tests/test-blockjob.c b/tests/test-blockjob.c
index ad4a65b..8e8b680 100644
--- a/tests/test-blockjob.c
+++ b/tests/test-blockjob.c
@@ -206,18 +206,20 @@ static const BlockJobDriver test_cancel_driver = {
     },
 };
 
-static CancelJob *create_common(BlockJob **pjob)
+static CancelJob *create_common(Job **pjob)
 {
     BlockBackend *blk;
-    BlockJob *job;
+    Job *job;
+    BlockJob *bjob;
     CancelJob *s;
 
     blk = create_blk(NULL);
-    job = mk_job(blk, "Steve", &test_cancel_driver, true,
-                 JOB_MANUAL_FINALIZE | JOB_MANUAL_DISMISS);
-    job_ref(&job->job);
-    assert(job->job.status == JOB_STATUS_CREATED);
-    s = container_of(job, CancelJob, common);
+    bjob = mk_job(blk, "Steve", &test_cancel_driver, true,
+                  JOB_MANUAL_FINALIZE | JOB_MANUAL_DISMISS);
+    job = &bjob->job;
+    job_ref(job);
+    assert(job->status == JOB_STATUS_CREATED);
+    s = container_of(bjob, CancelJob, common);
     s->blk = blk;
 
     *pjob = job;
@@ -242,7 +244,7 @@ static void cancel_common(CancelJob *s)
 
 static void test_cancel_created(void)
 {
-    BlockJob *job;
+    Job *job;
     CancelJob *s;
 
     s = create_common(&job);
@@ -251,119 +253,119 @@ static void test_cancel_created(void)
 
 static void test_cancel_running(void)
 {
-    BlockJob *job;
+    Job *job;
     CancelJob *s;
 
     s = create_common(&job);
 
-    job_start(&job->job);
-    assert(job->job.status == JOB_STATUS_RUNNING);
+    job_start(job);
+    assert(job->status == JOB_STATUS_RUNNING);
 
     cancel_common(s);
 }
 
 static void test_cancel_paused(void)
 {
-    BlockJob *job;
+    Job *job;
     CancelJob *s;
 
     s = create_common(&job);
 
-    job_start(&job->job);
-    assert(job->job.status == JOB_STATUS_RUNNING);
+    job_start(job);
+    assert(job->status == JOB_STATUS_RUNNING);
 
-    job_user_pause(&job->job, &error_abort);
-    job_enter(&job->job);
-    assert(job->job.status == JOB_STATUS_PAUSED);
+    job_user_pause(job, &error_abort);
+    job_enter(job);
+    assert(job->status == JOB_STATUS_PAUSED);
 
     cancel_common(s);
 }
 
 static void test_cancel_ready(void)
 {
-    BlockJob *job;
+    Job *job;
     CancelJob *s;
 
     s = create_common(&job);
 
-    job_start(&job->job);
-    assert(job->job.status == JOB_STATUS_RUNNING);
+    job_start(job);
+    assert(job->status == JOB_STATUS_RUNNING);
 
     s->should_converge = true;
-    job_enter(&job->job);
-    assert(job->job.status == JOB_STATUS_READY);
+    job_enter(job);
+    assert(job->status == JOB_STATUS_READY);
 
     cancel_common(s);
 }
 
 static void test_cancel_standby(void)
 {
-    BlockJob *job;
+    Job *job;
     CancelJob *s;
 
     s = create_common(&job);
 
-    job_start(&job->job);
-    assert(job->job.status == JOB_STATUS_RUNNING);
+    job_start(job);
+    assert(job->status == JOB_STATUS_RUNNING);
 
     s->should_converge = true;
-    job_enter(&job->job);
-    assert(job->job.status == JOB_STATUS_READY);
+    job_enter(job);
+    assert(job->status == JOB_STATUS_READY);
 
-    job_user_pause(&job->job, &error_abort);
-    job_enter(&job->job);
-    assert(job->job.status == JOB_STATUS_STANDBY);
+    job_user_pause(job, &error_abort);
+    job_enter(job);
+    assert(job->status == JOB_STATUS_STANDBY);
 
     cancel_common(s);
 }
 
 static void test_cancel_pending(void)
 {
-    BlockJob *job;
+    Job *job;
     CancelJob *s;
 
     s = create_common(&job);
 
-    job_start(&job->job);
-    assert(job->job.status == JOB_STATUS_RUNNING);
+    job_start(job);
+    assert(job->status == JOB_STATUS_RUNNING);
 
     s->should_converge = true;
-    job_enter(&job->job);
-    assert(job->job.status == JOB_STATUS_READY);
+    job_enter(job);
+    assert(job->status == JOB_STATUS_READY);
 
-    job_complete(&job->job, &error_abort);
-    job_enter(&job->job);
+    job_complete(job, &error_abort);
+    job_enter(job);
     while (!s->completed) {
         aio_poll(qemu_get_aio_context(), true);
     }
-    assert(job->job.status == JOB_STATUS_PENDING);
+    assert(job->status == JOB_STATUS_PENDING);
 
     cancel_common(s);
 }
 
 static void test_cancel_concluded(void)
 {
-    BlockJob *job;
+    Job *job;
     CancelJob *s;
 
    s = create_common(&job);
 
-    job_start(&job->job);
-    assert(job->job.status == JOB_STATUS_RUNNING);
+    job_start(job);
+    assert(job->status == JOB_STATUS_RUNNING);
 
     s->should_converge = true;
-    job_enter(&job->job);
-    assert(job->job.status == JOB_STATUS_READY);
+    job_enter(job);
+    assert(job->status == JOB_STATUS_READY);
 
-    job_complete(&job->job, &error_abort);
-    job_enter(&job->job);
+    job_complete(job, &error_abort);
+    job_enter(job);
     while (!s->completed) {
         aio_poll(qemu_get_aio_context(), true);
     }
-    assert(job->job.status == JOB_STATUS_PENDING);
+    assert(job->status == JOB_STATUS_PENDING);
 
-    job_finalize(&job->job, &error_abort);
-    assert(job->job.status == JOB_STATUS_CONCLUDED);
+    job_finalize(job, &error_abort);
+    assert(job->status == JOB_STATUS_CONCLUDED);
 
     cancel_common(s);
 }
--
1.8.3.1

88
0038-tests-test-blockjob-remove-exit-callback.patch
Normal file
@@ -0,0 +1,88 @@
From 62fd56870fb6296f795c9fc7f5965d83a72dabac Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:24 +0100
Subject: tests/test-blockjob: remove exit callback

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-19-jsnow@redhat.com>
Patchwork-id: 82276
O-Subject: [RHEL8/rhel qemu-kvm PATCH 18/25] tests/test-blockjob: remove exit callback
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

We remove the exit callback and the completed boolean along with it.
We can simulate it just fine by waiting for the job to defer to the
main loop, and then giving it one final kick to get the main loop
portion to run.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180906130225.5118-10-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 977d26fdbeb35d8d2d0f203f9556d44a353e0dfd)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 tests/test-blockjob.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/tests/test-blockjob.c b/tests/test-blockjob.c
index 8e8b680..de4c1c2 100644
--- a/tests/test-blockjob.c
+++ b/tests/test-blockjob.c
@@ -160,15 +160,8 @@ typedef struct CancelJob {
     BlockBackend *blk;
     bool should_converge;
     bool should_complete;
-    bool completed;
 } CancelJob;
 
-static void cancel_job_exit(Job *job)
-{
-    CancelJob *s = container_of(job, CancelJob, common.job);
-    s->completed = true;
-}
-
 static void cancel_job_complete(Job *job, Error **errp)
 {
     CancelJob *s = container_of(job, CancelJob, common.job);
@@ -201,7 +194,6 @@ static const BlockJobDriver test_cancel_driver = {
         .user_resume   = block_job_user_resume,
         .drain         = block_job_drain,
         .run           = cancel_job_run,
-        .exit          = cancel_job_exit,
         .complete      = cancel_job_complete,
     },
 };
@@ -335,9 +327,11 @@ static void test_cancel_pending(void)
 
     job_complete(job, &error_abort);
     job_enter(job);
-    while (!s->completed) {
+    while (!job->deferred_to_main_loop) {
         aio_poll(qemu_get_aio_context(), true);
     }
+    assert(job->status == JOB_STATUS_READY);
+    aio_poll(qemu_get_aio_context(), true);
     assert(job->status == JOB_STATUS_PENDING);
 
     cancel_common(s);
@@ -359,9 +353,11 @@ static void test_cancel_concluded(void)
 
     job_complete(job, &error_abort);
     job_enter(job);
-    while (!s->completed) {
+    while (!job->deferred_to_main_loop) {
         aio_poll(qemu_get_aio_context(), true);
     }
+    assert(job->status == JOB_STATUS_READY);
+    aio_poll(qemu_get_aio_context(), true);
     assert(job->status == JOB_STATUS_PENDING);
 
     job_finalize(job, &error_abort);
--
1.8.3.1

53
0039-tests-test-blockjob-txn-move-.exit-to-.clean.patch
Normal file
@@ -0,0 +1,53 @@
From 6247c4b10e3fb6c677947a503ddad961cb71faff Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:25 +0100
Subject: tests/test-blockjob-txn: move .exit to .clean

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-20-jsnow@redhat.com>
Patchwork-id: 82282
O-Subject: [RHEL8/rhel qemu-kvm PATCH 19/25] tests/test-blockjob-txn: move .exit to .clean
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

The exit callback in this test actually only performs cleanup.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180906130225.5118-11-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit e4dad4275d51b594c8abbe726a4927f6f388e427)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 tests/test-blockjob-txn.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tests/test-blockjob-txn.c b/tests/test-blockjob-txn.c
index ef29f35..86606f9 100644
--- a/tests/test-blockjob-txn.c
+++ b/tests/test-blockjob-txn.c
@@ -24,7 +24,7 @@ typedef struct {
     int *result;
 } TestBlockJob;
 
-static void test_block_job_exit(Job *job)
+static void test_block_job_clean(Job *job)
 {
     BlockJob *bjob = container_of(job, BlockJob, job);
     BlockDriverState *bs = blk_bs(bjob->blk);
@@ -73,7 +73,7 @@ static const BlockJobDriver test_block_job_driver = {
         .user_resume   = block_job_user_resume,
         .drain         = block_job_drain,
         .run           = test_block_job_run,
-        .exit          = test_block_job_exit,
+        .clean         = test_block_job_clean,
     },
 };
 
--
1.8.3.1

156
0040-jobs-remove-.exit-callback.patch
Normal file
@@ -0,0 +1,156 @@
From c2c10f4fac6757d292f8b3d9ac7723a718e596aa Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:26 +0100
Subject: jobs: remove .exit callback

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-21-jsnow@redhat.com>
Patchwork-id: 82283
O-Subject: [RHEL8/rhel qemu-kvm PATCH 20/25] jobs: remove .exit callback
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Now that all of the jobs use the component finalization callbacks,
there's no use for the heavy-hammer .exit callback anymore.

job_exit becomes a glorified type shim so that we can call
job_completed from aio_bh_schedule_oneshot.

Move these three functions down into job.c to eliminate a
forward reference.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180906130225.5118-12-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit ccbfb3319aa265e71c16dac976ff857d0a5bcb4b)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 include/qemu/job.h | 11 --------
 job.c              | 77 ++++++++++++++++++++++++------------------------------
 2 files changed, 34 insertions(+), 54 deletions(-)

diff --git a/include/qemu/job.h b/include/qemu/job.h
index e0cff70..5cb0681 100644
--- a/include/qemu/job.h
+++ b/include/qemu/job.h
@@ -222,17 +222,6 @@ struct JobDriver {
     void (*drain)(Job *job);
 
     /**
-     * If the callback is not NULL, exit will be invoked from the main thread
-     * when the job's coroutine has finished, but before transactional
-     * convergence; before @prepare or @abort.
-     *
-     * FIXME TODO: This callback is only temporary to transition remaining jobs
-     * to prepare/commit/abort/clean callbacks and will be removed before 3.1.
-     * is released.
-     */
-    void (*exit)(Job *job);
-
-    /**
      * If the callback is not NULL, prepare will be invoked when all the jobs
      * belonging to the same transaction complete; or upon this job's completion
      * if it is not in a transaction.
diff --git a/job.c b/job.c
index e8d7aee..87c9aa4 100644
--- a/job.c
+++ b/job.c
@@ -535,49 +535,6 @@ void job_drain(Job *job)
     }
 }
 
-static void job_completed(Job *job);
-
-static void job_exit(void *opaque)
-{
-    Job *job = (Job *)opaque;
-    AioContext *aio_context = job->aio_context;
-
-    if (job->driver->exit) {
-        aio_context_acquire(aio_context);
-        job->driver->exit(job);
-        aio_context_release(aio_context);
-    }
-    job_completed(job);
-}
-
-/**
- * All jobs must allow a pause point before entering their job proper. This
- * ensures that jobs can be paused prior to being started, then resumed later.
- */
-static void coroutine_fn job_co_entry(void *opaque)
-{
-    Job *job = opaque;
-
-    assert(job && job->driver && job->driver->run);
-    job_pause_point(job);
-    job->ret = job->driver->run(job, &job->err);
-    job->deferred_to_main_loop = true;
-    aio_bh_schedule_oneshot(qemu_get_aio_context(), job_exit, job);
-}
-
-
-void job_start(Job *job)
-{
-    assert(job && !job_started(job) && job->paused &&
-           job->driver && job->driver->run);
-    job->co = qemu_coroutine_create(job_co_entry, job);
-    job->pause_count--;
-    job->busy = true;
-    job->paused = false;
-    job_state_transition(job, JOB_STATUS_RUNNING);
-    aio_co_enter(job->aio_context, job->co);
-}
-
 /* Assumes the block_job_mutex is held */
 static bool job_timer_not_pending(Job *job)
 {
@@ -894,6 +851,40 @@ static void job_completed(Job *job)
     }
 }
 
+/** Useful only as a type shim for aio_bh_schedule_oneshot. */
+static void job_exit(void *opaque)
+{
+    Job *job = (Job *)opaque;
+    job_completed(job);
+}
+
+/**
+ * All jobs must allow a pause point before entering their job proper. This
+ * ensures that jobs can be paused prior to being started, then resumed later.
+ */
+static void coroutine_fn job_co_entry(void *opaque)
+{
+    Job *job = opaque;
+
+    assert(job && job->driver && job->driver->run);
+    job_pause_point(job);
+    job->ret = job->driver->run(job, &job->err);
+    job->deferred_to_main_loop = true;
+    aio_bh_schedule_oneshot(qemu_get_aio_context(), job_exit, job);
+}
+
+void job_start(Job *job)
+{
+    assert(job && !job_started(job) && job->paused &&
+           job->driver && job->driver->run);
+    job->co = qemu_coroutine_create(job_co_entry, job);
+    job->pause_count--;
+    job->busy = true;
+    job->paused = false;
+    job_state_transition(job, JOB_STATUS_RUNNING);
+    aio_co_enter(job->aio_context, job->co);
+}
+
 void job_cancel(Job *job, bool force)
 {
     if (job->status == JOB_STATUS_CONCLUDED) {
--
1.8.3.1

90
0041-qapi-block-commit-expose-new-job-properties.patch
Normal file
@ -0,0 +1,90 @@
From ce81bd3fa7316bcdee5e121e6ea71c7b2e1e81e1 Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:27 +0100
Subject: qapi/block-commit: expose new job properties

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-22-jsnow@redhat.com>
Patchwork-id: 82285
O-Subject: [RHEL8/rhel qemu-kvm PATCH 21/25] qapi/block-commit: expose new job properties
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180906130225.5118-13-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 96fbf5345f60a87fab8e7ea79a2406f381027db9)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 blockdev.c           |  8 ++++++++
 qapi/block-core.json | 16 +++++++++++++++-
 2 files changed, 23 insertions(+), 1 deletion(-)

diff --git a/blockdev.c b/blockdev.c
index c2e6402..8efc47e 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3314,6 +3314,8 @@ void qmp_block_commit(bool has_job_id, const char *job_id, const char *device,
                       bool has_backing_file, const char *backing_file,
                       bool has_speed, int64_t speed,
                       bool has_filter_node_name, const char *filter_node_name,
+                      bool has_auto_finalize, bool auto_finalize,
+                      bool has_auto_dismiss, bool auto_dismiss,
                       Error **errp)
 {
     BlockDriverState *bs;
@@ -3333,6 +3335,12 @@ void qmp_block_commit(bool has_job_id, const char *job_id, const char *device,
     if (!has_filter_node_name) {
         filter_node_name = NULL;
     }
+    if (has_auto_finalize && !auto_finalize) {
+        job_flags |= JOB_MANUAL_FINALIZE;
+    }
+    if (has_auto_dismiss && !auto_dismiss) {
+        job_flags |= JOB_MANUAL_DISMISS;
+    }
 
     /* Important Note:
      * libvirt relies on the DeviceNotFound error class in order to probe for
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 5b9084a..ca7d1b3 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1498,6 +1498,19 @@
 #                above @top. If this option is not given, a node name is
 #                autogenerated. (Since: 2.9)
 #
+# @auto-finalize: When false, this job will wait in a PENDING state after it has
+#                 finished its work, waiting for @block-job-finalize before
+#                 making any block graph changes.
+#                 When true, this job will automatically
+#                 perform its abort or commit actions.
+#                 Defaults to true. (Since 3.1)
+#
+# @auto-dismiss: When false, this job will wait in a CONCLUDED state after it
+#                has completely ceased all work, and awaits @block-job-dismiss.
+#                When true, this job will automatically disappear from the query
+#                list without user intervention.
+#                Defaults to true. (Since 3.1)
+#
 # Returns: Nothing on success
 #          If commit or stream is already active on this device, DeviceInUse
 #          If @device does not exist, DeviceNotFound
@@ -1518,7 +1531,8 @@
 { 'command': 'block-commit',
   'data': { '*job-id': 'str', 'device': 'str', '*base': 'str', '*top': 'str',
             '*backing-file': 'str', '*speed': 'int',
-            '*filter-node-name': 'str' } }
+            '*filter-node-name': 'str',
+            '*auto-finalize': 'bool', '*auto-dismiss': 'bool' } }
 
 ##
 # @drive-backup:
-- 
1.8.3.1

144
0042-qapi-block-mirror-expose-new-job-properties.patch
Normal file
@ -0,0 +1,144 @@
From 318445193efc33c06e63e021a988814d49658a0f Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Thu, 6 Sep 2018 09:02:22 -0400
Subject: qapi/block-mirror: expose new job properties

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-23-jsnow@redhat.com>
Patchwork-id: 82274
O-Subject: [RHEL8/rhel qemu-kvm PATCH 22/25] qapi/block-mirror: expose new job properties
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180906130225.5118-14-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit a6b58adec28ff43c0f29ff7c95cdd5d11e87cf61)
Signed-off-by: John Snow <jsnow@redhat.com>
---
 blockdev.c           | 14 ++++++++++++++
 qapi/block-core.json | 30 ++++++++++++++++++++++++++++--
 2 files changed, 42 insertions(+), 2 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index 8efc47e..bbb3279 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3707,6 +3707,8 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
                                    bool has_filter_node_name,
                                    const char *filter_node_name,
                                    bool has_copy_mode, MirrorCopyMode copy_mode,
+                                   bool has_auto_finalize, bool auto_finalize,
+                                   bool has_auto_dismiss, bool auto_dismiss,
                                    Error **errp)
 {
     int job_flags = JOB_DEFAULT;
@@ -3735,6 +3737,12 @@ static void blockdev_mirror_common(const char *job_id, BlockDriverState *bs,
     if (!has_copy_mode) {
         copy_mode = MIRROR_COPY_MODE_BACKGROUND;
     }
+    if (has_auto_finalize && !auto_finalize) {
+        job_flags |= JOB_MANUAL_FINALIZE;
+    }
+    if (has_auto_dismiss && !auto_dismiss) {
+        job_flags |= JOB_MANUAL_DISMISS;
+    }
 
     if (granularity != 0 && (granularity < 512 || granularity > 1048576 * 64)) {
         error_setg(errp, QERR_INVALID_PARAMETER_VALUE, "granularity",
@@ -3912,6 +3920,8 @@ void qmp_drive_mirror(DriveMirror *arg, Error **errp)
                            arg->has_unmap, arg->unmap,
                            false, NULL,
                            arg->has_copy_mode, arg->copy_mode,
+                           arg->has_auto_finalize, arg->auto_finalize,
+                           arg->has_auto_dismiss, arg->auto_dismiss,
                            &local_err);
     bdrv_unref(target_bs);
     error_propagate(errp, local_err);
@@ -3933,6 +3943,8 @@ void qmp_blockdev_mirror(bool has_job_id, const char *job_id,
                          bool has_filter_node_name,
                          const char *filter_node_name,
                          bool has_copy_mode, MirrorCopyMode copy_mode,
+                         bool has_auto_finalize, bool auto_finalize,
+                         bool has_auto_dismiss, bool auto_dismiss,
                          Error **errp)
 {
     BlockDriverState *bs;
@@ -3966,6 +3978,8 @@ void qmp_blockdev_mirror(bool has_job_id, const char *job_id,
                            true, true,
                            has_filter_node_name, filter_node_name,
                            has_copy_mode, copy_mode,
+                           has_auto_finalize, auto_finalize,
+                           has_auto_dismiss, auto_dismiss,
                            &local_err);
     error_propagate(errp, local_err);
 
diff --git a/qapi/block-core.json b/qapi/block-core.json
index ca7d1b3..9193d49 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1732,6 +1732,18 @@
 # @copy-mode: when to copy data to the destination; defaults to 'background'
 #             (Since: 3.0)
 #
+# @auto-finalize: When false, this job will wait in a PENDING state after it has
+#                 finished its work, waiting for @block-job-finalize before
+#                 making any block graph changes.
+#                 When true, this job will automatically
+#                 perform its abort or commit actions.
+#                 Defaults to true. (Since 3.1)
+#
+# @auto-dismiss: When false, this job will wait in a CONCLUDED state after it
+#                has completely ceased all work, and awaits @block-job-dismiss.
+#                When true, this job will automatically disappear from the query
+#                list without user intervention.
+#                Defaults to true. (Since 3.1)
 # Since: 1.3
 ##
 { 'struct': 'DriveMirror',
@@ -1741,7 +1753,8 @@
             '*speed': 'int', '*granularity': 'uint32',
             '*buf-size': 'int', '*on-source-error': 'BlockdevOnError',
             '*on-target-error': 'BlockdevOnError',
-            '*unmap': 'bool', '*copy-mode': 'MirrorCopyMode' } }
+            '*unmap': 'bool', '*copy-mode': 'MirrorCopyMode',
+            '*auto-finalize': 'bool', '*auto-dismiss': 'bool' } }
 
 ##
 # @BlockDirtyBitmap:
@@ -2007,6 +2020,18 @@
 # @copy-mode: when to copy data to the destination; defaults to 'background'
 #             (Since: 3.0)
 #
+# @auto-finalize: When false, this job will wait in a PENDING state after it has
+#                 finished its work, waiting for @block-job-finalize before
+#                 making any block graph changes.
+#                 When true, this job will automatically
+#                 perform its abort or commit actions.
+#                 Defaults to true. (Since 3.1)
+#
+# @auto-dismiss: When false, this job will wait in a CONCLUDED state after it
+#                has completely ceased all work, and awaits @block-job-dismiss.
+#                When true, this job will automatically disappear from the query
+#                list without user intervention.
+#                Defaults to true. (Since 3.1)
 # Returns: nothing on success.
 #
 # Since: 2.6
@@ -2028,7 +2053,8 @@
             '*buf-size': 'int', '*on-source-error': 'BlockdevOnError',
             '*on-target-error': 'BlockdevOnError',
             '*filter-node-name': 'str',
-            '*copy-mode': 'MirrorCopyMode' } }
+            '*copy-mode': 'MirrorCopyMode',
+            '*auto-finalize': 'bool', '*auto-dismiss': 'bool' } }
 
 ##
 # @block_set_io_throttle:
-- 
1.8.3.1

108
0043-qapi-block-stream-expose-new-job-properties.patch
Normal file
@ -0,0 +1,108 @@
From 67fa4ccaffcd7e2698d30597f51093903aef4a5d Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:29 +0100
Subject: qapi/block-stream: expose new job properties

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-24-jsnow@redhat.com>
Patchwork-id: 82278
O-Subject: [RHEL8/rhel qemu-kvm PATCH 23/25] qapi/block-stream: expose new job properties
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180906130225.5118-15-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 241ca1ab78542f02e666636e0323bcfe3cb1d5e8)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 blockdev.c           |  9 +++++++++
 hmp.c                |  5 +++--
 qapi/block-core.json | 16 +++++++++++++++-
 3 files changed, 27 insertions(+), 3 deletions(-)

diff --git a/blockdev.c b/blockdev.c
index bbb3279..806531d 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -3226,6 +3226,8 @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
                       bool has_backing_file, const char *backing_file,
                       bool has_speed, int64_t speed,
                       bool has_on_error, BlockdevOnError on_error,
+                      bool has_auto_finalize, bool auto_finalize,
+                      bool has_auto_dismiss, bool auto_dismiss,
                       Error **errp)
 {
     BlockDriverState *bs, *iter;
@@ -3295,6 +3297,13 @@ void qmp_block_stream(bool has_job_id, const char *job_id, const char *device,
     /* backing_file string overrides base bs filename */
     base_name = has_backing_file ? backing_file : base_name;
 
+    if (has_auto_finalize && !auto_finalize) {
+        job_flags |= JOB_MANUAL_FINALIZE;
+    }
+    if (has_auto_dismiss && !auto_dismiss) {
+        job_flags |= JOB_MANUAL_DISMISS;
+    }
+
     stream_start(has_job_id ? job_id : NULL, bs, base_bs, base_name,
                  job_flags, has_speed ? speed : 0, on_error, &local_err);
     if (local_err) {
diff --git a/hmp.c b/hmp.c
index 2aafb50..e3c3ecd 100644
--- a/hmp.c
+++ b/hmp.c
@@ -1865,8 +1865,9 @@ void hmp_block_stream(Monitor *mon, const QDict *qdict)
     int64_t speed = qdict_get_try_int(qdict, "speed", 0);
 
     qmp_block_stream(true, device, device, base != NULL, base, false, NULL,
-                     false, NULL, qdict_haskey(qdict, "speed"), speed,
-                     true, BLOCKDEV_ON_ERROR_REPORT, &error);
+                     false, NULL, qdict_haskey(qdict, "speed"), speed, true,
+                     BLOCKDEV_ON_ERROR_REPORT, false, false, false, false,
+                     &error);
 
     hmp_handle_error(mon, &error);
 }
diff --git a/qapi/block-core.json b/qapi/block-core.json
index 9193d49..d1a9c3e 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -2320,6 +2320,19 @@
 #            'stop' and 'enospc' can only be used if the block device
 #            supports io-status (see BlockInfo). Since 1.3.
 #
+# @auto-finalize: When false, this job will wait in a PENDING state after it has
+#                 finished its work, waiting for @block-job-finalize before
+#                 making any block graph changes.
+#                 When true, this job will automatically
+#                 perform its abort or commit actions.
+#                 Defaults to true. (Since 3.1)
+#
+# @auto-dismiss: When false, this job will wait in a CONCLUDED state after it
+#                has completely ceased all work, and awaits @block-job-dismiss.
+#                When true, this job will automatically disappear from the query
+#                list without user intervention.
+#                Defaults to true. (Since 3.1)
+#
 # Returns: Nothing on success. If @device does not exist, DeviceNotFound.
 #
 # Since: 1.1
@@ -2335,7 +2348,8 @@
 { 'command': 'block-stream',
   'data': { '*job-id': 'str', 'device': 'str', '*base': 'str',
             '*base-node': 'str', '*backing-file': 'str', '*speed': 'int',
-            '*on-error': 'BlockdevOnError' } }
+            '*on-error': 'BlockdevOnError',
+            '*auto-finalize': 'bool', '*auto-dismiss': 'bool' } }
 
 ##
 # @block-job-set-speed:
-- 
1.8.3.1

73
0044-block-backup-qapi-documentation-fixup.patch
Normal file
@ -0,0 +1,73 @@
From c104ce571b585040ca4d0c77419d2ca06c2087b8 Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:30 +0100
Subject: block/backup: qapi documentation fixup

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-25-jsnow@redhat.com>
Patchwork-id: 82284
O-Subject: [RHEL8/rhel qemu-kvm PATCH 24/25] block/backup: qapi documentation fixup
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Fix documentation to match the other jobs amended for 3.1.

Signed-off-by: John Snow <jsnow@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Message-id: 20180906130225.5118-16-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit dfaff2c37dfa52ab045cf87503e60ea56317230a)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 qapi/block-core.json | 18 ++++++++++--------
 1 file changed, 10 insertions(+), 8 deletions(-)

diff --git a/qapi/block-core.json b/qapi/block-core.json
index d1a9c3e..2953991 100644
--- a/qapi/block-core.json
+++ b/qapi/block-core.json
@@ -1272,13 +1272,14 @@
 #                    a different block device than @device).
 #
 # @auto-finalize: When false, this job will wait in a PENDING state after it has
-#                 finished its work, waiting for @block-job-finalize.
-#                 When true, this job will automatically perform its abort or
-#                 commit actions.
+#                 finished its work, waiting for @block-job-finalize before
+#                 making any block graph changes.
+#                 When true, this job will automatically
+#                 perform its abort or commit actions.
 #                 Defaults to true. (Since 2.12)
 #
 # @auto-dismiss: When false, this job will wait in a CONCLUDED state after it
-#                has completed ceased all work, and wait for @block-job-dismiss.
+#                has completely ceased all work, and awaits @block-job-dismiss.
 #                When true, this job will automatically disappear from the query
 #                list without user intervention.
 #                Defaults to true. (Since 2.12)
@@ -1327,13 +1328,14 @@
 #                    a different block device than @device).
 #
 # @auto-finalize: When false, this job will wait in a PENDING state after it has
-#                 finished its work, waiting for @block-job-finalize.
-#                 When true, this job will automatically perform its abort or
-#                 commit actions.
+#                 finished its work, waiting for @block-job-finalize before
+#                 making any block graph changes.
+#                 When true, this job will automatically
+#                 perform its abort or commit actions.
 #                 Defaults to true. (Since 2.12)
 #
 # @auto-dismiss: When false, this job will wait in a CONCLUDED state after it
-#                has completed ceased all work, and wait for @block-job-dismiss.
+#                has completely ceased all work, and awaits @block-job-dismiss.
 #                When true, this job will automatically disappear from the query
 #                list without user intervention.
 #                Defaults to true. (Since 2.12)
-- 
1.8.3.1

53
0045-blockdev-document-transactional-shortcomings.patch
Normal file
@ -0,0 +1,53 @@
From 53dc1dce0b91a7ebb1c32d10a7482461c01326d6 Mon Sep 17 00:00:00 2001
From: John Snow <jsnow@redhat.com>
Date: Tue, 25 Sep 2018 22:34:31 +0100
Subject: blockdev: document transactional shortcomings

RH-Author: John Snow <jsnow@redhat.com>
Message-id: <20180925223431.24791-26-jsnow@redhat.com>
Patchwork-id: 82286
O-Subject: [RHEL8/rhel qemu-kvm PATCH 25/25] blockdev: document transactional shortcomings
Bugzilla: 1632939
RH-Acked-by: Jeffrey Cody <jcody@redhat.com>
RH-Acked-by: Max Reitz <mreitz@redhat.com>
RH-Acked-by: Kevin Wolf <kwolf@redhat.com>

Presently only the backup job really guarantees what one would consider
transactional semantics. To guard against someone helpfully adding them
in the future, document that there are shortcomings in the model that
would need to be audited at that time.

Signed-off-by: John Snow <jsnow@redhat.com>
Message-id: 20180906130225.5118-17-jsnow@redhat.com
Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Max Reitz <mreitz@redhat.com>
Signed-off-by: Max Reitz <mreitz@redhat.com>
(cherry picked from commit 66da04ddd3dcb8c61ee664b6faced132da002006)
Signed-off-by: John Snow <jsnow@redhat.com>
Signed-off-by: Danilo C. L. de Paula <ddepaula@redhat.com>
---
 blockdev.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/blockdev.c b/blockdev.c
index 806531d..d97202a 100644
--- a/blockdev.c
+++ b/blockdev.c
@@ -2292,7 +2292,13 @@ static const BlkActionOps actions[] = {
         .instance_size = sizeof(BlockDirtyBitmapState),
         .prepare = block_dirty_bitmap_disable_prepare,
         .abort = block_dirty_bitmap_disable_abort,
-    }
+    },
+    /* Where are transactions for MIRROR, COMMIT and STREAM?
+     * Although these blockjobs use transaction callbacks like the backup job,
+     * these jobs do not necessarily adhere to transaction semantics.
+     * These jobs may not fully undo all of their actions on abort, nor do they
+     * necessarily work in transactions with more than one job in them.
+     */
 };
 
 /**
-- 
1.8.3.1

5
85-kvm.preset
Normal file
@ -0,0 +1,5 @@
# Enable kvm-setup by default. This can have odd side effects on
# PowerNV systems that aren't intended as KVM hosts, but at present we
# only support RHEL on PowerNV for the purpose of being a RHEV host.

enable kvm-setup.service
10
95-kvm-memlock.conf
Normal file
@ -0,0 +1,10 @@
# The KVM HV implementation on Power can require a significant amount
# of unswappable memory (about half of which also needs to be host
# physically contiguous) to hold the guest's Hash Page Table (HPT) -
# roughly 1/64th of the guest's RAM size, minimum 16MiB.
#
# These limits allow unprivileged users to start smallish VMs, such as
# those used by libguestfs.
#
*    hard    memlock    65536
*    soft    memlock    65536
2
99-qemu-guest-agent.rules
Normal file
@ -0,0 +1,2 @@
SUBSYSTEM=="virtio-ports", ATTR{name}=="org.qemu.guest_agent.0", \
  TAG+="systemd" ENV{SYSTEMD_WANTS}="qemu-guest-agent.service"
1
bridge.conf
Normal file
@ -0,0 +1 @@
allow virbr0
13
ksm.service
Normal file
@ -0,0 +1,13 @@
[Unit]
Description=Kernel Samepage Merging
ConditionPathExists=/sys/kernel/mm/ksm

[Service]
Type=oneshot
RemainAfterExit=yes
EnvironmentFile=-/etc/sysconfig/ksm
ExecStart=/usr/libexec/ksmctl start
ExecStop=/usr/libexec/ksmctl stop

[Install]
WantedBy=multi-user.target
4
ksm.sysconfig
Normal file
@ -0,0 +1,4 @@
# The maximum number of unswappable kernel pages
# which may be allocated by ksm (0 for unlimited)
# If unset, defaults to half of total memory
# KSM_MAX_KERNEL_PAGES=
77
ksmctl.c
Normal file
@ -0,0 +1,77 @@
/* Start/stop KSM, for systemd.
 * Copyright (C) 2009, 2011 Red Hat, Inc.
 * Written by Paolo Bonzini <pbonzini@redhat.com>.
 * Based on the original sysvinit script by Dan Kenigsberg <danken@redhat.com>
 * This file is distributed under the GNU General Public License, version 2
 * or later. */

#include <unistd.h>
#include <stdio.h>
#include <limits.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define KSM_MAX_KERNEL_PAGES_FILE "/sys/kernel/mm/ksm/max_kernel_pages"
#define KSM_RUN_FILE "/sys/kernel/mm/ksm/run"

char *program_name;

int usage(void)
{
    fprintf(stderr, "Usage: %s {start|stop}\n", program_name);
    return 1;
}

int write_value(uint64_t value, char *filename)
{
    FILE *fp;
    if (!(fp = fopen(filename, "w")) ||
        fprintf(fp, "%llu\n", (unsigned long long) value) == EOF ||
        fflush(fp) == EOF ||
        fclose(fp) == EOF)
        return 1;

    return 0;
}

uint64_t ksm_max_kernel_pages()
{
    char *var = getenv("KSM_MAX_KERNEL_PAGES");
    char *endptr;
    uint64_t value;
    if (var && *var) {
        value = strtoll(var, &endptr, 0);
        if (value < LLONG_MAX && !*endptr)
            return value;
    }
    /* Unless KSM_MAX_KERNEL_PAGES is set, let KSM munch up to half of
     * total memory. */
    return sysconf(_SC_PHYS_PAGES) / 2;
}

int start(void)
{
    if (access(KSM_MAX_KERNEL_PAGES_FILE, R_OK) >= 0)
        write_value(ksm_max_kernel_pages(), KSM_MAX_KERNEL_PAGES_FILE);
    return write_value(1, KSM_RUN_FILE);
}

int stop(void)
{
    return write_value(0, KSM_RUN_FILE);
}

int main(int argc, char **argv)
{
    program_name = argv[0];
    if (argc < 2) {
        return usage();
    } else if (!strcmp(argv[1], "start")) {
        return start();
    } else if (!strcmp(argv[1], "stop")) {
        return stop();
    } else {
        return usage();
    }
}
139
ksmtuned
Normal file
@ -0,0 +1,139 @@
#!/bin/bash
#
# Copyright 2009 Red Hat, Inc. and/or its affiliates.
# Released under the GPL
#
# Author: Dan Kenigsberg <danken@redhat.com>
#
# ksmtuned - a simple script that controls whether (and with what vigor) ksm
# should search for duplicated pages.
#
# starts ksm when memory committed to qemu processes exceeds a threshold, and
# makes ksm work harder and harder until memory load falls below that
# threshold.
#
# send SIGUSR1 to this process right after a new qemu process is started, or
# following its death, to retune ksm accordingly
#
# needs testing and ironing. contact danken@redhat.com if something breaks.

if [ -f /etc/ksmtuned.conf ]; then
    . /etc/ksmtuned.conf
fi

debug() {
    if [ -n "$DEBUG" ]; then
        s="`/bin/date`: $*"
        [ -n "$LOGFILE" ] && echo "$s" >> "$LOGFILE" || echo "$s"
    fi
}


KSM_MONITOR_INTERVAL=${KSM_MONITOR_INTERVAL:-60}
KSM_NPAGES_BOOST=${KSM_NPAGES_BOOST:-300}
KSM_NPAGES_DECAY=${KSM_NPAGES_DECAY:--50}

KSM_NPAGES_MIN=${KSM_NPAGES_MIN:-64}
KSM_NPAGES_MAX=${KSM_NPAGES_MAX:-1250}
# millisecond sleep between ksm scans for 16Gb server. Smaller servers sleep
# more, bigger sleep less.
KSM_SLEEP_MSEC=${KSM_SLEEP_MSEC:-10}

KSM_THRES_COEF=${KSM_THRES_COEF:-20}
KSM_THRES_CONST=${KSM_THRES_CONST:-2048}

total=`awk '/^MemTotal:/ {print $2}' /proc/meminfo`
debug total $total

npages=0
sleep=$[KSM_SLEEP_MSEC * 16 * 1024 * 1024 / total]
[ $sleep -le 10 ] && sleep=10
debug sleep $sleep
thres=$[total * KSM_THRES_COEF / 100]
if [ $KSM_THRES_CONST -gt $thres ]; then
    thres=$KSM_THRES_CONST
fi
debug thres $thres

KSMCTL () {
    case x$1 in
    xstop)
        echo 0 > /sys/kernel/mm/ksm/run
        ;;
    xstart)
        echo $2 > /sys/kernel/mm/ksm/pages_to_scan
        echo $3 > /sys/kernel/mm/ksm/sleep_millisecs
        echo 1 > /sys/kernel/mm/ksm/run
        ;;
    esac
}

committed_memory () {
    # calculate how much memory is committed to running qemu processes
    local pidlist
    pidlist=$(pgrep -d ' ' -- '^qemu(-(kvm|system-.+)|:.{1,11})$')
    if [ -n "$pidlist" ]; then
        ps -p "$pidlist" -o rsz=
    fi | awk '{ sum += $1 }; END { print 0+sum }'
}

free_memory () {
    awk '/^(MemFree|Buffers|Cached):/ {free += $2}; END {print free}' \
        /proc/meminfo
}

increase_npages() {
    local delta
    delta=${1:-0}
    npages=$[npages + delta]
    if [ $npages -lt $KSM_NPAGES_MIN ]; then
        npages=$KSM_NPAGES_MIN
    elif [ $npages -gt $KSM_NPAGES_MAX ]; then
        npages=$KSM_NPAGES_MAX
    fi
    echo $npages
}


adjust () {
    local free committed
    free=`free_memory`
    committed=`committed_memory`
    debug committed $committed free $free
    if [ $[committed + thres] -lt $total -a $free -gt $thres ]; then
        KSMCTL stop
        debug "$[committed + thres] < $total and free > $thres, stop ksm"
        return 1
    fi
    debug "$[committed + thres] > $total, start ksm"
    if [ $free -lt $thres ]; then
        npages=`increase_npages $KSM_NPAGES_BOOST`
        debug "$free < $thres, boost"
    else
        npages=`increase_npages $KSM_NPAGES_DECAY`
        debug "$free > $thres, decay"
    fi
    KSMCTL start $npages $sleep
    debug "KSMCTL start $npages $sleep"
    return 0
}

function nothing () {
    :
}

loop () {
    trap nothing SIGUSR1
    while true
    do
        sleep $KSM_MONITOR_INTERVAL &
        wait $!
        adjust
    done
}

PIDFILE=${PIDFILE-/var/run/ksmtune.pid}
if touch "$PIDFILE"; then
    loop &
    echo $! > "$PIDFILE"
fi
21
ksmtuned.conf
Normal file
@ -0,0 +1,21 @@
# Configuration file for ksmtuned.

# How long ksmtuned should sleep between tuning adjustments
# KSM_MONITOR_INTERVAL=60

# Millisecond sleep between ksm scans for 16Gb server.
# Smaller servers sleep more, bigger sleep less.
# KSM_SLEEP_MSEC=10

# KSM_NPAGES_BOOST=300
# KSM_NPAGES_DECAY=-50
# KSM_NPAGES_MIN=64
# KSM_NPAGES_MAX=1250

# KSM_THRES_COEF=20
# KSM_THRES_CONST=2048

# uncomment the following if you want ksmtuned debug info

# LOGFILE=/var/log/ksmtuned
# DEBUG=1
12
ksmtuned.service
Normal file
@ -0,0 +1,12 @@
[Unit]
Description=Kernel Samepage Merging (KSM) Tuning Daemon
After=ksm.service
Requires=ksm.service

[Service]
ExecStart=/usr/sbin/ksmtuned
ExecReload=/bin/kill -USR1 $MAINPID
Type=forking

[Install]
WantedBy=multi-user.target
19
kvm-s390x.conf
Normal file
@ -0,0 +1,19 @@
# User changes in this file are preserved across upgrades.
#
# Setting "modprobe kvm nested=1" only enables Nested Virtualization until
# the next reboot or module reload. Uncomment the option below to enable
# the feature permanently.
#
#options kvm nested=1
#
#
# Setting "modprobe kvm hpage=1" only enables Huge Page Backing (1MB)
# support until the next reboot or module reload. Uncomment the option
# below to enable the feature permanently.
#
# Note: - Incompatible with "nested=1". Loading the module will fail.
#       - Dirty page logging will be performed on a 1MB (not 4KB) basis,
#         which can result in a lot of data having to be transferred during
#         migration, and therefore taking very long to converge.
#
#options kvm hpage=1
40
kvm-setup
Normal file
@ -0,0 +1,40 @@
#! /bin/bash

kvm_setup_powerpc () {
    if grep '^platform[[:space:]]*:[[:space:]]*PowerNV' /proc/cpuinfo > /dev/null; then
        # PowerNV platform, which is KVM HV capable

        if [ -z "$SUBCORES" ]; then
            SUBCORES=1
        fi

        # Step 1. Load the KVM HV module
        if ! modprobe -b kvm_hv; then
            return
        fi

        # On POWER8 a host core can only run threads of a single
        # guest, meaning that SMT must be disabled on the host in
        # order to run KVM guests. (Also applies to POWER7, but we
        # don't support that).
        #
        # POWER9 doesn't have this limitation (though it will for hash
        # guests on radix host when that's implemented). So, only set
        # up subcores and disable SMT for POWER8.
        if grep '^cpu[[:space:]]*:[[:space:]]*POWER8' /proc/cpuinfo > /dev/null; then
            # Step 2. Configure subcore mode
            /usr/sbin/ppc64_cpu --subcores-per-core=$SUBCORES

            # Step 3. Disable SMT (multithreading)
            /usr/sbin/ppc64_cpu --smt=off
        fi
    fi
}

case $(uname -m) in
    ppc64|ppc64le)
        kvm_setup_powerpc
        ;;
esac

exit 0
14
kvm-setup.service
Normal file
@ -0,0 +1,14 @@
[Unit]
Description=Perform system configuration to prepare system to run KVM guests
# Offlining CPUs can cause irqbalance to throw warnings if it's running
Before=irqbalance.service
# libvirtd reads CPU topology at startup, so change it before
Before=libvirtd.service

[Service]
Type=oneshot
EnvironmentFile=-/etc/sysconfig/kvm
ExecStart=/usr/lib/systemd/kvm-setup

[Install]
WantedBy=multi-user.target
12
kvm-x86.conf
Normal file
@ -0,0 +1,12 @@
# Setting modprobe kvm_intel/kvm_amd nested = 1
# only enables Nested Virtualization until the next reboot or
# module reload. Uncomment the option applicable
# to your system below to enable the feature permanently.
#
# User changes in this file are preserved across upgrades.
#
# For Intel
#options kvm_intel nested=1
#
# For AMD
#options kvm_amd nested=1
3
kvm.conf
Normal file
@ -0,0 +1,3 @@
#
# User changes in this file are preserved across upgrades.
#
18
kvm.modules
Normal file
@ -0,0 +1,18 @@
#!/bin/sh

case $(uname -m) in
    ppc64)
        grep OPAL /proc/cpuinfo >/dev/null 2>&1 && opal=1

        modprobe -b kvm >/dev/null 2>&1
        modprobe -b kvm-pr >/dev/null 2>&1 && kvm=1
        if [ "$opal" ]; then
            modprobe -b kvm-hv >/dev/null 2>&1
        fi
        ;;
    s390x)
        modprobe -b kvm >/dev/null 2>&1 && kvm=1
        ;;
esac

exit 0
19
qemu-ga.sysconfig
Normal file
@ -0,0 +1,19 @@
# This is a systemd environment file, not a shell script.
# It provides settings for "/lib/systemd/system/qemu-guest-agent.service".

# Comma-separated blacklist of RPCs to disable, or empty list to enable all.
#
# You can get the list of RPC commands using "qemu-ga --blacklist='?'".
# There should be no spaces between commas and commands in the blacklist.
BLACKLIST_RPC=guest-file-open,guest-file-close,guest-file-read,guest-file-write,guest-file-seek,guest-file-flush,guest-exec,guest-exec-status

# Fsfreeze hook script specification.
#
# FSFREEZE_HOOK_PATHNAME=/dev/null : disables the feature.
#
# FSFREEZE_HOOK_PATHNAME=/path/to/executable : enables the feature with the
# specified binary or shell script.
#
# FSFREEZE_HOOK_PATHNAME= : enables the feature with the
# default value (invoke "qemu-ga --help" to interrogate).
FSFREEZE_HOOK_PATHNAME=/etc/qemu-ga/fsfreeze-hook
20
qemu-guest-agent.service
Normal file
@ -0,0 +1,20 @@
[Unit]
Description=QEMU Guest Agent
BindsTo=dev-virtio\x2dports-org.qemu.guest_agent.0.device
After=dev-virtio\x2dports-org.qemu.guest_agent.0.device
IgnoreOnIsolate=True

[Service]
UMask=0077
EnvironmentFile=/etc/sysconfig/qemu-ga
ExecStart=/usr/bin/qemu-ga \
        --method=virtio-serial \
        --path=/dev/virtio-ports/org.qemu.guest_agent.0 \
        --blacklist=${BLACKLIST_RPC} \
        -F${FSFREEZE_HOOK_PATHNAME}
StandardError=syslog
Restart=always
RestartSec=0

[Install]
WantedBy=dev-virtio\x2dports-org.qemu.guest_agent.0.device
1651
qemu-kvm.spec
Normal file
File diff suppressed because it is too large
15
qemu-pr-helper.service
Normal file
@ -0,0 +1,15 @@
[Unit]
Description=Persistent Reservation Daemon for QEMU

[Service]
WorkingDirectory=/tmp
Type=simple
ExecStart=/usr/bin/qemu-pr-helper
PrivateTmp=yes
ProtectSystem=strict
ReadWritePaths=/var/run
RestrictAddressFamilies=AF_UNIX
Restart=always
RestartSec=0

[Install]
9
qemu-pr-helper.socket
Normal file
@ -0,0 +1,9 @@
[Unit]
Description=Persistent Reservation Daemon for QEMU

[Socket]
ListenStream=/run/qemu-pr-helper.sock
SocketMode=0600

[Install]
WantedBy=multi-user.target
1
sources
Normal file
@ -0,0 +1 @@
SHA512 (qemu-3.0.0.tar.xz) = a764302f50b9aca4134bbbc1f361b98e71240cdc7b25600dfe733bf4cf17bd86000bd28357697b08f3b656899dceb9e459350b8d55557817444ed5d7fa380a5a
3
vhost.conf
Normal file
@ -0,0 +1,3 @@
# Increase default vhost memory map limit to match
# KVM's memory slot limit
options vhost max_mem_regions=509