path: root/block
Age  Commit message  Author  (files changed, -/+ lines)
2016-09-28  Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging  (Peter Maydell, 1 file, -10/+5)
* thread-safe tb_flush (Fred, Alex, Sergey, me, Richard, Emilio,... :-)
* license clarification for compiler.h (Felipe)
* glib cflags improvement (Marc-André)
* checkpatch silencing (Paolo)
* SMRAM migration fix (Paolo)
* Replay improvements (Pavel)
* IOMMU notifier improvements (Peter)
* IOAPIC now defaults to version 0x20 (Peter)

# gpg: Signature made Tue 27 Sep 2016 10:57:40 BST
# gpg:                using RSA key 0xBFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream: (28 commits)
  replay: allow replay stopping and restarting
  replay: vmstate for replay module
  replay: move internal data to the structure
  cpus-common: lock-free fast path for cpu_exec_start/end
  tcg: Make tb_flush() thread safe
  cpus-common: Introduce async_safe_run_on_cpu()
  cpus-common: simplify locking for start_exclusive/end_exclusive
  cpus-common: remove redundant call to exclusive_idle()
  cpus-common: always defer async_run_on_cpu work items
  docs: include formal model for TCG exclusive sections
  cpus-common: move exclusive work infrastructure from linux-user
  cpus-common: fix uninitialized variable use in run_on_cpu
  cpus-common: move CPU work item management to common code
  cpus-common: move CPU list management to common code
  linux-user: Add qemu_cpu_is_self() and qemu_cpu_kick()
  linux-user: Use QemuMutex and QemuCond
  cpus: Rename flush_queued_work()
  cpus: Move common code out of {async_, }run_on_cpu()
  cpus: pass CPUState to run_on_cpu helpers
  build-sys: put glib_cflags in QEMU_CFLAGS
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-09-28  linux-aio: fix re-entrant completion processing  (Stefan Hajnoczi, 1 file, -3/+6)
Commit 0ed93d84edabc7656f5c998ae1a346fe8b94ca54 ("linux-aio: process completions from ioq_submit()") added an optimization that processes completions each time ioq_submit() returns with requests in flight. This commit introduces a "Co-routine re-entered recursively" error which can be triggered with -drive format=qcow2,aio=native.

Fam Zheng <famz@redhat.com>, Kevin Wolf <kwolf@redhat.com>, and I debugged the following backtrace:

(gdb) bt
#0  0x00007ffff0a046f5 in raise () at /lib64/libc.so.6
#1  0x00007ffff0a062fa in abort () at /lib64/libc.so.6
#2  0x0000555555ac0013 in qemu_coroutine_enter (co=0x5555583464d0) at util/qemu-coroutine.c:113
#3  0x0000555555a4b663 in qemu_laio_process_completions (s=s@entry=0x555557e2f7f0) at block/linux-aio.c:218
#4  0x0000555555a4b874 in ioq_submit (s=s@entry=0x555557e2f7f0) at block/linux-aio.c:331
#5  0x0000555555a4ba12 in laio_do_submit (fd=fd@entry=13, laiocb=laiocb@entry=0x555559d38ae0, offset=offset@entry=2932727808, type=type@entry=1) at block/linux-aio.c:383
#6  0x0000555555a4bbd3 in laio_co_submit (bs=<optimized out>, s=0x555557e2f7f0, fd=13, offset=2932727808, qiov=0x555559d38e20, type=1) at block/linux-aio.c:402
#7  0x0000555555a4fd23 in bdrv_driver_preadv (bs=bs@entry=0x55555663bcb0, offset=offset@entry=2932727808, bytes=bytes@entry=8192, qiov=qiov@entry=0x555559d38e20, flags=0) at block/io.c:804
#8  0x0000555555a52b34 in bdrv_aligned_preadv (bs=bs@entry=0x55555663bcb0, req=req@entry=0x555559d38d20, offset=offset@entry=2932727808, bytes=bytes@entry=8192, align=align@entry=512, qiov=qiov@entry=0x555559d38e20, flags=0) at block/io.c:1041
#9  0x0000555555a52db8 in bdrv_co_preadv (child=<optimized out>, offset=2932727808, bytes=8192, qiov=qiov@entry=0x555559d38e20, flags=flags@entry=0) at block/io.c:1133
#10 0x0000555555a29629 in qcow2_co_preadv (bs=0x555556635890, offset=6178725888, bytes=8192, qiov=0x555557527840, flags=<optimized out>) at block/qcow2.c:1509
#11 0x0000555555a4fd23 in bdrv_driver_preadv (bs=bs@entry=0x555556635890, offset=offset@entry=6178725888, bytes=bytes@entry=8192, qiov=qiov@entry=0x555557527840, flags=0) at block/io.c:804
#12 0x0000555555a52b34 in bdrv_aligned_preadv (bs=bs@entry=0x555556635890, req=req@entry=0x555559d39000, offset=offset@entry=6178725888, bytes=bytes@entry=8192, align=align@entry=1, qiov=qiov@entry=0x555557527840, flags=0) at block/io.c:1041
#13 0x0000555555a52db8 in bdrv_co_preadv (child=<optimized out>, offset=offset@entry=6178725888, bytes=bytes@entry=8192, qiov=qiov@entry=0x555557527840, flags=flags@entry=0) at block/io.c:1133
#14 0x0000555555a4515a in blk_co_preadv (blk=0x5555566356d0, offset=6178725888, bytes=8192, qiov=0x555557527840, flags=0) at block/block-backend.c:783
#15 0x0000555555a45266 in blk_aio_read_entry (opaque=0x5555577025e0) at block/block-backend.c:991
#16 0x0000555555ac0cfa in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at util/coroutine-ucontext.c:78

It turned out that re-entrant ioq_submit() and completion processing between three requests caused this error. The following check is not sufficient to prevent recursively entering coroutines:

    if (laiocb->co != qemu_coroutine_self()) {
        qemu_coroutine_enter(laiocb->co);
    }

As the following coroutine backtrace shows, not just the current coroutine (self) can be entered.
There might also be other coroutines that are currently entered and transferred control due to the qcow2 lock (CoMutex):

(gdb) qemu coroutine 0x5555583464d0
#0  0x0000555555ac0c90 in qemu_coroutine_switch (from_=from_@entry=0x5555583464d0, to_=to_@entry=0x5555572f9890, action=action@entry=COROUTINE_ENTER) at util/coroutine-ucontext.c:175
#1  0x0000555555abfe54 in qemu_coroutine_enter (co=0x5555572f9890) at util/qemu-coroutine.c:117
#2  0x0000555555ac031c in qemu_co_queue_run_restart (co=co@entry=0x5555583462c0) at util/qemu-coroutine-lock.c:60
#3  0x0000555555abfe5e in qemu_coroutine_enter (co=0x5555583462c0) at util/qemu-coroutine.c:119
#4  0x0000555555a4b663 in qemu_laio_process_completions (s=s@entry=0x555557e2f7f0) at block/linux-aio.c:218
#5  0x0000555555a4b874 in ioq_submit (s=s@entry=0x555557e2f7f0) at block/linux-aio.c:331
#6  0x0000555555a4ba12 in laio_do_submit (fd=fd@entry=13, laiocb=laiocb@entry=0x55555a338b40, offset=offset@entry=2911477760, type=type@entry=1) at block/linux-aio.c:383
#7  0x0000555555a4bbd3 in laio_co_submit (bs=<optimized out>, s=0x555557e2f7f0, fd=13, offset=2911477760, qiov=0x55555a338e80, type=1) at block/linux-aio.c:402
#8  0x0000555555a4fd23 in bdrv_driver_preadv (bs=bs@entry=0x55555663bcb0, offset=offset@entry=2911477760, bytes=bytes@entry=8192, qiov=qiov@entry=0x55555a338e80, flags=0) at block/io.c:804
#9  0x0000555555a52b34 in bdrv_aligned_preadv (bs=bs@entry=0x55555663bcb0, req=req@entry=0x55555a338d80, offset=offset@entry=2911477760, bytes=bytes@entry=8192, align=align@entry=512, qiov=qiov@entry=0x55555a338e80, flags=0) at block/io.c:1041
#10 0x0000555555a52db8 in bdrv_co_preadv (child=<optimized out>, offset=2911477760, bytes=8192, qiov=qiov@entry=0x55555a338e80, flags=flags@entry=0) at block/io.c:1133
#11 0x0000555555a29629 in qcow2_co_preadv (bs=0x555556635890, offset=6157475840, bytes=8192, qiov=0x5555575df720, flags=<optimized out>) at block/qcow2.c:1509
#12 0x0000555555a4fd23 in bdrv_driver_preadv (bs=bs@entry=0x555556635890, offset=offset@entry=6157475840, bytes=bytes@entry=8192, qiov=qiov@entry=0x5555575df720, flags=0) at block/io.c:804
#13 0x0000555555a52b34 in bdrv_aligned_preadv (bs=bs@entry=0x555556635890, req=req@entry=0x55555a339060, offset=offset@entry=6157475840, bytes=bytes@entry=8192, align=align@entry=1, qiov=qiov@entry=0x5555575df720, flags=0) at block/io.c:1041
#14 0x0000555555a52db8 in bdrv_co_preadv (child=<optimized out>, offset=offset@entry=6157475840, bytes=bytes@entry=8192, qiov=qiov@entry=0x5555575df720, flags=flags@entry=0) at block/io.c:1133
#15 0x0000555555a4515a in blk_co_preadv (blk=0x5555566356d0, offset=6157475840, bytes=8192, qiov=0x5555575df720, flags=0) at block/block-backend.c:783
#16 0x0000555555a45266 in blk_aio_read_entry (opaque=0x555557231aa0) at block/block-backend.c:991
#17 0x0000555555ac0cfa in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at util/coroutine-ucontext.c:78

Use the new qemu_coroutine_entered() function instead of comparing against qemu_coroutine_self(). This is correct because:

1. If a coroutine is not entered then it must have yielded to wait for I/O completion. It is therefore safe to enter.

2. If a coroutine is entered then it must be in ioq_submit()/qemu_laio_process_completions() because otherwise it would be yielded while waiting for I/O completion. Therefore it will check laio->ret and return from ioq_submit() instead of yielding, i.e. it's guaranteed not to hang.
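A minimal sketch of the repaired check, using the laiocb/co naming from the backtraces above (illustrative only, not the literal hunk):

    if (!qemu_coroutine_entered(laiocb->co)) {
        /* The coroutine has yielded waiting for I/O completion, so it is
         * safe to enter it here. */
        qemu_coroutine_enter(laiocb->co);
    }
    /* Otherwise the coroutine is already inside
     * ioq_submit()/qemu_laio_process_completions(); it will see the stored
     * return value itself and return instead of yielding, so it must not
     * be entered again. */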
Reported-by: Fam Zheng <famz@redhat.com> Tested-by: Fam Zheng <famz@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Message-id: 1474989516-18255-4-git-send-email-stefanha@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-09-27  replay: allow replay stopping and restarting  (Pavel Dovgalyuk, 1 file, -10/+5)
This patch fixes a bug with stopping and restarting replay through the monitor. Signed-off-by: Pavel Dovgalyuk <pavel.dovgaluk@ispras.ru> Message-Id: <20160926080815.6992.71818.stgit@PASHA-ISP> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-09-23  Merge remote-tracking branch 'remotes/kevin/tags/for-upstream' into staging  (Peter Maydell, 4 files, -6/+26)
Block layer patches

# gpg: Signature made Fri 23 Sep 2016 12:59:46 BST
# gpg:                using RSA key 0x7F09B272C88F2FD6
# gpg: Good signature from "Kevin Wolf <kwolf@redhat.com>"
# Primary key fingerprint: DC3D EB15 9A9A F95D 3D74 56FE 7F09 B272 C88F 2FD6

* remotes/kevin/tags/for-upstream: (33 commits)
  block: Remove BB interface from blockdev-add/del
  qemu-iotests/141: Avoid blockdev-add with id
  block: Avoid printing NULL string in error messages
  qemu-iotests/139: Avoid blockdev-add with id
  qemu-iotests/124: Avoid blockdev-add with id
  qemu-iotests/118: Avoid blockdev-add with id
  qemu-iotests/117: Avoid blockdev-add with id
  qemu-iotests/087: Avoid blockdev-add with id
  qemu-iotests/081: Avoid blockdev-add with id
  qemu-iotests/071: Avoid blockdev-add with id
  qemu-iotests/067: Avoid blockdev-add with id
  qemu-iotests/041: Avoid blockdev-add with id
  qemu-iotests/118: Test media change with qdev name
  block: Accept device model name for block_set_io_throttle
  block: Accept device model name for blockdev-change-medium
  block: Accept device model name for eject
  block: Accept device model name for x-blockdev-remove-medium
  block: Accept device model name for x-blockdev-insert-medium
  block: Accept device model name for blockdev-open/close-tray
  qdev-monitor: Add blk_by_qdev_id()
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-09-23  Merge remote-tracking branch 'remotes/famz/tags/various-pull-request' into staging  (Peter Maydell, 6 files, -73/+26)
# gpg: Signature made Fri 23 Sep 2016 05:58:28 BST
# gpg:                using RSA key 0xCA35624C6A9171C6
# gpg: Good signature from "Fam Zheng <famz@redhat.com>"
# gpg: WARNING: This key is not certified with a trusted signature!
# gpg:          There is no indication that the signature belongs to the owner.
# Primary key fingerprint: 5003 7CB7 9706 0F76 F021 AD56 CA35 624C 6A91 71C6

* remotes/famz/tags/various-pull-request: (23 commits)
  docker: exec $CMD
  docker: Terminate instances at SIGTERM and SIGHUP
  docker: Support showing environment information
  docker: Print used options before doing configure
  docker: Flatten default target list in test-quick
  docker: Update fedora image to latest
  docker: Generate /packages.txt in ubuntu image
  docker: Generate /packages.txt in fedora image
  docker: Generate /packages.txt in centos6 image
  tests: Ignore test-uuid
  Add UUID files to MAINTAINERS
  tests: Add uuid tests
  uuid: Tighten uuid parse
  vl: Switch qemu_uuid to QemuUUID
  configure: Remove detection code for UUID
  tests: No longer dependent on CONFIG_UUID
  crypto: Switch to QEMU UUID API
  vpc: Use QEMU UUID API
  vdi: Use QEMU UUID API
  vhdx: Use QEMU UUID API
  ...

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

# Conflicts:
#	tests/Makefile.include
2016-09-23  block: Add blk_by_dev()  (Kevin Wolf, 1 file, -0/+19)
This finds a BlockBackend given the device model that is attached to it. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com>
2016-09-23  commit: Add 'base' to the reopen queue before 'overlay_bs'  (Alberto Garcia, 1 file, -4/+4)
Now that we're checking for duplicates in the reopen queue, there's no need to force a specific order in which the queue is constructed so we can revert 3db2bd5508c86a1605258bc77c9672d93b5c350e. Since both ways of constructing the queue are now valid, this patch doesn't have any effect on the behavior of QEMU and is not strictly necessary. However it can help us check that the fix for the reopen queue is robust: if it stops working properly at some point, iotest 040 will break. Signed-off-by: Alberto Garcia <berto@igalia.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-09-23block: Add "read-only" to the options QDictAlberto Garcia1-1/+2
This adds the "read-only" option to the QDict. One important effect of this change is that when a child inherits options from its parent, the existing "read-only" mode can be preserved if it was explicitly set previously. This addresses scenarios like this: [E] <- [D] <- [C] <- [B] <- [A] In this case, if we reopen [D] with read-only=off, and later reopen [B], then [D] will not inherit read-only=on from its parent during the bdrv_reopen_queue_child() stage. The BDRV_O_RDWR flag is not removed yet, but its keep in sync with the value of the "read-only" option. Signed-off-by: Alberto Garcia <berto@igalia.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-09-23  qcow2: fix encryption during cow of sectors  (Daniel P. Berrange, 1 file, -1/+1)
Broken in previous commit:

    commit aaa4d20b4972bb1a811ce929502e6741835d584e
    Author: Kevin Wolf <kwolf@redhat.com>
    Date:   Wed Jun 1 15:21:05 2016 +0200

        qcow2: Make copy_sectors() byte based

The copy_sectors() code was originally using the 'sector' parameter for encryption, which was passed in by the caller from the QCowL2Meta.offset field (aka the guest logical offset). After the change, the code is using 'cluster_offset' which was passed in from the QCow2L2Meta.alloc_offset field (aka the host physical offset). This would cause the data to be encrypted using an incorrect initialization vector which will in turn cause later reads to return garbage.

Although current qcow2 built-in encryption is blocked from usage in the emulator, one could still hit this if writing to the file via qemu-{img,io,nbd} commands.

Cc: qemu-stable@nongnu.org
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Signed-off-by: Kevin Wolf <kwolf@redhat.com>
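The gist of the fix, as a hedged sketch (encrypt_sectors_with_iv() is a hypothetical stand-in for qcow2's encryption helper; m->offset and m->alloc_offset are the QCowL2Meta fields named above):

    int64_t guest_offset = m->offset + offset_in_cluster;        /* logical: feeds the IV */
    int64_t host_offset  = m->alloc_offset + offset_in_cluster;  /* physical: where data lands */

    /* The IV must be derived from the guest-visible sector number... */
    encrypt_sectors_with_iv(s, guest_offset >> BDRV_SECTOR_BITS, buf, nb_sectors);
    /* ...while the encrypted data is still written at host_offset. */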
2016-09-23  Merge remote-tracking branch 'remotes/bonzini/tags/for-upstream' into staging  (Peter Maydell, 1 file, -5/+8)
* More KVM LAPIC fixes
* fix divide-by-zero regression on libiscsi SG devices
* fix qemu-char segfault
* add scripts/show-fixed-bugs.sh

# gpg: Signature made Thu 22 Sep 2016 19:20:57 BST
# gpg:                using RSA key 0xBFFBD25F78C7AE83
# gpg: Good signature from "Paolo Bonzini <bonzini@gnu.org>"
# gpg:                 aka "Paolo Bonzini <pbonzini@redhat.com>"
# Primary key fingerprint: 46F5 9FBD 57D6 12E7 BFD4 E2F7 7E15 100C CD36 69B1
#      Subkey fingerprint: F133 3857 4B66 2389 866C 7682 BFFB D25F 78C7 AE83

* remotes/bonzini/tags/for-upstream:
  kvm: fix events.flags (KVM_VCPUEVENT_VALID_SMM) overwritten by 0
  scripts: Add a script to check for bug URLs in the git log
  msmouse: Fix segfault caused by free the chr before chardev cleanup.
  iscsi: Fix divide-by-zero regression on raw SG devices
  kvm: apic: set APIC base as part of kvm_apic_put
  target-i386: introduce kvm_put_one_msr

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-09-23  vpc: Use QEMU UUID API  (Fam Zheng, 1 file, -7/+3)
Previously we conditionally generated footer->uuid, when libuuid was available. Now that we have a built-in implementation, we can switch to it. Signed-off-by: Fam Zheng <famz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Message-Id: <1474432046-325-6-git-send-email-famz@redhat.com>
2016-09-23  vdi: Use QEMU UUID API  (Fam Zheng, 1 file, -56/+17)
The UUID operations we need from libuuid are fully supported by QEMU UUID implementation. Use it, and remove the unused code. Signed-off-by: Fam Zheng <famz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Message-Id: <1474432046-325-5-git-send-email-famz@redhat.com>
2016-09-23  vhdx: Use QEMU UUID API  (Fam Zheng, 3 files, -9/+5)
This removes our dependency to libuuid, so that the driver can always be built. Similar to how we handled data plane configure options, --enable-vhdx and --disable-vhdx are also changed to a nop with a message saying it's obsolete. Signed-off-by: Fam Zheng <famz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Message-Id: <1474432046-325-4-git-send-email-famz@redhat.com>
2016-09-23  util: Add UUID API  (Fam Zheng, 1 file, -1/+1)
A number of different places across the code base use CONFIG_UUID. Some of them are soft dependencies, some are not built if libuuid is not available, some come with a dummy fallback, and some throw runtime errors. This is hard to maintain, and hard for users to reason about. Since UUID is a simple standard with only a small number of operations, it is cleaner to have central support in libqemuutil. This patch adds qemu_uuid_* functions that all UUID users in the code base can rely on. Except for qemu_uuid_generate, which is new code, all other functions are just copied from existing fallbacks in other files. Note that qemu_uuid_parse is moved without updating the function signature to use QemuUUID, to keep this patch simple. Signed-off-by: Fam Zheng <famz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Message-Id: <1474432046-325-2-git-send-email-famz@redhat.com>
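A rough usage sketch of the new central API (function names follow the description above; the exact signatures and the UUID_FMT_LEN macro are assumptions):

    #include "qemu/uuid.h"

    QemuUUID uuid;
    char out[UUID_FMT_LEN + 1];

    qemu_uuid_generate(&uuid);      /* the only genuinely new code */
    qemu_uuid_unparse(&uuid, out);  /* copied from an existing fallback */

    /* qemu_uuid_parse() is moved as-is and, as noted above, still takes a
     * plain byte buffer rather than a QemuUUID. */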
2016-09-22  iscsi: Fix divide-by-zero regression on raw SG devices  (Eric Blake, 1 file, -5/+8)
When qemu uses iscsi devices in sg mode, iscsilun->block_size is left at 0. Prior to commits cf081fca and similar, when block limits were tracked in sectors, this did not matter: various block limits were just left at 0. But when we started scaling by block size, this caused SIGFPE. Then, in a later patch, commit a5b8dd2c added an assertion to bdrv_open_common() that request_alignment is always non-zero; which was not true for SG mode. Rather than relax that assertion, we can just provide a sane value (we don't know of any SG device with a block size smaller than qemu's default sizing of 512 bytes). One possible solution for SG mode is to just blindly skip ALL of iscsi_refresh_limits(), since we already short circuit so many other things in sg mode. But this patch takes a slightly more conservative approach, and merely guarantees that scaling will succeed, while still using multiples of the original size where possible. Resulting limits may still be zero in SG mode (that is, we mostly only fix block_size used as a denominator or which affect assertions, not all uses). Reported-by: Holger Schranz <holger@fam-schranz.de> Signed-off-by: Eric Blake <eblake@redhat.com> CC: qemu-stable@nongnu.org Message-Id: <1473283640-15756-1-git-send-email-eblake@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
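A minimal sketch of the idea, not the exact hunk (IscsiLun and bs->bl.request_alignment as referenced above; the fallback is QEMU's default 512-byte sector size):

    IscsiLun *iscsilun = bs->opaque;
    unsigned block_size = iscsilun->block_size ?: BDRV_SECTOR_SIZE;

    /* Guarantee a non-zero alignment so the bdrv_open_common() assertion
     * holds and later divisions cannot trap, even in SG mode where the
     * target reports no block size. */
    bs->bl.request_alignment = block_size;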
2016-09-20  commit: get the overlay node before manipulating the backing chain  (Alberto Garcia, 1 file, -2/+1)
The 'block-commit' command has a 'top' parameter to specify the topmost node from which the data is going to be copied. [E] <- [D] <- [C] <- [B] <- [A] In this case if [C] is the top node then this is the result: [E] <- [B] <- [A] [B] must be modified so its backing image string points to [E] instead of [C]. commit_start() takes care of reopening [B] in read-write mode, and commit_complete() puts it back in read-only mode once the operation has finished. In order to find [B] (the overlay node) we look for the node that has [C] (the top node) as its backing image. However in commit_complete() we're doing it after [C] has been removed from the chain, so [B] is never found and remains in read-write mode. This patch gets the overlay node before the backing chain is manipulated. Signed-off-by: Alberto Garcia <berto@igalia.com> Message-id: 1471836963-28548-1-git-send-email-berto@igalia.com Signed-off-by: Max Reitz <mreitz@redhat.com>
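For reference, the overlay lookup described above amounts to a walk down the backing chain; a hedged sketch (QEMU's bdrv_find_overlay() does essentially this):

    static BlockDriverState *find_overlay(BlockDriverState *active,
                                          BlockDriverState *top)
    {
        BlockDriverState *overlay = active;

        /* Descend from the active layer until we reach the node whose
         * backing file is 'top'. Once 'top' has been dropped from the
         * chain this walk can no longer find it, hence the fix to look
         * up the overlay before manipulating the chain. */
        while (overlay && backing_bs(overlay) != top) {
            overlay = backing_bs(overlay);
        }
        return overlay; /* NULL if 'top' is no longer in the chain */
    }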
2016-09-20  blockdev: Modularize nfs block driver  (Colin Lord, 1 file, -0/+1)
Modularizes the nfs block driver so that it gets dynamically loaded. Signed-off-by: Colin Lord <clord@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-id: 1471008424-16465-5-git-send-email-clord@redhat.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2016-09-20  blockdev: Add dynamic module loading for block drivers  (Marc Marí, 1 file, -2/+1)
Extend the current module interface to allow block drivers to be loaded dynamically on request. The only block drivers that can be converted into modules are the drivers that don't perform any init operation except for registering themselves. In addition, only the protocol drivers are being modularized, as they are the only ones which see significant performance benefits. The format drivers do not generally link to external libraries, so modularizing them is of no benefit from a performance perspective. All the necessary module information is located in a new structure found in module_block.h. This spoils the purpose of 5505e8b76f (block/dmg: make it modular). Before this patch, if module build is enabled, block-dmg.so is linked to libbz2, whereas the main binary is not. In downstream, theoretically, it means only the qemu-block-extra package depends on libbz2, while the main QEMU package need not. With this patch, we (temporarily) change the case so that the main QEMU depends on libbz2 again. Signed-off-by: Marc Marí <markmb@redhat.com> Signed-off-by: Colin Lord <clord@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-id: 1471008424-16465-4-git-send-email-clord@redhat.com Reviewed-by: Max Reitz <mreitz@redhat.com> [mreitz: Do a signed comparison against the length of block_driver_modules[], so it will not cause a compile error when empty] Signed-off-by: Max Reitz <mreitz@redhat.com>
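A hedged sketch of the lookup this enables (field and helper names are assumptions based on the block_driver_modules[] note above):

    int i;

    for (i = 0; i < (int)ARRAY_SIZE(block_driver_modules); ++i) {
        if (block_driver_modules[i].protocol_name &&
            !strcmp(block_driver_modules[i].protocol_name, protocol)) {
            /* Loading the shared object runs its registration-only init,
             * after which the normal driver lookup succeeds. The signed
             * cast keeps this compiling when the table is empty. */
            block_module_load_one(block_driver_modules[i].library_name);
            break;
        }
    }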
2016-09-20  blockdev: prepare iSCSI block driver for dynamic loading  (Colin Lord, 1 file, -36/+0)
This commit moves the initialization of the QemuOptsList qemu_iscsi_opts struct out of block/iscsi.c in order to allow the iscsi module to be dynamically loaded. Signed-off-by: Colin Lord <clord@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-id: 1471008424-16465-2-git-send-email-clord@redhat.com Reviewed-by: Max Reitz <mreitz@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
2016-09-19  crypto: make PBKDF iterations configurable for LUKS format  (Daniel P. Berrange, 1 file, -0/+6)
As protection against bruteforcing passphrases, the PBKDF algorithm is tuned by counting the number of iterations needed to produce 1 second of running time. If the machine that the image will be used on is much faster than the machine where the image is created, it can be desirable to raise the number of iterations. This change adds a new 'iter-time' property that allows the user to choose the iteration wallclock time. Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
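Roughly, the tuning described above scales the benchmarked per-second iteration count by the requested wallclock time; a sketch with a hypothetical benchmark helper (count_iters_for_one_second() is not a real QEMU symbol):

    uint64_t iters = count_iters_for_one_second(hash_alg, key, nkey,
                                                salt, nsalt, errp);

    if (iter_time_ms == 0) {
        iter_time_ms = 1000;                /* default: about 1s of PBKDF work */
    }
    iters = iters * iter_time_ms / 1000;    /* honour the new 'iter-time' */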
2016-09-15  sheepdog: remove useless casts  (Laurent Vivier, 1 file, -2/+2)
This patch is the result of coccinelle script scripts/coccinelle/typecast.cocci CC: Hitoshi Mitake <mitake.hitoshi@lab.ntt.co.jp> CC: qemu-block@nongnu.org Signed-off-by: Laurent Vivier <lvivier@redhat.com> Reviewed-by: Hitoshi Mitake <mitake.hitoshi@lab.ntt.co.jp> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2016-09-15  curl: Operate on zero-length file  (Tomáš Golembiovský, 1 file, -4/+21)
Another attempt to fix bug 1596870. When creating a new disk backed by a remote file accessed via HTTPS and the backing file has zero length, qemu-img terminates with an uninformative error message:

    qemu-img: disk.qcow2: CURL: Error opening file:

While it may not make much sense to operate on an empty file, other block backends (e.g. the raw backend for regular files) seem to allow it. This patch fixes it for the curl backend and improves the reported error. Signed-off-by: Tomáš Golembiovský <tgolembi@redhat.com> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2016-09-15  Remove unused function declarations  (Ladi Prosek, 1 file, -1/+0)
Unused function declarations were found using a simple gcc plugin and manually verified by grepping the sources. Signed-off-by: Ladi Prosek <lprosek@redhat.com> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2016-09-13  Merge remote-tracking branch 'remotes/stefanha/tags/block-pull-request' into staging  (Peter Maydell, 7 files, -51/+870)
Pull request v2:

 * Fixed qcow2 sanitizer warnings [Peter]
 * Renamed get_error test cases to get_error_all to avoid tripping "error:" grep scripts [Peter]
 * Added Fam's iothread stop patch

# gpg: Signature made Tue 13 Sep 2016 11:02:30 BST
# gpg:                using RSA key 0x9CA4ABB381AB73C8
# gpg: Good signature from "Stefan Hajnoczi <stefanha@redhat.com>"
# gpg:                 aka "Stefan Hajnoczi <stefanha@gmail.com>"
# Primary key fingerprint: 8695 A8BF D3F9 7CDA AC35 775A 9CA4 ABB3 81AB 73C8

* remotes/stefanha/tags/block-pull-request:
  iothread: Stop threads before main() quits
  tests: fix qvirtqueue_kick
  MAINTAINERS: add maintainer for replication support
  replication driver in blockdev-add
  tests: add unit test case for replication
  replication: Implement new driver for block replication
  replication: Introduce new APIs to do replication operation
  configure: support replication
  mirror: auto complete active commit
  docs: block replication's description
  block: Link backup into block core
  Backup: export interfaces for extra serialization
  Backup: clear all bitmap when doing block checkpoint
  block: unblock backup operations in backing file
  virtio-blk: rename virtio_device_info to virtio_blk_info
  linux-aio: process completions from ioq_submit()
  linux-aio: split processing events function
  linux-aio: consume events in userspace instead of calling io_getevents
  qcow2: avoid memcpy(dst, NULL, len)

Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
2016-09-13  replication: Implement new driver for block replication  (Wen Congyang, 2 files, -0/+660)
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com> Signed-off-by: Changlong Xie <xiecl.fnst@cn.fujitsu.com> Signed-off-by: Wang WeiWei <wangww.fnst@cn.fujitsu.com> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com> Signed-off-by: Gonglei <arei.gonglei@huawei.com> Message-id: 1469602913-20979-10-git-send-email-xiecl.fnst@cn.fujitsu.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-09-13  mirror: auto complete active commit  (Wen Congyang, 1 file, -4/+9)
Auto-complete the mirror job in the background to prevent it from blocking synchronously. Signed-off-by: Wen Congyang <wency@cn.fujitsu.com> Signed-off-by: Changlong Xie <xiecl.fnst@cn.fujitsu.com> Signed-off-by: Wang WeiWei <wangww.fnst@cn.fujitsu.com> Message-id: 1469602913-20979-7-git-send-email-xiecl.fnst@cn.fujitsu.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-09-13  block: Link backup into block core  (Wen Congyang, 1 file, -1/+1)
Some programs that add a dependency on it will use the block layer directly. Signed-off-by: Wen Congyang <wency@cn.fujitsu.com> Signed-off-by: Changlong Xie <xiecl.fnst@cn.fujitsu.com> Signed-off-by: Wang WeiWei <wangww.fnst@cn.fujitsu.com> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com> Signed-off-by: Gonglei <arei.gonglei@huawei.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com> Message-id: 1469602913-20979-5-git-send-email-xiecl.fnst@cn.fujitsu.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-09-13  Backup: export interfaces for extra serialization  (Changlong Xie, 1 file, -7/+34)
Normal backup (sync='none') workflow:

step 1. NBD performs an I/O write from client to server
        qcow2_co_writev
         bdrv_co_writev
         ...
           bdrv_aligned_pwritev
            notifier_with_return_list_notify -> backup_do_cow
             bdrv_driver_pwritev // write new contents

step 2. drive-backup sync=none
        backup_do_cow
        {
            wait_for_overlapping_requests
            cow_request_begin
            for (; start < end; start++) {
                bdrv_co_readv_no_serialising // read old contents from Secondary disk
                bdrv_co_writev               // write old contents to hidden-disk
            }
            cow_request_end
        }

step 3. Then roll back to "step 1" to write new contents to the Secondary disk.

For replication, we must make sure that we only read the old contents from the Secondary disk in order to keep the contents consistent.

1) Replication workflow of the Secondary side. (The ASCII diagram from the original commit is garbled in this dump; it shows virtio-blk sitting on the replication driver (3) and active-disk (4), active-disk backed by hidden-disk (5), hidden-disk backed by the Secondary disk (6), the NBD server (1) writing into the Secondary disk, and drive-backup sync=none (2) copying old data from the Secondary disk into hidden-disk.)

Hence, we need these interfaces to implement coarse-grained serialization between the COW of the Secondary disk and the read operation of replication.

Example code showing how to use them:

    #include "block/block_backup.h"

    static coroutine_fn int xxx_co_readv()
    {
        CowRequest req;
        BlockJob *job = secondary_disk->bs->job;

        if (job) {
            backup_wait_for_overlapping_requests(job, start, end);
            backup_cow_request_begin(&req, job, start, end);
            ret = bdrv_co_readv();
            backup_cow_request_end(&req);
            goto out;
        }
        ret = bdrv_co_readv();
    out:
        return ret;
    }

Signed-off-by: Changlong Xie <xiecl.fnst@cn.fujitsu.com>
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
Signed-off-by: Wang WeiWei <wangww.fnst@cn.fujitsu.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Message-id: 1469602913-20979-4-git-send-email-xiecl.fnst@cn.fujitsu.com
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-09-13  Backup: clear all bitmap when doing block checkpoint  (Wen Congyang, 1 file, -0/+18)
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com> Signed-off-by: Changlong Xie <xiecl.fnst@cn.fujitsu.com> Signed-off-by: Wang WeiWei <wangww.fnst@cn.fujitsu.com> Signed-off-by: zhanghailiang <zhang.zhanghailiang@huawei.com> Signed-off-by: Gonglei <arei.gonglei@huawei.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Message-id: 1469602913-20979-3-git-send-email-xiecl.fnst@cn.fujitsu.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-09-13  linux-aio: process completions from ioq_submit()  (Roman Pen, 1 file, -2/+22)
In order to reduce completion latency it makes sense to harvest completed requests ASAP. A very fast backend device can complete requests just after submission, so it is worth checking the ring buffer for completed requests directly after io_submit() has been called.

Indeed, this patch reduces the completion latencies and increases the overall throughput. E.g. the following are the percentiles of the number of completed requests at once:

            1th  10th  20th  30th  40th  50th  60th  70th  80th  90th  99.99th
    Before    2     4    42   112   128   128   128   128   128   128      128
    After     1     1     4    14    33    45    47    48    50    51      108

That means that before the current patch is applied the ring buffer is observed as full (128 requests were consumed at once) in 60% of calls. After the patch is applied the distribution of the number of completed requests is "smoother" and the queue (requests in flight) is almost never full.

The fio read results are the following (write results are almost the same and are not shown here):

Before
------
job: (groupid=0, jobs=8): err= 0: pid=2227: Tue Jul 19 11:29:50 2016
  Description  : [Emulation of Storage Server Access Pattern]
  read : io=54681MB, bw=1822.7MB/s, iops=179779, runt= 30001msec
    slat (usec): min=172, max=16883, avg=338.35, stdev=109.66
    clat (usec): min=1, max=21977, avg=1051.45, stdev=299.29
     lat (usec): min=317, max=22521, avg=1389.83, stdev=300.73
    clat percentiles (usec):
     |  1.00th=[  346],  5.00th=[  596], 10.00th=[  708], 20.00th=[  852],
     | 30.00th=[  932], 40.00th=[  996], 50.00th=[ 1048], 60.00th=[ 1112],
     | 70.00th=[ 1176], 80.00th=[ 1256], 90.00th=[ 1384], 95.00th=[ 1496],
     | 99.00th=[ 1800], 99.50th=[ 1928], 99.90th=[ 2320], 99.95th=[ 2672],
     | 99.99th=[ 4704]
    bw (KB /s): min=205229, max=553181, per=12.50%, avg=233278.26, stdev=18383.51

After
-----
job: (groupid=0, jobs=8): err= 0: pid=2220: Tue Jul 19 11:31:51 2016
  Description  : [Emulation of Storage Server Access Pattern]
  read : io=57637MB, bw=1921.2MB/s, iops=189529, runt= 30002msec
    slat (usec): min=169, max=20636, avg=329.61, stdev=124.18
    clat (usec): min=2, max=19592, avg=988.78, stdev=251.04
     lat (usec): min=381, max=21067, avg=1318.42, stdev=243.58
    clat percentiles (usec):
     |  1.00th=[  310],  5.00th=[  580], 10.00th=[  748], 20.00th=[  876],
     | 30.00th=[  908], 40.00th=[  948], 50.00th=[ 1012], 60.00th=[ 1064],
     | 70.00th=[ 1080], 80.00th=[ 1128], 90.00th=[ 1224], 95.00th=[ 1288],
     | 99.00th=[ 1496], 99.50th=[ 1608], 99.90th=[ 1960], 99.95th=[ 2256],
     | 99.99th=[ 5408]
    bw (KB /s): min=212149, max=390160, per=12.49%, avg=245746.04, stdev=11606.75

Throughput increased from 1822MB/s to 1921MB/s, and average completion latencies decreased from 1051us to 988us.

Signed-off-by: Roman Pen <roman.penyaev@profitbricks.com>
Message-id: 1468931263-32667-4-git-send-email-roman.penyaev@profitbricks.com
Cc: Stefan Hajnoczi <stefanha@redhat.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: qemu-devel@nongnu.org
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-09-13  linux-aio: split processing events function  (Roman Pen, 1 file, -10/+21)
Prepare processing events function to be called from ioq_submit(), thus split function on two parts: the first harvests completed IO requests, the second submits pending requests. Signed-off-by: Roman Pen <roman.penyaev@profitbricks.com> Message-id: 1468931263-32667-3-git-send-email-roman.penyaev@profitbricks.com Cc: Stefan Hajnoczi <stefanha@redhat.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: qemu-devel@nongnu.org Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-09-13  linux-aio: consume events in userspace instead of calling io_getevents  (Roman Pen, 1 file, -26/+99)
The AIO context in userspace is represented as a simple ring buffer, which can be consumed directly without entering the kernel; this can bring some performance gain. QEMU does not use a timeout value when waiting for event completions, so we can consume all events from userspace. Signed-off-by: Roman Pen <roman.penyaev@profitbricks.com> Message-id: 1468931263-32667-2-git-send-email-roman.penyaev@profitbricks.com Cc: Stefan Hajnoczi <stefanha@redhat.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: qemu-devel@nongnu.org Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
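The shape of the userspace harvest, as a condensed sketch (the struct mirrors the kernel's AIO ring ABI and io_context_t/io_event come from <libaio.h>; the magic/compat checks and exact barrier placement of the real patch are omitted, and the details here are assumptions):

    struct aio_ring {
        unsigned id, nr, head, tail;
        unsigned magic, compat_features, incompat_features, header_length;
        struct io_event io_events[];        /* completed events live here */
    };

    static unsigned ring_peek(io_context_t ctx, struct io_event **events)
    {
        struct aio_ring *ring = (struct aio_ring *)ctx;
        unsigned head = ring->head, tail = ring->tail;

        if (head == tail) {
            return 0;                       /* nothing completed yet */
        }
        smp_rmb();                          /* read events only after 'tail' */
        *events = ring->io_events + head;
        return tail >= head ? tail - head : ring->nr - head;
    }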
2016-09-13  qcow2: avoid memcpy(dst, NULL, len)  (Stefan Hajnoczi, 2 files, -2/+7)
Section "7.1.4 Use of library functions" in the C99 standard says: If an argument to a function has an invalid value (such as [...] a null pointer [...]) [...] the behavior is undefined. Additionally the "searching and sorting" functions are specified as requiring valid pointer values as described in 7.1.4. This patch fixes the following sanitizer errors: block/qcow2.c:1807:41: runtime error: null pointer passed as argument 2, which is declared to never be null block/qcow2-cluster.c:86:26: runtime error: null pointer passed as argument 2, which is declared to never be null Reported-by: Peter Maydell <peter.maydell@linaro.org> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Message-id: 1473758138-19260-1-git-send-email-stefanha@redhat.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-09-13  block/gluster: add support to choose libgfapi logfile  (Prasanna Kumar Kalever, 1 file, -4/+38)
Currently all libgfapi logs default to '/dev/stderr', as that was hardcoded in the call to the glfs logging API. When the debug level is set to DEBUG/TRACE, the gfapi logs become huge and fill/overflow the console view.

This patch adds a command-line option to specify a log file path, which allows logging to the specified file and also helps in persisting the gfapi logs.

Usage:
------
* URI style:
  -drive file=gluster://hostname/volname/image.qcow2,file.debug=9,\
         file.logfile=/var/log/qemu/qemu-gfapi.log

* JSON style:
  'json:{
      "driver": "qcow2",
      "file": {
          "driver": "gluster",
          "volume": "volname",
          "path": "image.qcow2",
          "debug": "9",
          "logfile": "/var/log/qemu/qemu-gfapi.log",
          "server": [
              { "type": "tcp",  "host": "1.2.3.4",  "port": 24007 },
              { "type": "unix", "socket": "/var/run/glusterd.socket" }
          ]
      }
  }'

Reviewed-by: Jeff Cody <jcody@redhat.com>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Prasanna Kumar Kalever <prasanna.kalever@redhat.com>
Signed-off-by: Jeff Cody <jcody@redhat.com>
2016-09-05  qcow2: fix iovec size at qcow2_co_pwritev_compressed  (Pavel Butsykin, 1 file, -1/+1)
Using bytes as the size is more exact than s->cluster_size, although qemu_iovec_to_buf() will not allow going beyond the qiov anyway. Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-09-05  drive-backup: added support for data compression  (Pavel Butsykin, 1 file, -1/+11)
The idea is simple - backup is "written-once" data. It is written block by block and it is large enough. It would be nice to save storage space and compress it. The patch adds a flag to the qmp/hmp drive-backup command which enables block compression. Compression should be implemented in the format driver to enable this feature. There are some limitations of the format driver to allow compressed writes. We can write data only once. Though for backup this is perfectly fine. These limitations are maintained by the driver and the error will be reported if we are doing something wrong. Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Denis V. Lunev <den@openvz.org> CC: Jeff Cody <jcody@redhat.com> CC: Markus Armbruster <armbru@redhat.com> CC: Eric Blake <eblake@redhat.com> CC: John Snow <jsnow@redhat.com> CC: Stefan Hajnoczi <stefanha@redhat.com> CC: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-09-05  block/io: turn on dirty_bitmaps for the compressed writes  (Pavel Butsykin, 1 file, -9/+5)
The assert was previously added in:

    commit 1755da16e32c15b22a521e8a38539e4b5cf367f3
    Author: Paolo Bonzini <pbonzini@redhat.com>
    Date:   Thu Oct 18 16:49:18 2012 +0200

        block: introduce new dirty bitmap functionality

Now the compressed write always runs in a coroutine and the bits are set after the write, so we can turn on the dirty bitmaps for compressed writes.

Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Denis V. Lunev <den@openvz.org> CC: Jeff Cody <jcody@redhat.com> CC: Markus Armbruster <armbru@redhat.com> CC: Eric Blake <eblake@redhat.com> CC: John Snow <jsnow@redhat.com> CC: Stefan Hajnoczi <stefanha@redhat.com> CC: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-09-05  block: remove BlockDriver.bdrv_write_compressed  (Pavel Butsykin, 2 files, -37/+2)
There are no block drivers left that implement the old .bdrv_write_compressed interface, so it can be removed. Also now we have no need to use the bdrv_pwrite_compressed function and we can remove it entirely. Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Denis V. Lunev <den@openvz.org> CC: Jeff Cody <jcody@redhat.com> CC: Markus Armbruster <armbru@redhat.com> CC: Eric Blake <eblake@redhat.com> CC: John Snow <jsnow@redhat.com> CC: Stefan Hajnoczi <stefanha@redhat.com> CC: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-09-05  qcow: cleanup qcow_co_pwritev_compressed to avoid the recursion  (Pavel Butsykin, 1 file, -17/+7)
Now that the function uses a vector instead of a buffer, there is no need to use recursive code. Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Denis V. Lunev <den@openvz.org> CC: Jeff Cody <jcody@redhat.com> CC: Markus Armbruster <armbru@redhat.com> CC: Eric Blake <eblake@redhat.com> CC: John Snow <jsnow@redhat.com> CC: Stefan Hajnoczi <stefanha@redhat.com> CC: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-09-05  qcow: add qcow_co_pwritev_compressed  (Pavel Butsykin, 1 file, -67/+42)
Added implementation of the qcow_co_pwritev_compressed function that will allow us to safely use compressed writes for the qcow from running VMs. Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Denis V. Lunev <den@openvz.org> CC: Jeff Cody <jcody@redhat.com> CC: Markus Armbruster <armbru@redhat.com> CC: Eric Blake <eblake@redhat.com> CC: John Snow <jsnow@redhat.com> CC: Stefan Hajnoczi <stefanha@redhat.com> CC: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-09-05  vmdk: add vmdk_co_pwritev_compressed  (Pavel Butsykin, 1 file, -50/+5)
Added implementation of the vmdk_co_pwritev_compressed function that will allow us to safely use compressed writes for the vmdk from running VMs. Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Denis V. Lunev <den@openvz.org> CC: Jeff Cody <jcody@redhat.com> CC: Markus Armbruster <armbru@redhat.com> CC: Eric Blake <eblake@redhat.com> CC: John Snow <jsnow@redhat.com> CC: Stefan Hajnoczi <stefanha@redhat.com> CC: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-09-05  qcow2: cleanup qcow2_co_pwritev_compressed to avoid the recursion  (Pavel Butsykin, 1 file, -17/+7)
Now that the function uses a vector instead of a buffer, there is no need to use recursive code. Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Denis V. Lunev <den@openvz.org> CC: Jeff Cody <jcody@redhat.com> CC: Markus Armbruster <armbru@redhat.com> CC: Eric Blake <eblake@redhat.com> CC: John Snow <jsnow@redhat.com> CC: Stefan Hajnoczi <stefanha@redhat.com> CC: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-09-05  qcow2: add qcow2_co_pwritev_compressed  (Pavel Butsykin, 1 file, -74/+50)
Added implementation of the qcow2_co_pwritev_compressed function that will allow us to safely use compressed writes for the qcow2 from running VMs. Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Denis V. Lunev <den@openvz.org> CC: Jeff Cody <jcody@redhat.com> CC: Markus Armbruster <armbru@redhat.com> CC: Eric Blake <eblake@redhat.com> CC: John Snow <jsnow@redhat.com> CC: Stefan Hajnoczi <stefanha@redhat.com> CC: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-09-05  block/io: reuse bdrv_co_pwritev() for write compressed  (Pavel Butsykin, 1 file, -16/+40)
For bdrv_pwrite_compressed() it looks like most of the coroutine-creating code is duplicated in bdrv_prwv_co(), so we can just add a flag (BDRV_REQ_WRITE_COMPRESSED) and use bdrv_prwv_co() as the generic path. In the end we get a coroutine-oriented function for compressed writes by using bdrv_co_pwritev/blk_co_pwritev with the BDRV_REQ_WRITE_COMPRESSED flag. Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Denis V. Lunev <den@openvz.org> CC: Jeff Cody <jcody@redhat.com> CC: Markus Armbruster <armbru@redhat.com> CC: Eric Blake <eblake@redhat.com> CC: John Snow <jsnow@redhat.com> CC: Stefan Hajnoczi <stefanha@redhat.com> CC: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
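From a caller's point of view a compressed write then becomes an ordinary vectored write with a flag; a hedged sketch (flag and function names as described in this series):

    struct iovec iov = { .iov_base = buf, .iov_len = bytes };
    QEMUIOVector qiov;
    int ret;

    qemu_iovec_init_external(&qiov, &iov, 1);
    /* Same coroutine path as normal writes, just with the new flag set. */
    ret = blk_co_pwritev(blk, offset, qiov.size, &qiov,
                         BDRV_REQ_WRITE_COMPRESSED);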
2016-09-05  block: Convert bdrv_pwrite_compressed() to BdrvChild  (Pavel Butsykin, 2 files, -2/+3)
Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com> Signed-off-by: Denis V. Lunev <den@openvz.org> Reviewed-by: Eric Blake <eblake@redhat.com> CC: Jeff Cody <jcody@redhat.com> CC: Markus Armbruster <armbru@redhat.com> CC: Eric Blake <eblake@redhat.com> CC: John Snow <jsnow@redhat.com> CC: Stefan Hajnoczi <stefanha@redhat.com> CC: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-09-05  block: switch blk_write_compressed() to byte-based interface  (Pavel Butsykin, 2 files, -34/+11)
This is a preparatory patch, which continues the general trend of the transition to the byte-based interfaces. bdrv_check_request() and blk_check_request() are no longer used, thus we can remove them. Signed-off-by: Pavel Butsykin <pbutsykin@virtuozzo.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Signed-off-by: Denis V. Lunev <den@openvz.org> CC: Jeff Cody <jcody@redhat.com> CC: Markus Armbruster <armbru@redhat.com> CC: Eric Blake <eblake@redhat.com> CC: John Snow <jsnow@redhat.com> CC: Stefan Hajnoczi <stefanha@redhat.com> CC: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-09-05  block: Accept node-name for block-stream  (Kevin Wolf, 1 file, -0/+16)
In order to remove the necessity to use BlockBackend names in the external API, we want to allow node-names everywhere. This converts block-stream to accept a node-name without lifting the restriction that we're operating at a root node. In case of an invalid device name, the command returns the GenericError error class now instead of DeviceNotFound, because this is what qmp_get_root_bs() returns. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Reviewed-by: Alberto Garcia <berto@igalia.com>
2016-08-18  block: fix possible reorder of flush operations  (Denis V. Lunev, 1 file, -1/+2)
This patch reduces CPU usage of flush operations a bit. When one flush has completed we should kick only the next operation. We should not start all pending operations in the hope that they will go back to waiting on the wait_queue. There is also a technical possibility that requests could get reordered with the previous approach: after wakeup all requests are removed from the wait queue, they become active and are processed one-by-one, adding themselves back to the wait queue in the same order, but a new flush can arrive while not all requests have been put back into the queue. Signed-off-by: Denis V. Lunev <den@openvz.org> Tested-by: Evgeny Yakovlev <eyakovlev@virtuozzo.com> Signed-off-by: Evgeny Yakovlev <eyakovlev@virtuozzo.com> Message-id: 1471457214-3994-3-git-send-email-den@openvz.org CC: Stefan Hajnoczi <stefanha@redhat.com> CC: Fam Zheng <famz@redhat.com> CC: Kevin Wolf <kwolf@redhat.com> CC: Max Reitz <mreitz@redhat.com> Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
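The gist of the change, sketched (field and helper names assumed from the flush wait queue introduced earlier in this series):

    /* before: every waiter was woken and re-queued itself */
    qemu_co_queue_restart_all(&bs->flush_queue);

    /* after: hand over to exactly the next pending flush, preserving order */
    qemu_co_queue_next(&bs->flush_queue);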
2016-08-18  block: fix deadlock in bdrv_co_flush  (Evgeny Yakovlev, 1 file, -2/+3)
The following commit

    commit 3ff2f67a7c24183fcbcfe1332e5223ac6f96438c
    Author: Evgeny Yakovlev <eyakovlev@virtuozzo.com>
    Date:   Mon Jul 18 22:39:52 2016 +0300

        block: ignore flush requests when storage is clean

has introduced a regression.

There is a problem that it is still possible for 2 requests to execute in non-sequential fashion, and sometimes this results in a deadlock when bdrv_drain_one/all are called for a BDS with such stalled requests.

1. Current flushed_gen and flush_started_gen is 1.
2. Request 1 enters bdrv_co_flush with write_gen 1 (i.e. the same as flushed_gen). It gets past flushed_gen != flush_started_gen and sets flush_started_gen to 1 (again, the same it was before).
3. Request 1 yields somewhere before exiting bdrv_co_flush.
4. Request 2 enters bdrv_co_flush with write_gen 2. It gets past flushed_gen != flush_started_gen and sets flush_started_gen to 2.
5. Request 2 runs to completion and sets flushed_gen to 2.
6. Request 1 is resumed, runs to completion and sets flushed_gen to 1. However flush_started_gen is now 2.

From here on out flushed_gen is always != to flush_started_gen and all further requests will wait on flush_queue.

This change replaces flush_started_gen with an explicitly tracked active flush request.

Signed-off-by: Evgeny Yakovlev <eyakovlev@virtuozzo.com>
Signed-off-by: Denis V. Lunev <den@openvz.org>
Message-id: 1471457214-3994-2-git-send-email-den@openvz.org
CC: Stefan Hajnoczi <stefanha@redhat.com>
CC: Fam Zheng <famz@redhat.com>
CC: Kevin Wolf <kwolf@redhat.com>
CC: Max Reitz <mreitz@redhat.com>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
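A sketch of the repaired entry/exit of bdrv_co_flush() under the new scheme (field names such as active_flush_req are assumptions based on the description above):

    /* entry: only one flush may be in flight per BDS */
    while (bs->active_flush_req) {
        qemu_co_queue_wait(&bs->flush_queue);
    }
    bs->active_flush_req = true;

    /* ... issue the flush, then record bs->flushed_gen = current_gen ... */

    /* exit: clear the marker and wake the next waiter */
    bs->active_flush_req = false;
    qemu_co_queue_next(&bs->flush_queue);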
2016-08-17  curl: Cast fd to int for DPRINTF  (Fam Zheng, 1 file, -1/+1)
Currently "make docker-test-mingw@fedora" has a warning like: /tmp/qemu-test/src/block/curl.c: In function 'curl_sock_cb': /tmp/qemu-test/src/block/curl.c:172:6: warning: format '%d' expects argument of type 'int', but argument 4 has type 'curl_socket_t {aka long long unsigned int}' DPRINTF("CURL (AIO): Sock action %d on fd %d\n", action, fd); ^ cc1: all warnings being treated as errors Cast to int to suppress it. Signed-off-by: Fam Zheng <famz@redhat.com> Message-Id: <1470027888-24381-1-git-send-email-famz@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com>