path: root/migration/block.c
Age | Commit message | Author | Files | Lines
2019-01-11 | qemu/queue.h: leave head structs anonymous unless necessary | Paolo Bonzini | 1 | -2/+2
Most list head structs need not be given a name. In most cases the name is given just in case one is going to use QTAILQ_LAST, QTAILQ_PREV or reverse iteration, but this does not apply to lists of other kinds, and even for QTAILQ in practice this is only rarely needed. In addition, we will soon reimplement those macros completely so that they do not need a name for the head struct. So clean up everything, not giving a name except in the rare case where it is necessary. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
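For illustration, a minimal sketch of the two forms (bmds_list mirrors a declaration in migration/block.c; the named variant is only needed for the reverse-access macros):

```c
#include "qemu/queue.h"

typedef struct BlkMigDevState BlkMigDevState;

/* Named head struct: only required for reverse access such as
 * QTAILQ_LAST(&named_list, BlkMigDevList) or QTAILQ_PREV(). */
QTAILQ_HEAD(BlkMigDevList, BlkMigDevState) named_list;

/* Anonymous head: enough for QTAILQ_INSERT_TAIL()/QTAILQ_FOREACH(),
 * which is what this cleanup switches most declarations to. */
QTAILQ_HEAD(, BlkMigDevState) bmds_list;
```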
2018-03-23 | migration/block: compare only read blocks against the rate limiter | Peter Lieven | 1 | -2/+1
Only read_done blocks are queued to be flushed to the migration stream; submitted blocks are still in flight. Signed-off-by: Peter Lieven <pl@kamp.de> Message-Id: <1520507908-16743-6-git-send-email-pl@kamp.de> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2018-03-23 | migration/block: limit the number of parallel I/O requests | Peter Lieven | 1 | -0/+2
The current implementation submits up to 512 I/O requests in parallel, which is much too high, especially for a background task. This patch adds a maximum limit of 16 I/O requests that can be submitted in parallel, to avoid monopolizing the I/O device. Signed-off-by: Peter Lieven <pl@kamp.de> Message-Id: <1520507908-16743-5-git-send-email-pl@kamp.de> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
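A minimal sketch of the submission gate, assuming hypothetical helpers (block_mig_state.submitted is the real in-flight counter, decremented from the AIO completion callback):

```c
#define MAX_PARALLEL_IO 16  /* cap on simultaneously in-flight requests */

/* Stop handing requests to the device once the in-flight count hits the
 * cap; completions decrement block_mig_state.submitted, so the caller
 * simply comes back around and submits more later. */
while (more_blocks_to_read() &&                /* hypothetical helper */
       block_mig_state.submitted < MAX_PARALLEL_IO) {
    submit_next_block_read();                  /* hypothetical helper */
}
```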
2018-03-13 | migration: introduce postcopy-only pending | Vladimir Sementsov-Ogievskiy | 1 | -3/+4
There are savevm states (dirty-bitmap) which can migrate only in the postcopy stage. The corresponding pending count is introduced here. Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-id: 20180313180320.339796-6-vsementsov@virtuozzo.com
2018-03-09 | migration/block: rename MAX_INFLIGHT_IO to MAX_IO_BUFFERS | Peter Lieven | 1 | -4/+3
This constant actually limits (as the original commit message suggests) the number of I/O buffers that can be allocated, not the number of parallel (in-flight) I/O requests. Signed-off-by: Peter Lieven <pl@kamp.de> Message-Id: <1520507908-16743-4-git-send-email-pl@kamp.de> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2018-03-09 | migration/block: reset dirty bitmap before read in bulk phase | Peter Lieven | 1 | -3/+2
Reset the dirty bitmap before reading to make sure we don't miss any new data. Cc: qemu-stable@nongnu.org Signed-off-by: Peter Lieven <pl@kamp.de> Message-Id: <1520507908-16743-3-git-send-email-pl@kamp.de> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
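The ordering is the whole point; a sketch with names approximating migration/block.c (qiov and opaque are placeholders):

```c
/* Clear the dirty bits BEFORE issuing the read: if the guest dirties
 * the range while the read is in flight, the bits are set again and the
 * chunk is re-sent in a later pass, so no new data can be missed. */
bdrv_reset_dirty_bitmap(bmds->dirty_bitmap, offset, nr_bytes);
blk_aio_preadv(bmds->blk, offset, &qiov, 0, blk_mig_read_cb, opaque);
```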
2018-01-22 | Replace all occurrences of __FUNCTION__ with __func__ | Alistair Francis | 1 | -2/+2
Replace all occurrences of __FUNCTION__, except for the check in checkpatch, with the non-GCC-specific __func__. One line in hcd-musb.c was manually tweaked to pass checkpatch. Signed-off-by: Alistair Francis <alistair.francis@xilinx.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Gerd Hoffmann <kraxel@redhat.com> [THH: Removed hunks related to pxa2xx_mmci.c (fixed already)] Signed-off-by: Thomas Huth <thuth@redhat.com>
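The substitution itself is a one-liner; for illustration:

```c
/* Before: GCC-specific spelling, rejected by checkpatch. */
fprintf(stderr, "%s: unexpected state\n", __FUNCTION__);

/* After: the standard C99 predefined identifier. */
fprintf(stderr, "%s: unexpected state\n", __func__);
```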
2017-12-18 | Remove empty statements | Ladi Prosek | 1 | -1/+1
Thanks to Laszlo Ersek for spotting the double semicolon in target/i386/kvm.c; I have trivially grepped the tree for ';;' in C files. Suggested-by: Laszlo Ersek <lersek@redhat.com> Signed-off-by: Ladi Prosek <lprosek@redhat.com> Reviewed-by: Laszlo Ersek <lersek@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Laurent Vivier <laurent@vivier.eu> Signed-off-by: Michael Tokarev <mjt@tls.msk.ru>
2017-11-17 | block: Make bdrv_next() keep strong references | Max Reitz | 1 | -0/+1
On one hand, it is a good idea for bdrv_next() to return a strong reference because ideally nearly every pointer should be refcounted. This fixes intermittent failure of iotest 194. On the other, it is absolutely necessary for bdrv_next() itself to keep a strong reference to both the BB (in its first phase) and the BDS (at least in the second phase) because when called the next time, it will dereference those objects to get a link to the next one. Therefore, it needs these objects to stay around until then. Just storing the pointer to the next in the iterator is not really viable because that pointer might become invalid as well. Both arguments taken together means we should probably just invoke bdrv_ref() and blk_ref() in bdrv_next(). This means we have to assert that bdrv_next() is always called from the main loop, but that was probably necessary already before this patch and judging from the callers, it also looks to actually be the case. Keeping these strong references means however that callers need to give them up if they decide to abort the iteration early. They can do so through the new bdrv_next_cleanup() function. Suggested-by: Kevin Wolf <kwolf@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com> Message-id: 20171110172545.32609-1-mreitz@redhat.com Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Max Reitz <mreitz@redhat.com>
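A sketch of the resulting caller contract (should_stop() is a hypothetical early-exit predicate; the iterator API is the real one):

```c
#include "block/block.h"

static void walk_nodes_with_early_exit(void)
{
    BdrvNextIterator it;
    BlockDriverState *bs;

    for (bs = bdrv_first(&it); bs; bs = bdrv_next(&it)) {
        if (should_stop(bs)) {       /* hypothetical condition */
            bdrv_next_cleanup(&it);  /* drop the refs the iterator holds */
            break;
        }
        /* ... use bs; the references are released by the next
         * bdrv_next() call or when the loop reaches the end ... */
    }
}
```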
2017-10-06 | dirty-bitmap: Change bdrv_[re]set_dirty_bitmap() to use bytes | Eric Blake | 1 | -2/+5
Some of the callers were already scaling bytes to sectors; others can be easily converted to pass byte offsets, all in our shift towards a consistent byte interface everywhere. Making the change will also make it easier to write the hold-out callers to use byte rather than sectors for their iterations; it also makes it easier for a future dirty-bitmap patch to offload scaling over to the internal hbitmap. Although all callers happen to pass sector-aligned values, make the internal scaling robust to any sub-sector requests. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
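Caller-side, the conversion is mechanical scaling by BDRV_SECTOR_SIZE; a sketch:

```c
/* Before (sector-based):
 *     bdrv_set_dirty_bitmap(bitmap, sector_num, nr_sectors);
 * After (byte-based), a sector-iterating caller simply scales: */
bdrv_set_dirty_bitmap(bitmap,
                      sector_num * BDRV_SECTOR_SIZE,
                      nr_sectors * BDRV_SECTOR_SIZE);
```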
2017-10-06 | dirty-bitmap: Change bdrv_get_dirty_locked() to take bytes | Eric Blake | 1 | -1/+2
Half the callers were already scaling bytes to sectors; the other half can eventually be simplified to use byte iteration. Both callers were already using the result as a bool, so make that explicit. Making the change also makes it easier for a future dirty-bitmap patch to offload scaling over to the internal hbitmap. Remember, asking whether a byte is dirty is effectively asking whether the entire granularity containing the byte is dirty, since we only track dirtiness by granularity. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-10-06 | dirty-bitmap: Change bdrv_get_dirty_count() to report bytes | Eric Blake | 1 | -1/+1
Thanks to recent cleanups, all callers were scaling a return value of sectors into bytes; do the scaling internally instead. Signed-off-by: Eric Blake <eblake@redhat.com> Reviewed-by: John Snow <jsnow@redhat.com> Reviewed-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-09-27 | migration: disable auto-converge during bulk block migration | Peter Lieven | 1 | -0/+5
Auto-converge and block migration currently do not play well together. During block migration the auto-converge logic detects that RAM migration makes no progress and thus throttles down the VM until it nearly stalls completely. Avoid this by disabling the throttling logic during the bulk phase of the block migration. Cc: qemu-stable@nongnu.org Signed-off-by: Peter Lieven <pl@kamp.de> Message-Id: <1506421996-12513-1-git-send-email-pl@kamp.de> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
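A sketch of the gate this adds (the helper's shape is an assumption based on the description; blk_mig_active() and block_mig_state.bulk_completed exist in migration/block.c):

```c
/* True while block migration is still streaming its bulk phase; the
 * auto-converge code in ram.c checks this before throttling the VM. */
bool blk_mig_bulk_active(void)
{
    return blk_mig_active() && !block_mig_state.bulk_completed;
}
```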
2017-07-10 | migration: Rename cleanup() to save_cleanup() | Juan Quintela | 1 | -1/+1
We need a cleanup for loads, so we rename here to be consistent. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> -- Renamed htab_cleanup to htab_save_cleanup, as Dave suggested. Message-Id: <20170628095228.4661-3-quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2017-07-10 | migration: Rename save_live_setup() to save_setup() | Juan Quintela | 1 | -1/+1
We are going to use it now for more than saving live regions. While at it, rename qemu_savevm_state_begin() to qemu_savevm_state_setup(). Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Message-Id: <20170628095228.4661-2-quintela@redhat.com> Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2017-07-10 | block: Make bdrv_is_allocated() byte-based | Eric Blake | 1 | -6/+10
We are gradually moving away from sector-based interfaces, towards byte-based. In the common case, allocation is unlikely to ever use values that are not naturally sector-aligned, but it is possible that byte-based values will let us be more precise about allocation at the end of an unaligned file that can do byte-based access. Changing the signature of the function to use int64_t *pnum ensures that the compiler enforces that all callers are updated. For now, the io.c layer still assert()s that all callers are sector-aligned on input and that *pnum is sector-aligned on return to the caller, but that can be relaxed when a later patch implements byte-based block status. Therefore, this code adds usages like DIV_ROUND_UP(,BDRV_SECTOR_SIZE) to callers that still want aligned values, where the call might reasonably give non-aligned results in the future; on the other hand, no rounding is needed for callers that should just continue to work with byte alignment. For the most part this patch is just the addition of scaling at the callers followed by inverse scaling at bdrv_is_allocated(). But some code, particularly bdrv_commit(), gets a lot simpler because it no longer has to mess with sectors; also, it is now possible to pass NULL if the caller does not care how much of the image is allocated beyond the initial offset. Leave comments where we can further simplify once a later patch eliminates the need for sector-aligned requests through bdrv_is_allocated(). For ease of review, bdrv_is_allocated_above() will be tackled separately. Signed-off-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
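A sketch of a converted caller that still iterates in sectors (variable names are placeholders, error policy elided):

```c
int64_t count;  /* filled in with the allocated byte count */
int ret = bdrv_is_allocated(bs, sector_num * BDRV_SECTOR_SIZE,
                            nb_sectors * BDRV_SECTOR_SIZE, &count);
if (ret < 0) {
    count = 0;  /* sketch only: a real caller chooses an error policy */
}
/* Scale the byte answer back to sectors, rounding up so a partially
 * allocated tail sector still counts as allocated. */
nr = DIV_ROUND_UP(count, BDRV_SECTOR_SIZE);
```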
2017-06-16 | block: protect modification of dirty bitmaps with a mutex | Paolo Bonzini | 1 | -4/+6
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20170605123908.18777-17-pbonzini@redhat.com> Signed-off-by: Fam Zheng <famz@redhat.com>
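A hedged sketch of the locked pattern this enables in migration/block.c (function names from this series; fields and extents approximate the sector-based code of the time):

```c
bdrv_dirty_bitmap_lock(bmds->dirty_bitmap);
if (bdrv_get_dirty_locked(bs, bmds->dirty_bitmap, cur_sector)) {
    /* Test and clear under one lock so no update slips in between. */
    bdrv_reset_dirty_bitmap_locked(bmds->dirty_bitmap,
                                   cur_sector, nr_sectors);
}
bdrv_dirty_bitmap_unlock(bmds->dirty_bitmap);
```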
2017-06-16 | migration/block: reset dirty bitmap before reading | Paolo Bonzini | 1 | -1/+2
Any data that is returned by read may be stale already, the bitmap has to be cleared before issuing the read. Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20170605123908.18777-16-pbonzini@redhat.com> Signed-off-by: Fam Zheng <famz@redhat.com>
2017-06-16 | block: introduce dirty_bitmap_mutex | Paolo Bonzini | 1 | -6/+0
It protects only the list of dirty bitmaps; in the next patch we will also protect their content. Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Message-Id: <20170605123908.18777-15-pbonzini@redhat.com> Signed-off-by: Fam Zheng <famz@redhat.com>
2017-06-14 | migration: Remove unneeded includes | Juan Quintela | 1 | -6/+0
Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Peter Xu <peterx@redhat.com>
2017-06-13 | migration: Move migration.h to migration/ | Juan Quintela | 1 | -1/+1
Nothing uses it outside of migration/. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Xu <peterx@redhat.com> Reviewed-by: Laurent Vivier <lvivier@redhat.com>
2017-06-13 | migration: Split registration functions from vmstate.h | Juan Quintela | 1 | -0/+1
They are independent, and nowadays almost every device registers things with qdev->vmsd. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> Reviewed-by: Peter Xu <peterx@redhat.com>
2017-06-09 | migration/block: Clean up BBs in block_save_complete() | Kevin Wolf | 1 | -5/+17
We need to release any block migration BlockBackends on the source before successfully completing the migration, because otherwise inactivating the images will fail (inactivation only tolerates device BBs). Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> Reviewed-by: Jeff Cody <jcody@redhat.com>
2017-06-01 | migration: Move include/migration/block.h into migration/ | Juan Quintela | 1 | -1/+2
All functions were internal, except blk_mig_init() that is exported in misc.h now. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2017-06-01 | migration: Split qemu-file.h | Juan Quintela | 1 | -1/+1
Split the file into public and internal interfaces. I had to rename the external one because we can't have two include files with the same name in the same directory; the build system gets confused. The only exported functions are the ones that handle basic types. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
2017-05-18 | migration: Remove vmstate.h from migration.h | Juan Quintela | 1 | -0/+1
Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org> --- Minor rearrangements due to rebase
2017-05-18 | migration: Remove qemu-file.h from vmstate.h | Juan Quintela | 1 | -0/+1
Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com> -- minor rearrangements due to the rebase
2017-05-18 | migration: Remove use of old MigrationParams | Juan Quintela | 1 | -15/+2
We changed in the previous patch to use migration capabilities for this. Notice that we continue using the old command line flags from the migrate command for the time being. Remove the set_params method, as it is now empty. For savevm, one can't do a "savevm -b/-i foo", but now one can do "migrate_set_capability block on" followed by "savevm foo", even though block migration can't actually be used there. We could disable the block capability unconditionally, but it would not be much better. Signed-off-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Eric Blake <eblake@redhat.com> --- - Maintain shared/enabled dependency (Xu's suggestion) - Now we maintain the dependency in the setter functions - Improve error messages
2017-04-21 | migration/block: use blk_pwrite_zeroes for each zero cluster | Lidong Chen | 1 | -2/+33
BLOCK_SIZE is (1 << 20), while the qcow2 cluster size is 65536 by default; this may cause the qcow2 file size to be bigger after migration. This patch checks each cluster and uses blk_pwrite_zeroes for each zero cluster. [Initialize cluster_size to BLOCK_SIZE to prevent a gcc uninitialized variable compiler warning. In reality we always initialize cluster_size in a conditional but gcc doesn't know that. --Stefan] Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Lidong Chen <lidongchen@tencent.com> Message-id: 1492050868-16200-1-git-send-email-lidongchen@tencent.com Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
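A sketch of the destination-side loop (buf, addr and cluster_size are local names; buffer_is_zero() and blk_pwrite_zeroes() are real helpers):

```c
for (int64_t i = 0; i < BLOCK_SIZE; i += cluster_size) {
    if (buffer_is_zero(buf + i, cluster_size)) {
        /* All-zero cluster: let the image format drop it instead of
         * allocating it, so the qcow2 file does not grow. */
        ret = blk_pwrite_zeroes(blk, addr + i, cluster_size,
                                BDRV_REQ_MAY_UNMAP);
    } else {
        ret = blk_pwrite(blk, addr + i, buf + i, cluster_size, 0);
    }
    if (ret < 0) {
        break;
    }
}
```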
2017-03-16 | migration/block: Avoid invoking blk_drain too frequently | Lidong Chen | 1 | -0/+3
Increase bmds->cur_dirty after submitting I/O, which reduces how often blk_drain is invoked and noticeably improves performance during block migration. The performance test result of this patch: during the block dirty save phase, it improves guest OS IOPS from 4.0K to 9.5K and the migration speed from 505856 rsec/s to 855756 rsec/s. Signed-off-by: Lidong Chen <jemmy858585@gmail.com> Reviewed-by: Fam Zheng <famz@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2017-03-13 | migration: Document handling of bdrv_is_allocated() errors | Eric Blake | 1 | -0/+2
Migration is the only code left in the tree that does not react to bdrv_is_allocated() failures. But as there is no useful way to react to the failure, and we are merely skipping unallocated sectors on success, just document that our choice of handling is intended. Signed-off-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2017-02-28 | migration/block: Use real permissions | Kevin Wolf | 1 | -5/+17
Request BLK_PERM_CONSISTENT_READ for the source of block migration, and handle potential permission errors as well as we can in this place (which is not very well, but it matches the other failure cases). Signed-off-by: Kevin Wolf <kwolf@redhat.com> Acked-by: Fam Zheng <famz@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2017-02-28 | block: Add error parameter to blk_insert_bs() | Kevin Wolf | 1 | -1/+1
Now that blk_insert_bs() requests the BlockBackend permissions for the node it attaches to, it can fail. Instead of aborting, pass the errors to the callers. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Acked-by: Fam Zheng <famz@redhat.com>
2017-02-28 | block: Add permissions to blk_new() | Kevin Wolf | 1 | -1/+2
We want every user to be specific about the permissions it needs, so we'll pass the initial permissions as parameters to blk_new(). A user only needs to call blk_set_perm() if it wants to change the permissions after the fact. The permissions are stored in the BlockBackend and applied whenever a BlockDriverState should be attached in blk_insert_bs(). This does not include actually choosing the right set of permissions everywhere yet. Instead, the usual FIXME comment is added to each place and will be addressed in individual patches. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com> Acked-by: Fam Zheng <famz@redhat.com>
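Combined with the two entries above, the block migration source ends up with something like this sketch (signatures as of this series; later QEMU versions also pass an AioContext to blk_new()):

```c
Error *local_err = NULL;

/* The source only ever reads, and tolerates anything others do. */
blk = blk_new(BLK_PERM_CONSISTENT_READ, BLK_PERM_ALL);
if (blk_insert_bs(blk, bs, &local_err) < 0) {
    error_report_err(local_err);  /* handle as well as we can here */
    blk_unref(blk);
    return;
}
```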
2016-06-08 | migration/block: Convert saving to BlockBackend | Kevin Wolf | 1 | -45/+79
This creates a new BlockBackend for copying data from an image to the migration stream on the source host. All I/O for block migration goes through BlockBackend now. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-06-08 | migration/block: Convert load to BlockBackend | Kevin Wolf | 1 | -16/+10
This converts the loading part of block migration to use BlockBackend interfaces rather than accessing the BlockDriverState directly. Note that this takes a lazy shortcut. We should really use a separate BlockBackend that is configured for the migration rather than for the guest (e.g. writethrough caching is unnecessary) and holds its own reference to the BlockDriverState, but the impact isn't that big and we didn't have a separate migration reference before either, so it must be good enough, I guess... Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
2016-06-08 | block: Switch bdrv_write_zeroes() to byte interface | Eric Blake | 1 | -2/+3
Rename to bdrv_pwrite_zeroes() to let the compiler ensure we cater to the updated semantics. Do the same for bdrv_co_write_zeroes(). Signed-off-by: Eric Blake <eblake@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-05-25 | block: Fix bdrv_next() memory leak | Kevin Wolf | 1 | -2/+2
The bdrv_next() users all leaked the BdrvNextIterator after completing the iteration. Simply changing bdrv_next() to free the iterator before returning NULL at the end of list doesn't work because some callers exit the loop before looking at all BDSes. This patch moves the BdrvNextIterator from the heap to the stack of the caller and switches to a bdrv_first()/bdrv_next() interface for initialising the iterator. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com>
2016-05-19 | block: Avoid bs->blk in bdrv_next() | Kevin Wolf | 1 | -1/+3
We need to introduce a separate BdrvNextIterator struct that can keep more state than just the current BDS in order to avoid using the bs->blk pointer. Signed-off-by: Kevin Wolf <kwolf@redhat.com> Reviewed-by: Max Reitz <mreitz@redhat.com>
2016-03-22 | util: move declarations out of qemu-common.h | Veronia Bahaa | 1 | -0/+1
Move declarations out of qemu-common.h for functions declared in utils/ files: e.g. include/qemu/path.h for utils/path.c. Move inline functions out of qemu-common.h and into new files (e.g. include/qemu/bcd.h) Signed-off-by: Veronia Bahaa <veroniabahaa@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-03-22 | include/qemu/osdep.h: Don't include qapi/error.h | Markus Armbruster | 1 | -0/+1
Commit 57cb38b included qapi/error.h into qemu/osdep.h to get the Error typedef. Since then, we've moved to include qemu/osdep.h everywhere. Its file comment explains: "To avoid getting into possible circular include dependencies, this file should not include any other QEMU headers, with the exceptions of config-host.h, compiler.h, os-posix.h and os-win32.h, all of which are doing a similar job to this file and are under similar constraints." qapi/error.h doesn't do a similar job, and it doesn't adhere to similar constraints: it includes qapi-types.h. That's in excess of 100KiB of crap most .c files don't actually need. Add the typedef to qemu/typedefs.h, and include that instead of qapi/error.h. Include qapi/error.h in .c files that need it and don't get it now. Include qapi-types.h in qom/object.h for uint16List. Update scripts/clean-includes accordingly. Update it further to match reality: replace config.h by config-target.h, add sysemu/os-posix.h, sysemu/os-win32.h. Update the list of includes in the qemu/osdep.h comment quoted above similarly. This reduces the number of objects depending on qapi/error.h from "all of them" to less than a third. Unfortunately, the number depending on qapi-types.h shrinks only a little. More work is needed for that one. Signed-off-by: Markus Armbruster <armbru@redhat.com> [Fix compilation without the spice devel packages. - Paolo] Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
2016-02-25 | block-migration: acquire AioContext as necessary | Paolo Bonzini | 1 | -13/+52
This is needed because dataplane will run during block migration as well. The block device migration code is quite liberal in taking the iothread mutex. For simplicity, keep it the same way, even though one could actually choose between the BQL (for regular BlockDriverStates) and the AioContext (for dataplane BlockDriverStates). When the block layer is made fully thread safe, aio_context_acquire shall go away altogether. Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Acked-by: Stefan Hajnoczi <stefanha@redhat.com> Reviewed-by: Fam Zheng <famz@redhat.com>
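The recurring pattern the patch introduces, as a sketch (bmds->bs matches the pre-BlockBackend code this series touched):

```c
/* Take the BDS's AioContext around I/O so that dataplane disks, whose
 * I/O runs in an iothread, are safe to touch from migration code. */
AioContext *ctx = bdrv_get_aio_context(bmds->bs);

aio_context_acquire(ctx);
/* ... submit reads, reset dirty bits, or drain ... */
aio_context_release(ctx);
```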
2016-02-22 | block migration: Activate image on destination before writing to it | Kevin Wolf | 1 | -0/+7
When using 'migrate -b', we must make sure to take ownership of the image before writing to it. Otherwise metadata would be thrown away on migration completion; this was caught by the assertions introduced in commit 09e0c771. Reported-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Kevin Wolf <kwolf@redhat.com>
2016-01-29 | migration: Clean up includes | Peter Maydell | 1 | -1/+1
Clean up includes so that osdep.h is included first and headers which it implies are not included manually. This commit was created with scripts/clean-includes. Signed-off-by: Peter Maydell <peter.maydell@linaro.org> Message-id: 1453832250-766-2-git-send-email-peter.maydell@linaro.org
2015-11-25 | block-migration: limit the memory usage | Wen Congyang | 1 | -1/+6
If we set the migration speed to a very large value, block-migration will try to read all data into memory, because (block_mig_state.submitted + block_mig_state.read_done) * BLOCK_SIZE will overflow and then always be less than the rate limit. There is no need to read that much data into memory when the rate limit is very large, so limiting the memory usage also fixes the overflow problem. Signed-off-by: Wen Congyang <wency@cn.fujitsu.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
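A sketch of the bounded loop condition (MAX_INFLIGHT_IO is the cap this patch introduces; the 2018-03-09 entry above later renames it to MAX_IO_BUFFERS):

```c
#define MAX_INFLIGHT_IO 512  /* at most 512 buffered 1 MiB blocks */

/* Even when the rate limit is so large that the first clause would
 * overflow or never trip, the second clause bounds memory usage. */
while ((block_mig_state.submitted + block_mig_state.read_done) *
           BLOCK_SIZE < qemu_file_get_rate_limit(f) &&
       (block_mig_state.submitted + block_mig_state.read_done) <
           MAX_INFLIGHT_IO) {
    /* ... read one more block into a fresh buffer ... */
}
```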
2015-11-10 | Modify save_live_pending for postcopy | Dr. David Alan Gilbert | 1 | -2/+5
Modify save_live_pending to return separate postcopiable and non-postcopiable counts. Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Amit Shah <amit.shah@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
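A hedged sketch of what the split hook looks like on the block migration side (the signature is assumed from the description; get_remaining_dirty() is a real helper in migration/block.c):

```c
static void block_save_pending(QEMUFile *f, void *opaque, uint64_t max_size,
                               uint64_t *non_postcopiable_pending,
                               uint64_t *postcopiable_pending)
{
    uint64_t pending = get_remaining_dirty();  /* bytes still to send */

    /* Block migration has no postcopy phase, so everything it still
     * holds must move before the switchover. */
    *non_postcopiable_pending += pending;
}
```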
2015-11-10 | Rename save_live_complete to save_live_complete_precopy | Dr. David Alan Gilbert | 1 | -1/+1
In postcopy we're going to need to perform the complete phase for postcopiable devices at a different point, start out by renaming all of the 'complete's to make the difference obvious. Signed-off-by: Dr. David Alan Gilbert <dgilbert@redhat.com> Reviewed-by: Amit Shah <amit.shah@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-04 | migration: code clean up | Liang Li | 1 | -7/+2
Just clean up code, no behavior change. Signed-off-by: Liang Li <liang.z.li@intel.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Amit Shah <amit.shah@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-04 | migration: rename cancel to cleanup in SaveVMHandlers | Liang Li | 1 | -1/+1
'cleanup' seems more appropriate than 'cancel'. Signed-off-by: Liang Li <liang.z.li@intel.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Amit Shah <amit.shah@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>
2015-11-04 | migration: defer migration_end & blk_mig_cleanup | Liang Li | 1 | -1/+0
Because of KVM patch 3ea3b7fa9af067982f34b, which introduces a mechanism for lazily collapsing small sptes into large sptes, migration_end() is now a time-consuming operation: it calls memory_global_dirty_log_stop(), which triggers the dropping of small sptes and takes about dozens of milliseconds. Calling migration_end() before all the vmstate data has been transferred to the destination therefore prolongs VM downtime. This operation should be deferred until all the data has been transferred to the destination; blk_mig_cleanup() can be deferred too. For a VM with 8G RAM, this patch can reduce the VM downtime by about 30 ms. Signed-off-by: Liang Li <liang.z.li@intel.com> Reviewed-by: Paolo Bonzini <pbonzini@redhat.com> Reviewed-by: Juan Quintela <quintela@redhat.com> Reviewed-by: Amit Shah <amit.shah@redhat.com> Signed-off-by: Juan Quintela <quintela@redhat.com>