author | Peter Maydell <peter.maydell@linaro.org> | 2015-10-01 15:29:49 +0100 |
---|---|---|
committer | Paolo Bonzini <pbonzini@redhat.com> | 2015-10-12 18:29:26 +0200 |
commit | 53f8a5e9e2633a4a3b6918c36aec725aa80f2887 | |
tree | 0d83b8b211ea3d3a7fa008f52b86900d9158fdf8 | |
parent | 0a1c71cec63e95f9b8d0dc96d049d2daa00c5210 | |
cpu-exec-common.c: Clarify comment about cpu_reload_memory_map()'s RCU operations
The reason for cpu_reload_memory_map()'s RCU operations is not
so much that the guest could make the critical section very long,
but that it could open a critical section within which it makes an
arbitrary number of changes to the memory map, and thus accumulates
an unbounded amount of memory data structures awaiting reclamation.
Clarify the comment accordingly.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Message-Id: <1443709790-25180-3-git-send-email-peter.maydell@linaro.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
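As an illustration of the problem the commit message describes, here is a minimal sketch, assuming QEMU's RCU API (struct rcu_head, call_rcu(), atomic_rcu_read()/atomic_rcu_set() from the 2015-era include/qemu/rcu.h and include/qemu/atomic.h). The MemMap type and the current_map, publish_new_map() and reclaim_map() names are hypothetical, invented for this example; they are not QEMU code. Each memory-map change retires the old copy with call_rcu(), but none of those callbacks may run until every reader has left its read-side critical section, so a vCPU that never leaves it lets the reclamation backlog grow without bound.

```c
/*
 * Minimal sketch, not QEMU code: MemMap, current_map, publish_new_map()
 * and reclaim_map() are hypothetical names.  Assumes QEMU's RCU API
 * (struct rcu_head, call_rcu(), atomic_rcu_read/set) as of 2015.
 */
#include <glib.h>
#include "qemu/atomic.h"
#include "qemu/rcu.h"

typedef struct MemMap {
    struct rcu_head rcu;    /* must be first: call_rcu() expects offset 0 */
    /* ... guest memory map data ... */
} MemMap;

static MemMap *current_map;

static void reclaim_map(MemMap *map)
{
    g_free(map);
}

/*
 * Writer side: each reconfiguration retires the previous map via call_rcu().
 * The reclaim callback cannot run until every reader has left its
 * rcu_read_lock()/rcu_read_unlock() section, so a vCPU that stays inside
 * its critical section while the guest keeps remapping lets the list of
 * MemMaps awaiting reclamation grow without bound; this is exactly the
 * problem the commit message describes.
 */
static void publish_new_map(MemMap *new_map)
{
    MemMap *old = atomic_rcu_read(&current_map);

    atomic_rcu_set(&current_map, new_map);
    if (old) {
        call_rcu(old, reclaim_map, rcu);
    }
}
```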
-rw-r--r-- | cpu-exec-common.c | 20 |
1 file changed, 14 insertions, 6 deletions
```diff
diff --git a/cpu-exec-common.c b/cpu-exec-common.c
index 16d305b911..b95b09a77d 100644
--- a/cpu-exec-common.c
+++ b/cpu-exec-common.c
@@ -42,13 +42,21 @@ void cpu_reload_memory_map(CPUState *cpu)
     AddressSpaceDispatch *d;
 
     if (qemu_in_vcpu_thread()) {
-        /* Do not let the guest prolong the critical section as much as it
-         * as it desires.
+        /* The guest can in theory prolong the RCU critical section as long
+         * as it feels like. The major problem with this is that because it
+         * can do multiple reconfigurations of the memory map within the
+         * critical section, we could potentially accumulate an unbounded
+         * collection of memory data structures awaiting reclamation.
          *
-         * Currently, this is prevented by the I/O thread's periodinc kicking
-         * of the VCPU thread (iothread_requesting_mutex, qemu_cpu_kick_thread)
-         * but this will go away once TCG's execution moves out of the global
-         * mutex.
+         * Because the only thing we're currently protecting with RCU is the
+         * memory data structures, it's sufficient to break the critical section
+         * in this callback, which we know will get called every time the
+         * memory map is rearranged.
+         *
+         * (If we add anything else in the system that uses RCU to protect
+         * its data structures, we will need to implement some other mechanism
+         * to force TCG CPUs to exit the critical section, at which point this
+         * part of this callback might become unnecessary.)
          *
          * This pair matches cpu_exec's rcu_read_lock()/rcu_read_unlock(), which
          * only protects cpu->as->dispatch. Since we reload it below, we can
```
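For context, the sketch below reconstructs the rough shape of the function this hunk patches, based only on the context lines and comments above; it is an approximation, not a verbatim copy of cpu-exec-common.c at this commit. The rcu_read_unlock()/rcu_read_lock() pair that follows the new comment is the "break" it talks about: it momentarily leaves the read-side critical section taken in cpu_exec(), allowing dispatch structures retired by earlier memory-map changes to be reclaimed, then re-enters it before the dispatch pointer is reloaded.

```c
/*
 * Approximate reconstruction from the hunk's context lines; not a
 * verbatim copy of the file at this commit.
 */
void cpu_reload_memory_map(CPUState *cpu)
{
    AddressSpaceDispatch *d;

    if (qemu_in_vcpu_thread()) {
        /* (the comment added by this patch goes here) */
        rcu_read_unlock();   /* leave the section held by cpu_exec(), so   */
        rcu_read_lock();     /* pending reclamation can run, then re-enter */
    }

    /* Reload the dispatch pointer that the critical section protects. */
    d = atomic_rcu_read(&cpu->as->dispatch);
    cpu->memory_dispatch = d;
    tlb_flush(cpu, 1);
}
```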