author | Christian Borntraeger <borntraeger@de.ibm.com> | 2017-09-19 12:34:06 +0200
---|---|---
committer | Christian Borntraeger <borntraeger@de.ibm.com> | 2017-11-09 09:35:06 +0100
commit | 72e1ad4200d5ed5c5adf120b77fb2900a29a48e5 (patch) |
tree | 74c67a34f0badc9ee9fbd7487f3323a5b6871de4 /arch/s390/kvm/interrupt.c |
parent | 650da25099606db05e3546201410d92cf2e2545a (diff) |
KVM: s390: document memory ordering for kvm_s390_vcpu_wakeup
swait_active does not enforce any memory ordering, so it can trigger
subtle races when the CPU moves the read done for the check before an
earlier store, while that store is exactly what another CPU that is
preparing the swait will look at.

On s390 there is a call to swait_active in kvm_s390_vcpu_wakeup. The
good thing is that on s390 none of the potential races can happen,
because every caller of kvm_s390_vcpu_wakeup either does not store at
all (no race) or uses an atomic operation, which on s390 also handles
the memory ordering. Since this is guaranteed by the s390 implementation
but not by the generic Linux semantics, let's add smp_mb__after_atomic()
to make the ordering explicit and document it.
Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Acked-by: Halil Pasic <pasic@linux.vnet.ibm.com>
Reviewed-by: Cornelia Huck <cohuck@redhat.com>
Reviewed-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Christian Borntraeger <borntraeger@de.ibm.com>
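
To make the reordering described in the message concrete, here is a stand-alone, hedged illustration rather than kernel code: a C11 store-buffering litmus test in which a "waker" thread mimics kvm_s390_vcpu_wakeup (store the condition, then check whether a waiter is queued) and a "sleeper" thread mimics a vcpu entering the swait (queue itself, then check for work). All names in it are invented for the example; the seq_cst fence stands in for smp_mb__after_atomic() on the waker side.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* the condition the waker publishes (think vcpu->valid_wakeup) */
static atomic_int valid_wakeup;
/* "is somebody on the wait queue?" (think swait_active()) */
static atomic_int waiter_queued;
/* what each side observed in its check */
static atomic_int waker_saw_waiter, sleeper_saw_cond;

#ifdef USE_BARRIER
#define FULL_BARRIER() atomic_thread_fence(memory_order_seq_cst)
#else
#define FULL_BARRIER() do { } while (0)
#endif

static void *waker(void *arg)
{
	(void)arg;
	/* store the condition ... */
	atomic_store_explicit(&valid_wakeup, 1, memory_order_relaxed);
	FULL_BARRIER();	/* stands in for smp_mb__after_atomic() */
	/* ... then check whether a waiter is queued */
	atomic_store_explicit(&waker_saw_waiter,
			      atomic_load_explicit(&waiter_queued,
						   memory_order_relaxed),
			      memory_order_relaxed);
	return NULL;
}

static void *sleeper(void *arg)
{
	(void)arg;
	/* queue ourselves on the wait queue ... */
	atomic_store_explicit(&waiter_queued, 1, memory_order_relaxed);
	FULL_BARRIER();	/* the sleeper side needs ordering as well */
	/* ... then check the condition before going to sleep */
	atomic_store_explicit(&sleeper_saw_cond,
			      atomic_load_explicit(&valid_wakeup,
						   memory_order_relaxed),
			      memory_order_relaxed);
	return NULL;
}

int main(void)
{
	for (long i = 1; i <= 100000; i++) {
		pthread_t a, b;

		atomic_store(&valid_wakeup, 0);
		atomic_store(&waiter_queued, 0);
		pthread_create(&a, NULL, waker, NULL);
		pthread_create(&b, NULL, sleeper, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);
		/*
		 * Both checks failed: the waker skipped the wakeup and the
		 * sleeper saw no work, i.e. it would go to sleep for good.
		 */
		if (!atomic_load(&waker_saw_waiter) &&
		    !atomic_load(&sleeper_saw_cond)) {
			printf("lost wakeup reproduced in run %ld\n", i);
			return 1;
		}
	}
	printf("no lost wakeup observed\n");
	return 0;
}
```

Built with e.g. `gcc -O2 -pthread example.c`, the lost-wakeup outcome usually shows up within a few thousand runs on common hardware; building with `-DUSE_BARRIER` rules it out, which is the role the full barrier plays in the patch below.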
Diffstat (limited to 'arch/s390/kvm/interrupt.c')
-rw-r--r-- | arch/s390/kvm/interrupt.c | 6 |
1 file changed, 6 insertions, 0 deletions
diff --git a/arch/s390/kvm/interrupt.c b/arch/s390/kvm/interrupt.c
index a832ad031cee..23d8fb25c5dd 100644
--- a/arch/s390/kvm/interrupt.c
+++ b/arch/s390/kvm/interrupt.c
@@ -1074,6 +1074,12 @@ void kvm_s390_vcpu_wakeup(struct kvm_vcpu *vcpu)
         * in kvm_vcpu_block without having the waitqueue set (polling)
         */
        vcpu->valid_wakeup = true;
+       /*
+        * This is mostly to document, that the read in swait_active could
+        * be moved before other stores, leading to subtle races.
+        * All current users do not store or use an atomic like update
+        */
+       smp_mb__after_atomic();
        if (swait_active(&vcpu->wq)) {
                /*
                 * The vcpu gave up the cpu voluntarily, mark it as a good
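
The barrier placement follows the usual waker/sleeper pairing that the kernel documents for waitqueue_active(). Below is a hedged sketch of that pairing adapted to this wakeup path; the sleeper side (kvm_vcpu_block) is heavily simplified and the swait helper names are assumed from the 4.14-era API rather than quoted from this patch.

```c
/*
 * Sketch of the pairing, not a verbatim copy of either function:
 *
 *   waker: kvm_s390_vcpu_wakeup()      sleeper: kvm_vcpu_block(), simplified
 *
 *   vcpu->valid_wakeup = true;         prepare_to_swait(&vcpu->wq, &wait,
 *   smp_mb__after_atomic();                             TASK_INTERRUPTIBLE);
 *   if (swait_active(&vcpu->wq))       (full barrier via set_current_state())
 *           swake_up(&vcpu->wq);       if (no pending work for the vcpu)
 *                                              schedule();
 *                                      finish_swait(&vcpu->wq, &wait);
 *
 * Both sides store first (the condition, respectively the wait-queue
 * entry) and then load what the other side stores.  If either load could
 * move above its store, the waker might see an empty queue while the
 * sleeper sees no work, and the wakeup would be lost.  On s390 the
 * callers' atomic updates already provide this ordering;
 * smp_mb__after_atomic() documents it and stops relying on
 * architecture-specific behaviour.
 */
```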