author	Christoph Lameter (Ampere) <cl@gentwo.org>	2024-06-12 09:49:56 -0700
committer	Linus Torvalds <torvalds@linux-foundation.org>	2024-09-22 13:35:36 -0700
commit	d0dd066a0fa26d55c19ace9e89dedd9504c5bcba (patch)
tree	5de4c6d9a5965f1c3686982a59413b065997c179 /arch/x86/kvm/svm/sev.c
parent	de5cb0dcb74c294ec527eddfe5094acfdb21ff21 (diff)
seqcount: replace smp_rmb() in read_seqcount() with load acquire
Many architectures support load acquire, which can replace a memory
barrier and save some cycles.

A typical sequence

        do {
                seq = read_seqcount_begin(&s);
                <something>
        } while (read_seqcount_retry(&s, seq));

requires 13 cycles on an N1 Neoverse arm64 core (Ampere Altra, to be
specific) for an empty loop. Two read memory barriers are needed, one
for each of the seqcount_* functions. We can replace the first read
barrier with a load acquire of the seqcount, which saves us one barrier.
On the Altra, doing so reduces the cycle count from 13 to 8.

According to ARM, this is a general improvement for the ARM64
architecture and not specific to a certain processor. See

  https://developer.arm.com/documentation/102336/0100/Load-Acquire-and-Store-Release-instructions

  "Weaker ordering requirements that are imposed by Load-Acquire and
   Store-Release instructions allow for micro-architectural
   optimizations, which could reduce some of the performance impacts
   that are otherwise imposed by an explicit memory barrier.

   If the ordering requirement is satisfied using either a Load-Acquire
   or Store-Release, then it would be preferable to use these
   instructions instead of a DMB"

[ NOTE! This is my original minimal patch that unconditionally switches
  over to using smp_load_acquire(), instead of the much more involved
  and subtle patch that Christoph Lameter wrote that made it
  conditional. But Christoph gets authorship credit because I had
  initially thought that we needed the more complex model, and Christoph
  ran with it and did the work.

  Only after looking at code generation for all the relevant
  architectures did I come to the conclusion that nobody actually needs
  the old "smp_rmb()" model. Even architectures without load-acquire
  support generally do as well or better with smp_load_acquire(). So
  credit to Christoph, but if this then causes issues on other
  architectures, put the blame solidly on me.

  Also note that, as part of the ruthless simplification, this gets rid
  of the overly subtle optimization where some code uses a non-barrier
  version of the sequence count (see the __read_seqcount_begin() users
  in fs/namei.c). They then play games with their own barriers and/or
  with nested sequence counts.

  Those optimizations are literally meaningless on x86, and questionable
  elsewhere. If somebody can show that they matter, we need to re-do
  them more cleanly than "use an internal helper".

             - Linus ]

Signed-off-by: Christoph Lameter (Ampere) <cl@gentwo.org>
Link: https://lore.kernel.org/all/20240912-seq_optimize-v3-1-8ee25e04dffa@gentwo.org/
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
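To illustrate the read-side change the commit message describes, here is a
minimal standalone C sketch that uses C11 atomics as stand-ins for the
kernel's barrier primitives. It is not the kernel's seqcount code: the names
seqcount_sketch, read_begin_rmb, read_begin_acquire and read_retry are
illustrative only, and the real helpers also handle odd (write-in-progress)
sequence values, which this sketch omits.

    #include <stdatomic.h>

    /* Illustrative stand-in for a seqcount: just a sequence word. */
    struct seqcount_sketch {
            atomic_uint sequence;
    };

    /*
     * Old read-side begin: a plain load of the sequence followed by a
     * read barrier, roughly the READ_ONCE() + smp_rmb() pattern.
     */
    static unsigned int read_begin_rmb(struct seqcount_sketch *s)
    {
            unsigned int seq = atomic_load_explicit(&s->sequence,
                                                    memory_order_relaxed);
            atomic_thread_fence(memory_order_acquire);  /* ~ smp_rmb() */
            return seq;
    }

    /*
     * New read-side begin: a single load-acquire of the sequence, the
     * moral equivalent of smp_load_acquire(). On arm64 this can compile
     * to an LDAR instead of a load followed by a DMB.
     */
    static unsigned int read_begin_acquire(struct seqcount_sketch *s)
    {
            return atomic_load_explicit(&s->sequence, memory_order_acquire);
    }

    /* Read-side retry: barrier, then recheck that the sequence is unchanged. */
    static int read_retry(struct seqcount_sketch *s, unsigned int seq)
    {
            atomic_thread_fence(memory_order_acquire);  /* ~ smp_rmb() */
            return atomic_load_explicit(&s->sequence,
                                        memory_order_relaxed) != seq;
    }

    int main(void)
    {
            struct seqcount_sketch s;
            unsigned int seq;

            atomic_init(&s.sequence, 0);

            /* Reader loop in the shape quoted in the commit message. */
            do {
                    seq = read_begin_acquire(&s);
                    /* <something>: read the data protected by s */
            } while (read_retry(&s, seq));

            (void)read_begin_rmb(&s);  /* keep the old variant referenced */
            return 0;
    }

Both begin variants order the sequence load before the subsequent reads of
the protected data; the acquire form simply expresses that ordering in the
load itself, which is what lets arm64 drop one of the two barriers in the
read_seqcount_begin()/read_seqcount_retry() pair.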
Diffstat (limited to 'arch/x86/kvm/svm/sev.c')
0 files changed, 0 insertions, 0 deletions