author     Paul Burton <paul.burton@mips.com>	2019-10-01 21:53:40 +0000
committer  Paul Burton <paul.burton@mips.com>	2019-10-07 09:43:06 -0700
commit     7f56b123548142fd48b2c6891977e8fda695a838
tree       fb4f572a77a5784120c22d95939b30b16c426cd0
parent     e84957e6ae043bb83ad6ae7e949a1ce97b6bbfef
MIPS: barrier: Remove loongson_llsc_mb()
The loongson_llsc_mb() macro is no longer used - instead barriers are
emitted as part of inline asm using the __SYNC() macro. Remove the
now-defunct loongson_llsc_mb() macro.
Signed-off-by: Paul Burton <paul.burton@mips.com>
Cc: linux-mips@vger.kernel.org
Cc: Huacai Chen <chenhc@lemote.com>
Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
Cc: linux-kernel@vger.kernel.org
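
For context, a minimal sketch of the pattern the commit message refers to, assuming the __SYNC() interface from <asm/sync.h> introduced earlier in this series; the function name and operand constraints below are illustrative, not the kernel's actual atomic implementation. __SYNC(full, loongson3_war) expands, inside the asm string, to a sync instruction only when CONFIG_CPU_LOONGSON3_WORKAROUNDS is enabled, so no separate loongson_llsc_mb() statement is needed before the asm block:

#include <asm/sync.h>

/* Illustrative only: the Loongson 3 pre-LL barrier is emitted inside the asm. */
static inline void example_atomic_add(int i, int *counter)
{
	int temp;

	__asm__ __volatile__(
	"	" __SYNC(full, loongson3_war) "			\n"
	"1:	ll	%0, %1		# load-linked		\n"
	"	addu	%0, %2					\n"
	"	sc	%0, %1		# store-conditional	\n"
	"	beqz	%0, 1b		# retry if SC failed	\n"
	: "=&r" (temp), "+R" (*counter)
	: "Ir" (i)
	: "memory");
}

Keeping the barrier inside the asm string places the workaround directly against the LL it protects and leaves <asm/sync.h> to decide, in one place, whether any sync is emitted at all.
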
 arch/mips/include/asm/barrier.h | 40 ----------------------------------------
 arch/mips/loongson64/Platform   |  2 +-
 2 files changed, 1 insertion(+), 41 deletions(-)
diff --git a/arch/mips/include/asm/barrier.h b/arch/mips/include/asm/barrier.h
index 133afd565067..6d92d5ccdafa 100644
--- a/arch/mips/include/asm/barrier.h
+++ b/arch/mips/include/asm/barrier.h
@@ -122,46 +122,6 @@ static inline void wmb(void)
 #define __smp_mb__before_atomic()	__smp_mb__before_llsc()
 #define __smp_mb__after_atomic()	smp_llsc_mb()
 
-/*
- * Some Loongson 3 CPUs have a bug wherein execution of a memory access (load,
- * store or prefetch) in between an LL & SC can cause the SC instruction to
- * erroneously succeed, breaking atomicity. Whilst it's unusual to write code
- * containing such sequences, this bug bites harder than we might otherwise
- * expect due to reordering & speculation:
- *
- * 1) A memory access appearing prior to the LL in program order may actually
- *    be executed after the LL - this is the reordering case.
- *
- *    In order to avoid this we need to place a memory barrier (ie. a SYNC
- *    instruction) prior to every LL instruction, in between it and any earlier
- *    memory access instructions.
- *
- *    This reordering case is fixed by 3A R2 CPUs, ie. 3A2000 models and later.
- *
- * 2) If a conditional branch exists between an LL & SC with a target outside
- *    of the LL-SC loop, for example an exit upon value mismatch in cmpxchg()
- *    or similar, then misprediction of the branch may allow speculative
- *    execution of memory accesses from outside of the LL-SC loop.
- *
- *    In order to avoid this we need a memory barrier (ie. a SYNC instruction)
- *    at each affected branch target, for which we also use loongson_llsc_mb()
- *    defined below.
- *
- *    This case affects all current Loongson 3 CPUs.
- *
- * The above described cases cause an error in the cache coherence protocol;
- * such that the Invalidate of a competing LL-SC goes 'missing' and SC
- * erroneously observes its core still has Exclusive state and lets the SC
- * proceed.
- *
- * Therefore the error only occurs on SMP systems.
- */
-#ifdef CONFIG_CPU_LOONGSON3_WORKAROUNDS /* Loongson-3's LLSC workaround */
-#define loongson_llsc_mb()	__asm__ __volatile__("sync" : : :"memory")
-#else
-#define loongson_llsc_mb()	do { } while (0)
-#endif
-
 static inline void sync_ginv(void)
 {
 	asm volatile(__SYNC(ginv, always));
diff --git a/arch/mips/loongson64/Platform b/arch/mips/loongson64/Platform
index c1a4d4dc4665..28172500f95a 100644
--- a/arch/mips/loongson64/Platform
+++ b/arch/mips/loongson64/Platform
@@ -27,7 +27,7 @@ cflags-$(CONFIG_CPU_LOONGSON3) += -Wa,--trap
 #
 # Some versions of binutils, not currently mainline as of 2019/02/04, support
 # an -mfix-loongson3-llsc flag which emits a sync prior to each ll instruction
-# to work around a CPU bug (see loongson_llsc_mb() in asm/barrier.h for a
+# to work around a CPU bug (see __SYNC_loongson3_war in asm/sync.h for a
 # description).
 #
 # We disable this in order to prevent the assembler meddling with the
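
The comment removed above describes two workaround sites: a barrier before the LL (the reordering case) and a barrier at any branch target outside the LL/SC loop (the speculation case). Below is a purely illustrative sketch of that older usage, assuming the pre-removal loongson_llsc_mb() from <asm/barrier.h>; it is not the kernel's actual cmpxchg implementation, and the function name and operand constraints are hypothetical:

/* Illustrative only: the old out-of-asm barrier placement. */
static inline unsigned int example_cmpxchg(volatile unsigned int *p,
					   unsigned int old,
					   unsigned int new)
{
	unsigned int retval, tmp;

	loongson_llsc_mb();		/* case 1: SYNC before the LL */
	__asm__ __volatile__(
	"1:	ll	%0, %2		# load-linked			\n"
	"	bne	%0, %3, 2f	# mismatch: leave the loop	\n"
	"	move	%1, %4						\n"
	"	sc	%1, %2		# store-conditional		\n"
	"	beqz	%1, 1b		# retry if SC failed		\n"
	"2:								\n"
	: "=&r" (retval), "=&r" (tmp), "+R" (*p)
	: "r" (old), "r" (new)
	: "memory");
	loongson_llsc_mb();		/* case 2: SYNC at the branch target */

	return retval;
}

With loongson_llsc_mb() gone, both placements can now be expressed inside the asm string itself via __SYNC(full, loongson3_war).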