author     Stephen Rothwell <sfr@canb.auug.org.au>  2014-01-10 13:33:39 +1100
committer  Stephen Rothwell <sfr@canb.auug.org.au>  2014-01-10 13:33:39 +1100
commit     b8057368d6c29db538ff259266a4375200c3d029 (patch)
tree       456ef3d3c15af4ff4c213daae456855ac26a71c2 /Documentation
parent     592410b1ba9daf61c3bde92762a43eac58000850 (diff)
parent     bb4d5c0664ffa18d5b4a6d68aead3b0d4a854ee1 (diff)
Merge remote-tracking branch 'tip/auto-latest'
Conflicts:
	drivers/acpi/acpi_extlog.c
	drivers/acpi/processor_idle.c
Diffstat (limited to 'Documentation')
-rw-r--r--  Documentation/ABI/testing/sysfs-firmware-efi | 20
-rw-r--r--  Documentation/ABI/testing/sysfs-firmware-efi-runtime-map | 34
-rw-r--r--  Documentation/ABI/testing/sysfs-kernel-boot_params | 38
-rw-r--r--  Documentation/RCU/trace.txt | 22
-rw-r--r--  Documentation/acpi/apei/einj.txt | 19
-rw-r--r--  Documentation/circular-buffers.txt | 45
-rw-r--r--  Documentation/kernel-parameters.txt | 27
-rw-r--r--  Documentation/memory-barriers.txt | 775
-rw-r--r--  Documentation/robust-futex-ABI.txt | 4
-rw-r--r--  Documentation/sysctl/kernel.txt | 5
-rw-r--r--  Documentation/x86/boot.txt | 3
-rw-r--r--  Documentation/x86/x86_64/mm.txt | 7
12 files changed, 824 insertions, 175 deletions
diff --git a/Documentation/ABI/testing/sysfs-firmware-efi b/Documentation/ABI/testing/sysfs-firmware-efi
new file mode 100644
index 000000000000..05874da7ce80
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-firmware-efi
@@ -0,0 +1,20 @@
+What: /sys/firmware/efi/fw_vendor
+Date: December 2013
+Contact: Dave Young <dyoung@redhat.com>
+Description: It shows the physical address of the firmware vendor field in the
+ EFI system table.
+Users: Kexec
+
+What: /sys/firmware/efi/runtime
+Date: December 2013
+Contact: Dave Young <dyoung@redhat.com>
+Description: It shows the physical address of the runtime services table entry in
+ the EFI system table.
+Users: Kexec
+
+What: /sys/firmware/efi/config_table
+Date: December 2013
+Contact: Dave Young <dyoung@redhat.com>
+Description: It shows the physical address of the config table entry in the EFI
+ system table.
+Users: Kexec
diff --git a/Documentation/ABI/testing/sysfs-firmware-efi-runtime-map b/Documentation/ABI/testing/sysfs-firmware-efi-runtime-map
new file mode 100644
index 000000000000..c61b9b348e99
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-firmware-efi-runtime-map
@@ -0,0 +1,34 @@
+What: /sys/firmware/efi/runtime-map/
+Date: December 2013
+Contact: Dave Young <dyoung@redhat.com>
+Description: Switching efi runtime services to virtual mode requires
+ that all efi memory ranges which have the runtime attribute
+ bit set be mapped to virtual addresses.
+
+ The efi runtime services can only be switched to virtual
+ mode once without rebooting. The kexec kernel must maintain
+ the same physical to virtual address mappings as the first
+ kernel. The mappings are exported to sysfs so userspace tools
+ can reassemble them and pass them into the kexec kernel.
+
+ /sys/firmware/efi/runtime-map/ is the directory the kernel
+ exports that information in.
+
+ Subdirectories are named with the number of the memory range:
+
+ /sys/firmware/efi/runtime-map/0
+ /sys/firmware/efi/runtime-map/1
+ /sys/firmware/efi/runtime-map/2
+ /sys/firmware/efi/runtime-map/3
+ ...
+
+ Each subdirectory contains five files:
+
+ attribute : The attributes of the memory range.
+ num_pages : The size of the memory range in pages.
+ phys_addr : The physical address of the memory range.
+ type : The type of the memory range.
+ virt_addr : The virtual address of the memory range.
+
+ The above values are all hexadecimal numbers with the '0x' prefix.
+Users: Kexec
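
For illustration, a userspace tool such as kexec might reassemble these
entries roughly as follows. This is only a sketch: the helper name and the
output format are assumptions, and only the sysfs paths and file names come
from the description above.

        #include <dirent.h>
        #include <inttypes.h>
        #include <stdio.h>

        /* Read one hexadecimal sysfs file, e.g. .../runtime-map/0/phys_addr. */
        static uint64_t read_hex(const char *entry, const char *file)
        {
                char path[256];
                uint64_t val = 0;
                FILE *f;

                snprintf(path, sizeof(path),
                         "/sys/firmware/efi/runtime-map/%s/%s", entry, file);
                f = fopen(path, "r");
                if (f) {
                        fscanf(f, "%" SCNx64, &val);    /* values carry a 0x prefix */
                        fclose(f);
                }
                return val;
        }

        int main(void)
        {
                DIR *dir = opendir("/sys/firmware/efi/runtime-map");
                struct dirent *d;

                if (!dir)
                        return 1;
                while ((d = readdir(dir)) != NULL) {
                        if (d->d_name[0] == '.')
                                continue;
                        printf("map %s: phys=0x%" PRIx64 " virt=0x%" PRIx64
                               " pages=0x%" PRIx64 "\n", d->d_name,
                               read_hex(d->d_name, "phys_addr"),
                               read_hex(d->d_name, "virt_addr"),
                               read_hex(d->d_name, "num_pages"));
                }
                closedir(dir);
                return 0;
        }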
diff --git a/Documentation/ABI/testing/sysfs-kernel-boot_params b/Documentation/ABI/testing/sysfs-kernel-boot_params
new file mode 100644
index 000000000000..eca38ce2852d
--- /dev/null
+++ b/Documentation/ABI/testing/sysfs-kernel-boot_params
@@ -0,0 +1,38 @@
+What: /sys/kernel/boot_params
+Date: December 2013
+Contact: Dave Young <dyoung@redhat.com>
+Description: The /sys/kernel/boot_params directory contains two
+ files, "data" and "version", and one subdirectory, "setup_data".
+ It is used to export the kernel boot parameters of an x86
+ platform to userspace for kexec and debugging purposes.
+
+ If there's no setup_data in boot_params the subdirectory will
+ not be created.
+
+ "data" file is the binary representation of struct boot_params.
+
+ "version" file is the string representation of boot
+ protocol version.
+
+ "setup_data" subdirectory contains the setup_data data
+ structure in boot_params. setup_data is maintained in kernel
+ as a link list. In "setup_data" subdirectory there's one
+ subdirectory for each link list node named with the number
+ of the list nodes. The list node subdirectory contains two
+ files "type" and "data". "type" file is the string
+ representation of setup_data type. "data" file is the binary
+ representation of setup_data payload.
+
+ The whole boot_params directory structure is as follows:
+ /sys/kernel/boot_params
+ |__ data
+ |__ setup_data
+ | |__ 0
+ | | |__ data
+ | | |__ type
+ | |__ 1
+ | |__ data
+ | |__ type
+ |__ version
+
+Users: Kexec
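
As a rough illustration, reading this interface from userspace can be as
simple as the following sketch; the output format is an assumption, while the
file paths come from the description above.

        #include <stdio.h>
        #include <string.h>

        int main(void)
        {
                char version[32] = "";
                unsigned char buf[4096];
                size_t n, total = 0;
                FILE *f;

                /* "version" holds the boot protocol version as a string. */
                f = fopen("/sys/kernel/boot_params/version", "r");
                if (f) {
                        if (fgets(version, sizeof(version), f))
                                version[strcspn(version, "\n")] = '\0';
                        fclose(f);
                }

                /* "data" is the raw binary struct boot_params. */
                f = fopen("/sys/kernel/boot_params/data", "rb");
                if (f) {
                        while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
                                total += n;
                        fclose(f);
                }

                printf("boot protocol %s, %zu bytes of boot_params\n",
                       version, total);
                return 0;
        }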
diff --git a/Documentation/RCU/trace.txt b/Documentation/RCU/trace.txt
index f3778f8952da..910870b15acd 100644
--- a/Documentation/RCU/trace.txt
+++ b/Documentation/RCU/trace.txt
@@ -396,14 +396,14 @@ o Each element of the form "3/3 ..>. 0:7 ^0" represents one rcu_node
The output of "cat rcu/rcu_sched/rcu_pending" looks as follows:
- 0!np=26111 qsp=29 rpq=5386 cbr=1 cng=570 gpc=3674 gps=577 nn=15903
- 1!np=28913 qsp=35 rpq=6097 cbr=1 cng=448 gpc=3700 gps=554 nn=18113
- 2!np=32740 qsp=37 rpq=6202 cbr=0 cng=476 gpc=4627 gps=546 nn=20889
- 3 np=23679 qsp=22 rpq=5044 cbr=1 cng=415 gpc=3403 gps=347 nn=14469
- 4!np=30714 qsp=4 rpq=5574 cbr=0 cng=528 gpc=3931 gps=639 nn=20042
- 5 np=28910 qsp=2 rpq=5246 cbr=0 cng=428 gpc=4105 gps=709 nn=18422
- 6!np=38648 qsp=5 rpq=7076 cbr=0 cng=840 gpc=4072 gps=961 nn=25699
- 7 np=37275 qsp=2 rpq=6873 cbr=0 cng=868 gpc=3416 gps=971 nn=25147
+ 0!np=26111 qsp=29 rpq=5386 cbr=1 cng=570 gpc=3674 gps=577 nn=15903 ndw=0
+ 1!np=28913 qsp=35 rpq=6097 cbr=1 cng=448 gpc=3700 gps=554 nn=18113 ndw=0
+ 2!np=32740 qsp=37 rpq=6202 cbr=0 cng=476 gpc=4627 gps=546 nn=20889 ndw=0
+ 3 np=23679 qsp=22 rpq=5044 cbr=1 cng=415 gpc=3403 gps=347 nn=14469 ndw=0
+ 4!np=30714 qsp=4 rpq=5574 cbr=0 cng=528 gpc=3931 gps=639 nn=20042 ndw=0
+ 5 np=28910 qsp=2 rpq=5246 cbr=0 cng=428 gpc=4105 gps=709 nn=18422 ndw=0
+ 6!np=38648 qsp=5 rpq=7076 cbr=0 cng=840 gpc=4072 gps=961 nn=25699 ndw=0
+ 7 np=37275 qsp=2 rpq=6873 cbr=0 cng=868 gpc=3416 gps=971 nn=25147 ndw=0
The fields are as follows:
@@ -432,6 +432,10 @@ o "gpc" is the number of times that an old grace period had
o "gps" is the number of times that a new grace period had started,
but this CPU was not yet aware of it.
+o "ndw" is the number of times that a wakeup of an rcuo
+ callback-offload kthread had to be deferred in order to avoid
+ deadlock.
+
o "nn" is the number of times that this CPU needed nothing.
@@ -443,7 +447,7 @@ The output of "cat rcu/rcuboost" looks as follows:
balk: nt=0 egt=6541 bt=0 nb=0 ny=126 nos=0
This information is output only for rcu_preempt. Each two-line entry
-corresponds to a leaf rcu_node strcuture. The fields are as follows:
+corresponds to a leaf rcu_node structure. The fields are as follows:
o "n:m" is the CPU-number range for the corresponding two-line
entry. In the sample output above, the first entry covers
diff --git a/Documentation/acpi/apei/einj.txt b/Documentation/acpi/apei/einj.txt
index a58b63da1a36..f51861bcb07b 100644
--- a/Documentation/acpi/apei/einj.txt
+++ b/Documentation/acpi/apei/einj.txt
@@ -45,11 +45,22 @@ directory apei/einj. The following files are provided.
injection. Before this, please specify all necessary error
parameters.
+- flags
+ Present for kernel version 3.13 and above. Used to specify which
+ of param{1..4} are valid and should be used by BIOS during injection.
+ Value is a bitmask as specified in ACPI5.0 spec for the
+ SET_ERROR_TYPE_WITH_ADDRESS data structure:
+ Bit 0 - Processor APIC field valid (see param3 below)
+ Bit 1 - Memory address and mask valid (param1 and param2)
+ Bit 2 - PCIe (seg,bus,dev,fn) valid (param4 below)
+ If set to zero, legacy behaviour is used where the type of injection
+ specifies just one bit set, and param1 is multiplexed.
+
- param1
This file is used to set the first error parameter value. Effect of
parameter depends on error_type specified. For example, if error
type is memory related type, the param1 should be a valid physical
- memory address.
+ memory address. [Unless "flags" is set - see above]
- param2
This file is used to set the second error parameter value. Effect of
@@ -58,6 +69,12 @@ directory apei/einj. The following files are provided.
address mask. Linux requires page or narrower granularity, say,
0xfffffffffffff000.
+- param3
+ Used when the 0x1 bit is set in "flags" to specify the APIC id.
+
+- param4
+ Used when the 0x4 bit is set in "flags" to specify the target PCIe device.
+
- notrigger
The EINJ mechanism is a two step process. First inject the error, then
perform some actions to trigger it. Setting "notrigger" to 1 skips the
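
To make the flags-based flow above concrete, a sketch of injecting a
correctable memory error might look like the following, assuming debugfs is
mounted at /sys/kernel/debug; the helper name and the example address are
purely illustrative.

        #include <stdio.h>

        #define EINJ_DIR "/sys/kernel/debug/apei/einj/"

        /* Write one value to an EINJ control file. */
        static int einj_write(const char *file, unsigned long long val)
        {
                char path[128];
                FILE *f;

                snprintf(path, sizeof(path), EINJ_DIR "%s", file);
                f = fopen(path, "w");
                if (!f)
                        return -1;
                fprintf(f, "0x%llx\n", val);
                fclose(f);
                return 0;
        }

        int main(void)
        {
                einj_write("error_type", 0x8);         /* memory correctable error */
                einj_write("flags", 0x2);              /* bit 1: param1/param2 valid */
                einj_write("param1", 0x12345000ULL);   /* physical address (example) */
                einj_write("param2", 0xfffffffffffff000ULL); /* page mask */
                einj_write("error_inject", 1);         /* fire the injection */
                return 0;
        }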
diff --git a/Documentation/circular-buffers.txt b/Documentation/circular-buffers.txt
index 8117e5bf6065..88951b179262 100644
--- a/Documentation/circular-buffers.txt
+++ b/Documentation/circular-buffers.txt
@@ -160,6 +160,7 @@ The producer will look something like this:
spin_lock(&producer_lock);
unsigned long head = buffer->head;
+ /* The spin_unlock() and next spin_lock() provide needed ordering. */
unsigned long tail = ACCESS_ONCE(buffer->tail);
if (CIRC_SPACE(head, tail, buffer->size) >= 1) {
@@ -168,9 +169,8 @@ The producer will look something like this:
produce_item(item);
- smp_wmb(); /* commit the item before incrementing the head */
-
- buffer->head = (head + 1) & (buffer->size - 1);
+ smp_store_release(&buffer->head,
+ (head + 1) & (buffer->size - 1));
/* wake_up() will make sure that the head is committed before
* waking anyone up */
@@ -183,9 +183,14 @@ This will instruct the CPU that the contents of the new item must be written
before the head index makes it available to the consumer and then instructs the
CPU that the revised head index must be written before the consumer is woken.
-Note that wake_up() doesn't have to be the exact mechanism used, but whatever
-is used must guarantee a (write) memory barrier between the update of the head
-index and the change of state of the consumer, if a change of state occurs.
+Note that wake_up() does not guarantee any sort of barrier unless something
+is actually awakened. We therefore cannot rely on it for ordering. However,
+there is always one element of the array left empty. Therefore, the
+producer must produce two elements before it could possibly corrupt the
+element currently being read by the consumer. As a result, the unlock-lock
+pair between consecutive invocations of the consumer provides the necessary
+ordering between the read of the index indicating that the consumer has
+vacated a given element and the write by the producer to that same element.
THE CONSUMER
@@ -195,21 +200,20 @@ The consumer will look something like this:
spin_lock(&consumer_lock);
- unsigned long head = ACCESS_ONCE(buffer->head);
+ /* Read index before reading contents at that index. */
+ unsigned long head = smp_load_acquire(&buffer->head);
unsigned long tail = buffer->tail;
if (CIRC_CNT(head, tail, buffer->size) >= 1) {
- /* read index before reading contents at that index */
- smp_read_barrier_depends();
/* extract one item from the buffer */
struct item *item = buffer[tail];
consume_item(item);
- smp_mb(); /* finish reading descriptor before incrementing tail */
-
- buffer->tail = (tail + 1) & (buffer->size - 1);
+ /* Finish reading descriptor before incrementing tail. */
+ smp_store_release(&buffer->tail,
+ (tail + 1) & (buffer->size - 1));
}
spin_unlock(&consumer_lock);
@@ -218,12 +222,17 @@ This will instruct the CPU to make sure the index is up to date before reading
the new item, and then it shall make sure the CPU has finished reading the item
before it writes the new tail pointer, which will erase the item.
-
-Note the use of ACCESS_ONCE() in both algorithms to read the opposition index.
-This prevents the compiler from discarding and reloading its cached value -
-which some compilers will do across smp_read_barrier_depends(). This isn't
-strictly needed if you can be sure that the opposition index will _only_ be
-used the once.
+Note the use of ACCESS_ONCE() and smp_load_acquire() to read the
+opposition index. This prevents the compiler from discarding and
+reloading its cached value - which some compilers will do across
+smp_read_barrier_depends(). This isn't strictly needed if you can
+be sure that the opposition index will _only_ be used the once.
+The smp_load_acquire() additionally forces the CPU to order against
+subsequent memory references. Similarly, smp_store_release() is used
+in both algorithms to write the thread's index. This documents the
+fact that we are writing to something that can be read concurrently,
+prevents the compiler from tearing the store, and enforces ordering
+against previous accesses.
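
The same acquire/release pairing can be written in portable userspace code.
A minimal single-producer/single-consumer sketch using C11 atomics (an
analogue, not the kernel API) might look like this, with memory_order_release
standing in for smp_store_release() and memory_order_acquire for
smp_load_acquire():

        #include <stdatomic.h>
        #include <stdio.h>

        #define SIZE 16         /* power of two; one slot always stays empty */

        static int slots[SIZE];
        static _Atomic unsigned long head, tail;

        /* Producer: fill the slot, then release-store the new head index. */
        static int produce(int item)
        {
                unsigned long h = atomic_load_explicit(&head, memory_order_relaxed);
                unsigned long t = atomic_load_explicit(&tail, memory_order_acquire);

                if (((h + 1) & (SIZE - 1)) == t)
                        return 0;               /* full */
                slots[h] = item;
                atomic_store_explicit(&head, (h + 1) & (SIZE - 1),
                                      memory_order_release);
                return 1;
        }

        /* Consumer: acquire-load head, read the slot, then release the slot. */
        static int consume(int *item)
        {
                unsigned long h = atomic_load_explicit(&head, memory_order_acquire);
                unsigned long t = atomic_load_explicit(&tail, memory_order_relaxed);

                if (h == t)
                        return 0;               /* empty */
                *item = slots[t];
                atomic_store_explicit(&tail, (t + 1) & (SIZE - 1),
                                      memory_order_release);
                return 1;
        }

        int main(void)
        {
                int v;

                produce(42);
                if (consume(&v))
                        printf("consumed %d\n", v);
                return 0;
        }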
===============
diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 50a58adfeaf9..e0ebffef5183 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -888,6 +888,14 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
The xen output can only be used by Xen PV guests.
+ edac_report= [HW,EDAC] Control how to report EDAC events
+ Format: {"on" | "off" | "force"}
+ on: enable EDAC to report H/W events. May be overridden
+ by a higher-priority error reporting module.
+ off: disable H/W event reporting through EDAC.
+ force: enforce the use of EDAC to report H/W events.
+ default: on.
+
ekgdboc= [X86,KGDB] Allow early kernel console debugging
ekgdboc=kbd
@@ -897,6 +905,12 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
edd= [EDD]
Format: {"off" | "on" | "skip[mbr]"}
+ efi= [EFI]
+ Format: { "old_map" }
+ old_map [X86-64]: switch to the old ioremap-based EFI
+ runtime services mapping. 32-bit still uses this one by
+ default.
+
efi_no_storage_paranoia [EFI; X86]
Using this parameter you can use more than 50% of
your efi variable storage. Use this parameter only if
@@ -2001,6 +2015,10 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
noapic [SMP,APIC] Tells the kernel to not make use of any
IOAPICs that may be present in the system.
+ nokaslr [X86]
+ Disable kernel base offset ASLR (Address Space
+ Layout Randomization) if built into the kernel.
+
noautogroup Disable scheduler automatic task group creation.
nobats [PPC] Do not use BATs for mapping kernel lowmem
@@ -2634,7 +2652,6 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
for RCU-preempt, and "s" for RCU-sched, and "N"
is the CPU number. This reduces OS jitter on the
offloaded CPUs, which can be useful for HPC and
-
real-time workloads. It can also improve energy
efficiency for asymmetric multiprocessors.
@@ -2650,8 +2667,8 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
periodically wake up to do the polling.
rcutree.blimit= [KNL]
- Set maximum number of finished RCU callbacks to process
- in one batch.
+ Set maximum number of finished RCU callbacks to
+ process in one batch.
rcutree.rcu_fanout_leaf= [KNL]
Increase the number of CPUs assigned to each
@@ -2670,8 +2687,8 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
value is one, and maximum value is HZ.
rcutree.qhimark= [KNL]
- Set threshold of queued
- RCU callbacks over which batch limiting is disabled.
+ Set threshold of queued RCU callbacks beyond which
+ batch limiting is disabled.
rcutree.qlowmark= [KNL]
Set threshold of queued RCU callbacks below which
diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
index c8c42e64e953..cb753c8158f2 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -194,18 +194,22 @@ There are some minimal guarantees that may be expected of a CPU:
(*) On any given CPU, dependent memory accesses will be issued in order, with
respect to itself. This means that for:
- Q = P; D = *Q;
+ Q = ACCESS_ONCE(P); smp_read_barrier_depends(); D = ACCESS_ONCE(*Q);
the CPU will issue the following memory operations:
Q = LOAD P, D = LOAD *Q
- and always in that order.
+ and always in that order. On most systems, smp_read_barrier_depends()
+ does nothing, but it is required for DEC Alpha. The ACCESS_ONCE()
+ is required to prevent compiler mischief. Please note that you
+ should normally use something like rcu_dereference() instead of
+ open-coding smp_read_barrier_depends().
(*) Overlapping loads and stores within a particular CPU will appear to be
ordered within that CPU. This means that for:
- a = *X; *X = b;
+ a = ACCESS_ONCE(*X); ACCESS_ONCE(*X) = b;
the CPU will only issue the following sequence of memory operations:
@@ -213,7 +217,7 @@ There are some minimal guarantees that may be expected of a CPU:
And for:
- *X = c; d = *X;
+ ACCESS_ONCE(*X) = c; d = ACCESS_ONCE(*X);
the CPU will only issue:
@@ -224,6 +228,12 @@ There are some minimal guarantees that may be expected of a CPU:
And there are a number of things that _must_ or _must_not_ be assumed:
+ (*) It _must_not_ be assumed that the compiler will do what you want with
+ memory references that are not protected by ACCESS_ONCE(). Without
+ ACCESS_ONCE(), the compiler is within its rights to do all sorts
+ of "creative" transformations, which are covered in the Compiler
+ Barrier section.
+
(*) It _must_not_ be assumed that independent loads and stores will be issued
in the order given. This means that for:
@@ -392,12 +402,18 @@ And a couple of implicit varieties:
Memory operations that occur after an UNLOCK operation may appear to
happen before it completes.
- LOCK and UNLOCK operations are guaranteed to appear with respect to each
- other strictly in the order specified.
-
The use of LOCK and UNLOCK operations generally precludes the need for
other sorts of memory barrier (but note the exceptions mentioned in the
- subsection "MMIO write barrier").
+ subsection "MMIO write barrier"). In addition, an UNLOCK+LOCK pair
+ is -not- guaranteed to act as a full memory barrier. However,
+ after a LOCK on a given lock variable, all memory accesses preceding any
+ prior UNLOCK on that same variable are guaranteed to be visible.
+ In other words, within a given lock variable's critical section,
+ all accesses of all previous critical sections for that lock variable
+ are guaranteed to have completed.
+
+ This means that LOCK acts as a minimal "acquire" operation and
+ UNLOCK acts as a minimal "release" operation.
Memory barriers are only required where there's a possibility of interaction
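
To make the acquire/release point concrete, consider the following sketch in
the same two-column style used elsewhere in this document:

        CPU 1                           CPU 2
        ===============                 ===============
        LOCK M
        ACCESS_ONCE(x) = 1;
        UNLOCK M
                                        LOCK M
                                        r1 = ACCESS_ONCE(x);
                                        UNLOCK M

Provided CPU 2's LOCK M executes after CPU 1's UNLOCK M, r1 is guaranteed to
be 1, because every access preceding CPU 1's UNLOCK is visible after CPU 2's
subsequent LOCK.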
@@ -450,14 +466,14 @@ The usage requirements of data dependency barriers are a little subtle, and
it's not always obvious that they're needed. To illustrate, consider the
following sequence of events:
- CPU 1 CPU 2
- =============== ===============
+ CPU 1 CPU 2
+ =============== ===============
{ A == 1, B == 2, C = 3, P == &A, Q == &C }
B = 4;
<write barrier>
- P = &B
- Q = P;
- D = *Q;
+ ACCESS_ONCE(P) = &B
+ Q = ACCESS_ONCE(P);
+ D = *Q;
There's a clear data dependency here, and it would seem that by the end of the
sequence, Q must be either &A or &B, and that:
@@ -477,15 +493,15 @@ Alpha).
To deal with this, a data dependency barrier or better must be inserted
between the address load and the data load:
- CPU 1 CPU 2
- =============== ===============
+ CPU 1 CPU 2
+ =============== ===============
{ A == 1, B == 2, C = 3, P == &A, Q == &C }
B = 4;
<write barrier>
- P = &B
- Q = P;
- <data dependency barrier>
- D = *Q;
+ ACCESS_ONCE(P) = &B
+ Q = ACCESS_ONCE(P);
+ <data dependency barrier>
+ D = *Q;
This enforces the occurrence of one of the two implications, and prevents the
third possibility from arising.
@@ -500,25 +516,26 @@ odd-numbered bank is idle, one can see the new value of the pointer P (&B),
but the old value of the variable B (2).
-Another example of where data dependency barriers might by required is where a
+Another example of where data dependency barriers might be required is where a
number is read from memory and then used to calculate the index for an array
access:
- CPU 1 CPU 2
- =============== ===============
+ CPU 1 CPU 2
+ =============== ===============
{ M[0] == 1, M[1] == 2, M[3] = 3, P == 0, Q == 3 }
M[1] = 4;
<write barrier>
- P = 1
- Q = P;
- <data dependency barrier>
- D = M[Q];
+ ACCESS_ONCE(P) = 1
+ Q = ACCESS_ONCE(P);
+ <data dependency barrier>
+ D = M[Q];
-The data dependency barrier is very important to the RCU system, for example.
-See rcu_dereference() in include/linux/rcupdate.h. This permits the current
-target of an RCU'd pointer to be replaced with a new modified target, without
-the replacement target appearing to be incompletely initialised.
+The data dependency barrier is very important to the RCU system,
+for example. See rcu_assign_pointer() and rcu_dereference() in
+include/linux/rcupdate.h. This permits the current target of an RCU'd
+pointer to be replaced with a new modified target, without the replacement
+target appearing to be incompletely initialised.
See also the subsection on "Cache Coherency" for a more thorough example.
@@ -530,24 +547,190 @@ A control dependency requires a full read memory barrier, not simply a data
dependency barrier to make it work correctly. Consider the following bit of
code:
- q = &a;
- if (p) {
- <data dependency barrier>
- q = &b;
+ q = ACCESS_ONCE(a);
+ if (q) {
+ <data dependency barrier> /* BUG: No data dependency!!! */
+ p = ACCESS_ONCE(b);
}
- x = *q;
This will not have the desired effect because there is no actual data
-dependency, but rather a control dependency that the CPU may short-circuit by
-attempting to predict the outcome in advance. In such a case what's actually
-required is:
+dependency, but rather a control dependency that the CPU may short-circuit
+by attempting to predict the outcome in advance, so that other CPUs see
+the load from b as having happened before the load from a. In such a
+case what's actually required is:
- q = &a;
- if (p) {
+ q = ACCESS_ONCE(a);
+ if (q) {
<read barrier>
- q = &b;
+ p = ACCESS_ONCE(b);
+ }
+
+However, stores are not speculated. This means that ordering -is- provided
+in the following example:
+
+ q = ACCESS_ONCE(a);
+ if (ACCESS_ONCE(q)) {
+ ACCESS_ONCE(b) = p;
+ }
+
+Please note that ACCESS_ONCE() is not optional! Without the ACCESS_ONCE(),
+the compiler is within its rights to transform this example:
+
+ q = a;
+ if (q) {
+ b = p; /* BUG: Compiler can reorder!!! */
+ do_something();
+ } else {
+ b = p; /* BUG: Compiler can reorder!!! */
+ do_something_else();
+ }
+
+into this, which of course defeats the ordering:
+
+ b = p;
+ q = a;
+ if (q)
+ do_something();
+ else
+ do_something_else();
+
+Worse yet, if the compiler is able to prove (say) that the value of
+variable 'a' is always non-zero, it would be well within its rights
+to optimize the original example by eliminating the "if" statement
+as follows:
+
+ q = a;
+ b = p; /* BUG: Compiler can reorder!!! */
+ do_something();
+
+The solution is again ACCESS_ONCE(), which preserves the ordering between
+the load from variable 'a' and the store to variable 'b':
+
+ q = ACCESS_ONCE(a);
+ if (q) {
+ ACCESS_ONCE(b) = p;
+ do_something();
+ } else {
+ ACCESS_ONCE(b) = p;
+ do_something_else();
}
- x = *q;
+
+You could also use barrier() to prevent the compiler from moving
+the stores to variable 'b', but barrier() would not prevent the
+compiler from proving to itself that a==1 always, so ACCESS_ONCE()
+is also needed.
+
+It is important to note that control dependencies absolutely require a
+a conditional. For example, the following "optimized" version of
+the above example breaks ordering:
+
+ q = ACCESS_ONCE(a);
+ ACCESS_ONCE(b) = p; /* BUG: No ordering vs. load from a!!! */
+ if (q) {
+ /* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
+ do_something();
+ } else {
+ /* ACCESS_ONCE(b) = p; -- moved up, BUG!!! */
+ do_something_else();
+ }
+
+It is of course legal for the prior load to be part of the conditional,
+for example, as follows:
+
+ if (ACCESS_ONCE(a) > 0) {
+ ACCESS_ONCE(b) = q / 2;
+ do_something();
+ } else {
+ ACCESS_ONCE(b) = q / 3;
+ do_something_else();
+ }
+
+This will again ensure that the load from variable 'a' is ordered before the
+stores to variable 'b'.
+
+In addition, you need to be careful what you do with the local variable 'q',
+otherwise the compiler might be able to guess the value and again remove
+the needed conditional. For example:
+
+ q = ACCESS_ONCE(a);
+ if (q % MAX) {
+ ACCESS_ONCE(b) = p;
+ do_something();
+ } else {
+ ACCESS_ONCE(b) = p;
+ do_something_else();
+ }
+
+If MAX is defined to be 1, then the compiler knows that (q % MAX) is
+equal to zero, in which case the compiler is within its rights to
+transform the above code into the following:
+
+ q = ACCESS_ONCE(a);
+ ACCESS_ONCE(b) = p;
+ do_something_else();
+
+This transformation loses the ordering between the load from variable 'a'
+and the store to variable 'b'. If you are relying on this ordering, you
+should do something like the following:
+
+ q = ACCESS_ONCE(a);
+ BUILD_BUG_ON(MAX <= 1); /* Order load from a with store to b. */
+ if (q % MAX) {
+ ACCESS_ONCE(b) = p;
+ do_something();
+ } else {
+ ACCESS_ONCE(b) = p;
+ do_something_else();
+ }
+
+Finally, control dependencies do -not- provide transitivity. This is
+demonstrated by two related examples:
+
+ CPU 0 CPU 1
+ ===================== =====================
+ r1 = ACCESS_ONCE(x); r2 = ACCESS_ONCE(y);
+ if (r1 >= 0) if (r2 >= 0)
+ ACCESS_ONCE(y) = 1; ACCESS_ONCE(x) = 1;
+
+ assert(!(r1 == 1 && r2 == 1));
+
+The above two-CPU example will never trigger the assert(). However,
+if control dependencies guaranteed transitivity (which they do not),
+then adding the following two CPUs would guarantee a related assertion:
+
+ CPU 2 CPU 3
+ ===================== =====================
+ ACCESS_ONCE(x) = 2; ACCESS_ONCE(y) = 2;
+
+ assert(!(r1 == 2 && r2 == 2 && x == 1 && y == 1)); /* FAILS!!! */
+
+But because control dependencies do -not- provide transitivity, the
+above assertion can fail after the combined four-CPU example completes.
+If you need the four-CPU example to provide ordering, you will need
+smp_mb() between the loads and stores in the CPU 0 and CPU 1 code fragments.
+
+In summary:
+
+ (*) Control dependencies can order prior loads against later stores.
+ However, they do -not- guarantee any other sort of ordering:
+ Not prior loads against later loads, nor prior stores against
+ later anything. If you need these other forms of ordering,
+ use smp_rmb(), smp_wmb(), or, in the case of prior stores and
+ later loads, smp_mb().
+
+ (*) Control dependencies require at least one run-time conditional
+ between the prior load and the subsequent store. If the compiler
+ is able to optimize the conditional away, it will have also
+ optimized away the ordering. Careful use of ACCESS_ONCE() can
+ help to preserve the needed conditional.
+
+ (*) Control dependencies require that the compiler avoid reordering the
+ dependency into nonexistence. Careful use of ACCESS_ONCE() or
+ barrier() can help to preserve your control dependency. Please
+ see the Compiler Barrier section for more information.
+
+ (*) Control dependencies do -not- provide transitivity. If you
+ need transitivity, use smp_mb().
SMP BARRIER PAIRING
@@ -561,23 +744,23 @@ barrier, though a general barrier would also be viable. Similarly a read
barrier or a data dependency barrier should always be paired with at least an
write barrier, though, again, a general barrier is viable:
- CPU 1 CPU 2
- =============== ===============
- a = 1;
+ CPU 1 CPU 2
+ =============== ===============
+ ACCESS_ONCE(a) = 1;
<write barrier>
- b = 2; x = b;
- <read barrier>
- y = a;
+ ACCESS_ONCE(b) = 2; x = ACCESS_ONCE(b);
+ <read barrier>
+ y = ACCESS_ONCE(a);
Or:
- CPU 1 CPU 2
- =============== ===============================
+ CPU 1 CPU 2
+ =============== ===============================
a = 1;
<write barrier>
- b = &a; x = b;
- <data dependency barrier>
- y = *x;
+ ACCESS_ONCE(b) = &a; x = ACCESS_ONCE(b);
+ <data dependency barrier>
+ y = *x;
Basically, the read barrier always has to be there, even though it can be of
the "weaker" type.
@@ -586,13 +769,13 @@ the "weaker" type.
match the loads after the read barrier or the data dependency barrier, and vice
versa:
- CPU 1 CPU 2
- =============== ===============
- a = 1; }---- --->{ v = c
- b = 2; } \ / { w = d
- <write barrier> \ <read barrier>
- c = 3; } / \ { x = a;
- d = 4; }---- --->{ y = b;
+ CPU 1 CPU 2
+ =================== ===================
+ ACCESS_ONCE(a) = 1; }---- --->{ v = ACCESS_ONCE(c);
+ ACCESS_ONCE(b) = 2; } \ / { w = ACCESS_ONCE(d);
+ <write barrier> \ <read barrier>
+ ACCESS_ONCE(c) = 3; } / \ { x = ACCESS_ONCE(a);
+ ACCESS_ONCE(d) = 4; }---- --->{ y = ACCESS_ONCE(b);
EXAMPLES OF MEMORY BARRIER SEQUENCES
@@ -882,12 +1065,12 @@ cache it for later use.
Consider:
- CPU 1 CPU 2
+ CPU 1 CPU 2
======================= =======================
- LOAD B
- DIVIDE } Divide instructions generally
- DIVIDE } take a long time to perform
- LOAD A
+ LOAD B
+ DIVIDE } Divide instructions generally
+ DIVIDE } take a long time to perform
+ LOAD A
Which might appear as this:
@@ -910,13 +1093,13 @@ Which might appear as this:
Placing a read barrier or a data dependency barrier just before the second
load:
- CPU 1 CPU 2
+ CPU 1 CPU 2
======================= =======================
- LOAD B
- DIVIDE
- DIVIDE
+ LOAD B
+ DIVIDE
+ DIVIDE
<read barrier>
- LOAD A
+ LOAD A
will force any value speculatively obtained to be reconsidered to an extent
dependent on the type of barrier used. If there was no change made to the
@@ -1042,10 +1225,277 @@ compiler from moving the memory accesses either side of it to the other side:
barrier();
-This is a general barrier - lesser varieties of compiler barrier do not exist.
+This is a general barrier -- there are no read-read or write-write variants
+of barrier(). However, ACCESS_ONCE() can be thought of as a weak form
+of barrier() that affects only the specific accesses flagged by the
+ACCESS_ONCE().
+
+The barrier() function has the following effects:
+
+ (*) Prevents the compiler from reordering accesses following the
+ barrier() to precede any accesses preceding the barrier().
+ One example use for this property is to ease communication between
+ interrupt-handler code and the code that was interrupted.
+
+ (*) Within a loop, forces the compiler to load the variables used
+ in that loop's conditional on each pass through that loop.
+
+The ACCESS_ONCE() function can prevent any number of optimizations that,
+while perfectly safe in single-threaded code, can be fatal in concurrent
+code. Here are some examples of these sorts of optimizations:
+
+ (*) The compiler is within its rights to merge successive loads from
+ the same variable. Such merging can cause the compiler to "optimize"
+ the following code:
+
+ while (tmp = a)
+ do_something_with(tmp);
+
+ into the following code, which, although in some sense legitimate
+ for single-threaded code, is almost certainly not what the developer
+ intended:
+
+ if (tmp = a)
+ for (;;)
+ do_something_with(tmp);
+
+ Use ACCESS_ONCE() to prevent the compiler from doing this to you:
+
+ while (tmp = ACCESS_ONCE(a))
+ do_something_with(tmp);
+
+ (*) The compiler is within its rights to reload a variable, for example,
+ in cases where high register pressure prevents the compiler from
+ keeping all data of interest in registers. The compiler might
+ therefore optimize the variable 'tmp' out of our previous example:
+
+ while (tmp = a)
+ do_something_with(tmp);
+
+ This could result in the following code, which is perfectly safe in
+ single-threaded code, but can be fatal in concurrent code:
+
+ while (a)
+ do_something_with(a);
+
+ For example, the optimized version of this code could result in
+ passing a zero to do_something_with() in the case where the variable
+ a was modified by some other CPU between the "while" statement and
+ the call to do_something_with().
+
+ Again, use ACCESS_ONCE() to prevent the compiler from doing this:
+
+ while (tmp = ACCESS_ONCE(a))
+ do_something_with(tmp);
-The compiler barrier has no direct effect on the CPU, which may then reorder
-things however it wishes.
+ Note that if the compiler runs short of registers, it might save
+ tmp onto the stack. The overhead of this saving and later restoring
+ is why compilers reload variables. Doing so is perfectly safe for
+ single-threaded code, so you need to tell the compiler about cases
+ where it is not safe.
+
+ (*) The compiler is within its rights to omit a load entirely if it knows
+ what the value will be. For example, if the compiler can prove that
+ the value of variable 'a' is always zero, it can optimize this code:
+
+ while (tmp = a)
+ do_something_with(tmp);
+
+ Into this:
+
+ do { } while (0);
+
+ This transformation is a win for single-threaded code because it gets
+ rid of a load and a branch. The problem is that the compiler will
+ carry out its proof assuming that the current CPU is the only one
+ updating variable 'a'. If variable 'a' is shared, then the compiler's
+ proof will be erroneous. Use ACCESS_ONCE() to tell the compiler
+ that it doesn't know as much as it thinks it does:
+
+ while (tmp = ACCESS_ONCE(a))
+ do_something_with(tmp);
+
+ But please note that the compiler is also closely watching what you
+ do with the value after the ACCESS_ONCE(). For example, suppose you
+ do the following and MAX is a preprocessor macro with the value 1:
+
+ while ((tmp = ACCESS_ONCE(a)) % MAX)
+ do_something_with(tmp);
+
+ Then the compiler knows that the result of the "%" operator applied
+ to MAX will always be zero, again allowing the compiler to optimize
+ the code into near-nonexistence. (It will still load from the
+ variable 'a'.)
+
+ (*) Similarly, the compiler is within its rights to omit a store entirely
+ if it knows that the variable already has the value being stored.
+ Again, the compiler assumes that the current CPU is the only one
+ storing into the variable, which can cause the compiler to do the
+ wrong thing for shared variables. For example, suppose you have
+ the following:
+
+ a = 0;
+ /* Code that does not store to variable a. */
+ a = 0;
+
+ The compiler sees that the value of variable 'a' is already zero, so
+ it might well omit the second store. This would come as a fatal
+ surprise if some other CPU might have stored to variable 'a' in the
+ meantime.
+
+ Use ACCESS_ONCE() to prevent the compiler from making this sort of
+ wrong guess:
+
+ ACCESS_ONCE(a) = 0;
+ /* Code that does not store to variable a. */
+ ACCESS_ONCE(a) = 0;
+
+ (*) The compiler is within its rights to reorder memory accesses unless
+ you tell it not to. For example, consider the following interaction
+ between process-level code and an interrupt handler:
+
+ void process_level(void)
+ {
+ msg = get_message();
+ flag = true;
+ }
+
+ void interrupt_handler(void)
+ {
+ if (flag)
+ process_message(msg);
+ }
+
+ There is nothing to prevent the compiler from transforming
+ process_level() to the following; in fact, this might well be a
+ win for single-threaded code:
+
+ void process_level(void)
+ {
+ flag = true;
+ msg = get_message();
+ }
+
+ If the interrupt occurs between these two statements, then
+ interrupt_handler() might be passed a garbled msg. Use ACCESS_ONCE()
+ to prevent this as follows:
+
+ void process_level(void)
+ {
+ ACCESS_ONCE(msg) = get_message();
+ ACCESS_ONCE(flag) = true;
+ }
+
+ void interrupt_handler(void)
+ {
+ if (ACCESS_ONCE(flag))
+ process_message(ACCESS_ONCE(msg));
+ }
+
+ Note that the ACCESS_ONCE() wrappers in interrupt_handler()
+ are needed if this interrupt handler can itself be interrupted
+ by something that also accesses 'flag' and 'msg', for example,
+ a nested interrupt or an NMI. Otherwise, ACCESS_ONCE() is not
+ needed in interrupt_handler() other than for documentation purposes.
+ (Note also that nested interrupts do not typically occur in modern
+ Linux kernels; in fact, if an interrupt handler returns with
+ interrupts enabled, you will get a WARN_ONCE() splat.)
+
+ You should assume that the compiler can move ACCESS_ONCE() past
+ code not containing ACCESS_ONCE(), barrier(), or similar primitives.
+
+ This effect could also be achieved using barrier(), but ACCESS_ONCE()
+ is more selective: With ACCESS_ONCE(), the compiler need only forget
+ the contents of the indicated memory locations, while with barrier()
+ the compiler must discard the value of all memory locations that
+ it has currently cached in any machine registers. Of course,
+ the compiler must also respect the order in which the ACCESS_ONCE()s
+ occur, though the CPU of course need not do so.
+
+ (*) The compiler is within its rights to invent stores to a variable,
+ as in the following example:
+
+ if (a)
+ b = a;
+ else
+ b = 42;
+
+ The compiler might save a branch by optimizing this as follows:
+
+ b = 42;
+ if (a)
+ b = a;
+
+ In single-threaded code, this is not only safe, but also saves
+ a branch. Unfortunately, in concurrent code, this optimization
+ could cause some other CPU to see a spurious value of 42 -- even
+ if variable 'a' was never zero -- when loading variable 'b'.
+ Use ACCESS_ONCE() to prevent this as follows:
+
+ if (a)
+ ACCESS_ONCE(b) = a;
+ else
+ ACCESS_ONCE(b) = 42;
+
+ The compiler can also invent loads. These are usually less
+ damaging, but they can result in cache-line bouncing and thus in
+ poor performance and scalability. Use ACCESS_ONCE() to prevent
+ invented loads.
+
+ (*) For aligned memory locations whose size allows them to be accessed
+ with a single memory-reference instruction, ACCESS_ONCE() prevents
+ "load tearing" and "store tearing," in which a single large access is replaced by
+ multiple smaller accesses. For example, given an architecture having
+ 16-bit store instructions with 7-bit immediate fields, the compiler
+ might be tempted to use two 16-bit store-immediate instructions to
+ implement the following 32-bit store:
+
+ p = 0x00010002;
+
+ Please note that GCC really does use this sort of optimization,
+ which is not surprising given that it would likely take more
+ than two instructions to build the constant and then store it.
+ This optimization can therefore be a win in single-threaded code.
+ In fact, a recent bug (since fixed) caused GCC to incorrectly use
+ this optimization in a volatile store. In the absence of such bugs,
+ use of ACCESS_ONCE() prevents store tearing in the following example:
+
+ ACCESS_ONCE(p) = 0x00010002;
+
+ Use of packed structures can also result in load and store tearing,
+ as in this example:
+
+ struct __attribute__((__packed__)) foo {
+ short a;
+ int b;
+ short c;
+ };
+ struct foo foo1, foo2;
+ ...
+
+ foo2.a = foo1.a;
+ foo2.b = foo1.b;
+ foo2.c = foo1.c;
+
+ Because there are no ACCESS_ONCE() wrappers and no volatile markings,
+ the compiler would be well within its rights to implement these three
+ assignment statements as a pair of 32-bit loads followed by a pair
+ of 32-bit stores. This would result in load tearing on 'foo1.b'
+ and store tearing on 'foo2.b'. ACCESS_ONCE() again prevents tearing
+ in this example:
+
+ foo2.a = foo1.a;
+ ACCESS_ONCE(foo2.b) = ACCESS_ONCE(foo1.b);
+ foo2.c = foo1.c;
+
+All that aside, it is never necessary to use ACCESS_ONCE() on a variable
+that has been marked volatile. For example, because 'jiffies' is marked
+volatile, it is never necessary to say ACCESS_ONCE(jiffies). The reason
+for this is that ACCESS_ONCE() is implemented as a volatile cast, which
+has no effect when its argument is already marked volatile.
+
+Please note that these compiler barriers have no direct effect on the CPU,
+which may then reorder things however it wishes.
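
For reference, ACCESS_ONCE() itself is just a volatile cast (this is its
definition in include/linux/compiler.h), which is why it constrains the
compiler but not the CPU:

        #define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

Thus tmp = ACCESS_ONCE(a) is a volatile load and ACCESS_ONCE(a) = 0 is a
volatile store; the compiler may not omit, duplicate, or tear them, but the
CPU remains free to reorder the resulting memory operations unless a CPU
memory barrier is also used.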
CPU MEMORY BARRIERS
@@ -1189,8 +1639,12 @@ for each construct. These operations all imply certain barriers:
Memory operations issued after the LOCK will be completed after the LOCK
operation has completed.
- Memory operations issued before the LOCK may be completed after the LOCK
- operation has completed.
+ Memory operations issued before the LOCK may be completed after the
+ LOCK operation has completed. An smp_mb__before_spinlock(), combined
+ with a following LOCK, orders prior loads against subsequent stores
+ and stores and prior stores against subsequent stores. Note that
+ this is weaker than smp_mb()! The smp_mb__before_spinlock()
+ primitive is free on many architectures.
(2) UNLOCK operation implication:
@@ -1210,9 +1664,6 @@ for each construct. These operations all imply certain barriers:
All LOCK operations issued before an UNLOCK operation will be completed
before the UNLOCK operation.
- All UNLOCK operations issued before a LOCK operation will be completed
- before the LOCK operation.
-
(5) Failed conditional LOCK implication:
Certain variants of the LOCK operation may fail, either due to being
@@ -1220,9 +1671,6 @@ for each construct. These operations all imply certain barriers:
signal whilst asleep waiting for the lock to become available. Failed
locks do not imply any sort of barrier.
-Therefore, from (1), (2) and (4) an UNLOCK followed by an unconditional LOCK is
-equivalent to a full barrier, but a LOCK followed by an UNLOCK is not.
-
[!] Note: one of the consequences of LOCKs and UNLOCKs being only one-way
barriers is that the effects of instructions outside of a critical section
may seep into the inside of the critical section.
@@ -1233,13 +1681,57 @@ LOCK, and an access following the UNLOCK to happen before the UNLOCK, and the
two accesses can themselves then cross:
*A = a;
- LOCK
- UNLOCK
+ LOCK M
+ UNLOCK M
*B = b;
may occur as:
- LOCK, STORE *B, STORE *A, UNLOCK
+ LOCK M, STORE *B, STORE *A, UNLOCK M
+
+This same reordering can of course occur if the LOCK and UNLOCK are
+to the same lock variable, but only from the perspective of another
+CPU not holding that lock.
+
+In short, an UNLOCK followed by a LOCK may -not- be assumed to be a full
+memory barrier because it is possible for a preceding UNLOCK to pass a
+later LOCK from the viewpoint of the CPU, but not from the viewpoint
+of the compiler. Note that deadlocks cannot be introduced by this
+interchange because if such a deadlock threatened, the UNLOCK would
+simply complete.
+
+If it is necessary for an UNLOCK-LOCK pair to produce a full barrier,
+the LOCK can be followed by an smp_mb__after_unlock_lock() invocation.
+This will produce a full barrier if either (a) the UNLOCK and the LOCK
+are executed by the same CPU or task, or (b) the UNLOCK and LOCK act
+on the same lock variable. The smp_mb__after_unlock_lock() primitive
+is free on many architectures. Without smp_mb__after_unlock_lock(),
+the critical sections corresponding to the UNLOCK and the LOCK can cross:
+
+ *A = a;
+ UNLOCK M
+ LOCK N
+ *B = b;
+
+could occur as:
+
+ LOCK N, STORE *B, STORE *A, UNLOCK M
+
+With smp_mb__after_unlock_lock(), they cannot, so that:
+
+ *A = a;
+ UNLOCK M
+ LOCK N
+ smp_mb__after_unlock_lock();
+ *B = b;
+
+will always occur as either of the following:
+
+ STORE *A, UNLOCK, LOCK, STORE *B
+ STORE *A, LOCK, UNLOCK, STORE *B
+
+If the UNLOCK and LOCK were instead both operating on the same lock
+variable, only the first of these two alternatives can occur.
Locks and semaphores may not provide any guarantee of ordering on UP compiled
systems, and so cannot be counted on in such a situation to actually achieve
@@ -1435,12 +1927,12 @@ three CPUs; then should the following sequence of events occur:
CPU 1 CPU 2
=============================== ===============================
- *A = a; *E = e;
+ ACCESS_ONCE(*A) = a; ACCESS_ONCE(*E) = e;
LOCK M LOCK Q
- *B = b; *F = f;
- *C = c; *G = g;
+ ACCESS_ONCE(*B) = b; ACCESS_ONCE(*F) = f;
+ ACCESS_ONCE(*C) = c; ACCESS_ONCE(*G) = g;
UNLOCK M UNLOCK Q
- *D = d; *H = h;
+ ACCESS_ONCE(*D) = d; ACCESS_ONCE(*H) = h;
Then there is no guarantee as to what order CPU 3 will see the accesses to *A
through *H occur in, other than the constraints imposed by the separate locks
@@ -1460,17 +1952,18 @@ However, if the following occurs:
CPU 1 CPU 2
=============================== ===============================
- *A = a;
- LOCK M [1]
- *B = b;
- *C = c;
- UNLOCK M [1]
- *D = d; *E = e;
- LOCK M [2]
- *F = f;
- *G = g;
- UNLOCK M [2]
- *H = h;
+ ACCESS_ONCE(*A) = a;
+ LOCK M [1]
+ ACCESS_ONCE(*B) = b;
+ ACCESS_ONCE(*C) = c;
+ UNLOCK M [1]
+ ACCESS_ONCE(*D) = d; ACCESS_ONCE(*E) = e;
+ LOCK M [2]
+ smp_mb__after_unlock_lock();
+ ACCESS_ONCE(*F) = f;
+ ACCESS_ONCE(*G) = g;
+ UNLOCK M [2]
+ ACCESS_ONCE(*H) = h;
CPU 3 might see:
@@ -1484,6 +1977,11 @@ But assuming CPU 1 gets the lock first, CPU 3 won't see any of:
*F, *G or *H preceding LOCK M [2]
*A, *B, *C, *E, *F or *G following UNLOCK M [2]
+Note that the smp_mb__after_unlock_lock() is critically important
+here: Without it CPU 3 might see some of the above orderings.
+Without smp_mb__after_unlock_lock(), the accesses are not guaranteed
+to be seen in order unless CPU 3 holds lock M.
+
LOCKS VS I/O ACCESSES
---------------------
@@ -1687,21 +2185,23 @@ explicit lock operations, described later). These include:
xchg();
cmpxchg();
- atomic_xchg();
- atomic_cmpxchg();
- atomic_inc_return();
- atomic_dec_return();
- atomic_add_return();
- atomic_sub_return();
- atomic_inc_and_test();
- atomic_dec_and_test();
- atomic_sub_and_test();
- atomic_add_negative();
- atomic_add_unless(); /* when succeeds (returns 1) */
+ atomic_xchg(); atomic_long_xchg();
+ atomic_cmpxchg(); atomic_long_cmpxchg();
+ atomic_inc_return(); atomic_long_inc_return();
+ atomic_dec_return(); atomic_long_dec_return();
+ atomic_add_return(); atomic_long_add_return();
+ atomic_sub_return(); atomic_long_sub_return();
+ atomic_inc_and_test(); atomic_long_inc_and_test();
+ atomic_dec_and_test(); atomic_long_dec_and_test();
+ atomic_sub_and_test(); atomic_long_sub_and_test();
+ atomic_add_negative(); atomic_long_add_negative();
test_and_set_bit();
test_and_clear_bit();
test_and_change_bit();
+ /* when succeeds (returns 1) */
+ atomic_add_unless(); atomic_long_add_unless();
+
These are used for such things as implementing LOCK-class and UNLOCK-class
operations and adjusting reference counters towards object destruction, and as
such the implicit memory barrier effects are necessary.
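
As a reference-counting illustration of why these implied barriers matter
(struct foo and put_foo() are hypothetical names): the full barrier implied
by atomic_dec_and_test() ensures that all of this CPU's earlier accesses to
the object have completed before the last reference holder frees it.

        struct foo {
                atomic_t refcount;
                /* ... payload ... */
        };

        static void put_foo(struct foo *f)
        {
                /* Value-returning atomic: implies a full barrier before and after. */
                if (atomic_dec_and_test(&f->refcount))
                        kfree(f);
        }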
@@ -1887,8 +2387,8 @@ functions:
space should suffice for PCI.
[*] NOTE! attempting to load from the same location as was written to may
- cause a malfunction - consider the 16550 Rx/Tx serial registers for
- example.
+ cause a malfunction - consider the 16550 Rx/Tx serial registers for
+ example.
Used with prefetchable I/O memory, an mmiowb() barrier may be required to
force stores to be ordered.
@@ -1955,19 +2455,19 @@ barriers for the most part act at the interface between the CPU and its cache
:
+--------+ +--------+ : +--------+ +-----------+
| | | | : | | | | +--------+
- | CPU | | Memory | : | CPU | | | | |
- | Core |--->| Access |----->| Cache |<-->| | | |
+ | CPU | | Memory | : | CPU | | | | |
+ | Core |--->| Access |----->| Cache |<-->| | | |
| | | Queue | : | | | |--->| Memory |
- | | | | : | | | | | |
- +--------+ +--------+ : +--------+ | | | |
+ | | | | : | | | | | |
+ +--------+ +--------+ : +--------+ | | | |
: | Cache | +--------+
: | Coherency |
: | Mechanism | +--------+
+--------+ +--------+ : +--------+ | | | |
| | | | : | | | | | |
| CPU | | Memory | : | CPU | | |--->| Device |
- | Core |--->| Access |----->| Cache |<-->| | | |
- | | | Queue | : | | | | | |
+ | Core |--->| Access |----->| Cache |<-->| | | |
+ | | | Queue | : | | | | | |
| | | | : | | | | +--------+
+--------+ +--------+ : +--------+ +-----------+
:
@@ -2090,7 +2590,7 @@ CPU's caches by some other cache event:
p = &v; q = p;
<D:request p>
<B:modify p=&v> <D:commit p=&v>
- <D:read p>
+ <D:read p>
x = *q;
<C:read *q> Reads from v before v updated in cache
<C:unbusy>
@@ -2115,7 +2615,7 @@ queue before processing any further requests:
p = &v; q = p;
<D:request p>
<B:modify p=&v> <D:commit p=&v>
- <D:read p>
+ <D:read p>
smp_read_barrier_depends()
<C:unbusy>
<C:commit v=2>
@@ -2177,11 +2677,11 @@ A programmer might take it for granted that the CPU will perform memory
operations in exactly the order specified, so that if the CPU is, for example,
given the following piece of code to execute:
- a = *A;
- *B = b;
- c = *C;
- d = *D;
- *E = e;
+ a = ACCESS_ONCE(*A);
+ ACCESS_ONCE(*B) = b;
+ c = ACCESS_ONCE(*C);
+ d = ACCESS_ONCE(*D);
+ ACCESS_ONCE(*E) = e;
they would then expect that the CPU will complete the memory operation for each
instruction before moving on to the next one, leading to a definite sequence of
@@ -2228,12 +2728,12 @@ However, it is guaranteed that a CPU will be self-consistent: it will see its
_own_ accesses appear to be correctly ordered, without the need for a memory
barrier. For instance with the following code:
- U = *A;
- *A = V;
- *A = W;
- X = *A;
- *A = Y;
- Z = *A;
+ U = ACCESS_ONCE(*A);
+ ACCESS_ONCE(*A) = V;
+ ACCESS_ONCE(*A) = W;
+ X = ACCESS_ONCE(*A);
+ ACCESS_ONCE(*A) = Y;
+ Z = ACCESS_ONCE(*A);
and assuming no intervention by an external influence, it can be assumed that
the final result will appear to be:
@@ -2250,7 +2750,12 @@ accesses:
in that order, but, without intervention, the sequence may have almost any
combination of elements combined or discarded, provided the program's view of
-the world remains consistent.
+the world remains consistent. Note that ACCESS_ONCE() is -not- optional
+in the above example, as there are architectures where a given CPU might
+interchange successive loads to the same location. On such architectures,
+ACCESS_ONCE() does whatever is necessary to prevent this, for example, on
+Itanium the volatile casts used by ACCESS_ONCE() cause GCC to emit the
+special ld.acq and st.rel instructions that prevent such reordering.
The compiler may also combine, discard or defer elements of the sequence before
the CPU even sees them.
@@ -2264,13 +2769,13 @@ may be reduced to:
*A = W;
-since, without a write barrier, it can be assumed that the effect of the
-storage of V to *A is lost. Similarly:
+since, without either a write barrier or an ACCESS_ONCE(), it can be
+assumed that the effect of the storage of V to *A is lost. Similarly:
*A = Y;
Z = *A;
-may, without a memory barrier, be reduced to:
+may, without a memory barrier or an ACCESS_ONCE(), be reduced to:
*A = Y;
Z = Y;
diff --git a/Documentation/robust-futex-ABI.txt b/Documentation/robust-futex-ABI.txt
index fd1cd8aae4eb..16eb314f56cc 100644
--- a/Documentation/robust-futex-ABI.txt
+++ b/Documentation/robust-futex-ABI.txt
@@ -146,8 +146,8 @@ On removal:
1) set the 'list_op_pending' word to the address of the 'lock entry'
to be removed,
2) remove the lock entry for this lock from the 'head' list,
- 2) release the futex lock, and
- 2) clear the 'lock_op_pending' word.
+ 3) release the futex lock, and
+ 4) clear the 'list_op_pending' word.
On exit, the kernel will consider the address stored in
'list_op_pending' and the address of each 'lock word' found by walking
diff --git a/Documentation/sysctl/kernel.txt b/Documentation/sysctl/kernel.txt
index 26b7ee491df8..6d486404200e 100644
--- a/Documentation/sysctl/kernel.txt
+++ b/Documentation/sysctl/kernel.txt
@@ -428,11 +428,6 @@ rate for each task.
numa_balancing_scan_size_mb is how many megabytes worth of pages are
scanned for a given scan.
-numa_balancing_settle_count is how many scan periods must complete before
-the schedule balancer stops pushing the task towards a preferred node. This
-gives the scheduler a chance to place the task on an alternative node if the
-preferred node is overloaded.
-
numa_balancing_migrate_deferred is how many page migrations get skipped
unconditionally, after a page migration is skipped because a page is shared
with other tasks. This reduces page migration overhead, and determines
diff --git a/Documentation/x86/boot.txt b/Documentation/x86/boot.txt
index f4f268c2b826..cb81741d3b0b 100644
--- a/Documentation/x86/boot.txt
+++ b/Documentation/x86/boot.txt
@@ -608,6 +608,9 @@ Protocol: 2.12+
- If 1, the kernel supports the 64-bit EFI handoff entry point
given at handover_offset + 0x200.
+ Bit 4 (read): XLF_EFI_KEXEC
+ - If 1, the kernel supports kexec EFI boot with EFI runtime support.
+
Field name: cmdline_size
Type: read
Offset/size: 0x238/4
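
As a sketch of how a boot loader or kexec tool might test the new bit (the
function name is illustrative; the 0x236 offset of the xloadflags field comes
from the boot protocol layout documented in this file):

        #define XLF_EFI_KEXEC (1 << 4)

        /* 'image' points at the start of the bzImage file. */
        static int supports_efi_kexec(const unsigned char *image)
        {
                unsigned int xloadflags = image[0x236] | (image[0x237] << 8);

                return (xloadflags & XLF_EFI_KEXEC) != 0;
        }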
diff --git a/Documentation/x86/x86_64/mm.txt b/Documentation/x86/x86_64/mm.txt
index 881582f75c9c..c584a51add15 100644
--- a/Documentation/x86/x86_64/mm.txt
+++ b/Documentation/x86/x86_64/mm.txt
@@ -28,4 +28,11 @@ reference.
Current X86-64 implementations only support 40 bits of address space,
but we support up to 46 bits. This expands into MBZ space in the page tables.
+->trampoline_pgd:
+
+We map EFI runtime services in the aforementioned PGD in the virtual
+range of 64GB (arbitrarily set, can be raised if needed):
+
+0xffffffef00000000 - 0xffffffff00000000
+
-Andi Kleen, Jul 2004