author | Kent Overstreet <kent.overstreet@linux.dev> | 2023-05-20 23:57:48 -0400
---|---|---
committer | Kent Overstreet <kent.overstreet@linux.dev> | 2023-10-22 17:10:02 -0400
commit | 1fb4fe63178881a0ac043a5c05288d9fff85d6b8 (patch) |
tree | 0d0d67219ffab8adfe1c1ba51eee301fa2f938e3 /fs/bcachefs/trace.h |
parent | c4bd3491b1c0b335f63599ec96d1d4ab0d37a3c1 (diff) |
six locks: Kill six_lock_state union
As suggested by Linus, this drops the six_lock_state union in favor of
raw bitmasks.
On the one hand, bitfields give more type-level structure to the code.
However, a significant amount of the code was working with
six_lock_state as a u64/atomic64_t, and the conversions from the
bitfields to the u64 were deemed a bit too out-there.
More significantly, because bitfield order is poorly defined (#ifdef
__LITTLE_ENDIAN_BITFIELD can be used, but is gross), incrementing the
sequence number would overflow into the rest of the bitfield if the
compiler didn't put the sequence number at the high end of the word.
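The mask-based layout can be sketched as follows. The field names and bit positions here are illustrative only, not the actual six-lock bit assignments; the point is that with explicit masks the sequence counter's position in the word is fixed by the definitions, not by the compiler's endian-dependent bitfield layout, so incrementing it can never carry into neighboring fields:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical layout: held bits at the bottom, seq at the top. */
#define SIX_LOCK_HELD_read	(1ULL << 0)
#define SIX_LOCK_HELD_intent	(1ULL << 1)
#define SIX_LOCK_HELD_write	(1ULL << 2)
#define SIX_STATE_SEQ_OFFSET	32
#define SIX_STATE_SEQ		(~0ULL << SIX_STATE_SEQ_OFFSET)

static inline uint32_t state_seq(uint64_t state)
{
	return state >> SIX_STATE_SEQ_OFFSET;
}

/*
 * The increment is aligned to the field's low bit and the field sits at
 * the top of the word, so on wraparound the carry falls off the end of
 * the u64 instead of corrupting the held bits below it.
 */
static inline uint64_t state_seq_inc(uint64_t state)
{
	return state + (1ULL << SIX_STATE_SEQ_OFFSET);
}
```

With a bitfield, whether the same increment overflows into an adjacent field depends on where the compiler placed the sequence number in the word.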
The new code is a bit saner when we're on an architecture without real
atomic64_t support - all accesses to lock->state now go through
atomic64_*() operations.
On architectures with real atomic64_t support, we additionally use
atomic bit ops for setting/clearing individual bits.
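Setting or clearing a single held bit can then be a lone atomic read-modify-write on the state word. A minimal sketch, using C11 atomics as a stand-in for the kernel's atomic bit ops (the function names and bit position are hypothetical, not the actual six-lock API):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define SIX_LOCK_HELD_write	(1ULL << 2)	/* illustrative bit position */

static _Atomic uint64_t lock_state;

/* Set one flag without disturbing the rest of the word. */
static void six_set_write_held(void)
{
	atomic_fetch_or(&lock_state, SIX_LOCK_HELD_write);
}

/* Clear it, again as a single atomic read-modify-write. */
static void six_clear_write_held(void)
{
	atomic_fetch_and(&lock_state, ~SIX_LOCK_HELD_write);
}

static bool six_write_held(void)
{
	return atomic_load(&lock_state) & SIX_LOCK_HELD_write;
}
```

On architectures without real atomic64_t support, the same bits would instead be updated through the generic atomic64_*() fallback path, which is why routing all state accesses through those operations simplifies that case.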
Text size: 7467 bytes -> 4649 bytes - compilers still suck at
bitfields.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Diffstat (limited to 'fs/bcachefs/trace.h')
-rw-r--r-- | fs/bcachefs/trace.h | 8
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/fs/bcachefs/trace.h b/fs/bcachefs/trace.h
index 8027c2a14199..cfb1779d712a 100644
--- a/fs/bcachefs/trace.h
+++ b/fs/bcachefs/trace.h
@@ -420,7 +420,9 @@ TRACE_EVENT(btree_path_relock_fail,
 		else
 			scnprintf(__entry->node, sizeof(__entry->node), "%px", b);
 		__entry->iter_lock_seq	= path->l[level].lock_seq;
-		__entry->node_lock_seq	= is_btree_node(path, level) ? path->l[level].b->c.lock.state.seq : 0;
+		__entry->node_lock_seq	= is_btree_node(path, level)
+			? six_lock_seq(&path->l[level].b->c.lock)
+			: 0;
 	),

 	TP_printk("%s %pS btree %s pos %llu:%llu:%u level %u node %s iter seq %u lock seq %u",
@@ -475,7 +477,9 @@ TRACE_EVENT(btree_path_upgrade_fail,
 		__entry->read_count	= c.n[SIX_LOCK_read];
 		__entry->intent_count	= c.n[SIX_LOCK_read];
 		__entry->iter_lock_seq	= path->l[level].lock_seq;
-		__entry->node_lock_seq	= is_btree_node(path, level) ? path->l[level].b->c.lock.state.seq : 0;
+		__entry->node_lock_seq	= is_btree_node(path, level)
+			? six_lock_seq(&path->l[level].b->c.lock)
+			: 0;
 	),

 	TP_printk("%s %pS btree %s pos %llu:%llu:%u level %u locked %u held %u:%u lock count %u:%u iter seq %u lock seq %u",