Updated queued_spinlock to latest Linux implementation
Ported changes from the queued lock implementation in Linux kernel
v6.6.7, along with general improvements.
General improvements:
- Changed CONFIG_NR_CPUS to the maximum allowed without affecting the
  lock bit layout.
- Fixed allocation of mcs_pool: use cache-aligned allocations to make
  sure there is no false sharing between CPUs (the actual kernel
  implementation uses DEFINE_PER_CPU_ALIGNED). The cache line size is
  assumed to be 64 bytes.
- Removed intermediate struct __qspinlock definition.
Lock behavior changes from Linux v6.6.7:
- Limited the number of spinning iterations in the initial
  lock->pending check.
- Set lock->pending using fetch_or instead of a cmpxchg loop.
- Added a fallback spin path for when an MCS node cannot be allocated
  (should not happen in lockhammer).
- Reduced the number of cmpxchg operations performed when acquiring
  the lock after being released from the MCS queue.
- Added missing atomic primitives to lk_atomics.h
Signed-off-by: Tiago Mück <tiago.muck@arm.com>