
Commit 4b322bc

Merge: Update rcu to v6.12
MR: https://gitlab.com/redhat/centos-stream/src/kernel/centos-stream-9/-/merge_requests/7112
JIRA: https://issues.redhat.com/browse/RHEL-79879

Omitted: Remove RCU Tasks Rude asynchronous APIs
Omitted-fix: a0b6594
    Just a comment change in tools/gpio/gpio-sloppy-logic-analyzer.sh, skip.

Signed-off-by: Čestmír Kalina <ckalina@redhat.com>
Approved-by: Tony Camuso <tcamuso@redhat.com>
Approved-by: Waiman Long <longman@redhat.com>
Approved-by: Rafael Aquini <raquini@redhat.com>
Approved-by: David Arcari <darcari@redhat.com>
Approved-by: CKI KWF Bot <cki-ci-bot+kwf-gitlab-com@redhat.com>
Merged-by: Augusto Caringi <acaringi@redhat.com>
2 parents: bd2adc9 + 8296cfa

106 files changed, +1681 −2673 lines changed


Documentation/RCU/Design/Data-Structures/Data-Structures.rst

Lines changed: 14 additions & 14 deletions
```diff
@@ -921,10 +921,10 @@ This portion of the ``rcu_data`` structure is declared as follows:
 
 ::
 
-  1   int dynticks_snap;
+  1   int watching_snap;
   2   unsigned long dynticks_fqs;
 
-The ``->dynticks_snap`` field is used to take a snapshot of the
+The ``->watching_snap`` field is used to take a snapshot of the
 corresponding CPU's dyntick-idle state when forcing quiescent states,
 and is therefore accessed from other CPUs. Finally, the
 ``->dynticks_fqs`` field is used to count the number of times this CPU
@@ -935,8 +935,8 @@ This portion of the rcu_data structure is declared as follows:
 
 ::
 
-  1   long dynticks_nesting;
-  2   long dynticks_nmi_nesting;
+  1   long nesting;
+  2   long nmi_nesting;
   3   atomic_t dynticks;
   4   bool rcu_need_heavy_qs;
   5   bool rcu_urgent_qs;
@@ -945,27 +945,27 @@ These fields in the rcu_data structure maintain the per-CPU dyntick-idle
 state for the corresponding CPU. The fields may be accessed only from
 the corresponding CPU (and from tracing) unless otherwise stated.
 
-The ``->dynticks_nesting`` field counts the nesting depth of process
+The ``->nesting`` field counts the nesting depth of process
 execution, so that in normal circumstances this counter has value zero
 or one. NMIs, irqs, and tracers are counted by the
-``->dynticks_nmi_nesting`` field. Because NMIs cannot be masked, changes
+``->nmi_nesting`` field. Because NMIs cannot be masked, changes
 to this variable have to be undertaken carefully using an algorithm
 provided by Andy Lutomirski. The initial transition from idle adds one,
 and nested transitions add two, so that a nesting level of five is
-represented by a ``->dynticks_nmi_nesting`` value of nine. This counter
+represented by a ``->nmi_nesting`` value of nine. This counter
 can therefore be thought of as counting the number of reasons why this
 CPU cannot be permitted to enter dyntick-idle mode, aside from
 process-level transitions.
 
 However, it turns out that when running in non-idle kernel context, the
 Linux kernel is fully capable of entering interrupt handlers that never
 exit and perhaps also vice versa. Therefore, whenever the
-``->dynticks_nesting`` field is incremented up from zero, the
-``->dynticks_nmi_nesting`` field is set to a large positive number, and
-whenever the ``->dynticks_nesting`` field is decremented down to zero,
-the ``->dynticks_nmi_nesting`` field is set to zero. Assuming that
+``->nesting`` field is incremented up from zero, the
+``->nmi_nesting`` field is set to a large positive number, and
+whenever the ``->nesting`` field is decremented down to zero,
+the ``->nmi_nesting`` field is set to zero. Assuming that
 the number of misnested interrupts is not sufficient to overflow the
-counter, this approach corrects the ``->dynticks_nmi_nesting`` field
+counter, this approach corrects the ``->nmi_nesting`` field
 every time the corresponding CPU enters the idle loop from process
 context.
 
@@ -992,8 +992,8 @@ code.
 +-----------------------------------------------------------------------+
 | **Quick Quiz**: |
 +-----------------------------------------------------------------------+
-| Why not simply combine the ``->dynticks_nesting`` and |
-| ``->dynticks_nmi_nesting`` counters into a single counter that just |
+| Why not simply combine the ``->nesting`` and |
+| ``->nmi_nesting`` counters into a single counter that just |
 | counts the number of reasons that the corresponding CPU is non-idle? |
 +-----------------------------------------------------------------------+
 | **Answer**: |
```
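The rename leaves the counting scheme itself unchanged, and the scheme is easy to lose track of in prose. Below is a minimal user-space C sketch of the bookkeeping the document describes; the struct and helper names are invented for the example and are not the kernel's actual context-tracking code:

```c
/*
 * Sketch of the ->nesting / ->nmi_nesting rules described above.
 * All names here are invented for illustration only.
 */
#include <stdio.h>

#define NMI_NESTING_FLOOR (1L << 30)	/* the "large positive number" */

struct rdp_sketch {
	long nesting;		/* process-level non-idle nesting depth */
	long nmi_nesting;	/* NMI/irq/tracer nesting, Lutomirski encoding */
};

/* Process-level idle exit: the first entry primes ->nmi_nesting. */
static void sketch_kernel_enter(struct rdp_sketch *rdp)
{
	if (rdp->nesting++ == 0)
		rdp->nmi_nesting = NMI_NESTING_FLOOR;
}

/* Process-level idle entry: the last exit clears ->nmi_nesting. */
static void sketch_kernel_exit(struct rdp_sketch *rdp)
{
	if (--rdp->nesting == 0)
		rdp->nmi_nesting = 0;
}

/* Interrupt entry: +1 from idle, +2 when nested, so level n is 2n-1. */
static void sketch_nmi_enter(struct rdp_sketch *rdp)
{
	rdp->nmi_nesting += rdp->nmi_nesting ? 2 : 1;
}

/* Interrupt exit: mirror image of entry. */
static void sketch_nmi_exit(struct rdp_sketch *rdp)
{
	rdp->nmi_nesting -= rdp->nmi_nesting == 1 ? 1 : 2;
}

int main(void)
{
	struct rdp_sketch rdp = { 0, 0 };
	int i;

	for (i = 0; i < 5; i++)
		sketch_nmi_enter(&rdp);
	/* Matches the text: nesting level five is represented by nine. */
	printf("nmi_nesting at level 5: %ld\n", rdp.nmi_nesting);

	for (i = 0; i < 5; i++)
		sketch_nmi_exit(&rdp);
	sketch_kernel_enter(&rdp);	/* primes ->nmi_nesting to the floor */
	sketch_kernel_exit(&rdp);	/* drops both counters back to zero */
	printf("after idle re-entry: nesting=%ld nmi_nesting=%ld\n",
	       rdp.nesting, rdp.nmi_nesting);
	return 0;
}
```

Running the sketch reproduces the worked example in the text: five nested NMI-style entries from idle yield a ``->nmi_nesting`` value of nine.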

Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst

Lines changed: 5 additions & 5 deletions
```diff
@@ -147,11 +147,11 @@ RCU read-side critical sections preceding and following the current
 idle sojourn.
 This case is handled by calls to the strongly ordered
 ``atomic_add_return()`` read-modify-write atomic operation that
-is invoked within ``rcu_dynticks_eqs_enter()`` at idle-entry
-time and within ``rcu_dynticks_eqs_exit()`` at idle-exit time.
-The grace-period kthread invokes ``rcu_dynticks_snap()`` and
-``rcu_dynticks_in_eqs_since()`` (both of which invoke
-an ``atomic_add_return()`` of zero) to detect idle CPUs.
+is invoked within ``ct_kernel_exit_state()`` at idle-entry
+time and within ``ct_kernel_enter_state()`` at idle-exit time.
+The grace-period kthread invokes first ``ct_rcu_watching_cpu_acquire()``
+(preceded by a full memory barrier) and ``rcu_watching_snap_stopped_since()``
+(both of which rely on acquire semantics) to detect idle CPUs.
 
 +-----------------------------------------------------------------------+
 | **Quick Quiz**: |
```
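The snapshot-and-recheck scheme this hunk renames can be illustrated with an even/odd counter: even means idle (RCU not watching), odd means in the kernel. Here is a user-space C11-atomics sketch of that idea; all names are invented for illustration and this is not the kernel's context-tracking implementation:

```c
/*
 * Sketch of idle detection via an even/odd counter. A full-barrier
 * RMW at each idle transition stands in for atomic_add_return();
 * the grace-period side uses acquire loads, as the text describes.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

static _Atomic long watching;	/* even: idle; odd: running in the kernel */

/* Idle entry or exit: strongly ordered read-modify-write (seq_cst). */
static void sketch_eqs_transition(void)
{
	atomic_fetch_add(&watching, 1);
}

/* Grace-period side: snapshot the counter with acquire ordering. */
static long sketch_watching_snap(void)
{
	return atomic_load_explicit(&watching, memory_order_acquire);
}

/*
 * Has this CPU been idle at some point since the snapshot?
 * True if the snapshot was even (idle then) or the counter moved since.
 */
static bool sketch_stopped_since(long snap)
{
	if (!(snap & 1))
		return true;
	return atomic_load_explicit(&watching, memory_order_acquire) != snap;
}

int main(void)
{
	long snap;

	sketch_eqs_transition();	/* idle -> kernel (counter now odd) */
	snap = sketch_watching_snap();
	printf("stopped since snap? %d\n", sketch_stopped_since(snap)); /* 0 */
	sketch_eqs_transition();	/* kernel -> idle (counter now even) */
	printf("stopped since snap? %d\n", sketch_stopped_since(snap)); /* 1 */
	return 0;
}
```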

Documentation/RCU/Design/Memory-Ordering/TreeRCU-callback-registry.svg

Lines changed: 0 additions & 9 deletions

Documentation/RCU/Design/Memory-Ordering/TreeRCU-dyntick.svg

Lines changed: 4 additions & 4 deletions

Documentation/RCU/Design/Memory-Ordering/TreeRCU-gp-fqs.svg

Lines changed: 4 additions & 4 deletions

Documentation/RCU/Design/Memory-Ordering/TreeRCU-gp.svg

Lines changed: 4 additions & 13 deletions

Documentation/RCU/Design/Memory-Ordering/TreeRCU-hotplug.svg

Lines changed: 2 additions & 2 deletions

Documentation/RCU/listRCU.rst

Lines changed: 9 additions & 0 deletions
```diff
@@ -9,6 +9,15 @@ is that all of the required memory barriers are included for you in
 the list macros. This document describes several applications of RCU,
 with the best fits first.
 
+When iterating a list while holding the rcu_read_lock(), writers may
+modify the list. The reader is guaranteed to see all of the elements
+which were added to the list before they acquired the rcu_read_lock()
+and are still on the list when they drop the rcu_read_unlock().
+Elements which are added to, or removed from the list may or may not
+be seen. If the writer calls list_replace_rcu(), the reader may see
+either the old element or the new element; they will not see both,
+nor will they see neither.
+
 
 Example 1: Read-mostly list: Deferred Destruction
 -------------------------------------------------
```
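The guarantee added by this hunk is easiest to see in code. Below is a kernel-style sketch: ``struct item``, ``items_lock``, and the two functions are invented for the example, while the list primitives themselves are the real ``<linux/rculist.h>`` API:

```c
/* Sketch of an RCU-protected list with the semantics described above. */
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct item {
	struct list_head node;
	int key;
	struct rcu_head rcu;
};

static LIST_HEAD(items);
static DEFINE_SPINLOCK(items_lock);	/* serializes all list updates */

/* Reader: sees every element present for the whole critical section. */
static bool item_present(int key)
{
	struct item *it;
	bool found = false;

	rcu_read_lock();
	list_for_each_entry_rcu(it, &items, node) {
		if (it->key == key) {
			found = true;
			break;
		}
	}
	rcu_read_unlock();
	return found;
}

/* Writer: a concurrent reader sees either @old or @new, never both. */
static void item_replace(struct item *old, struct item *new)
{
	spin_lock(&items_lock);
	list_replace_rcu(&old->node, &new->node);
	spin_unlock(&items_lock);
	kfree_rcu(old, rcu);	/* freed only after current readers finish */
}
```

A reader racing with ``item_replace()`` observes exactly one of the two elements, which is the "not both, nor neither" property the new paragraph states.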

Documentation/RCU/lockdep-splat.rst

Lines changed: 1 addition & 1 deletion
```diff
@@ -10,7 +10,7 @@ misuses of the RCU API, most notably using one of the rcu_dereference()
 family to access an RCU-protected pointer without the proper protection.
 When such misuse is detected, an lockdep-RCU splat is emitted.
 
-The usual cause of a lockdep-RCU slat is someone accessing an
+The usual cause of a lockdep-RCU splat is someone accessing an
 RCU-protected data structure without either (1) being in the right kind of
 RCU read-side critical section or (2) holding the right update-side lock.
 This problem can therefore be serious: it might result in random memory
```
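For reference, the two legitimate access patterns that paragraph names look roughly like the sketch below. ``gbl_foo``, ``gbl_lock``, and the functions are hypothetical names used only for illustration; ``rcu_dereference()``, ``rcu_dereference_protected()``, and ``lockdep_is_held()`` are the real lockdep-aware primitives:

```c
/* Sketch of the two access patterns that keep lockdep-RCU quiet. */
#include <linux/rcupdate.h>
#include <linux/spinlock.h>

struct foo {
	int val;
};

static struct foo __rcu *gbl_foo;	/* assumed non-NULL once set up */
static DEFINE_SPINLOCK(gbl_lock);	/* serializes updates to gbl_foo */

/* (1) Reader: the right kind of RCU read-side critical section. */
static int foo_get_val(void)
{
	int val;

	rcu_read_lock();
	/* lockdep verifies rcu_read_lock() is held at this dereference. */
	val = rcu_dereference(gbl_foo)->val;
	rcu_read_unlock();
	return val;
}

/* (2) Updater: the right update-side lock instead of an RCU reader. */
static struct foo *foo_swap(struct foo *newp)
{
	struct foo *oldp;

	spin_lock(&gbl_lock);
	/* lockdep verifies gbl_lock is held, so no splat here either. */
	oldp = rcu_dereference_protected(gbl_foo,
					 lockdep_is_held(&gbl_lock));
	rcu_assign_pointer(gbl_foo, newp);
	spin_unlock(&gbl_lock);
	return oldp;	/* caller frees after a grace period */
}
```

Accessing ``gbl_foo`` with a plain load, or with ``rcu_dereference()`` outside any read-side critical section, is exactly the misuse that triggers the splat.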
