Commit cb63335
sched: Don't account irq time if sched_clock_irqtime is disabled
JIRA: https://issues.redhat.com/browse/RHEL-78821

commit 763a744
Author: Yafang Shao <laoar.shao@gmail.com>
Date:   Fri Jan 3 10:24:07 2025 +0800

    sched: Don't account irq time if sched_clock_irqtime is disabled

    sched_clock_irqtime may be disabled due to the clock source, in which
    case IRQ time should not be accounted. Let's add a conditional check
    to avoid unnecessary logic.

    Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Michal Koutný <mkoutny@suse.com>
    Reviewed-by: Vincent Guittot <vincent.guittot@linaro.org>
    Link: https://lore.kernel.org/r/20250103022409.2544-3-laoar.shao@gmail.com

Signed-off-by: Phil Auld <pauld@redhat.com>
1 parent 8657a68 commit cb63335

1 file changed (+23, -21 lines)
kernel/sched/core.c

@@ -701,29 +701,31 @@ static void update_rq_clock_task(struct rq *rq, s64 delta)
 	s64 __maybe_unused steal = 0, irq_delta = 0;
 
 #ifdef CONFIG_IRQ_TIME_ACCOUNTING
-	irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
+	if (irqtime_enabled()) {
+		irq_delta = irq_time_read(cpu_of(rq)) - rq->prev_irq_time;
 
-	/*
-	 * Since irq_time is only updated on {soft,}irq_exit, we might run into
-	 * this case when a previous update_rq_clock() happened inside a
-	 * {soft,}IRQ region.
-	 *
-	 * When this happens, we stop ->clock_task and only update the
-	 * prev_irq_time stamp to account for the part that fit, so that a next
-	 * update will consume the rest. This ensures ->clock_task is
-	 * monotonic.
-	 *
-	 * It does however cause some slight miss-attribution of {soft,}IRQ
-	 * time, a more accurate solution would be to update the irq_time using
-	 * the current rq->clock timestamp, except that would require using
-	 * atomic ops.
-	 */
-	if (irq_delta > delta)
-		irq_delta = delta;
+		/*
+		 * Since irq_time is only updated on {soft,}irq_exit, we might run into
+		 * this case when a previous update_rq_clock() happened inside a
+		 * {soft,}IRQ region.
+		 *
+		 * When this happens, we stop ->clock_task and only update the
+		 * prev_irq_time stamp to account for the part that fit, so that a next
+		 * update will consume the rest. This ensures ->clock_task is
+		 * monotonic.
+		 *
+		 * It does however cause some slight miss-attribution of {soft,}IRQ
+		 * time, a more accurate solution would be to update the irq_time using
+		 * the current rq->clock timestamp, except that would require using
+		 * atomic ops.
+		 */
+		if (irq_delta > delta)
+			irq_delta = delta;
 
-	rq->prev_irq_time += irq_delta;
-	delta -= irq_delta;
-	delayacct_irq(rq->curr, irq_delta);
+		rq->prev_irq_time += irq_delta;
+		delta -= irq_delta;
+		delayacct_irq(rq->curr, irq_delta);
+	}
 #endif
 #ifdef CONFIG_PARAVIRT_TIME_ACCOUNTING
 	if (static_key_false((&paravirt_steal_rq_enabled))) {
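The logic the patch guards can be modelled outside the kernel. The sketch below is a minimal userspace approximation, not the real scheduler code: `sched_clock_irqtime`, `irqtime_enabled()`, and the `struct rq` fields are hypothetical stand-ins that model only the clamping and accounting behaviour of the hunk above (delayacct and the per-CPU irqtime read are omitted). It shows the two properties the patch cares about: IRQ time is charged against `clock_task` only when irqtime accounting is enabled, and `irq_delta` is clamped to `delta` so `clock_task` stays monotonic.

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in for the kernel's sched_clock_irqtime state; in the kernel
 * this may be disabled depending on the clock source. */
static bool sched_clock_irqtime = true;

static bool irqtime_enabled(void)
{
	return sched_clock_irqtime;
}

/* Simplified runqueue: only the fields the hunk touches. */
struct rq {
	long long clock_task;     /* task clock, must stay monotonic */
	long long prev_irq_time;  /* IRQ time already accounted */
};

/* Mirror of the patched logic: charge IRQ time only when enabled,
 * clamp irq_delta so clock_task never goes backwards. */
static void update_rq_clock_task(struct rq *rq, long long delta,
				 long long irq_time_now)
{
	long long irq_delta = 0;

	if (irqtime_enabled()) {
		irq_delta = irq_time_now - rq->prev_irq_time;
		if (irq_delta > delta)
			irq_delta = delta;
		rq->prev_irq_time += irq_delta;
		delta -= irq_delta;
	}
	rq->clock_task += delta;
}
```

With the check disabled, the whole body is skipped and `delta` flows into `clock_task` untouched, which is the point of the patch: no IRQ-time reads or clamping work when the clock source cannot support it.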
