author		Oleg Nesterov <oleg@redhat.com>	2024-01-22 16:50:50 +0100
committer	Andrew Morton <akpm@linux-foundation.org>	2024-02-07 21:20:32 -0800
commit		daa694e4137571b4ebec330f9a9b4d54aa8b8089 (patch)
tree		c5455af304fd2066d7b44e9ca6279aa6f7520a32 /fs/bcachefs
parent		e656c7a9e59607d1672d85ffa9a89031876ffe67 (diff)
getrusage: move thread_group_cputime_adjusted() outside of lock_task_sighand()
Patch series "getrusage: use sig->stats_lock", v2.
This patch (of 2):
thread_group_cputime() does its own locking, so we can safely shift
thread_group_cputime_adjusted(), which does another for_each_thread loop,
outside of the ->siglock-protected section.
This is also preparation for the next patch, which changes getrusage() to
use stats_lock instead of siglock; thread_group_cputime() takes the same
lock. With the current implementation the recursive
read_seqbegin_or_lock() is fine, since thread_group_cputime() can't enter
the slow mode while the caller holds stats_lock, but this looks safer and
is better performance-wise.
Link: https://lkml.kernel.org/r/20240122155023.GA26169@redhat.com
Link: https://lkml.kernel.org/r/20240122155050.GA26205@redhat.com
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reported-by: Dylan Hatch <dylanbhatch@google.com>
Tested-by: Dylan Hatch <dylanbhatch@google.com>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>