author     Dmitry Safonov <dima@arista.com>      2022-11-23 17:38:57 +0000
committer  Jakub Kicinski <kuba@kernel.org>      2022-12-01 15:53:05 -0800
commit     459837b522f7dff3b6681f534d8fff4eca19b7d1
tree       2a802b4f79c2ede2bccadd155af6dfa635a71bc1 /net/ipv4/tcp.c
parent     f62c7517ffa1378cc60cb5646567fa98e4b388cd
net/tcp: Disable TCP-MD5 static key on tcp_md5sig_info destruction
To do that, separate two scenarios (sketched below):
- where it's the first MD5 key on the system, which means that enabling
the static key may need to sleep;
- copying an existing key from a listening socket to the request
socket upon receiving a signed TCP segment, where the static key was
already enabled (when the key was added to the listening socket).
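As a rough sketch of how those two call sites differ (illustrative only: the helper names below are hypothetical, static_key_fast_inc() is the helper named in this commit message, and its bool return convention is an assumption):

    /* Illustrative sketch only -- not the patch itself. */
    #include <linux/jump_label.h>

    /* Scenario 1: the first MD5 key on the system is added via
     * setsockopt(TCP_MD5SIG) in process context, so the sleeping
     * static-branch enable is acceptable here.
     */
    static void md5_enable_branch_for_first_key(struct static_key_false *key)
    {
            static_branch_inc(key);  /* may sleep while patching the branch */
    }

    /* Scenario 2: a signed segment reaches a listener that already holds a
     * key; the key is copied to the request socket in softirq context, so
     * only an atomic refcount bump on the already-enabled key is allowed.
     */
    static bool md5_take_branch_ref_for_req(struct static_key *key)
    {
            return static_key_fast_inc(key);  /* atomic, no cpus_read_lock() */
    }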
Now the lifetime of the static branch for TCP-MD5 lasts until:
- the last tcp_md5sig_info is destroyed, or
- the last socket in time-wait state with an MD5 key is closed.
This means that after all sockets with TCP-MD5 keys are gone, the
system gets back the performance of a disabled md5-key static branch.
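A minimal sketch of the teardown side, assuming a destruction helper of this shape (the name is hypothetical, and the actual patch may use a deferred variant of the decrement):

    /* Illustrative sketch; the function name is hypothetical. */
    static void tcp_md5_info_destroy_sketch(struct tcp_md5sig_info *md5sig)
    {
            /* ... free every key hanging off md5sig, then md5sig itself ... */

            /* Drop the reference taken when the branch was enabled for this
             * socket.  Once the last tcp_md5sig_info (and the last MD5-carrying
             * time-wait socket) is gone, the count reaches zero, the branch is
             * patched back out, and non-MD5 traffic stops paying for it.
             */
            static_branch_dec(&tcp_md5_needed);
    }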
While at it, provide a static_key_fast_inc() helper that does the ref
counter increment in an atomic fashion (without grabbing cpus_read_lock()
on CONFIG_JUMP_LABEL=y). This is needed to add a new user for
a static_key when the caller controls the lifetime of another user.
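The idea behind such a helper is a lock-free compare-and-swap loop that only succeeds while the key is already enabled, so it never has to patch code or sleep. A simplified sketch (field names follow struct static_key in include/linux/jump_label.h; the exact name and semantics of the upstream helper may differ):

    #include <linux/atomic.h>
    #include <linux/jump_label.h>
    #include <linux/limits.h>

    /* Simplified sketch of an atomic "fast inc" on a static key. */
    static bool static_key_fast_inc_sketch(struct static_key *key)
    {
            int v = atomic_read(&key->enabled);

            do {
                    /* 0 means the branch is disabled: enabling it must go through
                     * the slow, sleeping static_key_slow_inc() path instead.  Also
                     * refuse to overflow the reference counter.
                     */
                    if (v <= 0 || v == INT_MAX)
                            return false;
            } while (!atomic_try_cmpxchg(&key->enabled, &v, v + 1));

            return true;
    }

Treating zero as failure is the point: the 0 -> 1 transition is exactly the case that has to patch code and may sleep, so it stays on the slow path.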
Signed-off-by: Dmitry Safonov <dima@arista.com>
Acked-by: Jakub Kicinski <kuba@kernel.org>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
Diffstat (limited to 'net/ipv4/tcp.c')
 net/ipv4/tcp.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/net/ipv4/tcp.c b/net/ipv4/tcp.c
index 24602a5184b0..001947136b0a 100644
--- a/net/ipv4/tcp.c
+++ b/net/ipv4/tcp.c
@@ -4464,11 +4464,8 @@ bool tcp_alloc_md5sig_pool(void)
 	if (unlikely(!READ_ONCE(tcp_md5sig_pool_populated))) {
 		mutex_lock(&tcp_md5sig_mutex);
 
-		if (!tcp_md5sig_pool_populated) {
+		if (!tcp_md5sig_pool_populated)
 			__tcp_alloc_md5sig_pool();
-			if (tcp_md5sig_pool_populated)
-				static_branch_inc(&tcp_md5_needed);
-		}
 
 		mutex_unlock(&tcp_md5sig_mutex);
 	}
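Putting the hunk back together, tcp_alloc_md5sig_pool() after this patch no longer touches the static branch at all; a reconstruction (lines outside the hunk, including the return statement, are assumed from context):

    bool tcp_alloc_md5sig_pool(void)
    {
            /* Pool population stays lazy and serialized by the mutex; only the
             * static_branch_inc() bookkeeping has moved out of this function.
             */
            if (unlikely(!READ_ONCE(tcp_md5sig_pool_populated))) {
                    mutex_lock(&tcp_md5sig_mutex);

                    if (!tcp_md5sig_pool_populated)
                            __tcp_alloc_md5sig_pool();

                    mutex_unlock(&tcp_md5sig_mutex);
            }
            return READ_ONCE(tcp_md5sig_pool_populated);  /* assumed: outside the hunk */
    }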