Add multi-uprobe and multi-uretprobe benchmarks to bench tool.
Multi- and classic uprobes/uretprobes have different low-level
triggering code paths, so it's sometimes important to be able to
benchmark both flavors of uprobes/uretprobes.
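The two flavors also go through different attach APIs in libbpf. A rough
sketch of the distinction (the skeleton, program names, binary path, and
offset below are made up for illustration):

  #include <bpf/libbpf.h>
  #include "bench_uprobe.skel.h"               /* hypothetical skeleton */

  static void attach_both_flavors(struct bench_uprobe *skel)
  {
          LIBBPF_OPTS(bpf_uprobe_opts, uopts);
          LIBBPF_OPTS(bpf_uprobe_multi_opts, mopts);

          /* classic uprobe: perf-event-based, one probe per attachment */
          bpf_program__attach_uprobe_opts(skel->progs.bench_uprobe,
                                          -1 /* any pid */, "/proc/self/exe",
                                          0x1234 /* func offset */, &uopts);

          /* multi-uprobe: a single bpf_link, with its own kernel attach
           * and triggering path (which is what this benchmark exercises)
           */
          bpf_program__attach_uprobe_multi(skel->progs.bench_uprobe_multi,
                                           -1, "/proc/self/exe",
                                           "uprobe_target*", &mopts);
  }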
Sample results from my dev machine are below. Single-threaded performance
is almost identical, but with more parallel CPUs triggering the same
uprobe/uretprobe the difference grows. This might be due to [0], but
given the code is slightly different, there could be other sources of
slowdown.
Note, all these numbers will change due to ongoing work to improve
uprobe/uretprobe scalability (e.g., [1]), but having a benchmark like this
is useful for measurements and debugging nevertheless.
#!/bin/bash
set -eufo pipefail
for p in 1 8 16 32; do
for i in uprobe-nop uretprobe-nop uprobe-multi-nop uretprobe-multi-nop; do
summary=$(sudo ./bench -w1 -d3 -p$p -a trig-$i | tail -n1)
total=$(echo "$summary" | cut -d'(' -f1 | cut -d' ' -f3-)
percpu=$(echo "$summary" | cut -d'(' -f2 | cut -d')' -f1 | cut -d'/' -f1)
printf "%-21s (%2d cpus): %s (%s/s/cpu)\n" $i $p "$total" "$percpu"
done
echo
done
uprobe-nop ( 1 cpus): 1.020 ± 0.005M/s ( 1.020M/s/cpu)
uretprobe-nop ( 1 cpus): 0.515 ± 0.009M/s ( 0.515M/s/cpu)
uprobe-multi-nop ( 1 cpus): 1.036 ± 0.004M/s ( 1.036M/s/cpu)
uretprobe-multi-nop ( 1 cpus): 0.512 ± 0.005M/s ( 0.512M/s/cpu)
uprobe-nop ( 8 cpus): 3.481 ± 0.030M/s ( 0.435M/s/cpu)
uretprobe-nop ( 8 cpus): 2.222 ± 0.008M/s ( 0.278M/s/cpu)
uprobe-multi-nop ( 8 cpus): 3.769 ± 0.094M/s ( 0.471M/s/cpu)
uretprobe-multi-nop ( 8 cpus): 2.482 ± 0.007M/s ( 0.310M/s/cpu)
uprobe-nop (16 cpus): 2.968 ± 0.011M/s ( 0.185M/s/cpu)
uretprobe-nop (16 cpus): 1.870 ± 0.002M/s ( 0.117M/s/cpu)
uprobe-multi-nop (16 cpus): 3.541 ± 0.037M/s ( 0.221M/s/cpu)
uretprobe-multi-nop (16 cpus): 2.123 ± 0.026M/s ( 0.133M/s/cpu)
uprobe-nop (32 cpus): 2.524 ± 0.026M/s ( 0.079M/s/cpu)
uretprobe-nop (32 cpus): 1.572 ± 0.003M/s ( 0.049M/s/cpu)
uprobe-multi-nop (32 cpus): 2.717 ± 0.003M/s ( 0.085M/s/cpu)
uretprobe-multi-nop (32 cpus): 1.687 ± 0.007M/s ( 0.053M/s/cpu)
[0] https://lore.kernel.org/linux-trace-kernel/20240805202803.1813090-1-andrii@kernel.org/
[1] https://lore.kernel.org/linux-trace-kernel/20240731214256.3588718-1-andrii@kernel.org/
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/r/20240806042935.3867862-1-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Utilize the bpf_modify_return_test_tp() kfunc as a fast way to trigger
tp/raw_tp/fmodret programs from another BPF program, which gives us
batched benchmarks comparable to the (batched) kprobe/fentry benchmarks.
We don't switch the kprobe/fentry batched benchmarks to this kfunc, to
keep the bench tool usable on older kernels as well.
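The shape of the driver program, roughly (program name and iteration
count are illustrative; the kfunc is the one named above):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  /* kfunc that fires the test tracepoint and is itself an fmod_ret target */
  extern int bpf_modify_return_test_tp(int nonce) __ksym __weak;

  const volatile int batch_iters = 100;

  SEC("raw_tp")
  int trigger_driver_kfunc(void *ctx)
  {
          int i;

          /* each call runs the attached tp/raw_tp/fmodret programs once, so
           * one BPF_PROG_TEST_RUN of this driver yields batch_iters
           * benchmarked program executions
           */
          for (i = 0; i < batch_iters; i++)
                  bpf_modify_return_test_tp(0);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";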
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240326162151.3981687-7-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Instead of front-loading all possible benchmarking BPF programs for
trigger benchmarks, explicitly specify which BPF program each benchmark
uses and load only that one.
This makes it easier to support older kernels, where some program types
might not be loadable (e.g., those that rely on a newly added kfunc).
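With libbpf this boils down to toggling autoload before loading the
skeleton; a minimal sketch (skeleton and program names are illustrative,
error handling omitted):

  struct trigger_bench *skel = trigger_bench__open();
  struct bpf_program *prog;

  /* don't load anything by default... */
  bpf_object__for_each_program(prog, skel->obj)
          bpf_program__set_autoload(prog, false);

  /* ...then enable only the program this benchmark exercises */
  bpf_program__set_autoload(skel->progs.bench_trigger_fentry, true);

  trigger_bench__load(skel);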
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240326162151.3981687-5-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Remove "legacy" benchmarks triggered by syscalls in favor of newly added
in-kernel/batched benchmarks. Drop -batched suffix now as well.
Next patch will restore "feature parity" by adding back
tp/raw_tp/fmodret benchmarks based on in-kernel kfunc approach.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240326162151.3981687-4-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Existing kprobe/fentry triggering benchmarks have a 1-to-1 mapping
between one syscall execution and one BPF program run. While we use a
fast getpgid() syscall, syscall overhead can still be non-trivial.
This patch adds a set of kprobe/fentry benchmarks that significantly
amortizes the cost of the syscall relative to the actual BPF triggering
overhead. We do this by using the BPF_PROG_TEST_RUN command to run a
"driver" raw_tp program, which executes a tight parameterized loop
calling a cheap BPF helper (bpf_get_numa_node_id()), to which the
kprobe/fentry programs under benchmark are attached.
This way one bpf() syscall causes N executions of the BPF program being
benchmarked. N defaults to 100, but can be adjusted with the
--trig-batch-iters CLI argument.
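The driver program looks roughly like this (a sketch; names are
illustrative):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  const volatile int batch_iters = 100;  /* set from --trig-batch-iters */

  SEC("raw_tp")
  int trigger_driver(void *ctx)
  {
          int i;

          /* bpf_get_numa_node_id() is the cheap attach point that the
           * benchmarked kprobe/fentry programs hook into
           */
          for (i = 0; i < batch_iters; i++)
                  (void)bpf_get_numa_node_id();
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";

User space then drives it with a single bpf_prog_test_run_opts() call per
batch.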
For comparison we also implement a new baseline program that, instead of
triggering another BPF program, just does N atomic per-CPU counter
increments, establishing the upper limit for all other program types
within this batched benchmarking setup.
Taking the final set of benchmarks added in this patch set (including
tp/raw_tp/fmodret, added in a later patch), and keeping the "legacy"
syscall-driven benchmarks for now, we can capture all triggering
benchmarks in one place for comparison, before we remove the legacy ones
(and rename xxx-batched into just xxx).
$ benchs/run_bench_trigger.sh
usermode-count : 79.500 ± 0.024M/s
kernel-count : 49.949 ± 0.081M/s
syscall-count : 9.009 ± 0.007M/s
fentry-batch : 31.002 ± 0.015M/s
fexit-batch : 20.372 ± 0.028M/s
fmodret-batch : 21.651 ± 0.659M/s
rawtp-batch : 36.775 ± 0.264M/s
tp-batch : 19.411 ± 0.248M/s
kprobe-batch : 12.949 ± 0.220M/s
kprobe-multi-batch : 15.400 ± 0.007M/s
kretprobe-batch : 5.559 ± 0.011M/s
kretprobe-multi-batch: 5.861 ± 0.003M/s
fentry-legacy : 8.329 ± 0.004M/s
fexit-legacy : 6.239 ± 0.003M/s
fmodret-legacy : 6.595 ± 0.001M/s
rawtp-legacy : 8.305 ± 0.004M/s
tp-legacy : 6.382 ± 0.001M/s
kprobe-legacy : 5.528 ± 0.003M/s
kprobe-multi-legacy : 5.864 ± 0.022M/s
kretprobe-legacy : 3.081 ± 0.001M/s
kretprobe-multi-legacy: 3.193 ± 0.001M/s
Note how the xxx-batch variants are measured with significantly higher
throughput, even though the in-kernel overhead per program run is exactly
the same. As such, results can only be compared between benchmarks of the
same kind (syscall-driven vs batched):
fentry-legacy : 8.329 ± 0.004M/s
fentry-batch : 31.002 ± 0.015M/s
kprobe-multi-legacy : 5.864 ± 0.022M/s
kprobe-multi-batch : 15.400 ± 0.007M/s
Note also that syscall-count sets a theoretical limit for
syscall-triggered benchmarks, while kernel-count sets a similar limit for
the batched variants. usermode-count is the unachievable ideal case of
user-space counting without doing any syscalls at all, and mostly
measures CPU speed for such a trivial benchmark.
As mentioned, tp/raw_tp/fmodret require a kernel-side kfunc to produce a
similar batched benchmark, which we address in a separate patch.
Note that run_bench_trigger.sh allows overriding the list of benchmarks
to run, which is very useful for performance work.
Cc: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/r/20240326162151.3981687-3-andrii@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
When benchmarking with multiple threads (-pN, where N>1), we start
contending on a single atomic counter that both the BPF trigger
benchmarks and the user-space "baseline" tests (trig-base and
trig-uprobe-base) are using. As such, we end up bottlenecking on
something completely irrelevant to the benchmark at hand.
Scale counting up by using per-CPU counters on the BPF side. On the
user-space side we do the next best thing: hash the thread ID to
approximate per-CPU behavior. It seems to work quite well in practice.
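The counting scheme looks roughly like this (a sketch; the bucket count,
alignment, and names are illustrative, and the BPF side does the same
thing but indexes by bpf_get_smp_processor_id()):

  #include <unistd.h>
  #include <sys/syscall.h>

  /* an array of cache-line-sized slots instead of one global counter */
  struct counter { long value; } __attribute__((aligned(128)));
  static struct counter hits[256];

  static inline void inc_counter(void)
  {
          static __thread int slot = -1;

          /* hash the thread ID once; different threads mostly land in
           * different slots, approximating per-CPU counters
           */
          if (slot < 0)
                  slot = syscall(SYS_gettid) % 256;
          __atomic_add_fetch(&hits[slot].value, 1, __ATOMIC_RELAXED);
  }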
To demonstrate the difference, I ran three benchmarks with 1, 2, 4, 8,
16, and 32 threads:
- trig-uprobe-base (no syscalls, pure tight counting loop in user-space);
- trig-base (get_pgid() syscall, atomic counter in user-space);
- trig-fentry (syscall to trigger fentry program, atomic uncontended per-CPU
counter on BPF side).
Command used:
for b in uprobe-base base fentry; do \
for p in 1 2 4 8 16 32; do \
printf "%-11s %2d: %s\n" $b $p \
"$(sudo ./bench -w2 -d5 -a -p$p trig-$b | tail -n1 | cut -d'(' -f1 | cut -d' ' -f3-)"; \
done; \
done
Before these changes, aggregate throughput across all threads doesn't
scale well with the number of threads; for uprobe-base it actually falls
sharply due to very high contention:
uprobe-base 1: 138.998 ± 0.650M/s
uprobe-base 2: 70.526 ± 1.147M/s
uprobe-base 4: 63.114 ± 0.302M/s
uprobe-base 8: 54.177 ± 0.138M/s
uprobe-base 16: 45.439 ± 0.057M/s
uprobe-base 32: 37.163 ± 0.242M/s
base 1: 16.940 ± 0.182M/s
base 2: 19.231 ± 0.105M/s
base 4: 21.479 ± 0.038M/s
base 8: 23.030 ± 0.037M/s
base 16: 22.034 ± 0.004M/s
base 32: 18.152 ± 0.013M/s
fentry 1: 14.794 ± 0.054M/s
fentry 2: 17.341 ± 0.055M/s
fentry 4: 23.792 ± 0.024M/s
fentry 8: 21.557 ± 0.047M/s
fentry 16: 21.121 ± 0.004M/s
fentry 32: 17.067 ± 0.023M/s
After these changes, we see almost perfect linear scaling, as expected.
The sub-linear scaling when going from 8 to 16 threads is interesting and
consistent on my test machine, but I haven't investigated what is causing
this peculiar slowdown (it shows up across all benchmarks; it could be
due to hyperthreading effects, but I'm not sure).
uprobe-base 1: 139.980 ± 0.648M/s
uprobe-base 2: 270.244 ± 0.379M/s
uprobe-base 4: 532.044 ± 1.519M/s
uprobe-base 8: 1004.571 ± 3.174M/s
uprobe-base 16: 1720.098 ± 0.744M/s
uprobe-base 32: 3506.659 ± 8.549M/s
base 1: 16.869 ± 0.071M/s
base 2: 33.007 ± 0.092M/s
base 4: 64.670 ± 0.203M/s
base 8: 121.969 ± 0.210M/s
base 16: 207.832 ± 0.112M/s
base 32: 424.227 ± 1.477M/s
fentry 1: 14.777 ± 0.087M/s
fentry 2: 28.575 ± 0.146M/s
fentry 4: 56.234 ± 0.176M/s
fentry 8: 106.095 ± 0.385M/s
fentry 16: 181.440 ± 0.032M/s
fentry 32: 369.131 ± 0.693M/s
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Message-ID: <20240315213329.1161589-1-andrii@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
|
|
Adding kprobe multi triggering benchmarks. They are useful now to
benchmark the new fprobe implementation and might be useful later as well.
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/20240311211023.590321-1-jolsa@kernel.org
|
|
We already have kprobe and fentry benchmarks. Let's add kretprobe and
fexit ones for completeness.
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jiri Olsa <jolsa@kernel.org>
Link: https://lore.kernel.org/bpf/20240309005124.3004446-1-andrii@kernel.org
|
|
Now that u[ret]probes can use name-based specification, it makes
sense to add support for auto-attach based on SEC() definition.
The format proposed is
SEC("u[ret]probe/binary:[raw_offset|[function_name[+offset]]")
For example, to trace malloc() in libc:
SEC("uprobe/libc.so.6:malloc")
...or to trace function foo2 in /usr/bin/foo:
SEC("uprobe//usr/bin/foo:foo2")
Auto-attach is done for all tasks (pid -1). The binary can be an
absolute path or simply a program/library name; in the latter case, we
use PATH/LD_LIBRARY_PATH to resolve the full path, falling back to
standard locations (/usr/bin:/usr/sbin or /usr/lib64:/usr/lib) if the
file is not found via the environment-specified locations.
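For example, a program declared like this auto-attaches when the
skeleton is attached (a sketch; the traced function and program body are
just for illustration):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>
  #include <bpf/bpf_tracing.h>

  /* auto-attaches to malloc() in libc for all processes (pid -1) */
  SEC("uprobe/libc.so.6:malloc")
  int BPF_KPROBE(malloc_entry, size_t size)
  {
          bpf_printk("malloc(%lu)", (unsigned long)size);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";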
Signed-off-by: Alan Maguire <alan.maguire@oracle.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/1648654000-21758-4-git-send-email-alan.maguire@oracle.com
|
|
Some of the tests are using x86_64 ABI-specific syscall entry points
(such as __x64_sys_nanosleep and __x64_sys_getpgid). Update them to use
architecture-dependent syscall entry names.
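In the selftests this is typically handled with an arch-dependent prefix
macro; a sketch of the pattern (assuming the SYS_PREFIX helper from the
selftests' bpf_misc.h, shown here in simplified form):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  /* expands to the per-arch syscall wrapper prefix */
  #if defined(__TARGET_ARCH_x86)
  #define SYS_PREFIX "__x64_"
  #elif defined(__TARGET_ARCH_arm64)
  #define SYS_PREFIX "__arm64_"
  #else
  #define SYS_PREFIX "__se_"
  #endif

  /* instead of hardcoding SEC("fentry/__x64_sys_getpgid"): */
  SEC("fentry/" SYS_PREFIX "sys_getpgid")
  int bench_trigger_fentry(void *ctx)
  {
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";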
Also update fexit_sleep test to not use BPF_PROG() so that it is clear
that the syscall parameters aren't being accessed in the bpf prog.
Note that none of the bpf progs in these tests actually access any of
the syscall parameters. The only exception is perfbuf_bench, which passes
the bpf prog context on to bpf_perf_event_output() as a pointer to
pt_regs, but that looks to be mostly ignored.
Signed-off-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com>
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Link: https://lore.kernel.org/bpf/e35f7051f03e269b623a68b139d8ed131325f7b7.1643973917.git.naveen.n.rao@linux.vnet.ibm.com
|
|
Add a benchmark to measure the overhead of uprobes and uretprobes, along
with a baseline (no uprobe attached) benchmark.
On my dev machine, the baseline benchmark can trigger 130M user_target()
invocations per second. With a uprobe attached, this falls to just
700K/s. With a uretprobe, we get down to 520K/s:
$ sudo ./bench trig-uprobe-base -a
Summary: hits 131.289 ± 2.872M/s
# UPROBE
$ sudo ./bench -a trig-uprobe-without-nop
Summary: hits 0.729 ± 0.007M/s
$ sudo ./bench -a trig-uprobe-with-nop
Summary: hits 1.798 ± 0.017M/s
# URETPROBE
$ sudo ./bench -a trig-uretprobe-without-nop
Summary: hits 0.508 ± 0.012M/s
$ sudo ./bench -a trig-uretprobe-with-nop
Summary: hits 0.883 ± 0.008M/s
So there is almost a 2.5x performance difference between probing a nop
vs a non-nop instruction for an entry uprobe, and a 1.7x difference for a
uretprobe. This means the overhead is around 1.4 microseconds per non-nop
uprobe hit and around 2 microseconds per non-nop uretprobe hit (roughly
1/0.729M/s and 1/0.508M/s, respectively).
For the nop variants, uprobe and uretprobe overhead drops to about 0.556
and 1.13 microseconds, respectively.
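The with-nop/without-nop split refers to what instruction the probe
lands on; the two benchmarked user-space targets look roughly like this
(a sketch, function names are illustrative):

  /* "with-nop" target: on x86 a probed nop can typically be emulated by
   * the kernel without single-stepping the displaced instruction
   */
  __attribute__((noinline)) void uprobe_target_with_nop(void)
  {
          asm volatile ("nop");
  }

  /* "without-nop" target: the probe lands on a regular instruction,
   * which has to be executed out-of-line, costing noticeably more
   */
  __attribute__((noinline)) void uprobe_target_without_nop(void)
  {
          asm volatile ("");
  }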
For comparison, just doing a very low-overhead syscall (with no BPF
programs attached anywhere) gives:
$ sudo ./bench trig-base -a
Summary: hits 4.830 ± 0.036M/s
So even the nop uprobe is about 2.67x slower than a bare syscall (a pure
user/kernel mode switch).
Signed-off-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20211116013041.4072571-1-andrii@kernel.org
|
|
Modify a few tests to sanity-test sleepable BPF functionality.
Running 'bench trig-fentry-sleep' vs 'bench trig-fentry' and 'perf report':
sleepable with SRCU:
3.86% bench [k] __srcu_read_unlock
3.22% bench [k] __srcu_read_lock
0.92% bench [k] bpf_prog_740d4210cdcd99a3_bench_trigger_fentry_sleep
0.50% bench [k] bpf_trampoline_10297
0.26% bench [k] __bpf_prog_exit_sleepable
0.21% bench [k] __bpf_prog_enter_sleepable
sleepable with RCU_TRACE:
0.79% bench [k] bpf_prog_740d4210cdcd99a3_bench_trigger_fentry_sleep
0.72% bench [k] bpf_trampoline_10381
0.31% bench [k] __bpf_prog_exit_sleepable
0.29% bench [k] __bpf_prog_enter_sleepable
non-sleepable with RCU:
0.88% bench [k] bpf_prog_740d4210cdcd99a3_bench_trigger_fentry
0.84% bench [k] bpf_trampoline_10297
0.13% bench [k] __bpf_prog_enter
0.12% bench [k] __bpf_prog_exit
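For reference, the sleepable flavor differs only in the section suffix
and in what it is allowed to call; roughly (a sketch; the attach target
is the x86-64 getpgid entry point the trigger benchmarks used at the time):

  /* non-sleepable fentry program */
  SEC("fentry/__x64_sys_getpgid")
  int bench_trigger_fentry(void *ctx)
  {
          return 0;
  }

  /* sleepable variant: ".s" suffix; may call sleepable helpers such as
   * bpf_copy_from_user(), and enters/exits via the
   * __bpf_prog_{enter,exit}_sleepable paths profiled above
   */
  SEC("fentry.s/__x64_sys_getpgid")
  int bench_trigger_fentry_sleep(void *ctx)
  {
          return 0;
  }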
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: KP Singh <kpsingh@google.com>
Link: https://lore.kernel.org/bpf/20200827220114.69225-6-alexei.starovoitov@gmail.com
|
|
It is sometimes desirable to be able to trigger a BPF program from user
space with minimal overhead. sys_enter would seem to be a good candidate,
yet in many cases there will be a lot of noise from syscalls triggered by
other processes on the system. So, while searching for a low-overhead
alternative, I stumbled upon the getpgid() syscall, which seems specific
enough not to suffer from accidental invocations by other apps.
This set of benchmarks compares tp, raw_tp with filtering by syscall ID,
kprobe, fentry, and fmod_ret returning an error (so that the syscall is
not actually executed), to determine the lowest-overhead option. Here are
the results on my machine (using the benchs/run_bench_trigger.sh script):
base : 9.200 ± 0.319M/s
tp : 6.690 ± 0.125M/s
rawtp : 8.571 ± 0.214M/s
kprobe : 6.431 ± 0.048M/s
fentry : 8.955 ± 0.241M/s
fmodret : 8.903 ± 0.135M/s
So it seems like fmodret doesn't give much benefit for such a
lightweight syscall. The raw tracepoint is pretty decent despite the
additional filtering logic, but it is invoked for every other syscall in
the system, which rules it out. Fentry, though, adds the least amount of
overhead and achieves 97.3% of the performance of the baseline
no-BPF-attached syscall (8.955M/s vs 9.200M/s).
Using getpgid() seems preferable to the set_task_comm() approach from
test_overhead, as it's about 2.35x faster in baseline performance.
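For reference, the raw_tp variant is the only one that needs explicit
filtering; a minimal sketch of that filter (assuming x86-64's
__NR_getpgid and omitting how the hit counter is consumed):

  #include "vmlinux.h"
  #include <bpf/bpf_helpers.h>

  #define __NR_getpgid 121        /* x86-64; normally from uapi headers */

  long hits = 0;

  /* for sys_enter, args[0] is pt_regs and args[1] is the syscall number */
  SEC("raw_tp/sys_enter")
  int bench_trigger_raw_tp(struct bpf_raw_tracepoint_args *ctx)
  {
          if (ctx->args[1] == __NR_getpgid)
                  __sync_add_and_fetch(&hits, 1);
          return 0;
  }

  char LICENSE[] SEC("license") = "GPL";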
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200512192445.2351848-5-andriin@fb.com
|