Merge branch 'bpf: misc performance improvements for cgroup'
Stanislav Fomichev says:
====================
First patch adds custom getsockopt for TCP_ZEROCOPY_RECEIVE
to remove kmalloc and lock_sock overhead from the data path.
Second patch removes kzalloc/kfree from getsockopt for the common cases.
Third patch switches cgroup_bpf_enabled to be per-attach-type
to add overhead only for the cgroup attach types used on the system.
No user-visible changes.
v9:
- include linux/tcp.h instead of netinet/tcp.h in sockopt_sk.c
- note that v9 depends on the commit 4be34f3d0731 ("bpf: Don't leak
  memory in bpf getsockopt when optlen == 0") from the bpf tree
v8:
- add bpf.h to tools/include/uapi in the same patch (Martin KaFai Lau)
- kmalloc instead of kzalloc when exporting buffer (Martin KaFai Lau)
- note that v8 depends on the commit 4be34f3d0731 ("bpf: Don't leak
  memory in bpf getsockopt when optlen == 0") from the bpf tree
v7:
- add comment about buffer contents for retval != 0 (Martin KaFai Lau)
- export tcp.h into tools/include/uapi (Martin KaFai Lau)
- note that v7 depends on the commit 4be34f3d0731 ("bpf: Don't leak
  memory in bpf getsockopt when optlen == 0") from the bpf tree
v6:
- avoid indirect cost for new bpf_bypass_getsockopt (Eric Dumazet)
v5:
- reorder patches to reduce the churn (Martin KaFai Lau)
v4:
- update performance numbers
- bypass_bpf_getsockopt (Martin KaFai Lau)
v3:
- remove extra newline, add comment about sizeof tcp_zerocopy_receive
(Martin KaFai Lau)
- add another patch to remove lock_sock overhead from
TCP_ZEROCOPY_RECEIVE; technically, this makes patch #1 obsolete,
but I'd still prefer to keep it to help with other socket
options
v2:
- perf numbers for getsockopt kmalloc reduction (Song Liu)
- (sk) in BPF_CGROUP_PRE_CONNECT_ENABLED (Song Liu)
- 128 -> 64 buffer size, BUILD_BUG_ON (Martin KaFai Lau)
====================
Signed-off-by: Alexei Starovoitov <ast@kernel.org>