linux-2.6-microblaze.git
bpf: Add missing annotations for __bpf_prog_enter() and __bpf_prog_exit()
Jules Irenge [Wed, 11 Mar 2020 01:09:01 +0000 (01:09 +0000)]
bpf: Add missing annotations for __bpf_prog_enter() and __bpf_prog_exit()

Sparse reports a warning at __bpf_prog_enter() and __bpf_prog_exit()

warning: context imbalance in __bpf_prog_enter() - wrong count at exit
warning: context imbalance in __bpf_prog_exit() - unexpected unlock

The root cause is the missing annotation at __bpf_prog_enter()
and __bpf_prog_exit()

Add the missing __acquires(RCU) annotation
Add the missing __releases(RCU) annotation
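
For illustration, the annotated entry/exit pair ends up with roughly the
following shape (a sketch, not the exact kernel code):

u64 notrace __bpf_prog_enter(void)
        __acquires(RCU)
{
        u64 start = 0;

        rcu_read_lock();
        /* ... stats/start-timestamp handling elided ... */
        return start;
}

void notrace __bpf_prog_exit(struct bpf_prog *prog, u64 start)
        __releases(RCU)
{
        /* ... stats update using start elided ... */
        rcu_read_unlock();
}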

Signed-off-by: Jules Irenge <jbi.octave@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200311010908.42366-2-jbi.octave@gmail.com
bpf_helpers_doc.py: Fix warning when compiling bpftool
Carlos Neira [Fri, 13 Mar 2020 15:46:50 +0000 (12:46 -0300)]
bpf_helpers_doc.py: Fix warning when compiling bpftool

When compiling bpftool the following warning is found: "declaration of
'struct bpf_pidns_info' will not be visible outside of this function."
This patch adds struct bpf_pidns_info to type_fwds array to fix this.

Fixes: b4490c5c4e02 ("bpf: Added new helper bpf_get_ns_current_pid_tgid")
Signed-off-by: Carlos Neira <cneirabustos@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20200313154650.13366-1-cneirabustos@gmail.com
selftests/bpf: Fix usleep() implementation
Andrii Nakryiko [Fri, 13 Mar 2020 06:18:37 +0000 (23:18 -0700)]
selftests/bpf: Fix usleep() implementation

The nanosleep syscall expects a pointer to struct timespec, not nanoseconds
directly. The current implementation fulfills its purpose of invoking the
nanosleep syscall, but doesn't really provide sleeping capabilities, which
can cause flakiness for tests relying on usleep() to wait for something.
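
The fix essentially builds a proper struct timespec from the microseconds
argument, roughly:

int usleep(useconds_t usec)
{
        struct timespec ts = {
                .tv_sec = usec / 1000000,
                .tv_nsec = (usec % 1000000) * 1000,
        };

        return syscall(__NR_nanosleep, &ts, NULL);
}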

Fixes: ec12a57b822c ("selftests/bpf: Guarantee that useep() calls nanosleep() syscall")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200313061837.3685572-1-andriin@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Merge branch 'generalize-bpf-ksym'
Alexei Starovoitov [Fri, 13 Mar 2020 02:23:12 +0000 (19:23 -0700)]
Merge branch 'generalize-bpf-ksym'

Jiri Olsa says:

====================
this patchset adds trampoline and dispatcher objects
to be visible in /proc/kallsyms.

  $ sudo cat /proc/kallsyms | tail -20
  ...
  ffffffffa050f000 t bpf_prog_5a2b06eab81b8f51    [bpf]
  ffffffffa0511000 t bpf_prog_6deef7357e7b4530    [bpf]
  ffffffffa0542000 t bpf_trampoline_13832 [bpf]
  ffffffffa0548000 t bpf_prog_96f1b5bf4e4cc6dc_mutex_lock [bpf]
  ffffffffa0572000 t bpf_prog_d1c63e29ad82c4ab_bpf_prog1  [bpf]
  ffffffffa0585000 t bpf_prog_e314084d332a5338__dissect   [bpf]
  ffffffffa0587000 t bpf_prog_59785a79eac7e5d2_mutex_unlock       [bpf]
  ffffffffa0589000 t bpf_prog_d0db6e0cac050163_mutex_lock [bpf]
  ffffffffa058d000 t bpf_prog_d8f047721e4d8321_bpf_prog2  [bpf]
  ffffffffa05df000 t bpf_trampoline_25637 [bpf]
  ffffffffa05e3000 t bpf_prog_d8f047721e4d8321_bpf_prog2  [bpf]
  ffffffffa05e5000 t bpf_prog_3b185187f1855c4c    [bpf]
  ffffffffa05e7000 t bpf_prog_d8f047721e4d8321_bpf_prog2  [bpf]
  ffffffffa05eb000 t bpf_prog_93cebb259dd5c4b2_do_sys_open        [bpf]
  ffffffffa0677000 t bpf_dispatcher_xdp   [bpf]

v5 changes:
  - keeping just 1 bpf_tree for all the objects and adding flag
    to recognize bpf_objects when searching for exception tables [Alexei]
  - no need for is_bpf_image_address call in kernel_text_address [Alexei]
  - removed the bpf_image tree, because it's no longer needed

v4 changes:
  - add trampoline and dispatcher to kallsyms once they are allocated [Alexei]
  - omit the symbols sorting for kallsyms [Alexei]
  - small title change in one patch [Song]
  - some function renames:
     bpf_get_prog_name to bpf_prog_ksym_set_name
     bpf_get_prog_addr_region to bpf_prog_ksym_set_addr
  - added acks to changelogs
  - I checked and there'll be a conflict on the perf tool side with
    upcoming changes from Adrian Hunter (text poke events),
    so I think it's better if Arnaldo takes the perf changes
    via the perf tree and we solve all conflicts there

v3 changes:
  - use container_of directly in bpf_get_ksym_start  [Daniel]
  - add more changelog explanations for ksym addresses [Daniel]

v2 changes:
  - omit extra condition in __bpf_ksym_add for sorting code (Andrii)
  - rename bpf_kallsyms_tree_ops to bpf_ksym_tree (Andrii)
  - expose only executable code in kallsyms (Andrii)
  - use full trampoline key as its kallsyms id (Andrii)
  - explained the BPF_TRAMP_REPLACE case (Andrii)
  - small format changes in bpf_trampoline_link_prog/bpf_trampoline_unlink_prog (Andrii)
  - propagate error value in bpf_dispatcher_update and update kallsym if it's successful (Andrii)
  - get rid of __always_inline for bpf_ksym_tree callbacks (Andrii)
  - added KSYMBOL notification for bpf_image add/removal
  - added perf tools changes to properly display trampoline/dispatcher
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
bpf: Remove bpf_image tree
Jiri Olsa [Thu, 12 Mar 2020 19:56:07 +0000 (20:56 +0100)]
bpf: Remove bpf_image tree

Now that we have all the objects (bpf_prog, bpf_trampoline,
bpf_dispatcher) linked in bpf_tree, there's no need to have
separate bpf_image tree for images.

Reverting the bpf_image tree together with struct bpf_image,
because it's no longer needed.

Also removing bpf_image_alloc function and adding the original
bpf_jit_alloc_exec_page interface instead.

The kernel_text_address function can now rely only on is_bpf_text_address,
because it checks the bpf_tree that contains all the objects.

Keeping bpf_image_ksym_add and bpf_image_ksym_del because they are
useful wrappers with perf's ksymbol interface calls.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200312195610.346362-13-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
bpf: Add dispatchers to kallsyms
Jiri Olsa [Thu, 12 Mar 2020 19:56:06 +0000 (20:56 +0100)]
bpf: Add dispatchers to kallsyms

Adding dispatchers to kallsyms. Each one is displayed as
  bpf_dispatcher_<NAME>

where NAME is the name of the dispatcher.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200312195610.346362-12-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
bpf: Add trampolines to kallsyms
Jiri Olsa [Thu, 12 Mar 2020 19:56:05 +0000 (20:56 +0100)]
bpf: Add trampolines to kallsyms

Adding trampolines to kallsyms. Each one is displayed as
  bpf_trampoline_<ID> [bpf]

where ID is the BTF id of the trampoline function.

Adding bpf_image_ksym_add/del functions that set up
the start/end values and call the KSYMBOL perf event
handlers.
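
The add side is roughly the following (a sketch, assuming a one-page image
size):

void bpf_image_ksym_add(void *data, struct bpf_ksym *ksym)
{
        ksym->start = (unsigned long) data;
        ksym->end = ksym->start + PAGE_SIZE;
        bpf_ksym_add(ksym);
        /* notify perf so tools can resolve the new symbol */
        perf_event_ksymbol(PERF_RECORD_KSYMBOL_TYPE_BPF, ksym->start,
                           PAGE_SIZE, false, ksym->name);
}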

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200312195610.346362-11-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
bpf: Add bpf_ksym_add/del functions
Jiri Olsa [Thu, 12 Mar 2020 19:56:04 +0000 (20:56 +0100)]
bpf: Add bpf_ksym_add/del functions

Separating /proc/kallsyms add/del code and adding bpf_ksym_add/del
functions for that.

Moving bpf_prog_ksym_node_add/del functions to __bpf_ksym_add/del
and changing their argument to 'struct bpf_ksym' object. This way
we can call them for other bpf objects types like trampoline and
dispatcher.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200312195610.346362-10-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
bpf: Add prog flag to struct bpf_ksym object
Jiri Olsa [Thu, 12 Mar 2020 19:56:03 +0000 (20:56 +0100)]
bpf: Add prog flag to struct bpf_ksym object

Adding 'prog' bool flag to 'struct bpf_ksym' to mark that
this object belongs to bpf_prog object.

This change allows having bpf_prog objects together with
other types (trampolines and dispatchers) in the single
bpf_tree. It's used when searching for bpf_prog exception
tables by the bpf_prog_ksym_find function, where we need
to get the bpf_prog pointer.

From now on we can safely add bpf_ksym support for trampoline
or dispatcher objects, because we can differentiate them
from bpf_prog objects.
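
For example, the exception table lookup can then do roughly the following
(a sketch; it assumes the bpf_ksym member embedded in struct bpf_prog_aux is
named 'ksym'):

static struct bpf_prog *bpf_prog_ksym_find(unsigned long addr)
{
        struct bpf_ksym *ksym = bpf_ksym_find(addr);

        if (!ksym || !ksym->prog)
                return NULL;
        return container_of(ksym, struct bpf_prog_aux, ksym)->prog;
}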

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200312195610.346362-9-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
bpf: Abstract away entire bpf_link clean up procedure
Andrii Nakryiko [Fri, 13 Mar 2020 00:21:28 +0000 (17:21 -0700)]
bpf: Abstract away entire bpf_link clean up procedure

Instead of requiring users to do three steps for cleaning up bpf_link, its
anon_inode file, and unused fd, abstract that away into bpf_link_cleanup()
helper. bpf_link_defunct() is removed, as it shouldn't be needed as an
individual operation anymore.

v1->v2:
- keep bpf_link_cleanup() static for now (Daniel).
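
The helper boils down to roughly the following (a sketch; the exact
implementation lives in kernel/bpf/syscall.c):

static void bpf_link_cleanup(struct bpf_link *link, struct file *link_file,
                             int link_fd)
{
        /* mark the link defunct so ->release() only frees memory */
        link->prog = NULL;
        fput(link_file);
        put_unused_fd(link_fd);
}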

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20200313002128.2028680-1-andriin@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
bpf: Add bpf_ksym_find function
Jiri Olsa [Thu, 12 Mar 2020 19:56:02 +0000 (20:56 +0100)]
bpf: Add bpf_ksym_find function

Adding the bpf_ksym_find function that is used by the bpf address
lookup functions:
  __bpf_address_lookup
  is_bpf_text_address

while keeping bpf_prog_kallsyms_find to be used only for lookup
of bpf_prog objects (will happen in following changes).
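
A minimal sketch of the lookup, assuming the existing bpf_tree and
bpf_tree_ops in kernel/bpf/core.c:

static struct bpf_ksym *bpf_ksym_find(unsigned long addr)
{
        struct latch_tree_node *n;

        n = latch_tree_find((void *) addr, &bpf_tree, &bpf_tree_ops);
        return n ? container_of(n, struct bpf_ksym, tnode) : NULL;
}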

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200312195610.346362-8-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
selftests/bpf: Make tcp_rtt test more robust to failures
Andrii Nakryiko [Wed, 11 Mar 2020 22:27:49 +0000 (15:27 -0700)]
selftests/bpf: Make tcp_rtt test more robust to failures

Switch to non-blocking accept and wait for server thread to exit before
proceeding. I noticed that sometimes tcp_rtt server thread failure would
"spill over" into other tests (that would run after tcp_rtt), probably just
because server thread exits much later and tcp_rtt doesn't wait for it.

v1->v2:
  - add usleep() while waiting on initial non-blocking accept() (Stanislav);
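
With the non-blocking socket, the accept loop looks roughly like this
(a sketch, not the exact diff):

while (true) {
        fd = accept(server_fd, NULL, NULL);
        if (fd >= 0)
                break;
        if (errno != EAGAIN && errno != EWOULDBLOCK)
                break;          /* real error, give up */
        usleep(50);             /* wait for the client to connect */
}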

Fixes: 8a03222f508b ("selftests/bpf: test_progs: fix client/server race in tcp_rtt")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Reviewed-by: Stanislav Fomichev <sdf@google.com>
Link: https://lore.kernel.org/bpf/20200311222749.458015-1-andriin@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
bpf: Move ksym_tnode to bpf_ksym
Jiri Olsa [Thu, 12 Mar 2020 19:56:01 +0000 (20:56 +0100)]
bpf: Move ksym_tnode to bpf_ksym

Moving ksym_tnode list node to 'struct bpf_ksym' object,
so the symbol itself can be chained and used in other
objects like bpf_trampoline and bpf_dispatcher.

We need bpf_ksym object to be linked both in bpf_kallsyms
via lnode for /proc/kallsyms and in bpf_tree via tnode for
bpf address lookup functions like __bpf_address_lookup or
bpf_prog_kallsyms_find.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200312195610.346362-7-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
selftests/bpf: Guarantee that useep() calls nanosleep() syscall
Andrii Nakryiko [Wed, 11 Mar 2020 18:53:45 +0000 (11:53 -0700)]
selftests/bpf: Guarantee that useep() calls nanosleep() syscall

Some implementations of C runtime library won't call nanosleep() syscall from
usleep(). But a bunch of kprobe/tracepoint selftests rely on nanosleep being
called to trigger them. To make this more reliable, "override" usleep
implementation and call nanosleep explicitly.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Cc: Julia Kartseva <hex@fb.com>
Link: https://lore.kernel.org/bpf/20200311185345.3874602-1-andriin@fb.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
bpf: Move lnode list node to struct bpf_ksym
Jiri Olsa [Thu, 12 Mar 2020 19:56:00 +0000 (20:56 +0100)]
bpf: Move lnode list node to struct bpf_ksym

Adding lnode list node to 'struct bpf_ksym' object,
so the struct bpf_ksym itself can be chained and used
in other objects like bpf_trampoline and bpf_dispatcher.

Changing iterator to bpf_ksym in bpf_get_kallsym function.

The ksym->start is holding the prog->bpf_func value,
so it's ok to use it as value in bpf_get_kallsym.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20200312195610.346362-6-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
tools: bpftool: Restore message on failure to guess program type
Quentin Monnet [Wed, 11 Mar 2020 02:12:05 +0000 (02:12 +0000)]
tools: bpftool: Restore message on failure to guess program type

In commit 4a3d6c6a6e4d ("libbpf: Reduce log level for custom section
names"), log level for messages for libbpf_attach_type_by_name() and
libbpf_prog_type_by_name() was downgraded from "info" to "debug". The
latter function, in particular, is used by bpftool when attempting to
load programs, and this change caused bpftool to exit with no hint or
error message when it fails to detect the type of the program to load
(unless "-d" option was provided).

To help users understand why bpftool fails to load the program, let's do
a second run of the function with log level in "debug" mode in case of
failure.

Before:

    # bpftool prog load sample_ret0.o /sys/fs/bpf/sample_ret0
    # echo $?
    255

Or really verbose with -d flag:

    # bpftool -d prog load sample_ret0.o /sys/fs/bpf/sample_ret0
    libbpf: loading sample_ret0.o
    libbpf: section(1) .strtab, size 134, link 0, flags 0, type=3
    libbpf: skip section(1) .strtab
    libbpf: section(2) .text, size 16, link 0, flags 6, type=1
    libbpf: found program .text
    libbpf: section(3) .debug_abbrev, size 55, link 0, flags 0, type=1
    libbpf: skip section(3) .debug_abbrev
    libbpf: section(4) .debug_info, size 75, link 0, flags 0, type=1
    libbpf: skip section(4) .debug_info
    libbpf: section(5) .rel.debug_info, size 32, link 14, flags 0, type=9
    libbpf: skip relo .rel.debug_info(5) for section(4)
    libbpf: section(6) .debug_str, size 150, link 0, flags 30, type=1
    libbpf: skip section(6) .debug_str
    libbpf: section(7) .BTF, size 155, link 0, flags 0, type=1
    libbpf: section(8) .BTF.ext, size 80, link 0, flags 0, type=1
    libbpf: section(9) .rel.BTF.ext, size 32, link 14, flags 0, type=9
    libbpf: skip relo .rel.BTF.ext(9) for section(8)
    libbpf: section(10) .debug_frame, size 40, link 0, flags 0, type=1
    libbpf: skip section(10) .debug_frame
    libbpf: section(11) .rel.debug_frame, size 16, link 14, flags 0, type=9
    libbpf: skip relo .rel.debug_frame(11) for section(10)
    libbpf: section(12) .debug_line, size 74, link 0, flags 0, type=1
    libbpf: skip section(12) .debug_line
    libbpf: section(13) .rel.debug_line, size 16, link 14, flags 0, type=9
    libbpf: skip relo .rel.debug_line(13) for section(12)
    libbpf: section(14) .symtab, size 96, link 1, flags 0, type=2
    libbpf: looking for externs among 4 symbols...
    libbpf: collected 0 externs total
    libbpf: failed to guess program type from ELF section '.text'
    libbpf: supported section(type) names are: socket sk_reuseport kprobe/ [...]

After:

    # bpftool prog load sample_ret0.o /sys/fs/bpf/sample_ret0
    libbpf: failed to guess program type from ELF section '.text'
    libbpf: supported section(type) names are: socket sk_reuseport kprobe/ [...]

Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200311021205.9755-1-quentin@isovalent.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
bpf: Add name to struct bpf_ksym
Jiri Olsa [Thu, 12 Mar 2020 19:55:59 +0000 (20:55 +0100)]
bpf: Add name to struct bpf_ksym

Adding name to 'struct bpf_ksym' object to carry the name
of the symbol for bpf_prog, bpf_trampoline, bpf_dispatcher
objects.

The current benefit is that name is now generated only when
the symbol is added to the list, so we don't need to generate
it every time it's accessed.

The future benefit is that we will have all the bpf objects
symbols represented by struct bpf_ksym.

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20200312195610.346362-5-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
bpf: Add struct bpf_ksym
Jiri Olsa [Thu, 12 Mar 2020 19:55:58 +0000 (20:55 +0100)]
bpf: Add struct bpf_ksym

Adding 'struct bpf_ksym' object that will carry the
kallsym information for bpf symbol. Adding the start
and end address to begin with. It will be used by
bpf_prog, bpf_trampoline, bpf_dispatcher objects.

The symbol_start/symbol_end values were originally used
to sort bpf_prog objects. For the address displayed in
/proc/kallsyms we are using prog->bpf_func value.

I'm using the bpf_func value for program symbol start
instead of the symbol_start, because it makes no difference
for sorting bpf_prog objects and we can use it directly as
an address to display it in /proc/kallsyms.
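
To begin with, the structure is just the address range, roughly:

struct bpf_ksym {
        unsigned long start;
        unsigned long end;
};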

Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20200312195610.346362-4-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
bpf: Add bpf_trampoline_ name prefix for DECLARE_BPF_DISPATCHER
Björn Töpel [Thu, 12 Mar 2020 19:55:57 +0000 (20:55 +0100)]
bpf: Add bpf_trampoline_ name prefix for DECLARE_BPF_DISPATCHER

Adding bpf_trampoline_ name prefix for DECLARE_BPF_DISPATCHER,
so all the dispatchers have the common name prefix.

And also a small '_' cleanup for bpf_dispatcher_nopfunc function
name.

Signed-off-by: Björn Töpel <bjorn.topel@intel.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20200312195610.346362-3-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
x86/mm: Rename is_kernel_text to __is_kernel_text
Jiri Olsa [Thu, 12 Mar 2020 19:55:56 +0000 (20:55 +0100)]
x86/mm: Rename is_kernel_text to __is_kernel_text

The kbuild test robot reported compile issue on x86 in one of
the following patches that adds <linux/kallsyms.h> include into
<linux/bpf.h>, which is picked up by init_32.c object.

The problem is that <linux/kallsyms.h> defines a global function
is_kernel_text, which collides with the static function of the
same name defined in init_32.c:

  $ make ARCH=i386
  ...
  >> arch/x86/mm/init_32.c:241:19: error: redefinition of 'is_kernel_text'
    static inline int is_kernel_text(unsigned long addr)
                      ^~~~~~~~~~~~~~
   In file included from include/linux/bpf.h:21:0,
                    from include/linux/bpf-cgroup.h:5,
                    from include/linux/cgroup-defs.h:22,
                    from include/linux/cgroup.h:28,
                    from include/linux/hugetlb.h:9,
                    from arch/x86/mm/init_32.c:18:
   include/linux/kallsyms.h:31:19: note: previous definition of 'is_kernel_text' was here
    static inline int is_kernel_text(unsigned long addr)

Renaming the init_32.c is_kernel_text function to __is_kernel_text.

Reported-by: kbuild test robot <lkp@intel.com>
Signed-off-by: Jiri Olsa <jolsa@kernel.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Song Liu <songliubraving@fb.com>
Link: https://lore.kernel.org/bpf/20200312195610.346362-2-jolsa@kernel.org
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
bpf: Add bpf_xdp_output() helper
Eelco Chaudron [Fri, 6 Mar 2020 08:59:23 +0000 (08:59 +0000)]
bpf: Add bpf_xdp_output() helper

Introduce new helper that reuses existing xdp perf_event output
implementation, but can be called from raw_tracepoint programs
that receive 'struct xdp_buff *' as a tracepoint argument.
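
Usage from a tracing program looks roughly like this (a sketch; the attach
point, map name and metadata layout are illustrative, and the usual
vmlinux.h/bpf_helpers.h includes are assumed):

struct {
        __uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
        __uint(key_size, sizeof(int));
        __uint(value_size, sizeof(int));
} perf_buf_map SEC(".maps");

SEC("fentry/FUNC") /* FUNC: any traced function taking a struct xdp_buff * */
int BPF_PROG(trace_on_entry, struct xdp_buff *xdp)
{
        struct { __u16 pkt_len; } meta;

        meta.pkt_len = (__u16)(xdp->data_end - xdp->data);
        bpf_xdp_output(xdp, &perf_buf_map, BPF_F_CURRENT_CPU,
                       &meta, sizeof(meta));
        return 0;
}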

Signed-off-by: Eelco Chaudron <echaudro@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>
Link: https://lore.kernel.org/bpf/158348514556.2239.11050972434793741444.stgit@xdp-tutorial
Merge branch 'bpf_get_ns_current_pid_tgid'
Alexei Starovoitov [Fri, 13 Mar 2020 00:33:12 +0000 (17:33 -0700)]
Merge branch 'bpf_get_ns_current_pid_tgid'

Carlos Neira says:

====================
Currently bpf_get_current_pid_tgid() is used to do pid filtering in bcc's
scripts, but this helper returns the pid as seen by the root namespace, which is
fine when a bcc script is not executed inside a container.
When the process of interest is inside a container, pid filtering will not work
if bpf_get_current_pid_tgid() is used.
This helper addresses this limitation by returning the pid as it's seen by the current
namespace where the script is executing.

In the future different pid_ns files may belong to different devices, according to the
discussion between Eric Biederman and Yonghong in 2017 Linux plumbers conference.
To address that situation the helper requires inum and dev_t from /proc/self/ns/pid.
This helper has the same use cases as bpf_get_current_pid_tgid() as it can be
used to do pid filtering even inside a container.
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
tools/testing/selftests/bpf: Add self-tests for new helper bpf_get_ns_current_pid_tgid.
Carlos Neira [Wed, 4 Mar 2020 20:41:57 +0000 (17:41 -0300)]
tools/testing/selftests/bpf: Add self-tests for new helper bpf_get_ns_current_pid_tgid.

Self tests added for new helper bpf_get_ns_current_pid_tgid

Signed-off-by: Carlos Neira <cneirabustos@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200304204157.58695-4-cneirabustos@gmail.com
bpf: Added new helper bpf_get_ns_current_pid_tgid
Carlos Neira [Wed, 4 Mar 2020 20:41:56 +0000 (17:41 -0300)]
bpf: Added new helper bpf_get_ns_current_pid_tgid

New bpf helper bpf_get_ns_current_pid_tgid.
This helper returns the pid and tgid of the current task
if its pid namespace matches the dev_t and inode number provided,
which allows us to instrument a process inside a container.
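
From the BPF program side, usage looks roughly like this (a sketch; dev, ino
and target_pid are illustrative values obtained in user space, e.g. by
stat()-ing /proc/self/ns/pid):

struct bpf_pidns_info nsdata;

if (bpf_get_ns_current_pid_tgid(dev, ino, &nsdata, sizeof(nsdata)))
        return 0;       /* task is not in the expected pid namespace */
if (nsdata.pid != target_pid)
        return 0;       /* not the process we are filtering on */
/* ... trace the event ... */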

Signed-off-by: Carlos Neira <cneirabustos@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200304204157.58695-3-cneirabustos@gmail.com
fs/nsfs.c: Added ns_match
Carlos Neira [Wed, 4 Mar 2020 20:41:55 +0000 (17:41 -0300)]
fs/nsfs.c: Added ns_match

ns_match returns true if the namespace inode and dev_t matches the ones
provided by the caller.
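
The check itself is small, roughly:

bool ns_match(const struct ns_common *ns, dev_t dev, ino_t ino)
{
        return (ns->inum == ino) && (nsfs_mnt->mnt_sb->s_dev == dev);
}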

Signed-off-by: Carlos Neira <cneirabustos@gmail.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200304204157.58695-2-cneirabustos@gmail.com
tools: bpftool: Fix minor bash completion mistakes
Quentin Monnet [Thu, 12 Mar 2020 18:46:08 +0000 (18:46 +0000)]
tools: bpftool: Fix minor bash completion mistakes

Minor fixes for bash completion: addition of program name completion for
two subcommands, and correction for program test-runs and map pinning.

The completion for the following commands is fixed or improved:

    # bpftool prog run [TAB]
    # bpftool prog pin [TAB]
    # bpftool map pin [TAB]
    # bpftool net attach xdp name [TAB]

Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20200312184608.12050-3-quentin@isovalent.com
tools: bpftool: Allow all prog/map handles for pinning objects
Quentin Monnet [Thu, 12 Mar 2020 18:46:07 +0000 (18:46 +0000)]
tools: bpftool: Allow all prog/map handles for pinning objects

Documentation and interactive help for bpftool have always explained
that the regular handles for programs (id|name|tag|pinned) and maps
(id|name|pinned) can be passed to the utility when attempting to pin
objects (bpftool prog pin PROG / bpftool map pin MAP).

THIS IS A LIE!! The tool actually accepts only ids, as the parsing is
done in do_pin_any() in common.c instead of reusing the parsing
functions that have long been generic for program and map handles.

Instead of fixing the doc, fix the code. It is trivial to reuse the
generic parsing, and to simplify do_pin_any() in the process.

Do not accept pinning multiple objects at the same time with
prog_parse_fds() or map_parse_fds() (this would require a more complex
syntax for passing multiple sysfs paths and validating that they
correspond to the number of e.g. programs we find for a given name or
tag).
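
With the generic parsers reused, do_pin_any() reduces to roughly the
following (a sketch, assuming the existing do_pin_fd() helper):

int do_pin_any(int argc, char **argv, int (*get_fd)(int *, char ***))
{
        int err, fd;

        fd = get_fd(&argc, &argv);
        if (fd < 0)
                return fd;

        err = do_pin_fd(fd, *argv);

        close(fd);
        return err;
}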

Signed-off-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Link: https://lore.kernel.org/bpf/20200312184608.12050-2-quentin@isovalent.com
libbpf: Split BTF presence checks into libbpf- and kernel-specific parts
Andrii Nakryiko [Thu, 12 Mar 2020 18:50:33 +0000 (11:50 -0700)]
libbpf: Split BTF presence checks into libbpf- and kernel-specific parts

The need for application BTF to be present differs between user-space (libbpf)
needs and kernel needs. Currently, BTF is mandatory in the kernel only when a
BPF application is using STRUCT_OPS, while libbpf itself relies more heavily on
the presence of BTF:
  - for BTF-defined maps;
  - for Kconfig externs;
  - for STRUCT_OPS as well.

Thus, checks for the presence and validity of a bpf_object's BTF need to be
performed separately, which is what this patch does.
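
The split boils down to two small predicates along these lines (a sketch;
field names assume libbpf's internal bpf_object layout):

static bool libbpf_needs_btf(const struct bpf_object *obj)
{
        return obj->efile.btf_maps_shndx >= 0 ||
               obj->efile.st_ops_shndx >= 0 ||
               obj->nr_extern > 0;
}

static bool kernel_needs_btf(const struct bpf_object *obj)
{
        return obj->efile.st_ops_shndx >= 0;
}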

Fixes: 5327644614a1 ("libbpf: Relax check whether BTF is mandatory")
Reported-by: Michal Rostecki <mrostecki@opensuse.org>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Martin KaFai Lau <kafai@fb.com>
Cc: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200312185033.736911-1-andriin@fb.com
bpftool: Add _bpftool and profiler.skel.h to .gitignore
Song Liu [Thu, 12 Mar 2020 18:23:32 +0000 (11:23 -0700)]
bpftool: Add _bpftool and profiler.skel.h to .gitignore

These files are generated, so ignore them.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200312182332.3953408-4-songliubraving@fb.com
bpftool: Skeleton should depend on libbpf
Song Liu [Thu, 12 Mar 2020 18:23:31 +0000 (11:23 -0700)]
bpftool: Skeleton should depend on libbpf

Add the dependency to libbpf, to fix build errors like:

  In file included from skeleton/profiler.bpf.c:5:
  .../bpf_helpers.h:5:10: fatal error: 'bpf_helper_defs.h' file not found
  #include "bpf_helper_defs.h"
           ^~~~~~~~~~~~~~~~~~~
  1 error generated.
  make: *** [skeleton/profiler.bpf.o] Error 1
  make: *** Waiting for unfinished jobs....

Fixes: 47c09d6a9f67 ("bpftool: Introduce "prog profile" command")
Suggested-by: Quentin Monnet <quentin@isovalent.com>
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200312182332.3953408-3-songliubraving@fb.com
bpftool: Only build bpftool-prog-profile if supported by clang
Song Liu [Thu, 12 Mar 2020 18:23:30 +0000 (11:23 -0700)]
bpftool: Only build bpftool-prog-profile if supported by clang

bpftool-prog-profile requires clang to generate BTF for global variables.
When building with an older clang that cannot do this, skip the command.
This is achieved by adding a new feature, clang-bpf-global-var, to
tools/build/feature.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200312182332.3953408-2-songliubraving@fb.com
bpftool: Use linux/types.h from source tree for profiler build
Tobias Klauser [Thu, 12 Mar 2020 13:03:30 +0000 (14:03 +0100)]
bpftool: Use linux/types.h from source tree for profiler build

When compiling bpftool on a system where the /usr/include/asm symlink
doesn't exist (e.g. on an Ubuntu system without gcc-multilib installed),
the build fails with:

    CLANG    skeleton/profiler.bpf.o
  In file included from skeleton/profiler.bpf.c:4:
  In file included from /usr/include/linux/bpf.h:11:
  /usr/include/linux/types.h:5:10: fatal error: 'asm/types.h' file not found
  #include <asm/types.h>
           ^~~~~~~~~~~~~
  1 error generated.
  make: *** [Makefile:123: skeleton/profiler.bpf.o] Error 1

This indicates that the build is using linux/types.h from system headers
instead of source tree headers.

To fix this, adjust the clang search path to include the necessary
headers from tools/testing/selftests/bpf/include/uapi and
tools/include/uapi. Also use __bitwise__ instead of __bitwise in
skeleton/profiler.h to avoid clashing with the definition in
tools/testing/selftests/bpf/include/uapi/linux/types.h.

Signed-off-by: Tobias Klauser <tklauser@distanz.ch>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200312130330.32239-1-tklauser@distanz.ch
tools/runqslower: Add BPF_F_CURRENT_CPU for running selftest on older kernels
Andrii Nakryiko [Wed, 11 Mar 2020 04:30:10 +0000 (21:30 -0700)]
tools/runqslower: Add BPF_F_CURRENT_CPU for running selftest on older kernels

Libbpf compiles and runs a subset of selftests on each PR in its GitHub mirror
repository. To allow building up-to-date selftests against outdated
kernel images, add the BPF_F_CURRENT_CPU definition back.
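
The added guard is essentially (a sketch; the exact form in the header may
differ):

/* redefine locally so the tool still builds against older kernel headers */
#ifndef BPF_F_CURRENT_CPU
#define BPF_F_CURRENT_CPU 0xffffffffULL
#endif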

N.B. BCC's runqslower version ([0]) doesn't need BPF_F_CURRENT_CPU due to use of
locally checked in vmlinux.h, generated against kernel with 1aae4bdd7879 ("bpf:
Switch BPF UAPI #define constants used from BPF program side to enums")
applied.

  [0] https://github.com/iovisor/bcc/pull/2809

Fixes: 367d82f17eff (" tools/runqslower: Drop copy/pasted BPF_F_CURRENT_CPU definiton")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200311043010.530620-1-andriin@fb.com
bpf: Fix trampoline generation for fmod_ret programs
Alexei Starovoitov [Wed, 11 Mar 2020 00:39:06 +0000 (17:39 -0700)]
bpf: Fix trampoline generation for fmod_ret programs

fmod_ret progs are emitted as:

start = __bpf_prog_enter();
call fmod_ret
*(u64 *)(rbp - 8) = rax
__bpf_prog_exit(, start);
test eax, eax
jne do_fexit

That 'test eax, eax' is working by accident. The compiler is free to use rax
inside __bpf_prog_exit() or inside functions that __bpf_prog_exit() is calling.
Which caused "test_progs -t modify_return" to sporadically fail depending on
compiler version and kconfig. Fix it by using 'cmp [rbp - 8], 0' instead of
'test eax, eax'.

Fixes: ae24082331d9 ("bpf: Introduce BPF_MODIFY_RETURN")
Reported-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: KP Singh <kpsingh@google.com>
Link: https://lore.kernel.org/bpf/20200311003906.3643037-1-ast@kernel.org
bpf: Add bpf_link_new_file that doesn't install FD
Andrii Nakryiko [Mon, 9 Mar 2020 23:10:51 +0000 (16:10 -0700)]
bpf: Add bpf_link_new_file that doesn't install FD

Add a bpf_link_new_file() API for cases when we need to ensure the anon_inode
is successfully created before we proceed with an expensive BPF program
attachment procedure, which would otherwise require an equally (if not more)
expensive and potentially failing compensating detachment procedure just
because anon_inode creation failed. This API simplifies the code by first
ensuring that the anon_inode is created and, only after the BPF program is
attached, proceeding with fd_install(), which can't fail.

After anon_inode file is created, link can't be just kfree()'d anymore,
because its destruction will be performed by deferred file_operations->release
call. For this, bpf_link API required specifying two separate operations:
release() and dealloc(), former performing detachment only, while the latter
frees memory used by bpf_link itself. dealloc() needs to be specified, because
struct bpf_link is frequently embedded into link type-specific container
struct (e.g., struct bpf_raw_tp_link), so bpf_link itself doesn't know how to
properly free the memory. In case when anon_inode file was successfully
created, but subsequent BPF attachment failed, bpf_link needs to be marked as
"defunct", so that file's release() callback will perform only memory
deallocation, but no detachment.

Convert raw tracepoint and tracing attachment to new API and eliminate
detachment from error handling path.
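
For the raw tracepoint path, the resulting flow is roughly (a sketch; error
handling around the link allocation is omitted):

link_file = bpf_link_new_file(&link->link, &link_fd);
if (IS_ERR(link_file)) {
        kfree(link);
        return PTR_ERR(link_file);
}

err = bpf_probe_register(link->btp, prog);      /* the part that may fail */
if (err) {
        bpf_link_cleanup(&link->link, link_file, link_fd);
        return err;
}

fd_install(link_fd, link_file);
return link_fd;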

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200309231051.1270337-1-andriin@fb.com
bpftool: Fix typo in bash-completion
Song Liu [Mon, 9 Mar 2020 17:32:18 +0000 (10:32 -0700)]
bpftool: Fix typo in bash-completion

_bpftool_get_map_names => _bpftool_get_prog_names for prog-attach|detach.

Fixes: 99f9863a0c45 ("bpftool: Match maps by name")
Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200309173218.2739965-5-songliubraving@fb.com
bpftool: Bash completion for "bpftool prog profile"
Song Liu [Mon, 9 Mar 2020 17:32:17 +0000 (10:32 -0700)]
bpftool: Bash completion for "bpftool prog profile"

Add bash completion for "bpftool prog profile" command.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200309173218.2739965-4-songliubraving@fb.com
bpftool: Documentation for bpftool prog profile
Song Liu [Mon, 9 Mar 2020 17:32:16 +0000 (10:32 -0700)]
bpftool: Documentation for bpftool prog profile

Add documentation for the new bpftool prog profile command.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Link: https://lore.kernel.org/bpf/20200309173218.2739965-3-songliubraving@fb.com
bpftool: Introduce "prog profile" command
Song Liu [Mon, 9 Mar 2020 17:32:15 +0000 (10:32 -0700)]
bpftool: Introduce "prog profile" command

With fentry/fexit programs, it is possible to profile a BPF program with
hardware counters. Introduce bpftool "prog profile", which measures key
metrics of a BPF program.

The bpftool prog profile command creates per-cpu perf events. Then it attaches
fentry/fexit programs to the target BPF program. The fentry program saves the
perf event value to a map. The fexit program reads the perf event again and
calculates the difference, which is the number of instructions/cycles used by
the target program.
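
Conceptually, the generated fentry program looks like this (a sketch; the
real skeleton and its map names differ):

SEC("fentry/XXX")               /* XXX is set to the target prog at load time */
int BPF_PROG(fentry_XXX)
{
        u32 key = 0;
        struct bpf_perf_event_value *ptr;

        /* per-CPU slot holding the counter reading at function entry */
        ptr = bpf_map_lookup_elem(&fentry_readings, &key);
        if (ptr)
                bpf_perf_event_read_value(&events, BPF_F_CURRENT_CPU,
                                          ptr, sizeof(*ptr));
        return 0;
}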

Example input and output:

  ./bpftool prog profile id 337 duration 3 cycles instructions llc_misses

        4228 run_cnt
     3403698 cycles                                              (84.08%)
     3525294 instructions   #  1.04 insn per cycle               (84.05%)
          13 llc_misses     #  3.69 LLC misses per million isns  (83.50%)

This command measures cycles and instructions for BPF program with id
337 for 3 seconds. The program has triggered 4228 times. The rest of the
output is similar to perf-stat. In this example, the counters were only
counting ~84% of the time because of time multiplexing of perf counters.

Note that this approach measures cycles and instructions in very small
increments, so the fentry/fexit programs introduce noticeable errors into
the measurement results.

The fentry/fexit programs are generated with BPF skeletons. Therefore, we
build bpftool twice. The first time _bpftool is built without skeletons.
Then, _bpftool is used to generate the skeletons. The second time, bpftool
is built with skeletons.

Signed-off-by: Song Liu <songliubraving@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Quentin Monnet <quentin@isovalent.com>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200309173218.2739965-2-songliubraving@fb.com
bpf, doc: Update maintainers for L7 BPF
Lorenz Bauer [Mon, 9 Mar 2020 11:12:43 +0000 (11:12 +0000)]
bpf, doc: Update maintainers for L7 BPF

Add Jakub and myself as maintainers for sockmap related code.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200309111243.6982-13-lmb@cloudflare.com
selftests: bpf: Enable UDP sockmap reuseport tests
Lorenz Bauer [Mon, 9 Mar 2020 11:12:42 +0000 (11:12 +0000)]
selftests: bpf: Enable UDP sockmap reuseport tests

Remove the guard that disables UDP tests now that sockmap
has support for them.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200309111243.6982-12-lmb@cloudflare.com
selftests: bpf: Add tests for UDP sockets in sockmap
Lorenz Bauer [Mon, 9 Mar 2020 11:12:41 +0000 (11:12 +0000)]
selftests: bpf: Add tests for UDP sockets in sockmap

Expand the TCP sockmap test suite to also check UDP sockets.

Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200309111243.6982-11-lmb@cloudflare.com
selftests: bpf: Don't listen() on UDP sockets
Lorenz Bauer [Mon, 9 Mar 2020 11:12:40 +0000 (11:12 +0000)]
selftests: bpf: Don't listen() on UDP sockets

Most tests for TCP sockmap can be adapted to UDP sockmap if the
listen call is skipped. Rename listen_loopback, etc. to socket_loopback
and skip listen() for SOCK_DGRAM.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200309111243.6982-10-lmb@cloudflare.com
bpf: sockmap: Add UDP support
Lorenz Bauer [Mon, 9 Mar 2020 11:12:39 +0000 (11:12 +0000)]
bpf: sockmap: Add UDP support

Allow adding hashed UDP sockets to sockmaps.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200309111243.6982-9-lmb@cloudflare.com
bpf: Add sockmap hooks for UDP sockets
Lorenz Bauer [Mon, 9 Mar 2020 11:12:38 +0000 (11:12 +0000)]
bpf: Add sockmap hooks for UDP sockets

Add basic psock hooks for UDP sockets. This allows adding and
removing sockets, as well as automatic removal on unhash and close.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Jakub Sitnicki <jakub@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200309111243.6982-8-lmb@cloudflare.com
bpf: sockmap: Simplify sock_map_init_proto
Lorenz Bauer [Mon, 9 Mar 2020 11:12:37 +0000 (11:12 +0000)]
bpf: sockmap: Simplify sock_map_init_proto

We can take advantage of the fact that both callers of
sock_map_init_proto are holding a RCU read lock, and
have verified that psock is valid.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200309111243.6982-7-lmb@cloudflare.com
bpf: sockmap: Move generic sockmap hooks from BPF TCP
Lorenz Bauer [Mon, 9 Mar 2020 11:12:36 +0000 (11:12 +0000)]
bpf: sockmap: Move generic sockmap hooks from BPF TCP

The init, close and unhash handlers from TCP sockmap are generic,
and can be reused by UDP sockmap. Move the helpers into the sockmap code
base and expose them. This requires tcp_bpf_get_proto and tcp_bpf_clone to
be conditional on BPF_STREAM_PARSER.

The moved functions are unmodified, except that sk_psock_unlink is
renamed to sock_map_unlink to better match its behaviour.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200309111243.6982-6-lmb@cloudflare.com
bpf: tcp: Guard declarations with CONFIG_NET_SOCK_MSG
Lorenz Bauer [Mon, 9 Mar 2020 11:12:35 +0000 (11:12 +0000)]
bpf: tcp: Guard declarations with CONFIG_NET_SOCK_MSG

tcp_bpf.c is only included in the build if CONFIG_NET_SOCK_MSG is
selected. The declaration should therefore be guarded as such.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200309111243.6982-5-lmb@cloudflare.com
bpf: tcp: Move assertions into tcp_bpf_get_proto
Lorenz Bauer [Mon, 9 Mar 2020 11:12:34 +0000 (11:12 +0000)]
bpf: tcp: Move assertions into tcp_bpf_get_proto

We need to ensure that sk->sk_prot uses certain callbacks, so that
code that directly calls e.g. tcp_sendmsg in certain corner cases
works. To avoid spurious asserts, we must do this only if
sk_psock_update_proto has not yet been called. The same invariants
apply for tcp_bpf_check_v6_needs_rebuild, so move the call as well.

Doing so allows us to merge tcp_bpf_init and tcp_bpf_reinit.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200309111243.6982-4-lmb@cloudflare.com
skmsg: Update saved hooks only once
Lorenz Bauer [Mon, 9 Mar 2020 11:12:33 +0000 (11:12 +0000)]
skmsg: Update saved hooks only once

Only update psock->saved_* if psock->sk_proto has not been initialized
yet. This allows us to get rid of tcp_bpf_reinit_sk_prot.
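
The guarded helper looks roughly like this (a sketch of the skmsg.h inline):

static inline void sk_psock_update_proto(struct sock *sk,
                                         struct sk_psock *psock,
                                         struct proto *ops)
{
        /* initialize saved callbacks and the original proto only once */
        if (!psock->sk_proto) {
                psock->saved_unhash = sk->sk_prot->unhash;
                psock->saved_close = sk->sk_prot->close;
                psock->saved_write_space = sk->sk_write_space;
                psock->sk_proto = sk->sk_prot;
        }

        sk->sk_prot = ops;
}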

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200309111243.6982-3-lmb@cloudflare.com
bpf: sockmap: Only check ULP for TCP sockets
Lorenz Bauer [Mon, 9 Mar 2020 11:12:32 +0000 (11:12 +0000)]
bpf: sockmap: Only check ULP for TCP sockets

The sock map code checks that a socket does not have an active upper
layer protocol before inserting it into the map. This requires casting
via inet_csk, which isn't valid for UDP sockets.

Guard checks for ULP by checking inet_sk(sk)->is_icsk first.

Signed-off-by: Lorenz Bauer <lmb@cloudflare.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Jakub Sitnicki <jakub@cloudflare.com>
Acked-by: John Fastabend <john.fastabend@gmail.com>
Link: https://lore.kernel.org/bpf/20200309111243.6982-2-lmb@cloudflare.com
bpf: Fix bpf_prog_test_run_tracing for !CONFIG_NET
KP Singh [Thu, 5 Mar 2020 22:01:27 +0000 (23:01 +0100)]
bpf: Fix bpf_prog_test_run_tracing for !CONFIG_NET

test_run.o is not built when CONFIG_NET is not set, and the reference to
bpf_prog_test_run_tracing in bpf_trace.o then causes the
linker error:

ld: kernel/trace/bpf_trace.o:(.rodata+0x38): undefined reference to
 `bpf_prog_test_run_tracing'

Add a __weak function in bpf_trace.c to handle this.

Fixes: da00d2f117a0 ("bpf: Add test ops for BPF_PROG_TYPE_TRACING")
Signed-off-by: KP Singh <kpsingh@google.com>
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200305220127.29109-1-kpsingh@chromium.org
bpf: Remove unnecessary CAP_MAC_ADMIN check
KP Singh [Thu, 5 Mar 2020 20:49:55 +0000 (21:49 +0100)]
bpf: Remove unnecessary CAP_MAC_ADMIN check

While well intentioned, checking CAP_MAC_ADMIN for attaching
BPF_MODIFY_RETURN tracing programs to "security_" functions is not
necessary as tracing BPF programs already require CAP_SYS_ADMIN.

Fixes: 6ba43b761c41 ("bpf: Attachment verification for BPF_MODIFY_RETURN")
Signed-off-by: KP Singh <kpsingh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200305204955.31123-1-kpsingh@chromium.org
MAINTAINERS: Add entry for RV32G BPF JIT
Luke Nelson [Thu, 5 Mar 2020 05:02:07 +0000 (21:02 -0800)]
MAINTAINERS: Add entry for RV32G BPF JIT

Add a new entry for the 32-bit RISC-V JIT to MAINTAINERS and change
mailing list to netdev and bpf following the guidelines from
commit e42da4c62abb ("docs/bpf: Update bpf development Q/A file").

Signed-off-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Luke Nelson <luke.r.nels@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Björn Töpel <bjorn.topel@gmail.com>
Acked-by: Björn Töpel <bjorn.topel@gmail.com>
Link: https://lore.kernel.org/bpf/20200305050207.4159-5-luke.r.nels@gmail.com
bpf, doc: Add BPF JIT for RV32G to BPF documentation
Luke Nelson [Thu, 5 Mar 2020 05:02:06 +0000 (21:02 -0800)]
bpf, doc: Add BPF JIT for RV32G to BPF documentation

Update filter.txt and admin-guide to mention the BPF JIT for RV32G.

Co-developed-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Luke Nelson <luke.r.nels@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Björn Töpel <bjorn.topel@gmail.com>
Acked-by: Björn Töpel <bjorn.topel@gmail.com>
Link: https://lore.kernel.org/bpf/20200305050207.4159-4-luke.r.nels@gmail.com
riscv, bpf: Add RV32G eBPF JIT
Luke Nelson [Thu, 5 Mar 2020 05:02:05 +0000 (21:02 -0800)]
riscv, bpf: Add RV32G eBPF JIT

This is an eBPF JIT for RV32G, adapted from the JIT for RV64G and
the 32-bit ARM JIT.

There are two main changes required for this to work compared to
the RV64 JIT.

First, eBPF registers are 64-bit, while RV32G registers are 32-bit.
BPF registers either map directly to 2 RISC-V registers, or reside
in stack scratch space and are saved and restored when used.

Second, many 64-bit ALU operations do not trivially map to 32-bit
operations. Operations that move bits between high and low words,
such as ADD, LSH, MUL, and others must emulate the 64-bit behavior
in terms of 32-bit instructions.
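
For example, a 64-bit ADD over lo/hi register pairs can be emulated roughly
like this (a sketch using the JIT's emit helpers; names follow the RV32 JIT):

/* rd(hi,lo) += rs(hi,lo) */
emit(rv_addi(RV_REG_T0, lo(rd), 0), ctx);         /* save old lo(rd) */
emit(rv_add(lo(rd), lo(rd), lo(rs)), ctx);        /* lo += lo */
emit(rv_sltu(RV_REG_T0, lo(rd), RV_REG_T0), ctx); /* carry = lo(rd) < old */
emit(rv_add(hi(rd), hi(rd), hi(rs)), ctx);        /* hi += hi */
emit(rv_add(hi(rd), hi(rd), RV_REG_T0), ctx);     /* hi += carry */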

This patch also makes related changes to bpf_jit.h, such
as adding RISC-V instructions required by the RV32 JIT.

Supported features:

The RV32 JIT supports the same features and instructions as the
RV64 JIT, with the following exceptions:

- ALU64 DIV/MOD: Requires loops to implement on 32-bit hardware.

- BPF_XADD | BPF_DW: There's no 8-byte atomic instruction in RV32.

These features are also unsupported on other BPF JITs for 32-bit
architectures.

Testing:

- lib/test_bpf.c
test_bpf: Summary: 378 PASSED, 0 FAILED, [349/366 JIT'ed]
test_bpf: test_skb_segment: Summary: 2 PASSED, 0 FAILED

The tests that are not JITed are all due to use of 64-bit div/mod
or 64-bit xadd.

- tools/testing/selftests/bpf/test_verifier.c
Summary: 1415 PASSED, 122 SKIPPED, 43 FAILED

Tested both with and without BPF JIT hardening.

This is the same set of tests that pass using the BPF interpreter
with the JIT disabled.

Verification and synthesis:

We developed the RV32 JIT using our automated verification tool,
Serval. We have used Serval in the past to verify patches to the
RV64 JIT. We also used Serval to superoptimize the resulting code
through program synthesis.

You can find the tool and a guide to the approach and results here:
https://github.com/uw-unsat/serval-bpf/tree/rv32-jit-v5

Co-developed-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Luke Nelson <luke.r.nels@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Björn Töpel <bjorn.topel@gmail.com>
Acked-by: Björn Töpel <bjorn.topel@gmail.com>
Link: https://lore.kernel.org/bpf/20200305050207.4159-3-luke.r.nels@gmail.com
riscv, bpf: Factor common RISC-V JIT code
Luke Nelson [Thu, 5 Mar 2020 05:02:04 +0000 (21:02 -0800)]
riscv, bpf: Factor common RISC-V JIT code

This patch factors out code that can be used by both the RV64 and RV32
BPF JITs to a common bpf_jit.h and bpf_jit_core.c.

Move struct definitions and macro-like functions to header. Rename
rv_sb_insn/rv_uj_insn to rv_b_insn/rv_j_insn to match the RISC-V
specification.

Move reusable functions emit_body() and bpf_int_jit_compile() to
bpf_jit_core.c with minor simplifications. Rename emit_insn() and
build_{prologue,epilogue}() to be prefixed with "bpf_jit_" as they are
no longer static.

Rename bpf_jit_comp.c to bpf_jit_comp64.c to be more explicit.

Co-developed-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Xi Wang <xi.wang@gmail.com>
Signed-off-by: Luke Nelson <luke.r.nels@gmail.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Reviewed-by: Björn Töpel <bjorn.topel@gmail.com>
Acked-by: Björn Töpel <bjorn.topel@gmail.com>
Link: https://lore.kernel.org/bpf/20200305050207.4159-2-luke.r.nels@gmail.com
Merge branch 'bpf_modify_ret'
Alexei Starovoitov [Wed, 4 Mar 2020 21:41:06 +0000 (13:41 -0800)]
Merge branch 'bpf_modify_ret'

KP Singh says:

====================
v3 -> v4:

* Fix a memory leak noticed by Daniel.

v2 -> v3:

* bpf_trampoline_update_progs -> bpf_trampoline_get_progs + const
  qualification.
* Typos in commit messages.
* Added Andrii's Acks.

v1 -> v2:

* Adressed Andrii's feedback.
* Fixed a bug that Alexei noticed about nop generation.
* Rebase.

This was brought up in the KRSI v4 discussion and found to be useful
both for security and tracing programs.

  https://lore.kernel.org/bpf/20200225193108.GB22391@chromium.org/

The modify_return programs are allowed for security hooks (with an
extra CAP_MAC_ADMIN check) and functions whitelisted for error
injection (ALLOW_ERROR_INJECTION).

The "security_" check is expected to be cleaned up with the KRSI patch
series.

Here is an example of how a fmod_ret program behaves:

int func_to_be_attached(int a, int b)
{  <--- do_fentry

do_fmod_ret:
   <update ret by calling fmod_ret>
   if (ret != 0)
        goto do_fexit;

original_function:

    <side_effects_happen_here>

}  <--- do_fexit

ALLOW_ERROR_INJECTION(func_to_be_attached, ERRNO)

The fmod_ret program attached to this function can be defined as:

SEC("fmod_ret/func_to_be_attached")
int BPF_PROG(func_name, int a, int b, int ret)
{
        // This will skip the original function logic.
        return -1;
}
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
bpf: Add selftests for BPF_MODIFY_RETURN
KP Singh [Wed, 4 Mar 2020 19:18:53 +0000 (20:18 +0100)]
bpf: Add selftests for BPF_MODIFY_RETURN

Test for two scenarios:

  * When the fmod_ret program returns 0, the original function should
    be called along with fentry and fexit programs.
  * When the fmod_ret program returns a non-zero value, the original
    function should not be called, no side effect should be observed and
    fentry and fexit programs should be called.

The result from the kernel function call and whether a side-effect is
observed is returned via the retval attr of the BPF_PROG_TEST_RUN (bpf)
syscall.

Signed-off-by: KP Singh <kpsingh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200304191853.1529-8-kpsingh@chromium.org
bpf: Add test ops for BPF_PROG_TYPE_TRACING
KP Singh [Wed, 4 Mar 2020 19:18:52 +0000 (20:18 +0100)]
bpf: Add test ops for BPF_PROG_TYPE_TRACING

The current fexit and fentry tests rely on a different program to
exercise the functions they attach to. Instead of doing this, implement
the test operations for tracing which will also be used for
BPF_MODIFY_RETURN in a subsequent patch.

Also, clean up the fexit test to use the generated skeleton.

Signed-off-by: KP Singh <kpsingh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200304191853.1529-7-kpsingh@chromium.org
tools/libbpf: Add support for BPF_MODIFY_RETURN
KP Singh [Wed, 4 Mar 2020 19:18:51 +0000 (20:18 +0100)]
tools/libbpf: Add support for BPF_MODIFY_RETURN

Signed-off-by: KP Singh <kpsingh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200304191853.1529-6-kpsingh@chromium.org
bpf: Attachment verification for BPF_MODIFY_RETURN
KP Singh [Wed, 4 Mar 2020 19:18:50 +0000 (20:18 +0100)]
bpf: Attachment verification for BPF_MODIFY_RETURN

- Allow BPF_MODIFY_RETURN attachment only to functions that are:

    * Whitelisted for error injection by checking
      within_error_injection_list. Similar discussions happened for the
      bpf_override_return helper.

    * security hooks, this is expected to be cleaned up with the LSM
      changes after the KRSI patches introduce the LSM_HOOK macro:

        https://lore.kernel.org/bpf/20200220175250.10795-1-kpsingh@chromium.org/

- The attachment is currently limited to functions that return an int.
  This can be extended later to other types (e.g. PTR).

Signed-off-by: KP Singh <kpsingh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200304191853.1529-5-kpsingh@chromium.org
bpf: Introduce BPF_MODIFY_RETURN
KP Singh [Wed, 4 Mar 2020 19:18:49 +0000 (20:18 +0100)]
bpf: Introduce BPF_MODIFY_RETURN

When multiple programs are attached, each program receives the return
value from the previous program on the stack and the last program
provides the return value to the attached function.

The fmod_ret bpf programs are run after the fentry programs and before
the fexit programs. The original function is only called if all the
fmod_ret programs return 0 to avoid any unintended side-effects. The
success value, i.e. 0, is not currently configurable, but could be made so
that user space can specify it at load time.

For example:

int func_to_be_attached(int a, int b)
{  <--- do_fentry

do_fmod_ret:
   <update ret by calling fmod_ret>
   if (ret != 0)
        goto do_fexit;

original_function:

    <side_effects_happen_here>

}  <--- do_fexit

The fmod_ret program attached to this function can be defined as:

SEC("fmod_ret/func_to_be_attached")
int BPF_PROG(func_name, int a, int b, int ret)
{
        // This will skip the original function logic.
        return 1;
}

The first fmod_ret program is passed 0 in its return argument.

Signed-off-by: KP Singh <kpsingh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200304191853.1529-4-kpsingh@chromium.org
4 years agobpf: JIT helpers for fmod_ret progs
KP Singh [Wed, 4 Mar 2020 19:18:48 +0000 (20:18 +0100)]
bpf: JIT helpers for fmod_ret progs

* Split the invoke_bpf program to prepare for special handling of
  fmod_ret programs introduced in a subsequent patch.
* Move the definition of emit_cond_near_jump and emit_nops as they are
  needed for fmod_ret.
* Refactor branch target alignment into its own generic helper function,
  i.e. emit_align.

Signed-off-by: KP Singh <kpsingh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200304191853.1529-3-kpsingh@chromium.org
4 years agobpf: Refactor trampoline update code
KP Singh [Wed, 4 Mar 2020 19:18:47 +0000 (20:18 +0100)]
bpf: Refactor trampoline update code

As we need to introduce a third type of attachment for trampolines, the
flattened signature of arch_prepare_bpf_trampoline gets even more
complicated.

Refactor the prog and count arguments to arch_prepare_bpf_trampoline to
use bpf_tramp_progs to simplify the addition and accounting for new
attachment types.

Signed-off-by: KP Singh <kpsingh@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Acked-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200304191853.1529-2-kpsingh@chromium.org
4 years agoselftests/bpf: Support out-of-tree vmlinux builds for VMLINUX_BTF
Andrii Nakryiko [Wed, 4 Mar 2020 18:43:36 +0000 (10:43 -0800)]
selftests/bpf: Support out-of-tree vmlinux builds for VMLINUX_BTF

Add detection of out-of-tree built vmlinux image for the purpose of
VMLINUX_BTF detection. According to Documentation/kbuild/kbuild.rst, O takes
precedence over KBUILD_OUTPUT.

Also ensure that ~/path/to/build/dir works by relying on wildcard's resolution
first and then applying $(abspath) at the end to also handle
O=../../whatever cases.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200304184336.165766-1-andriin@fb.com
4 years agokbuild: Remove debug info from kallsyms linking
Kees Cook [Wed, 4 Mar 2020 02:18:34 +0000 (18:18 -0800)]
kbuild: Remove debug info from kallsyms linking

When CONFIG_DEBUG_INFO is enabled, the two kallsyms linking steps spend
time collecting and writing the DWARF sections to the temporary output
files. kallsyms does not need this information, and leaving it off
halves their linking time. This is especially noticeable without
CONFIG_DEBUG_INFO_REDUCED. The BTF linking stage, however, does still
need those details.

Refactor the BTF and kallsyms generation stages slightly for more
regularized temporary names. Skip debug during kallsyms links.
Additionally move "info BTF" to the correct place since commit
8959e39272d6 ("kbuild: Parameterize kallsyms generation and correct
reporting"), which added "info LD ..." to vmlinux_link calls.

For a full debug info build with BTF, my link time goes from 1m06s to
0m54s, saving about 12 seconds, or 18%.

Signed-off-by: Kees Cook <keescook@chromium.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Andrii Nakryiko <andriin@fb.com>
Link: https://lore.kernel.org/bpf/202003031814.4AEA3351@keescook
4 years agoMerge branch 'bpf-uapi-enums'
Daniel Borkmann [Wed, 4 Mar 2020 16:00:06 +0000 (17:00 +0100)]
Merge branch 'bpf-uapi-enums'

Andrii Nakryiko says:

====================
Convert BPF-related UAPI constants, currently defined as #define macros, into
anonymous enums. This makes no difference for the usage of such constants in
C code (they can still be used in all the compile-time contexts that
`#define`s can), but they are recorded as part of DWARF type info, and
subsequently get recorded as part of kernel's BTF type info. This allows those
constants to be emitted as part of vmlinux.h auto-generated header file and be
used from BPF programs. Which is especially convenient for all kinds of BPF
helper flags and makes CO-RE BPF programs nicer to write.

libbpf's btf_dump logic currently assumes enum values are signed 32-bit
values, but that doesn't match a typical case, so switch it to emit unsigned
values. Once BTF encoding of BTF_KIND_ENUM is extended to capture signedness
properly, this will be made more flexible.

As an immediate validation of the approach, runqslower's copy of
BPF_F_CURRENT_CPU #define is dropped in favor of its enum variant from
vmlinux.h.

v2->v3:
- convert only constants usable from BPF programs (BPF helper flags, map
  create flags, etc) (Alexei);
v1->v2:
- fix up btf_dump test to use max 32-bit unsigned value instead of negative one.
====================

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
4 years agotools/runqslower: Drop copy/pasted BPF_F_CURRENT_CPU definition
Andrii Nakryiko [Tue, 3 Mar 2020 00:32:33 +0000 (16:32 -0800)]
tools/runqslower: Drop copy/pasted BPF_F_CURRENT_CPU definition

With BPF_F_CURRENT_CPU being an enum, it is now captured in vmlinux.h and is
readily usable by runqslower. So drop the local copy/pasted definition in favor
of the one coming from vmlinux.h.
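
A hedged sketch of the resulting BPF-side pattern; the map layout and the
tp_btf hook shown here are illustrative, only BPF_F_CURRENT_CPU itself comes
from vmlinux.h:

#include "vmlinux.h"		/* provides BPF_F_CURRENT_CPU as an enum value */
#include <bpf/bpf_helpers.h>

struct {
	__uint(type, BPF_MAP_TYPE_PERF_EVENT_ARRAY);
	__uint(key_size, sizeof(u32));
	__uint(value_size, sizeof(u32));
} events SEC(".maps");

SEC("tp_btf/sched_switch")
int handle__sched_switch(u64 *ctx)
{
	u32 pid = bpf_get_current_pid_tgid();

	/* no local "#define BPF_F_CURRENT_CPU ..." copy is needed anymore */
	bpf_perf_event_output(ctx, &events, BPF_F_CURRENT_CPU,
			      &pid, sizeof(pid));
	return 0;
}

char LICENSE[] SEC("license") = "GPL";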

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200303003233.3496043-4-andriin@fb.com
4 years agolibbpf: Assume unsigned values for BTF_KIND_ENUM
Andrii Nakryiko [Tue, 3 Mar 2020 00:32:32 +0000 (16:32 -0800)]
libbpf: Assume unsigned values for BTF_KIND_ENUM

Currently, BTF_KIND_ENUM type doesn't record whether enum values should be
interpreted as signed or unsigned. In Linux, most enums are unsigned, though,
so interpreting them as unsigned matches the real world better.

Change btf_dump test case to test maximum 32-bit value, instead of negative
value.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200303003233.3496043-3-andriin@fb.com
4 years agobpf: Switch BPF UAPI #define constants used from BPF program side to enums
Andrii Nakryiko [Tue, 3 Mar 2020 00:32:31 +0000 (16:32 -0800)]
bpf: Switch BPF UAPI #define constants used from BPF program side to enums

Switch BPF UAPI constants, previously defined as #define macros, to anonymous
enum values. This preserves constant values and behavior in expressions, but
has the added advantage of being captured as part of DWARF and, subsequently,
BTF type info. This, in turn, greatly improves the usefulness of the generated
vmlinux.h for BPF applications, as it will not require BPF users to copy/paste
the various flags and constants that are frequently used with BPF helpers.
Only those constants that are used/useful from the BPF program side are
converted.
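
A hedged illustration of the conversion pattern (the values mirror the UAPI
header, but the exact list of converted constants is defined by the patch,
not by this sketch):

/* before: usable at compile time, but invisible to DWARF/BTF
 *   #define BPF_ANY     0
 *   #define BPF_NOEXIST 1
 *   #define BPF_EXIST   2
 */
enum {
	BPF_ANY		= 0,	/* after: same values and compile-time usage, */
	BPF_NOEXIST	= 1,	/* but now recorded in BTF and therefore      */
	BPF_EXIST	= 2,	/* emitted into the generated vmlinux.h       */
};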

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Link: https://lore.kernel.org/bpf/20200303003233.3496043-2-andriin@fb.com
4 years agolibbpf: Fix handling of optional field_name in btf_dump__emit_type_decl
Andrii Nakryiko [Tue, 3 Mar 2020 18:08:00 +0000 (10:08 -0800)]
libbpf: Fix handling of optional field_name in btf_dump__emit_type_decl

Internal functions used by btf_dump__emit_type_decl() assume that field_name
is never NULL. Ensure that is always the case.

Fixes: 9f81654eebe8 ("libbpf: Expose BTF-to-C type declaration emitting API")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200303180800.3303471-1-andriin@fb.com
4 years agoMerge branch 'bpf_gso_size'
Alexei Starovoitov [Wed, 4 Mar 2020 00:24:00 +0000 (16:24 -0800)]
Merge branch 'bpf_gso_size'

Willem de Bruijn says:

====================
See first patch for details.

The patch is split across three parts { kernel feature, uapi header, tools },
following the custom for such __sk_buff changes.
====================

Acked-by: Petar Penkov <ppenkov@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
4 years agoselftests/bpf: Test new __sk_buff field gso_size
Willem de Bruijn [Tue, 3 Mar 2020 20:05:03 +0000 (15:05 -0500)]
selftests/bpf: Test new __sk_buff field gso_size

Analogous to the gso_segs selftests introduced in commit d9ff286a0f59
("bpf: allow BPF programs access skb_shared_info->gso_segs field").

Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200303200503.226217-4-willemdebruijn.kernel@gmail.com
4 years agobpf: Sync uapi bpf.h to tools/
Willem de Bruijn [Tue, 3 Mar 2020 20:05:02 +0000 (15:05 -0500)]
bpf: Sync uapi bpf.h to tools/

sync tools/include/uapi/linux/bpf.h to match include/uapi/linux/bpf.h

Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200303200503.226217-3-willemdebruijn.kernel@gmail.com
4 years agobpf: Add gso_size to __sk_buff
Willem de Bruijn [Tue, 3 Mar 2020 20:05:01 +0000 (15:05 -0500)]
bpf: Add gso_size to __sk_buff

BPF programs may want to know whether an skb is GSO. The canonical
answer is skb_is_gso(skb), which tests that gso_size != 0.

Expose this field in the same manner as gso_segs. That field itself
is not a sufficient signal, as the comment in skb_shared_info makes
clear: gso_segs may be zero, e.g., from dodgy sources.

Also prepare net/bpf/test_run for upcoming BPF_PROG_TEST_RUN tests
of the feature.
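
A hedged sketch of reading the new field from a BPF program; the section name
and the idea of returning the field so a BPF_PROG_TEST_RUN caller can assert
on retval are illustrative assumptions, not the selftest itself:

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("classifier")
int read_gso_size(struct __sk_buff *skb)
{
	/* skb_is_gso() in the kernel is exactly "gso_size != 0" */
	return skb->gso_size;
}

char _license[] SEC("license") = "GPL";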

Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200303200503.226217-2-willemdebruijn.kernel@gmail.com
4 years agoMerge branch 'bpf_link'
Alexei Starovoitov [Tue, 3 Mar 2020 06:06:28 +0000 (22:06 -0800)]
Merge branch 'bpf_link'

Andrii Nakryiko says:

====================
This patch series adds a kernel bpf_link abstraction, analogous to libbpf's
already existing bpf_link abstraction. This formalizes and makes more uniform
the existing bpf_link-like BPF program link (attachment) types (raw tracepoint
and tracing links), which are FD-based objects that are automatically detached
when the last file reference is closed. These types of BPF program links are
switched to using the bpf_link framework.

The FD-based bpf_link approach provides strong safety guarantees by ensuring
that no abandoned BPF program stays attached if the user process suddenly
exits or forgets to clean up after itself. This is especially important in
production environments and is the model all the recent new BPF link types
have followed.

One previously existing inconvenience of the FD-based approach, though, was
the scenario in which a user process wants to install a BPF link and exit,
but let the attached BPF program keep running. Now, with the bpf_link
abstraction in place, it's easy to support pinning links in BPF FS, which is
done as part of the same patch #1. This allows FD-based BPF program links to
survive the exit of the user process and the closing of the original file
descriptor by creating a file entry in BPF FS. This provides strong safety by
default, with a simple way to opt out where needed.

Corresponding libbpf APIs are added in the same patch set, as well as
selftests for this functionality.

Other types of BPF program attachments (XDP, cgroup, perf_event, etc) are
going to be converted in subsequent patches to follow a similar approach.

v1->v2:
- use bpf_link_new_fd() uniformly (Alexei).
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
4 years agoselftests/bpf: Add link pinning selftests
Andrii Nakryiko [Tue, 3 Mar 2020 04:31:59 +0000 (20:31 -0800)]
selftests/bpf: Add link pinning selftests

Add selftests validating link pinning/unpinning and associated BPF link
(attachment) lifetime.

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200303043159.323675-4-andriin@fb.com
4 years agolibbpf: Add bpf_link pinning/unpinning
Andrii Nakryiko [Tue, 3 Mar 2020 04:31:58 +0000 (20:31 -0800)]
libbpf: Add bpf_link pinning/unpinning

With the bpf_link abstraction now supported by the kernel explicitly, add
a pinning/unpinning API for links. Also allow creating (opening) a bpf_link
from a BPF FS file.

This API allows "ephemeral" FD-based BPF links (like raw tracepoint or
fexit/freplace attachments) to survive user process exit by pinning them in
a BPF FS, which is an important use case for long-running BPF programs.

As part of this, expose the underlying FD for bpf_link. While legacy
bpf_links might not have an FD associated with them (which will be expressed
as a bpf_link with fd=-1), the kernel's abstraction is based around FD usage,
so match it closely. This, subsequently, allows a generic pinning/unpinning
API for the generalized bpf_link. For some types of bpf_links the kernel
might not support pinning, in which case bpf_link__pin() will return an
error.

With the FD being part of the generic bpf_link, also get rid of bpf_link_fd
in favor of using vanilla bpf_link.
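
A hedged usage sketch; the program handle, pin path and error handling are
assumptions, while bpf_link__pin(), bpf_link__fd() and bpf_link__destroy()
are the API surface described above:

#include <stdio.h>
#include <bpf/libbpf.h>

static int pin_and_detach(struct bpf_program *prog)
{
	struct bpf_link *link = bpf_program__attach(prog);
	int err;

	if (libbpf_get_error(link))
		return -1;

	err = bpf_link__pin(link, "/sys/fs/bpf/my_link");
	if (err) {
		bpf_link__destroy(link);
		return err;
	}
	printf("pinned link with fd %d\n", bpf_link__fd(link));
	/* closing the FD no longer detaches the program: the BPF FS entry
	 * keeps the kernel-side link (and thus the attachment) alive */
	bpf_link__destroy(link);
	return 0;
}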

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200303043159.323675-3-andriin@fb.com
4 years agobpf: Introduce pinnable bpf_link abstraction
Andrii Nakryiko [Tue, 3 Mar 2020 04:31:57 +0000 (20:31 -0800)]
bpf: Introduce pinnable bpf_link abstraction

Introduce the bpf_link abstraction, representing an attachment of a BPF
program to a BPF hook point (e.g., tracepoint, perf event, etc). bpf_link
encapsulates ownership of the attached BPF program and reference counting of
the link itself (when referenced from multiple anonymous inodes), and ensures
that the release callback is called from process context, so that users can
safely take mutex locks and sleep.

Additionally, with the new abstraction it's now possible to generalize
pinning of a link object in BPF FS, allowing one to explicitly prevent BPF
program detachment on process exit by pinning the link in a BPF FS and
letting an independent other process open it and keep working with it.

Convert two existing bpf_link-like objects (raw tracepoint and tracing BPF
program attachments) to the bpf_link framework, making them pinnable in BPF
FS. More FD-based bpf_links will be added in follow-up patches.
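
A hedged sketch at the raw bpf.h wrapper level of what the kernel-side change
enables; the tracepoint name, the origin of prog_fd and the pin path are
assumptions:

#include <bpf/bpf.h>

int pin_raw_tp_link(int prog_fd)
{
	int link_fd = bpf_raw_tracepoint_open("sched_switch", prog_fd);

	if (link_fd < 0)
		return link_fd;
	/* with the raw tracepoint FD now backed by bpf_link, pinning it in
	 * BPF FS keeps the attachment alive after this process exits */
	return bpf_obj_pin(link_fd, "/sys/fs/bpf/my_raw_tp_link");
}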

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200303043159.323675-2-andriin@fb.com
4 years agoselftests/bpf: Declare bpf_log_buf variables as static
Toke Høiland-Jørgensen [Mon, 2 Mar 2020 14:53:48 +0000 (15:53 +0100)]
selftests/bpf: Declare bpf_log_buf variables as static

The cgroup selftests did not declare the bpf_log_buf variable as static, leading
to a linker error with GCC 10 (which defaults to -fno-common). Fix this by
adding the missing static declarations.
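
A hedged sketch of the fix pattern; the exact files touched are listed in the
patch itself, BPF_LOG_BUF_SIZE comes from the tools' bpf/bpf.h:

#include <bpf/bpf.h>	/* for BPF_LOG_BUF_SIZE */

/* before: a non-static definition repeated in several test objects trips
 * GCC 10's default -fno-common with "multiple definition" link errors:
 *
 *   char bpf_log_buf[BPF_LOG_BUF_SIZE];
 */
static char bpf_log_buf[BPF_LOG_BUF_SIZE];	/* after: internal linkage per file */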

Fixes: 257c88559f36 ("selftests/bpf: Convert test_cgroup_attach to prog_tests")
Signed-off-by: Toke Høiland-Jørgensen <toke@redhat.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Andrey Ignatov <rdna@fb.com>
Link: https://lore.kernel.org/bpf/20200302145348.559177-1-toke@redhat.com
4 years agobpf: Reliably preserve btf_trace_xxx types
Andrii Nakryiko [Sun, 1 Mar 2020 08:10:43 +0000 (00:10 -0800)]
bpf: Reliably preserve btf_trace_xxx types

btf_trace_xxx types, crucial for tp_btf BPF programs (raw tracepoints with
verifier-checked direct memory access), have to be preserved in kernel BTF to
allow the verifier to do its job and enforce type/memory safety. It was
reported ([0]) that for kernels built with Clang the current type-casting
approach doesn't preserve these types.

This patch fixes it by declaring an anonymous union for each registered
tracepoint, capturing both struct bpf_raw_event_map information, as well as
recording btf_trace_##call type reliably. Structurally, it's still the same
content as for a plain struct bpf_raw_event_map, so no other changes are
necessary.

  [0] https://github.com/iovisor/bcc/issues/2770#issuecomment-591007692
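
A hedged sketch of the shape of the change; the stand-in declarations below
exist only to make the sketch self-contained, in the kernel the real types
come from the tracepoint definition macros:

/* stand-ins for the kernel-internal declarations */
struct bpf_raw_event_map { void *tp; void *bpf_func; };
typedef void (*btf_trace_sched_switch)(void *ctx);

/* the essence of the fix: keep both in one anonymous union so DWARF/BTF
 * records the btf_trace_* typedef even when the kernel is built with Clang */
static union {
	struct bpf_raw_event_map event;
	btf_trace_sched_switch handler;
} __bpf_trace_tp_map_sched_switch;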

Fixes: e8c423fb31fa ("bpf: Add typecast to raw_tracepoints to help BTF generation")
Reported-by: Wenbo Zhang <ethercflow@gmail.com>
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Acked-by: Yonghong Song <yhs@fb.com>
Link: https://lore.kernel.org/bpf/20200301081045.3491005-2-andriin@fb.com
4 years agoMerge branch 'move_BPF_PROG_to_libbpf'
Alexei Starovoitov [Tue, 3 Mar 2020 00:25:15 +0000 (16:25 -0800)]
Merge branch 'move_BPF_PROG_to_libbpf'

Andrii Nakryiko says:

====================
Move BPF_PROG, BPF_KPROBE, and BPF_KRETPROBE helper macros from private
selftests helpers to public libbpf ones. These helpers are extremely helpful
for writing tracing BPF applications and have been requested to be exposed for
easy use (e.g., [0]).

As part of this move, fix up BPF_KRETPROBE to not allow capturing input
arguments (as it's unreliable and they will often be clobbered). Also, add
a vmlinux.h header guard to allow multiple inclusion, if necessary, but also
to let PT_REGS_PARM properly detect struct pt_regs field names on the x86
arch. See the relevant patches for more details.

  [0] https://github.com/iovisor/bcc/pull/2778#issue-381642907
====================

Signed-off-by: Alexei Starovoitov <ast@kernel.org>
4 years agolibbpf: Merge selftests' bpf_trace_helpers.h into libbpf's bpf_tracing.h
Andrii Nakryiko [Sat, 29 Feb 2020 23:11:12 +0000 (15:11 -0800)]
libbpf: Merge selftests' bpf_trace_helpers.h into libbpf's bpf_tracing.h

Move the BPF_PROG, BPF_KPROBE, and BPF_KRETPROBE macros into libbpf's
bpf_tracing.h header to make them available for non-selftest users.
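
A hedged usage sketch now that the macros are public; the tracepoint and the
captured argument types are illustrative:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

SEC("tp_btf/task_newtask")
int BPF_PROG(handle_new_task, struct task_struct *task, u64 clone_flags)
{
	/* BPF_PROG unpacks the raw u64 *ctx array into typed arguments */
	return 0;
}

char LICENSE[] SEC("license") = "GPL";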

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200229231112.1240137-5-andriin@fb.com
4 years agoselftests/bpf: Fix BPF_KRETPROBE macro and use it in attach_probe test
Andrii Nakryiko [Sat, 29 Feb 2020 23:11:11 +0000 (15:11 -0800)]
selftests/bpf: Fix BPF_KRETPROBE macro and use it in attach_probe test

For kretprobes, there is no point in capturing input arguments from pt_regs,
as they are most probably going to be clobbered by the time the probed kernel
function returns. So switch BPF_KRETPROBE to accept zero or one argument
(the optional return result).
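
A hedged usage sketch of the post-fix form; the probed function and the
global variable are illustrative:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

long last_unlink_ret;			/* read from user space via the skeleton */

SEC("kretprobe/do_unlinkat")
int BPF_KRETPROBE(handle_unlink_exit, long ret)
{
	/* only the (optional) return value is available here; input
	 * arguments are deliberately no longer accepted by the macro */
	last_unlink_ret = ret;
	return 0;
}

char LICENSE[] SEC("license") = "GPL";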

Fixes: ac065870d928 ("selftests/bpf: Add BPF_PROG, BPF_KPROBE, and BPF_KRETPROBE macros")
Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200229231112.1240137-4-andriin@fb.com
4 years agolibbpf: Fix use of PT_REGS_PARM macros with vmlinux.h
Andrii Nakryiko [Sat, 29 Feb 2020 23:11:10 +0000 (15:11 -0800)]
libbpf: Fix use of PT_REGS_PARM macros with vmlinux.h

Add detection of vmlinux.h to the bpf_tracing.h header for the PT_REGS
macros. Currently, BPF applications have to define the __KERNEL__ symbol to
get the correct definition of struct pt_regs on the x86 arch. This is due to
different field names under internal kernel vs UAPI conditions. To make this
more transparent for users, detect vmlinux.h by checking the __VMLINUX_H__
symbol.
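
A hedged sketch of the detection idea (not the literal bpf_tracing.h hunk):
on x86-64 the in-kernel struct pt_regs names the first argument register
"di", while the UAPI struct names it "rdi":

#if defined(__KERNEL__) || defined(__VMLINUX_H__)
#define PT_REGS_PARM1(x) ((x)->di)	/* in-kernel field names */
#else
#define PT_REGS_PARM1(x) ((x)->rdi)	/* x86-64 UAPI field names */
#endif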

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200229231112.1240137-3-andriin@fb.com
4 years agobpftool: Add header guards to generated vmlinux.h
Andrii Nakryiko [Sat, 29 Feb 2020 23:11:09 +0000 (15:11 -0800)]
bpftool: Add header guards to generated vmlinux.h

Add a canonical #ifndef/#define/#endif guard to the generated vmlinux.h header
using the __VMLINUX_H__ symbol. __VMLINUX_H__ is also going to play the double
role of identifying whether vmlinux.h is being used, versus, say, BCC or
non-CO-RE libbpf modes with a dependency on kernel headers. This will make it
possible to write helper macros/functions that are agnostic to the exact BPF
program setup.
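
A hedged sketch of the generated file's shape; the dumped type body is elided:

#ifndef __VMLINUX_H__
#define __VMLINUX_H__

/* ... output of "bpftool btf dump file <vmlinux> format c" goes here ... */

#endif /* __VMLINUX_H__ */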

Signed-off-by: Andrii Nakryiko <andriin@fb.com>
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
Link: https://lore.kernel.org/bpf/20200229231112.1240137-2-andriin@fb.com
4 years agomvneta: add XDP ethtool errors stats for TX to driver
Jesper Dangaard Brouer [Mon, 2 Mar 2020 13:46:28 +0000 (14:46 +0100)]
mvneta: add XDP ethtool errors stats for TX to driver

Add ethtool stats for when XDP-transmitted packets overrun the TX
queue. This is recorded separately for XDP_TX and ndo_xdp_xmit. This
is an important aid for troubleshooting XDP-based setups.

It is currently a known weakness and property of XDP that there isn't
any push-back or congestion feedback when transmitting frames via XDP.
The situation is easy to provoke when redirecting from a higher speed
link into a slower speed link, or simply two ingress links into a single
egress. The situation can also happen when Ethernet flow control is active.

To test the patch and provoke the situation on my Espressobin board,
I configured the TX queue to be smaller (434) than the RX queue (512) and
overloaded the network with large-MTU frames (as a larger frame takes
longer to transmit).

Hopefully the upcoming XDP TX hook can be extended to provide insight
into these TX queue overflows, to allow programmable adaptation
strategies.

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Acked-by: Lorenzo Bianconi <lorenzo@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
4 years agoMerge branch 'net-zl-array'
David S. Miller [Mon, 2 Mar 2020 19:16:35 +0000 (11:16 -0800)]
Merge branch 'net-zl-array'

More zero-length array transformations from Gustavo A. R. Silva.

Signed-off-by: David S. Miller <davem@davemloft.net>
4 years agotehuti: Replace zero-length array with flexible-array member
Gustavo A. R. Silva [Mon, 2 Mar 2020 12:28:26 +0000 (06:28 -0600)]
tehuti: Replace zero-length array with flexible-array member

The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:

struct foo {
        int stuff;
        struct boo array[];
};

By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kinds of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.

Also, notice that dynamic memory allocations won't be affected by
this change:

"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]

This issue was found with the help of Coccinelle.

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")
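
A hedged before/after of the conversion this series applies; the struct and
field names are illustrative, not taken from the driver:

struct ring_old {
	int count;
	int slots[0];	/* zero-length array: GNU extension to C90 */
};

struct ring_new {
	int count;
	int slots[];	/* flexible array member: standard since C99 */
};

/* sizeof(struct ring_old) == sizeof(struct ring_new): neither array
 * contributes to the size, so existing allocations stay unchanged */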

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
4 years agor8152: Replace zero-length array with flexible-array member
Gustavo A. R. Silva [Mon, 2 Mar 2020 12:23:05 +0000 (06:23 -0600)]
r8152: Replace zero-length array with flexible-array member

The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:

struct foo {
        int stuff;
        struct boo array[];
};

By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kinds of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.

Also, notice that dynamic memory allocations won't be affected by
this change:

"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]

This issue was found with the help of Coccinelle.

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
4 years agonet: atlantic: Replace zero-length array with flexible-array member
Gustavo A. R. Silva [Mon, 2 Mar 2020 12:19:53 +0000 (06:19 -0600)]
net: atlantic: Replace zero-length array with flexible-array member

The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:

struct foo {
        int stuff;
        struct boo array[];
};

By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kinds of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.

Also, notice that dynamic memory allocations won't be affected by
this change:

"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]

This issue was found with the help of Coccinelle.

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
4 years agobna: bnad: Replace zero-length array with flexible-array member
Gustavo A. R. Silva [Mon, 2 Mar 2020 12:10:51 +0000 (06:10 -0600)]
bna: bnad: Replace zero-length array with flexible-array member

The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:

struct foo {
        int stuff;
        struct boo array[];
};

By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kinds of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.

Also, notice that dynamic memory allocations won't be affected by
this change:

"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]

This issue was found with the help of Coccinelle.

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
4 years agonet: inet_sock: Replace zero-length array with flexible-array member
Gustavo A. R. Silva [Mon, 2 Mar 2020 12:07:42 +0000 (06:07 -0600)]
net: inet_sock: Replace zero-length array with flexible-array member

The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:

struct foo {
        int stuff;
        struct boo array[];
};

By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kinds of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.

Also, notice that dynamic memory allocations won't be affected by
this change:

"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]

This issue was found with the help of Coccinelle.

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
4 years agonet: ip6_fib: Replace zero-length array with flexible-array member
Gustavo A. R. Silva [Mon, 2 Mar 2020 12:06:07 +0000 (06:06 -0600)]
net: ip6_fib: Replace zero-length array with flexible-array member

The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:

struct foo {
        int stuff;
        struct boo array[];
};

By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kinds of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.

Also, notice that dynamic memory allocations won't be affected by
this change:

"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]

This issue was found with the help of Coccinelle.

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
4 years agonet: ip_fib: Replace zero-length array with flexible-array member
Gustavo A. R. Silva [Mon, 2 Mar 2020 12:03:52 +0000 (06:03 -0600)]
net: ip_fib: Replace zero-length array with flexible-array member

The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:

struct foo {
        int stuff;
        struct boo array[];
};

By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kinds of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.

Also, notice that dynamic memory allocations won't be affected by
this change:

"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]

This issue was found with the help of Coccinelle.

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
4 years agodrop_monitor: Replace zero-length array with flexible-array member
Gustavo A. R. Silva [Mon, 2 Mar 2020 12:02:09 +0000 (06:02 -0600)]
drop_monitor: Replace zero-length array with flexible-array member

The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:

struct foo {
        int stuff;
        struct boo array[];
};

By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kinds of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.

Also, notice that dynamic memory allocations won't be affected by
this change:

"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]

This issue was found with the help of Coccinelle.

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
4 years agonet: mip6: Replace zero-length array with flexible-array member
Gustavo A. R. Silva [Mon, 2 Mar 2020 12:00:48 +0000 (06:00 -0600)]
net: mip6: Replace zero-length array with flexible-array member

The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:

struct foo {
        int stuff;
        struct boo array[];
};

By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kinds of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.

Also, notice that dynamic memory allocations won't be affected by
this change:

"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]

This issue was found with the help of Coccinelle.

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
4 years agonetdevice: Replace zero-length array with flexible-array member
Gustavo A. R. Silva [Mon, 2 Mar 2020 11:59:33 +0000 (05:59 -0600)]
netdevice: Replace zero-length array with flexible-array member

The current codebase makes use of the zero-length array language
extension to the C90 standard, but the preferred mechanism to declare
variable-length types such as these ones is a flexible array member[1][2],
introduced in C99:

struct foo {
        int stuff;
        struct boo array[];
};

By making use of the mechanism above, we will get a compiler warning
in case the flexible array does not occur last in the structure, which
will help us prevent some kinds of undefined behavior bugs from being
inadvertently introduced[3] to the codebase from now on.

Also, notice that dynamic memory allocations won't be affected by
this change:

"Flexible array members have incomplete type, and so the sizeof operator
may not be applied. As a quirk of the original implementation of
zero-length arrays, sizeof evaluates to zero."[1]

This issue was found with the help of Coccinelle.

[1] https://gcc.gnu.org/onlinedocs/gcc/Zero-Length.html
[2] https://github.com/KSPP/linux/issues/21
[3] commit 76497732932f ("cxgb3/l2t: Fix undefined behaviour")

Signed-off-by: Gustavo A. R. Silva <gustavo@embeddedor.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
4 years agoMerge branch 'net-thunderx-Miscellaneous-changes'
David S. Miller [Mon, 2 Mar 2020 19:13:58 +0000 (11:13 -0800)]
Merge branch 'net-thunderx-Miscellaneous-changes'

Sunil Goutham says:

====================
net: thunderx: Miscellaneous changes

This patchset has changes wrt driver performance optimization and load-time
optimization, and a change to the PCI device registration table for the
timestamp device.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>