linux-2.6-microblaze.git
8 years agonet/hsr: Added support for HSR v1
Peter Heise [Wed, 13 Apr 2016 11:52:22 +0000 (13:52 +0200)]
net/hsr: Added support for HSR v1

This patch adds support for the newer version 1 of the HSR
networking standard. Version 0 is still the default, and the new
version has to be selected via iproute2.

Main changes are in the supervision frame handling and its
ethertype field.

Signed-off-by: Peter Heise <peter.heise@airbus.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'tcp-synflood-perf'
David S. Miller [Fri, 15 Apr 2016 20:45:45 +0000 (16:45 -0400)]
Merge branch 'tcp-synflood-perf'

Eric Dumazet says:

====================
tcp: final work on SYNFLOOD behavior

In the first patch, I remove the costly association of SYNACK+COOKIES
to a listener. I believe other parts of the stack should be ready.

The second patch removes a useless write into the listener socket
in tcp_rcv_state_process(), which was incurring false sharing in
tcp_conn_request().

Performance under SYNFLOOD goes from 3.2 Mpps to 6 Mpps.

Test was using a single TCP listener, on a host with 8 RX queues
on the NIC, and 24 cores (48 ht)
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agotcp: remove false sharing in tcp_rcv_state_process()
Eric Dumazet [Thu, 14 Apr 2016 05:05:40 +0000 (22:05 -0700)]
tcp: remove false sharing in tcp_rcv_state_process()

The last known hot point during a SYNFLOOD attack is the clearing
of rx_opt.saw_tstamp in tcp_rcv_state_process().

It is not needed for a listener, so we move it where it matters.

Performance while a SYNFLOOD hits a single listener socket
went from 5 Mpps to 6 Mpps on my test server (24 cores, 8 NIC RX queues)
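
For illustration, the change amounts to something like this (a sketch, not
the actual diff; the explicit listener check is shown only to convey the idea):

    /* tp = tcp_sk(sk). Clear saw_tstamp only where a full socket needs it,
     * so the shared listener cacheline is not dirtied under SYNFLOOD. */
    if (sk->sk_state != TCP_LISTEN)
        tp->rx_opt.saw_tstamp = 0;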

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agotcp: do not mess with listener sk_wmem_alloc
Eric Dumazet [Thu, 14 Apr 2016 05:05:39 +0000 (22:05 -0700)]
tcp: do not mess with listener sk_wmem_alloc

When removing sk_refcnt manipulation on synflood, I missed that
using skb_set_owner_w() was racy, if sk->sk_wmem_alloc had already
transitioned to 0.

We should hold sk_refcnt instead, but this is a big deal under attack.
(Doing so increases performance from 3.2 Mpps to 3.8 Mpps only.)

In this patch, I chose to not attach a socket to syncookies skb.

Performance is now 5 Mpps instead of 3.2 Mpps.

Following patch will remove last known false sharing in
tcp_rcv_state_process()

Fixes: 3b24d854cb35 ("tcp/dccp: do not touch listener sk_refcnt under synflood")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoqlge: Replace create_singlethread_workqueue with alloc_ordered_workqueue
Amitoj Kaur Chawla [Sat, 9 Apr 2016 11:57:45 +0000 (17:27 +0530)]
qlge: Replace create_singlethread_workqueue with alloc_ordered_workqueue

Replace deprecated create_singlethread_workqueue with
alloc_ordered_workqueue.

Work items include getting tx/rx frame sizes, resetting the MPI processor,
and setting the ASIC recovery bit. Ordering seems necessary, as only one
work item should be queued or executing at any given time, hence the use of
alloc_ordered_workqueue.

The WQ_MEM_RECLAIM flag has been set since ethernet devices seem to sit in
the memory reclaim path, so as to guarantee forward progress regardless of
memory pressure.
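
For illustration, the conversion pattern looks roughly like this (the
qdev/ndev member names are assumed, not the exact driver diff):

    /* Ordered workqueue: at most one work item executes at a time, and
     * WQ_MEM_RECLAIM guarantees forward progress under memory pressure. */
    qdev->workqueue = alloc_ordered_workqueue("%s", WQ_MEM_RECLAIM,
                                              ndev->name);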

Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com>
Acked-by: Tejun Heo <tj@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'tipc-link-setup-improvements'
David S. Miller [Fri, 15 Apr 2016 20:09:07 +0000 (16:09 -0400)]
Merge branch 'tipc-link-setup-improvements'

Jon Maloy says:

====================
tipc: improvements to the link setup algorithm

This series addresses some smaller issues regarding the link setup
algorithm. The first commit fixes a rare bug we have discovered during
testing; the second one may have some future impact on cluster
scalability, while the remaining ones can be regarded as cosmetic in
a wider sense of the word.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agotipc: let first message on link be a state message
Jon Paul Maloy [Fri, 15 Apr 2016 17:33:07 +0000 (13:33 -0400)]
tipc: let first message on link be a state message

According to the link FSM, a received traffic packet can take a link
from state ESTABLISHING to ESTABLISHED, but the link can still not be
fully set up in one atomic operation. This means that even if the
very first packet on the link is a traffic packet with sequence number
1 (one), it has to be dropped and retransmitted.

This can be avoided if we let the mentioned packet be preceded by a
LINK_PROTOCOL/STATE message, which takes up the endpoint before the
arrival of the traffic.

We add this small feature in this commit.

This is a fully compatible change.

Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agotipc: ensure that first packets on link are sent in order
Jon Paul Maloy [Fri, 15 Apr 2016 17:33:06 +0000 (13:33 -0400)]
tipc: ensure that first packets on link are sent in order

In some link establishment scenarios we see that packet #2 may be sent
out before packet #1, forcing the receiver to demand retransmission of
the missing packet. This is harmless, but may cause confusion among
people tracing the packet flow.

Since this is extremely easy to fix, we do so by adding an extra send
call to the bearer immediately after the link has come up.

Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agotipc: refactor function tipc_link_timeout()
Jon Paul Maloy [Fri, 15 Apr 2016 17:33:05 +0000 (13:33 -0400)]
tipc: refactor function tipc_link_timeout()

The function tipc_link_timeout() is unnecessarily complex, and can
easily be made more readable.

We do that with this commit. The only functional change is that we
remove a redundant test for whether the broadcast link is up or not.

Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agotipc: reduce transmission rate of reset messages when link is down
Jon Paul Maloy [Fri, 15 Apr 2016 17:33:04 +0000 (13:33 -0400)]
tipc: reduce transmission rate of reset messages when link is down

When a link is down, it will continuously try to re-establish contact
with the peer by sending out a RESET or an ACTIVATE message at each
timeout interval. The default value for this interval is currently
375 ms. This is wasteful, and may become a problem in very large
clusters with dozens or hundreds of nodes being down simultaneously.

We now introduce a simple backoff algorithm for these cases. The
first five messages are sent at the default rate; thereafter a message
is sent only every 16th timer interval.

This will cover the vast majority of link recycling cases, since the
endpoint starting last will transmit at the higher speed, and the link
should normally be established well before the rate needs to be
reduced.

The only case where we will see a degradation of link re-establishment
times is when the endpoints remain intact, and a glitch in the
transmission media is causing the link reset. We will then experience
a worst-case re-establishing time of 6 seconds, something we deem
acceptable.

Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agotipc: guarantee peer bearer id exchange after reboot
Jon Paul Maloy [Fri, 15 Apr 2016 17:33:03 +0000 (13:33 -0400)]
tipc: guarantee peer bearer id exchange after reboot

When a link endpoint is going down locally, e.g., because its interface
is being stopped, it will spontaneously send out a RESET message to
its peer, informing it about this fact. This saves the peer from
detecting the failure via probing, and hence gives both speedier and
less resource consuming failure detection on the peer side.

According to the link FSM, a receiver of a RESET message, irrespective of
the reason for it, must now consider the sender ready to come back up, and
start periodically sending out ACTIVATE messages to the peer in order
to re-establish the link. Also, according to the FSM, the receiver of
an ACTIVATE message can now go directly to state ESTABLISHED and start
sending regular traffic packets. This is a well-proven and robust FSM.

However, in the case of a reboot, there is a small possibility that the
link endpoint on the rebooted node may have been re-created with a new bearer
identity between the moment it sent its (pre-boot) RESET and the moment
it receives the ACTIVATE from the peer. The new bearer identity cannot
be known by the peer according to this scenario, since traffic headers
don't convey such information. This is a problem, because both endpoints
need to know the correct value of the peer's bearer id at any moment in
time in order to be able to produce correct link events for their users.

The only way to guarantee this is to enforce a full setup message
exchange (RESET + ACTIVATE) even after the reboot, since those messages
carry the bearer identity in their header.

In this commit we do this by introducing and setting a "stopping" bit in
the header of the spontaneously generated RESET messages, informing the
peer that the sender will not be immediately ready to re-establish the
link. A receiver seeing this bit must act as if this were a locally
detected connectivity failure, and hence has to go through a full two-
way setup message exchange before any link can be re-established.

Although never reported, this problem seems to have always been around.

This protocol addition is fully backwards compatible.

Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'mlxsw-next'
David S. Miller [Fri, 15 Apr 2016 17:02:43 +0000 (13:02 -0400)]
Merge branch 'mlxsw-next'

Jiri Pirko says:

====================
mlxsw: spectrum_buffers: couple of cosmetic patches

As suggested by David Laight
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agomlxsw: spectrum_buffers: Use MLXSW_SP_PB_UNUSED define for unused pb
Jiri Pirko [Fri, 15 Apr 2016 13:09:38 +0000 (15:09 +0200)]
mlxsw: spectrum_buffers: Use MLXSW_SP_PB_UNUSED define for unused pb

Suggested-by: David Laight <David.Laight@ACULAB.COM>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agomlxsw: spectrum_buffers: Use designated initializers for mlxsw_sp_pbs
Jiri Pirko [Fri, 15 Apr 2016 13:09:37 +0000 (15:09 +0200)]
mlxsw: spectrum_buffers: Use designated initializers for mlxsw_sp_pbs

Suggested-by: David Laight <David.Laight@ACULAB.COM>
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agodevlink: fix sb register stub in case devlink is disabled
Jiri Pirko [Fri, 15 Apr 2016 07:17:08 +0000 (09:17 +0200)]
devlink: fix sb register stub in case devlink is disabled

Reported-by: kbuild test robot <fengguang.wu@intel.com>
Fixes: bf7974710a40 ("devlink: add shared buffer configuration")
Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agotun: use per cpu variables for stats accounting
Paolo Abeni [Wed, 13 Apr 2016 08:52:20 +0000 (10:52 +0200)]
tun: use per cpu variables for stats accounting

Currently the tun device accounting uses dev->stats without applying any
kind of protection, even though accounting happens in preemptible
process context.
This patch moves the tun stats to a per-CPU data structure, and protects
the updates with u64_stats_update_begin()/u64_stats_update_end() or
this_cpu_inc() according to the stat type. The per-CPU stats are
aggregated by the newly added ndo_get_stats64 op.
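
The per-CPU accounting pattern being adopted looks roughly like this (a
generic sketch; the exact tun structure layout differs):

    struct tun_pcpu_stats {
        u64 rx_packets;
        u64 rx_bytes;
        struct u64_stats_sync syncp;
    };

    /* update path: per-CPU, safe against 64-bit tearing on 32-bit hosts */
    struct tun_pcpu_stats *stats = this_cpu_ptr(tun->pcpu_stats);

    u64_stats_update_begin(&stats->syncp);
    stats->rx_packets++;
    stats->rx_bytes += skb->len;
    u64_stats_update_end(&stats->syncp);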

Signed-off-by: Paolo Abeni <pabeni@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'bpf-ARG_PTR_TO_RAW_STACK'
David S. Miller [Fri, 15 Apr 2016 01:40:53 +0000 (21:40 -0400)]
Merge branch 'bpf-ARG_PTR_TO_RAW_STACK'

Daniel Borkmann says:

====================
BPF updates

This series adds a new verifier argument type called
ARG_PTR_TO_RAW_STACK and converts related helpers to make
use of it. Basic idea is that we can save init of stack
memory when the helper function is guaranteed to fully
fill out the passed buffer in every path. Series also adds
test cases and converts samples. For more details, please
see individual patches.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agobpf, samples: add test cases for raw stack
Daniel Borkmann [Tue, 12 Apr 2016 22:10:54 +0000 (00:10 +0200)]
bpf, samples: add test cases for raw stack

This adds test cases mostly around ARG_PTR_TO_RAW_STACK to check the
verifier behaviour.

  [...]
  #84 raw_stack: no skb_load_bytes OK
  #85 raw_stack: skb_load_bytes, no init OK
  #86 raw_stack: skb_load_bytes, init OK
  #87 raw_stack: skb_load_bytes, spilled regs around bounds OK
  #88 raw_stack: skb_load_bytes, spilled regs corruption OK
  #89 raw_stack: skb_load_bytes, spilled regs corruption 2 OK
  #90 raw_stack: skb_load_bytes, spilled regs + data OK
  #91 raw_stack: skb_load_bytes, invalid access 1 OK
  #92 raw_stack: skb_load_bytes, invalid access 2 OK
  #93 raw_stack: skb_load_bytes, invalid access 3 OK
  #94 raw_stack: skb_load_bytes, invalid access 4 OK
  #95 raw_stack: skb_load_bytes, invalid access 5 OK
  #96 raw_stack: skb_load_bytes, invalid access 6 OK
  #97 raw_stack: skb_load_bytes, large access OK
  Summary: 98 PASSED, 0 FAILED

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agobpf, samples: don't zero data when not needed
Daniel Borkmann [Tue, 12 Apr 2016 22:10:53 +0000 (00:10 +0200)]
bpf, samples: don't zero data when not needed

Remove the zero initialization in the sample programs where appropriate.
Note that this is an optimization which is now possible, old programs
still doing the zero initialization are just fine as well. Also, make
sure we don't have padding issues when we don't memset() the entire
struct anymore.
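
For instance, a sample that previously zeroed its buffer can now do roughly
this (illustrative only):

    char comm[16];

    /* no memset(comm, 0, sizeof(comm)) needed anymore: the helper is
     * guaranteed to fill the buffer on every path */
    bpf_get_current_comm(&comm, sizeof(comm));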

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agobpf: convert relevant helper args to ARG_PTR_TO_RAW_STACK
Daniel Borkmann [Tue, 12 Apr 2016 22:10:52 +0000 (00:10 +0200)]
bpf: convert relevant helper args to ARG_PTR_TO_RAW_STACK

This patch converts all helpers that can use ARG_PTR_TO_RAW_STACK as argument
type. For tc programs this is bpf_skb_load_bytes(), bpf_skb_get_tunnel_key(),
bpf_skb_get_tunnel_opt(). For tracing, this optimizes bpf_get_current_comm()
and bpf_probe_read(). The check in bpf_skb_load_bytes() for MAX_BPF_STACK can
also be removed since the verifier already makes sure we stay within bounds
on stack buffers.
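
After the conversion, a tracing helper's prototype looks roughly like this
(a sketch; treat the exact field values as illustrative):

    static const struct bpf_func_proto bpf_probe_read_proto = {
        .func       = bpf_probe_read,
        .gpl_only   = true,
        .ret_type   = RET_INTEGER,
        .arg1_type  = ARG_PTR_TO_RAW_STACK,
        .arg2_type  = ARG_CONST_STACK_SIZE,
        .arg3_type  = ARG_ANYTHING,
    };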

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agobpf, verifier: add ARG_PTR_TO_RAW_STACK type
Daniel Borkmann [Tue, 12 Apr 2016 22:10:51 +0000 (00:10 +0200)]
bpf, verifier: add ARG_PTR_TO_RAW_STACK type

When passing buffers from eBPF stack space into a helper function, we have
ARG_PTR_TO_STACK argument type for helpers available. The verifier makes sure
that such buffers are initialized, within boundaries, etc.

However, the downside with this is that we have a couple of helper functions
such as bpf_skb_load_bytes() that fill out the passed buffer in the expected
success case anyway, so zero initializing them prior to the helper call is
unneeded/wasted instructions in the eBPF program that can be avoided.

Therefore, add a new helper function argument type called ARG_PTR_TO_RAW_STACK.
The idea is to skip the STACK_MISC check in check_stack_boundary() and color
the related stack slots as STACK_MISC after we checked all call arguments.

Helper functions using ARG_PTR_TO_RAW_STACK must make sure that every path of
the helper function will fill the provided buffer area, so that we cannot leak
any uninitialized stack memory. This means, for example, that error paths need to
memset() the buffers, but the expected fast-path doesn't have to do this
anymore.

Since there's no such helper needing more than at most one ARG_PTR_TO_RAW_STACK
argument, we can keep it simple and don't need to check for multiple areas.
Should in future such a use-case really appear, we have check_raw_mode() that
will make sure we implement support for it first.

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agobpf, verifier: add bpf_call_arg_meta for passing meta data
Daniel Borkmann [Tue, 12 Apr 2016 22:10:50 +0000 (00:10 +0200)]
bpf, verifier: add bpf_call_arg_meta for passing meta data

Currently, when the verifier checks calls in check_call() function, we
call check_func_arg() for all 5 arguments e.g. to make sure expected types
are correct. In some cases, we collect meta data (here: map pointer) to
perform additional checks such as checking stack boundary on key/value
sizes for subsequent arguments. As we're going to extend the meta data,
add a generic struct bpf_call_arg_meta that we can use for passing into
check_func_arg().

Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Alexei Starovoitov <ast@kernel.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agosctp: add support for RPS and RFS
Marcelo Ricardo Leitner [Tue, 12 Apr 2016 21:11:31 +0000 (18:11 -0300)]
sctp: add support for RPS and RFS

This patch adds what's missing to properly support RPS and RFS on SCTP,
as some of it is already implemented in common calls.

Having support for RPS and RFS allows better scaling, especially because
not all NICs support hashing SCTP headers.

Save the hash right when we dequeue a skb from inqueue so we do it only
once per skb instead of per chunk. New sockets will then inherit the
hash through sctp_copy_sock().

Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: validate_xmit_skb() changes
Eric Dumazet [Wed, 13 Apr 2016 04:50:07 +0000 (21:50 -0700)]
net: validate_xmit_skb() changes

skbs given to validate_xmit_skb() should not have a next
pointer anymore.

Also, if a packet is dropped, increment dev->tx_dropped;
__dev_queue_xmit() no longer has to change tx_dropped in this case.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agopacket: uses kfree_skb() for errors.
Weongyo Jeong [Thu, 14 Apr 2016 21:10:04 +0000 (14:10 -0700)]
packet: uses kfree_skb() for errors.

consume_skb() isn't meant for error cases; kfree_skb() is the more
appropriate call there.  This patch fixes tpacket_rcv() and packet_rcv()
to be consistent for error and non-error cases, letting perf trace the
drop events properly.
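
The distinction, roughly (a generic sketch, not the exact diff):

    /* drop path: make the drop visible to the kfree_skb tracepoint */
    if (err) {
        kfree_skb(skb);
        return err;
    }

    /* success path: the skb was consumed normally, not dropped */
    consume_skb(skb);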

Signed-off-by: Weongyo Jeong <weongyo.linux@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agotipc: fix a race condition leading to subscriber refcnt bug
Parthasarathy Bhuvaragan [Tue, 12 Apr 2016 11:05:21 +0000 (13:05 +0200)]
tipc: fix a race condition leading to subscriber refcnt bug

Until now, the requests sent to the topology server are queued
to a workqueue by the generic server framework.
These messages are processed by worker threads and trigger the
registered callbacks.
To reduce latency on uniprocessor systems, explicit rescheduling
is performed using cond_resched() after MAX_RECV_MSG_COUNT(25)
messages.

On SMP systems, this implementation leads to a subscriber refcnt
error as described below:
When a worker thread yields by calling cond_resched() in an SMP
system, a new worker is created on another CPU to process the
pending workitem. Sometimes the sleeping thread wakes up before
the new thread finishes execution.
This breaks the assumption on ordering and being single threaded.
The fault is more frequent when MAX_RECV_MSG_COUNT is lowered.

If the first thread was processing subscription create and the
second thread processing close(), the close request will free
the subscriber, and the create request oopses as follows:

[31.224137] WARNING: CPU: 2 PID: 266 at include/linux/kref.h:46 tipc_subscrb_rcv_cb+0x317/0x380         [tipc]
[31.228143] CPU: 2 PID: 266 Comm: kworker/u8:1 Not tainted 4.5.0+ #97
[31.228377] Workqueue: tipc_rcv tipc_recv_work [tipc]
[...]
[31.228377] Call Trace:
[31.228377]  [<ffffffff812fbb6b>] dump_stack+0x4d/0x72
[31.228377]  [<ffffffff8105a311>] __warn+0xd1/0xf0
[31.228377]  [<ffffffff8105a3fd>] warn_slowpath_null+0x1d/0x20
[31.228377]  [<ffffffffa0098067>] tipc_subscrb_rcv_cb+0x317/0x380 [tipc]
[31.228377]  [<ffffffffa00a4984>] tipc_receive_from_sock+0xd4/0x130 [tipc]
[31.228377]  [<ffffffffa00a439b>] tipc_recv_work+0x2b/0x50 [tipc]
[31.228377]  [<ffffffff81071925>] process_one_work+0x145/0x3d0
[31.246554] ---[ end trace c3882c9baa05a4fd ]---
[31.248327] BUG: spinlock bad magic on CPU#2, kworker/u8:1/266
[31.249119] BUG: unable to handle kernel NULL pointer dereference at 0000000000000428
[31.249323] IP: [<ffffffff81099d0c>] spin_dump+0x5c/0xe0
[31.249323] PGD 0
[31.249323] Oops: 0000 [#1] SMP

In this commit, we
- rename tipc_conn_shutdown() to tipc_conn_release().
- move connection release callback execution from tipc_close_conn()
  to a new function tipc_sock_release(), which is executed before
  we free the connection.
Thus we release the subscriber during connection release procedure
rather than connection shutdown procedure.

Signed-off-by: Parthasarathy Bhuvaragan <parthasarathy.bhuvaragan@ericsson.com>
Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'gro-fixed-id-gso-partial'
David S. Miller [Thu, 14 Apr 2016 20:23:42 +0000 (16:23 -0400)]
Merge branch 'gro-fixed-id-gso-partial'

Alexander Duyck says:

====================
GRO Fixed IPv4 ID support and GSO partial support

This patch series sets up a few different things.

First it adds support for GRO of frames with a fixed IP ID value.  This
will allow us to perform GRO for frames that go through things like an IPv6
to IPv4 header translation.

The second item we add is support for segmenting frames that are generated
this way.  Most devices only support an incrementing IP ID value, and in
the case of TCP the IP ID can be ignored in many cases since the DF bit
should be set.  So we can technically segment these frames using existing
TSO if we are willing to allow the IP ID to be mangled.  As such I have
added a matching feature for the new form of GRO/GSO called TCP IPv4 ID
mangling.  With this enabled we can assemble and disassemble a frame with
the sequence number fixed and the only ill effect will be that the IPv4 ID
will be altered which may or may not have any noticeable effect.  As such I
have defaulted the feature to disabled.

The third item this patch series adds is support for partial GSO
segmentation.  Partial GSO segmentation allows us to split a large frame
into two pieces.  The first piece will have an even multiple of MSS worth
of data and the headers before the one pointed to by csum_start will have
been updated so that they are correct for if the data payload had already
been segmented.  By doing this we can do things such as precompute the
outer header checksums for a frame to be segmented allowing us to perform
TSO on devices that don't support tunneling, or tunneling with outer header
checksums.

This patch set is based on the net-next tree, but I included "net: remove
netdevice gso_min_segs" in my tree as I assume it is likely to be applied
before this patch set is, and I wanted to avoid a merge conflict.

v2: Fixed items reported by Jesse Gross
    Fixed missing GSO flag in MPLS check
    Added DF check for MANGLEID
    Moved extra GSO feature checks into gso_features_check
    Rebased patches to account for "net: remove netdevice gso_min_segs"

Driver patches from the first patch set should still be compatible.  However
I do have a few changes in them so I will submit a v2 of those to Jeff
Kirsher once these patches are accepted into net-next.

Example driver patches for i40e, ixgbe, and igb:
https://patchwork.ozlabs.org/patch/608221/
https://patchwork.ozlabs.org/patch/608224/
https://patchwork.ozlabs.org/patch/608225/
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoDocumentation: Add documentation for TSO and GSO features
Alexander Duyck [Mon, 11 Apr 2016 01:45:09 +0000 (21:45 -0400)]
Documentation: Add documentation for TSO and GSO features

This document is a starting point for defining the TSO and GSO features.
The whole thing is starting to get a bit messy, so I wanted to make sure we
have notes somewhere to start describing what does and doesn't work.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoGSO: Support partial segmentation offload
Alexander Duyck [Mon, 11 Apr 2016 01:45:03 +0000 (21:45 -0400)]
GSO: Support partial segmentation offload

This patch adds support for something I am referring to as GSO partial.
The basic idea is that we can support a broader range of devices for
segmentation if we use fixed outer headers and have the hardware only
really deal with segmenting the inner header.  The naming reflects the fact
that everything before csum_start will be fixed headers,
and everything after will be the region that is handled by hardware.

With the current implementation it allows us to add support for the
following GSO types with an inner TSO_MANGLEID or TSO6 offload:
NETIF_F_GSO_GRE
NETIF_F_GSO_GRE_CSUM
NETIF_F_GSO_IPIP
NETIF_F_GSO_SIT
NETIF_F_GSO_UDP_TUNNEL
NETIF_F_GSO_UDP_TUNNEL_CSUM

In the case of hardware that already supports tunneling we may be able to
extend this further to support TSO_TCPV4 without TSO_MANGLEID if the
hardware can support updating inner IPv4 headers.
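
A driver-side sketch of how this is advertised (flag and field names as
added by this series; the exact driver wiring here is illustrative):

    netdev->hw_features |= NETIF_F_GSO_PARTIAL;
    netdev->gso_partial_features = NETIF_F_GSO_GRE_CSUM |
                                   NETIF_F_GSO_UDP_TUNNEL_CSUM;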

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoGRO: Add support for TCP with fixed IPv4 ID field, limit tunnel IP ID values
Alexander Duyck [Mon, 11 Apr 2016 01:44:57 +0000 (21:44 -0400)]
GRO: Add support for TCP with fixed IPv4 ID field, limit tunnel IP ID values

This patch does two things.

First it allows GRO to aggregate TCP frames with a fixed IPv4 ID field.  As
a result we should now be able to aggregate flows that were converted from
IPv6 to IPv4.  In addition this allows us more flexibility for future
implementations of segmentation as we may be able to use a fixed IP ID when
segmenting the flow.

The second thing this does is that it places limitations on the outer IPv4
ID header in the case of tunneled frames.  Specifically it forces the IP ID
to be incrementing by 1 unless the DF bit is set in the outer IPv4 header.
This way we can avoid creating overlapping series of IP IDs that could
possibly be fragmented if the frame goes through GRO and is then
resegmented via GSO.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoGSO: Add GSO type for fixed IPv4 ID
Alexander Duyck [Mon, 11 Apr 2016 01:44:51 +0000 (21:44 -0400)]
GSO: Add GSO type for fixed IPv4 ID

This patch adds support for TSO using IPv4 headers with a fixed IP ID
field.  This is meant to allow us to do a lossless GRO in the case of TCP
flows that use a fixed IP ID such as those that convert IPv6 header to IPv4
headers.

In addition I am adding a feature that for now I am referring to as TSO
with IP ID mangling.  Basically when this flag is enabled the device has the
option to either output the flow with incrementing IP IDs or with a fixed
IP ID regardless of what the original IP ID ordering was.  This is useful
in cases where the DF bit is set and we do not care if the original IP ID
value is maintained.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoethtool: Add support for toggling any of the GSO offloads
Alexander Duyck [Mon, 11 Apr 2016 01:44:44 +0000 (21:44 -0400)]
ethtool: Add support for toggling any of the GSO offloads

The strings were missing for several of the GSO offloads that are
available.  This patch provides the missing strings so that we can toggle
or query any of them via the ethtool command.

Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'mlxsw-devlink-shared-buffers'
David S. Miller [Thu, 14 Apr 2016 20:22:12 +0000 (16:22 -0400)]
Merge branch 'mlxsw-devlink-shared-buffers'

Jiri Pirko says:

====================
devlink + mlxsw: add support for config and control of shared buffers

ASICs implement a shared buffer for packet forwarding purposes and enable
flexible partitioning of the shared buffer for different flows and ports,
enabling non-blocking progress of different flows as well as separation
of lossy traffic from lossless traffic when using Per-Priority Flow
Control (PFC). The shared buffer optimizes the buffer utilization for better
absorption of packet bursts.

This patchset implements an API which is based on the model SAI uses. That is
aligned with multiple ASIC vendors, so this API should be vendor neutral.

Userspace counterpart patchset for devlink iproute2 tool can be found here:
https://github.com/jpirko/iproute2_mlxsw/tree/devlink_sb

Couple of examples of usage:

switch$ devlink sb help
Usage: devlink sb show [ DEV [ sb SB_INDEX ] ]
       devlink sb pool show [ DEV [ sb SB_INDEX ] pool POOL_INDEX ]
       devlink sb pool set DEV [ sb SB_INDEX ] pool POOL_INDEX
                           size POOL_SIZE thtype { static | dynamic }
       devlink sb port pool show [ DEV/PORT_INDEX [ sb SB_INDEX ]
                                   pool POOL_INDEX ]
       devlink sb port pool set DEV/PORT_INDEX [ sb SB_INDEX ]
                                pool POOL_INDEX th THRESHOLD
       devlink sb tc bind show [ DEV/PORT_INDEX [ sb SB_INDEX ] tc TC_INDEX ]
       devlink sb tc bind set DEV/PORT_INDEX [ sb SB_INDEX ] tc TC_INDEX
                              type { ingress | egress } pool POOL_INDEX
                              th THRESHOLD
       devlink sb occupancy show { DEV | DEV/PORT_INDEX } [ sb SB_INDEX ]
       devlink sb occupancy snapshot DEV [ sb SB_INDEX ]
       devlink sb occupancy clearmax DEV [ sb SB_INDEX ]

switch$ devlink sb show
pci/0000:03:00.0: sb 0 size 16777216 ing_pools 4 eg_pools 4 ing_tcs 8 eg_tcs 8

switch$ devlink sb pool show
pci/0000:03:00.0: sb 0 pool 0 type ingress size 12400032 thtype dynamic
pci/0000:03:00.0: sb 0 pool 1 type ingress size 0 thtype dynamic
pci/0000:03:00.0: sb 0 pool 2 type ingress size 0 thtype dynamic
pci/0000:03:00.0: sb 0 pool 3 type ingress size 200064 thtype dynamic
pci/0000:03:00.0: sb 0 pool 4 type egress size 13220064 thtype dynamic
pci/0000:03:00.0: sb 0 pool 5 type egress size 0 thtype dynamic
pci/0000:03:00.0: sb 0 pool 6 type egress size 0 thtype dynamic
pci/0000:03:00.0: sb 0 pool 7 type egress size 0 thtype dynamic

switch$ devlink sb port pool show sw0p7 pool 0
sw0p7: sb 0 pool 0 threshold 16

switch$ sudo devlink sb port pool set sw0p7 pool 0 th 15

switch$ devlink sb port pool show sw0p7 pool 0
sw0p7: sb 0 pool 0 threshold 15

switch$ devlink sb tc bind show sw0p7 tc 0 type ingress
sw0p7: sb 0 tc 0 type ingress pool 0 threshold 10

switch$ sudo devlink sb tc bind set sw0p7 tc 0 type ingress pool 0 th 9

switch$ devlink sb tc bind show sw0p7 tc 0 type ingress
sw0p7: sb 0 tc 0 type ingress pool 0 threshold 9

switch$ sudo devlink sb occupancy snapshot pci/0000:03:00.0

switch$ devlink sb occupancy show sw0p7
sw0p7:
  pool: 0:      82944/3217344 1:          0/0       2:          0/0       3:          0/0
        4:          0/384     5:          0/0       6:          0/0       7:          0/0
  itc:  0(0):   96768/3217344 1(0):       0/0       2(0):       0/0       3(0):       0/0
        4(0):       0/0       5(0):       0/0       6(0):       0/0       7(0):       0/0
  etc:  0(4):       0/384     1(4):       0/0       2(4):       0/0       3(4):       0/0
        4(4):       0/0       5(4):       0/0       6(4):       0/0       7(4):       0/0

switch$ sudo devlink sb occupancy clearmax pci/0000:03:00.0
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agomlxsw: spectrum_buffers: Implement occupancy monitoring
Jiri Pirko [Thu, 14 Apr 2016 16:19:30 +0000 (18:19 +0200)]
mlxsw: spectrum_buffers: Implement occupancy monitoring

Implement occupancy API introduced in devlink and mlxsw core. This is
done by accessing SBPM register for Port-Pool and SBSR for Port-TC
current and max occupancy values. Max clear is implemented using the
same registers.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agomlxsw: core: Introduce support for asynchronous EMAD register access
Jiri Pirko [Thu, 14 Apr 2016 16:19:29 +0000 (18:19 +0200)]
mlxsw: core: Introduce support for asynchronous EMAD register access

So far it was possible to have one EMAD register access at a time,
locked by a mutex. This patch extends this interface to allow multiple
EMAD register accesses to be in flight at once. That allows faster
processing on the firmware side, avoiding unused time in between EMADs.
Measured speedup is ~30% for the shared occupancy snapshot operation.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agomlxsw: core: Add mlxsw specific workqueue and use it for FDB notif. processing
Jiri Pirko [Thu, 14 Apr 2016 16:19:28 +0000 (18:19 +0200)]
mlxsw: core: Add mlxsw specific workqueue and use it for FDB notif. processing

A follow-up patch is going to need to use delayed work as well, and
frequently. The FDB notification processing is already using that, and
also quite frequently. It makes sense to create a separate workqueue just
for the mlxsw driver in this case and not pollute system_wq.
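
A sketch of the shape of this change (the workqueue name, flags and the
scheduling helper are assumptions shown only to illustrate the idea):

    static struct workqueue_struct *mlxsw_wq;

    static int __init mlxsw_core_module_init(void)
    {
        /* dedicated workqueue instead of queuing on system_wq */
        mlxsw_wq = alloc_workqueue("mlxsw", WQ_MEM_RECLAIM, 0);
        return mlxsw_wq ? 0 : -ENOMEM;
    }

    /* delayed work (e.g. FDB notification processing) is queued here */
    bool mlxsw_core_schedule_dw(struct delayed_work *dwork, unsigned long delay)
    {
        return queue_delayed_work(mlxsw_wq, dwork, delay);
    }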

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agomlxsw: reg: Extend SBPM register for occupancy control
Jiri Pirko [Thu, 14 Apr 2016 16:19:27 +0000 (18:19 +0200)]
mlxsw: reg: Extend SBPM register for occupancy control

Since it is not possible to get and clear Port-Pool occupancy data using
the SBSR register, there's a need to implement that using SBPM.
Extend the pack helper and add an unpack helper to get the occupancy values.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agomlxsw: reg: Add Shared Buffer Status register definition
Jiri Pirko [Thu, 14 Apr 2016 16:19:26 +0000 (18:19 +0200)]
mlxsw: reg: Add Shared Buffer Status register definition

This register allows querying the HW for current and maximal buffer usage.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agomlxsw: core: Add devlink shared buffer occupancy callbacks
Jiri Pirko [Thu, 14 Apr 2016 16:19:25 +0000 (18:19 +0200)]
mlxsw: core: Add devlink shared buffer occupancy callbacks

Add middle layer in mlxsw core code to forward shared buffer occupancy
calls into specific ASIC drivers.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agomlxsw: spectrum_buffers: Implement shared buffer configuration
Jiri Pirko [Thu, 14 Apr 2016 16:19:24 +0000 (18:19 +0200)]
mlxsw: spectrum_buffers: Implement shared buffer configuration

Implement previously introduced mlxsw core shared buffer API.
For Spectrum, that is done utilizing registers SBPR, SBCM and SBPM.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agomlxsw: core: Add mlxsw_core_port_driver_priv helper
Jiri Pirko [Thu, 14 Apr 2016 16:19:23 +0000 (18:19 +0200)]
mlxsw: core: Add mlxsw_core_port_driver_priv helper

Needed in following patch.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agomlxsw: spectrum_buffers: Get max_buff defaults into limits exposed to user
Jiri Pirko [Thu, 14 Apr 2016 16:19:22 +0000 (18:19 +0200)]
mlxsw: spectrum_buffers: Get max_buff defaults into limits exposed to user

Although the device supports max_buff magic values 0 and 0xff, these are
not exposed to the user via devlink.
Therefore, adjust the default values to be within the configurable range.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agomlxsw: spectrum_buffers: Change initialization of PG 9
Jiri Pirko [Thu, 14 Apr 2016 16:19:21 +0000 (18:19 +0200)]
mlxsw: spectrum_buffers: Change initialization of PG 9

As explained in commit ff6551ec0c27 ("mlxsw: spectrum: Correctly
configure headroom size") control packets are directed to priority group
buffer 9 (PG9) in the ports' headroom buffers.

Since we don't want to drop control packets in case they can't be
admitted to the switch's shared buffer we bind PG9 to a different
ingress pool from the one used by all other PGs.

Unlike other PGs, we currently don't expose the binding between PG9 to a
pool and leave it fixed.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Signed-off-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agomlxsw: spectrum_buffers: Remove eg pool 3 default init and CPU port TC binding to it
Jiri Pirko [Thu, 14 Apr 2016 16:19:20 +0000 (18:19 +0200)]
mlxsw: spectrum_buffers: Remove eg pool 3 default init and CPU port TC binding to it

Since there is no congestion control for CPU port traffic, we can change
the CPU port TC binding to pool 0 with min_buff and max_buff zeroed.
Remove the initialization of egress pool 3 since it is no longer used
by default.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agomlxsw: spectrum_buffers: Cache shared buffer configuration
Jiri Pirko [Thu, 14 Apr 2016 16:19:19 +0000 (18:19 +0200)]
mlxsw: spectrum_buffers: Cache shared buffer configuration

In order to achieve faster dumping of the current settings, and also to
provide the possibility of getting the pool mode without needing to query
the hardware, cache the configuration in the driver.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agomlxsw: spectrum_buffers: Rename "pool" to "pr" in initialization
Jiri Pirko [Thu, 14 Apr 2016 16:19:18 +0000 (18:19 +0200)]
mlxsw: spectrum_buffers: Rename "pool" to "pr" in initialization

Be consistent with the rest of the registers (pm, cm) and use "pr" here.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agomlxsw: spectrum_buffers: Push out indexes and direction out of SB structs
Jiri Pirko [Thu, 14 Apr 2016 16:19:17 +0000 (18:19 +0200)]
mlxsw: spectrum_buffers: Push out indexes and direction out of SB structs

The structs are kept in arrays, so use the array index as the pool/tc/prio
index. With that, there is a need to maintain separate arrays for ingress
and egress.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agomlxsw: spectrum_buffers: Push out shared buffer register writes
Jiri Pirko [Thu, 14 Apr 2016 16:19:16 +0000 (18:19 +0200)]
mlxsw: spectrum_buffers: Push out shared buffer register writes

Push the shared buffer register writes out into helper functions.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agomlxsw: core: Add devlink shared buffer callbacks
Jiri Pirko [Thu, 14 Apr 2016 16:19:15 +0000 (18:19 +0200)]
mlxsw: core: Add devlink shared buffer callbacks

Add middle layer in mlxsw core code to forward shared buffer calls
into specific ASIC drivers.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agodevlink: implement shared buffer occupancy monitoring interface
Jiri Pirko [Thu, 14 Apr 2016 16:19:14 +0000 (18:19 +0200)]
devlink: implement shared buffer occupancy monitoring interface

The user needs to monitor shared buffer occupancy. For that, they issue a
snapshot command in order to instruct the hardware to capture current and
maximal occupancy values, and a clear command in order to clear the
historical maximal values.

Also port-pool and tc-pool-bind command response messages are extended to
carry occupancy values.

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agodevlink: add shared buffer configuration
Jiri Pirko [Thu, 14 Apr 2016 16:19:13 +0000 (18:19 +0200)]
devlink: add shared buffer configuration

Define the userspace API and driver API for configuration of shared
buffers. Four basic objects are defined (a sketch of the corresponding
driver-side hooks follows below):
shared buffer - attributes are size, number of pools and TCs
pool - a chunk of the shared buffer definition; it has some size and either
       a static or dynamic threshold
port pool threshold - to set a per-port threshold for each pool
port tc threshold bind - to bind a port and TC to a specified pool
                         with a threshold
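
A sketch of the corresponding driver-side hooks (the devlink_ops member
names are the ones added by this series; the handler names are placeholders):

    static const struct devlink_ops mlxsw_devlink_ops = {
        .sb_pool_get         = mlxsw_devlink_sb_pool_get,
        .sb_pool_set         = mlxsw_devlink_sb_pool_set,
        .sb_port_pool_get    = mlxsw_devlink_sb_port_pool_get,
        .sb_port_pool_set    = mlxsw_devlink_sb_port_pool_set,
        .sb_tc_pool_bind_get = mlxsw_devlink_sb_tc_pool_bind_get,
        .sb_tc_pool_bind_set = mlxsw_devlink_sb_tc_pool_bind_set,
    };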

Signed-off-by: Jiri Pirko <jiri@mellanox.com>
Reviewed-by: Ido Schimmel <idosch@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agogre: eliminate holes in ip_tunnel
stephen hemminger [Thu, 14 Apr 2016 00:02:21 +0000 (17:02 -0700)]
gre: eliminate holes in ip_tunnel

The structure can be packed more densely by minor rearrangement
of the existing elements.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoravb: make ravb_ptp_interrupt() *void*
Sergei Shtylyov [Sun, 10 Apr 2016 20:55:15 +0000 (23:55 +0300)]
ravb: make ravb_ptp_interrupt() *void*

When we have the ISS.CGIS bit set, we already know that a gPTP interrupt has
happened, so an extra GIS register check at the end of ravb_ptp_interrupt()
seems superfluous.  We can model the gPTP interrupt handler like all the other
dedicated interrupt handlers in the driver and make it *void*.

Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'qed-ethtool-rss'
David S. Miller [Thu, 14 Apr 2016 04:43:21 +0000 (00:43 -0400)]
Merge branch 'qed-ethtool-rss'

Yuval Mintz says:

====================
qed*: [mostly] Ethtool RSS configuration

Most of the content [code-wise] in this series is for allowing various
RSS-related configuration via ethtool.

In addition, this also removes an unnecessary versioning scheme between
the drivers and bumps the driver version.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoqed* - bump driver versions to 8.7.1.20
Yuval Mintz [Sun, 10 Apr 2016 09:43:02 +0000 (12:43 +0300)]
qed* - bump driver versions to 8.7.1.20

Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoqede: add Rx flow hash/indirection support.
Sudarsana Reddy Kalluru [Sun, 10 Apr 2016 09:43:01 +0000 (12:43 +0300)]
qede: add Rx flow hash/indirection support.

Adds support for the following via ethtool:
  - UDP configuration of RSS based on 2-tuple/4-tuple.
  - RSS hash key.
  - RSS indirection table.

Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoqed: add Rx flow hash/indirection support.
Sudarsana Reddy Kalluru [Sun, 10 Apr 2016 09:43:00 +0000 (12:43 +0300)]
qed: add Rx flow hash/indirection support.

Adds the required API for passing RSS-related configuration from qede.

Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoqed*: remove version dependency
Rahul Verma [Sun, 10 Apr 2016 09:42:59 +0000 (12:42 +0300)]
qed*: remove version dependency

Inbox drivers don't need a versioning scheme in order to guarantee
compatibility, as both qed and qede are compiled from the same codebase.

Signed-off-by: Rahul Verma <rahul.verma@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'for-davem' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
David S. Miller [Thu, 14 Apr 2016 04:39:15 +0000 (00:39 -0400)]
Merge branch 'for-davem' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs

8 years agonet: remove netdevice gso_min_segs
Eric Dumazet [Sat, 9 Apr 2016 18:29:58 +0000 (11:29 -0700)]
net: remove netdevice gso_min_segs

After the introduction of ndo_features_check(), we believe that very
specific checks for rare features should not be done in the core
networking stack.

No driver uses gso_min_segs yet, so we revert this feature and save a
few instructions per tx packet in the fast path.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoqdisc: constify meta_type_ops structures
Julia Lawall [Sat, 9 Apr 2016 08:49:22 +0000 (10:49 +0200)]
qdisc: constify meta_type_ops structures

The meta_type_ops structures are never modified, so declare them as const.

Done with the help of Coccinelle.

Signed-off-by: Julia Lawall <Julia.Lawall@lip6.fr>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: bcmgenet: add BQL support
Petri Gynther [Sat, 9 Apr 2016 07:20:36 +0000 (00:20 -0700)]
net: bcmgenet: add BQL support

Add Byte Queue Limits (BQL) support to bcmgenet driver.

Signed-off-by: Petri Gynther <pgynther@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: bcmgenet: use __napi_schedule_irqoff()
Florian Fainelli [Sat, 9 Apr 2016 05:30:56 +0000 (22:30 -0700)]
net: bcmgenet: use __napi_schedule_irqoff()

bcmgenet_isr1() and bcmgenet_isr0() run in hard irq context,
so we do not need to block irqs again.

Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Acked-by: Petri Gynther <pgynther@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: bcmgenet: use napi_complete_done()
Eric Dumazet [Sat, 9 Apr 2016 05:06:40 +0000 (22:06 -0700)]
net: bcmgenet: use napi_complete_done()

By using napi_complete_done(), we allow fine tuning
of /sys/class/net/ethX/gro_flush_timeout for higher GRO aggregation
efficiency for a Gbit NIC.

Check commit 24d2e4a50737 ("tg3: use napi_complete_done()") for details.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Petri Gynther <pgynther@google.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Acked-by: Petri Gynther <pgynther@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'sctp-delayed-wakeups'
David S. Miller [Thu, 14 Apr 2016 03:04:44 +0000 (23:04 -0400)]
Merge branch 'sctp-delayed-wakeups'

Marcelo Ricardo Leitner says:

====================
sctp: delay calls to sk_data_ready() as much as possible

The 1st patch is a preparation for the 2nd. The idea is to not call
->sk_data_ready() for every data chunk processed while processing
packets but only once before releasing the socket.

v2: patchset re-checked, small changelog fixes
v3: on patch 2, make use of local vars to make it more readable
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agosctp: delay calls to sk_data_ready() as much as possible
Marcelo Ricardo Leitner [Fri, 8 Apr 2016 19:41:28 +0000 (16:41 -0300)]
sctp: delay calls to sk_data_ready() as much as possible

Currently, processing of multiple chunks in a single SCTP packet leads to
multiple calls to sk_data_ready, causing multiple wake-up signals which
are costly and don't make it wake up any faster.

With this patch it will note that the wake-up is pending and will do it
before leaving the state machine interpreter, the latest place possible to
do it reliably and cleanly.

Note that sk_data_ready events are not dependent on asocs, unlike waking
up writers.
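
The shape of the change, roughly (the flag name is assumed for illustration;
sp = sctp_sk(sk)):

    /* while processing chunks: only note that a wake-up is needed */
    sp->pending_data_ready = 1;

    /* ...and once, before leaving the state machine interpreter: */
    if (sp->pending_data_ready) {
        sp->pending_data_ready = 0;
        sk->sk_data_ready(sk);
    }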

v2: series re-checked
v3: use local vars to cleanup the code, suggested by Jakub Sitnicki
Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agosctp: compress bit-wide flags to a bitfield on sctp_sock
Marcelo Ricardo Leitner [Fri, 8 Apr 2016 19:41:27 +0000 (16:41 -0300)]
sctp: compress bit-wide flags to a bitfield on sctp_sock

It wastes space and gets worse as we add new flags, so convert bit-wide
flags to a bitfield.

Currently it already saves 4 bytes in sctp_sock, which are left as holes
in it for now. The whole struct needs packing, which should be done in
another patch.

Note that do_auto_asconf cannot be merged, as explained in the comment
before it.
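
Schematically, the conversion looks like this (an illustrative container;
the flag names are examples in the spirit of sctp_sock's flags):

    struct sctp_sock_flags_example {
        /* before: one 'int' per flag; after: packed into a bitfield */
        __u16   disable_fragments:1,
                v4mapped:1,
                frag_interleave:1,
                nodelay:1,
                recvrcvinfo:1,
                recvnxtinfo:1;
    };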

Signed-off-by: Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agodrivers/net/ethernet/jme.c: Deinline jme_reset_mac_processor, save 2816 bytes
Denys Vlasenko [Fri, 8 Apr 2016 18:39:47 +0000 (20:39 +0200)]
drivers/net/ethernet/jme.c: Deinline jme_reset_mac_processor, save 2816 bytes

This function compiles to 895 bytes of machine code.

Clearly, this isn't a time-critical function.
For one, it has a number of udelay(1) calls.

Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
CC: David S. Miller <davem@davemloft.net>
CC: linux-kernel@vger.kernel.org
CC: netdev@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'bridge-sysfs-rtnl-notifications'
David S. Miller [Thu, 14 Apr 2016 02:42:33 +0000 (22:42 -0400)]
Merge branch 'bridge-sysfs-rtnl-notifications'

Xin Long says:

====================
bridge: support sending rtnl info when we set attributes through sysfs/ioctl

This patchset is used to support sending rtnl info to the user in some
places, and to ensure that whenever those attributes change internally or
from sysfs, a netlink notification is sent out to listeners.

It also makes some adjustments in bridge sysfs so that we can implement
this easily.

I've done some tests on this patchset, like:
[br_sysfs]
  1. change all the attribute values of br or brif:
  $ echo $value > /sys/class/net/br0/bridge/{*}
  $ echo $value > /sys/class/net/br0/brif/eth1/{*}

  2. meanwhile, on another terminal to observe the msg:
  $ bridge monitor

[br_ioctl]
  1. in the bridge-utils package, make some changes in br_set to let the
  brctl command use ioctl to set attributes:
         if ((ret = set_sysfs(path, value)) < 0) { -->
         if (1) {

  $ brctl set*

  2. meanwhile, on another terminal to observe the msg:
  $ bridge monitor

This test covers all the attributes that brctl and sysfs support to set.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agobridge: a netlink notification should be sent when those attributes are changed by...
Xin Long [Fri, 8 Apr 2016 16:03:33 +0000 (00:03 +0800)]
bridge: a netlink notification should be sent when those attributes are changed by ioctl

Now when we change the attributes of bridge or br_port by netlink,
a relevant netlink notification will be sent, but if we change them
by ioctl or sysfs, no notification will be sent.

We should ensure that whenever those attributes change internally or from
sysfs/ioctl, that a netlink notification is sent out to listeners.

Also, NetworkManager will use this in the future to listen for out-of-band
bridge master attribute updates and incorporate them into the runtime
configuration.

This patch is used for ioctl.

Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agobridge: a netlink notification should be sent when those attributes are changed by...
Xin Long [Fri, 8 Apr 2016 16:03:32 +0000 (00:03 +0800)]
bridge: a netlink notification should be sent when those attributes are changed by br_sysfs_if

Now when we change the attributes of bridge or br_port by netlink,
a relevant netlink notification will be sent, but if we change them
by ioctl or sysfs, no notification will be sent.

We should ensure that whenever those attributes change internally or from
sysfs/ioctl, that a netlink notification is sent out to listeners.

Also, NetworkManager will use this in the future to listen for out-of-band
bridge master attribute updates and incorporate them into the runtime
configuration.

This patch is used for br_sysfs_if, and we also move br_ifinfo_notify out
of store_flag.

Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agobridge: a netlink notification should be sent when those attributes are changed by...
Xin Long [Fri, 8 Apr 2016 16:03:31 +0000 (00:03 +0800)]
bridge: a netlink notification should be sent when those attributes are changed by br_sysfs_br

Now when we change the attributes of bridge or br_port by netlink,
a relevant netlink notification will be sent, but if we change them
by ioctl or sysfs, no notification will be sent.

We should ensure that whenever those attributes change internally or from
sysfs/ioctl, that a netlink notification is sent out to listeners.

Also, NetworkManager will use this in the future to listen for out-of-band
bridge master attribute updates and incorporate them into the runtime
configuration.

This patch is used for br_sysfs_br, and we also need to remove some
rtnl_trylock calls in the old functions so that we can take the lock in a
common place.

For group_addr_store, we cannot make it use store_bridge_parm, because
it's not a string-to-long conversion; we will add the notification to it
individually.

Signed-off-by: Xin Long <lucien.xin@gmail.com>
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agobridge: simplify the stp_state_store by calling store_bridge_parm
Xin Long [Fri, 8 Apr 2016 16:03:30 +0000 (00:03 +0800)]
bridge: simplify the stp_state_store by calling store_bridge_parm

There is some repetitive code in stp_state_store; we can remove
it by calling store_bridge_parm.
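
The resulting handler is essentially a thin wrapper (a sketch of the
pattern; the set_stp_state callback name is assumed):

    static ssize_t stp_state_store(struct device *d,
                                   struct device_attribute *attr,
                                   const char *buf, size_t len)
    {
        return store_bridge_parm(d, buf, len, set_stp_state);
    }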

Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agobridge: simplify the forward_delay_store by calling store_bridge_parm
Xin Long [Fri, 8 Apr 2016 16:03:29 +0000 (00:03 +0800)]
bridge: simplify the forward_delay_store by calling store_bridge_parm

There is some repetitive code in forward_delay_store; we can remove
it by calling store_bridge_parm.

Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agobridge: simplify the flush_store by calling store_bridge_parm
Xin Long [Fri, 8 Apr 2016 16:03:28 +0000 (00:03 +0800)]
bridge: simplify the flush_store by calling store_bridge_parm

There is some repetitive code in flush_store; we can remove it by calling
store_bridge_parm. Also, it will send an rtnl notification once we add that
to store_bridge_parm in the following patches.

Signed-off-by: Xin Long <lucien.xin@gmail.com>
Reviewed-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: force inlining of netif_tx_start/stop_queue, sock_hold, __sock_put
Denys Vlasenko [Fri, 8 Apr 2016 15:51:54 +0000 (17:51 +0200)]
net: force inlining of netif_tx_start/stop_queue, sock_hold, __sock_put

Sometimes gcc mysteriously doesn't inline
very small functions that we expect to be inlined. See
    https://gcc.gnu.org/bugzilla/show_bug.cgi?id=66122
Arguably, gcc should do better, but the gcc developers aren't willing
to invest time in this and suggest using __always_inline instead.

With this .config:
http://busybox.net/~vda/kernel_config_OPTIMIZE_INLINING_and_Os,
the following functions get deinlined many times.

netif_tx_stop_queue: 207 copies, 590 calls:
55                      push   %rbp
48 89 e5                mov    %rsp,%rbp
f0 80 8f e0 01 00 00 01 lock orb $0x1,0x1e0(%rdi)
5d                      pop    %rbp
c3                      retq

netif_tx_start_queue: 47 copies, 111 calls
55                      push   %rbp
48 89 e5                mov    %rsp,%rbp
f0 80 a7 e0 01 00 00 fe lock andb $0xfe,0x1e0(%rdi)
5d                      pop    %rbp
c3                      retq

sock_hold: 39 copies, 124 calls
55                      push   %rbp
48 89 e5                mov    %rsp,%rbp
f0 ff 87 80 00 00 00    lock incl 0x80(%rdi)
5d                      pop    %rbp
c3                      retq

__sock_put: 6 copies, 13 calls
55                      push   %rbp
48 89 e5                mov    %rsp,%rbp
f0 ff 8f 80 00 00 00    lock decl 0x80(%rdi)
5d                      pop    %rbp
c3                      retq

This patch fixes this via s/inline/__always_inline/.
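
For one of the helpers, the change amounts to nothing more than swapping
the annotation; the body below is an approximation of the 4.6-era helper
rather than a verified copy:

#include <linux/netdevice.h>

/* Previously "static inline"; now the single lock-prefixed bit op is
 * guaranteed to be emitted at every call site instead of as 200+ copies. */
static __always_inline void netif_tx_stop_queue_sketch(struct netdev_queue *dev_queue)
{
	set_bit(__QUEUE_STATE_DRV_XOFF, &dev_queue->state);
}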

Code size decrease after the patch is ~2.5k:

    text      data      bss       dec     hex filename
56719876  56364551 36196352 149280779 8e5d80b vmlinux_before
56717440  56364551 36196352 149278343 8e5ce87 vmlinux

Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
CC: David S. Miller <davem@davemloft.net>
CC: linux-kernel@vger.kernel.org
CC: netdev@vger.kernel.org
CC: netfilter-devel@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoDoc: networking: Fix typo in dsa
Masanari Iida [Fri, 8 Apr 2016 15:00:25 +0000 (00:00 +0900)]
Doc: networking: Fix typo in dsa

This patch fixes typos in Documentation/networking/dsa.

Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Reviewed-by: Andrew Lunn <andrew@lunn.ch>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoipv6, token: allow for clearing the current device token
Daniel Borkmann [Fri, 8 Apr 2016 13:55:00 +0000 (15:55 +0200)]
ipv6, token: allow for clearing the current device token

The original tokenized iid support implemented via f53adae4eae5 ("net: ipv6:
add tokenized interface identifier support") didn't allow a device token to
be cleared, as it was intended that this addressing mode be the only one
active for globally scoped IPv6 addresses. Later we relaxed that restriction
via 617fe29d45bd ("net: ipv6: only invalidate previously tokenized addresses"),
so we should also allow tokens to be cleared; there is no good reason to
forbid it.

Fixes: 617fe29d45bd ("net: ipv6: only invalidate previously tokenized addresses")
Reported-by: Robin H. Johnson <robbat2@gentoo.org>
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Cc: Hannes Frederic Sowa <hannes@stressinduktion.org>
Acked-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agosock: tighten lockdep checks for sock_owned_by_user
Hannes Frederic Sowa [Fri, 8 Apr 2016 13:11:27 +0000 (15:11 +0200)]
sock: tighten lockdep checks for sock_owned_by_user

sock_owned_by_user should not be used without the socket lock held. It
seems to be common practice to check .owned before lock reclassification,
so provide a small helper to abstract this check away.
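
A hedged sketch of the kind of helper that last sentence refers to; the
name and the exact checks are illustrative assumptions, not necessarily
the API added by this patch:

#include <net/sock.h>

/* Illustrative only: "is it safe to reclassify this socket's lock?",
 * checked without going through sock_owned_by_user(), which now expects
 * the caller to hold the socket lock. */
static inline bool sk_lock_unowned_and_unlocked(const struct sock *csk)
{
	struct sock *sk = (struct sock *)csk;

	return !sk->sk_lock.owned && !spin_is_locked(&sk->sk_lock.slock);
}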

Cc: linux-cifs@vger.kernel.org
Cc: linux-bluetooth@vger.kernel.org
Cc: linux-nfs@vger.kernel.org
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: ethernet: stmmac: GMAC4.xx: Fix TX descriptor preparation
Alexandre TORGUE [Fri, 8 Apr 2016 09:18:03 +0000 (11:18 +0200)]
net: ethernet: stmmac: GMAC4.xx: Fix TX descriptor preparation

On GMAC4.xx, each descriptor contains two buffers of 16KB each.
Initially, both buffers were filled in dwmac4_rd_prepare_tx_desc, but
this is not actually needed: the stmmac driver supports frames of up to
9000 bytes (jumbo), so only one buffer is needed.

Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Alexandre TORGUE <alexandre.torgue@st.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'udp-hdrs-fixes'
David S. Miller [Thu, 14 Apr 2016 02:24:52 +0000 (22:24 -0400)]
Merge branch 'udp-hdrs-fixes'

Willem de Bruijn says:

====================
fix two more udp pull header issues

Follow-up patches to the fixes to RxRPC and SunRPC. A scan of the code
showed two other interfaces that expect UDP packets to have a udphdr
when queued: reading the packet length with ioctl SIOCINQ and receiving
the payload checksum with socket option IP_CHECKSUM.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoudp: do not expect udp headers in recv cmsg IP_CMSG_CHECKSUM
Willem de Bruijn [Thu, 7 Apr 2016 22:12:59 +0000 (18:12 -0400)]
udp: do not expect udp headers in recv cmsg IP_CMSG_CHECKSUM

On udp sockets, the recv cmsg IP_CMSG_CHECKSUM returns a checksum over
the packet payload. Since commit e6afc8ace6dd pulled the headers,
taking skb->data as the start of the transport header is incorrect. Use
the transport header pointer instead.

Also, when peeking at an offset from the start of the packet, only
return a checksum computed from the start of the peeked data. Note that
the cmsg does not subtract a tail checksum when reading truncated data.
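
A simplified sketch of the fix as recalled (treat the helper name,
parameters and call details as approximations rather than the upstream
diff): the payload start is derived from the transport header pointer,
and the checksum of the peeked-over bytes is subtracted when an offset
is in play:

#include <linux/skbuff.h>
#include <net/ip.h>

static void ip_cmsg_recv_checksum_sketch(struct msghdr *msg,
					 struct sk_buff *skb,
					 int tlen, int offset)
{
	__wsum csum = skb->csum;

	if (skb->ip_summed != CHECKSUM_COMPLETE)
		return;

	if (offset != 0) {
		/* Headers were pulled before queueing, so the payload start
		 * relative to skb->data is transport offset + header length. */
		int start = skb_transport_offset(skb) + tlen;

		csum = csum_sub(csum, skb_checksum(skb, start, offset, 0));
	}

	put_cmsg(msg, SOL_IP, IP_CHECKSUM, sizeof(__wsum), &csum);
}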

Fixes: e6afc8ace6dd ("udp: remove headers from UDP packets before queueing")

Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoudp: do not expect udp headers on ioctl SIOCINQ
Willem de Bruijn [Thu, 7 Apr 2016 22:12:58 +0000 (18:12 -0400)]
udp: do not expect udp headers on ioctl SIOCINQ

On udp sockets, ioctl SIOCINQ returns the payload size of the first
packet. Since commit e6afc8ace6dd pulled the headers, subtracting the
header length now gives an incorrect result. Remove that subtraction.
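
Sketched for illustration (simplified, and assuming first_packet_length()
is the existing helper in udp.c that reports the first queued packet's
length), SIOCINQ now returns that length unmodified:

static int udp_siocinq_sketch(struct sock *sk, int __user *argp)
{
	/* No more "- sizeof(struct udphdr)": the header is already gone. */
	unsigned int amount = first_packet_length(sk);

	return put_user(amount, argp);
}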

Fixes: e6afc8ace6dd ("udp: remove headers from UDP packets before queueing")

Signed-off-by: Willem de Bruijn <willemb@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'dsa-refactoring-set-1'
David S. Miller [Wed, 13 Apr 2016 22:15:24 +0000 (18:15 -0400)]
Merge branch 'dsa-refactoring-set-1'

Andrew Lunn says:

====================
DSA refactoring: set 1

There has been a long-running effort to refactor DSA probing to make
the switches true Linux devices. Here is a small collection of patches
moving in this direction. Most have been seen before.

We take a small step forward by passing the dsa device pointer to the
driver, thus allowing it to perform resource allocations using the
normal mechanisms. This device structure will later be replaced by the
device's own device structure.

Future patches will add a true driver probe function, so we rename the
current probe function, cleaning up the namespace.

phys_port_mask continually confuses me into thinking it is about PHYs,
but it is actually about the ports enabled to the outside world. So
rename it to enabled_port_mask.

Lots more patches are yet to follow; this is just some groundwork.

v2:
  enabled_port_mask instead of user_port_masks
  Added Tested-by and Reviewed-by tags.
====================

Tested-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agodsa: mv88e6xxx: Use bus in mv88e6xxx_lookup_name()
Andrew Lunn [Wed, 13 Apr 2016 00:40:45 +0000 (02:40 +0200)]
dsa: mv88e6xxx: Use bus in mv88e6xxx_lookup_name()

mv88e6xxx_lookup_name() returns the model name of a switch at a given
address on an MII bus. Using mii_bus to identify the bus rather than
the host device is more logical, so change the parameter.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Reviewed-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agodsa: Rename phys_port_mask to enabled_port_mask
Andrew Lunn [Wed, 13 Apr 2016 00:40:44 +0000 (02:40 +0200)]
dsa: Rename phys_port_mask to enabled_port_mask

The "phys" in phys_port_mask suggests this mask is about PHYs. In fact,
it means physical ports. Rename it to enabled_port_mask, indicating the
enabled external ports of the switch, which is hopefully less
confusing.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: dsa: Rename DSA probe function.
Andrew Lunn [Wed, 13 Apr 2016 00:40:43 +0000 (02:40 +0200)]
net: dsa: Rename DSA probe function.

Rename the function that the DSA core calls to probe for the switch.
This makes the normal _probe() name available for a standard Linux
device driver probe function.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Tested-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: dsa: Keep the mii bus and address in the private structure
Andrew Lunn [Wed, 13 Apr 2016 00:40:42 +0000 (02:40 +0200)]
net: dsa: Keep the mii bus and address in the private structure

Rather than looking up the mii bus and address every time, do it once
at probe and keep them in the private structure. Centralise this probe
code in mv88e6xxx.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: dsa: Remove allocation of driver private memory
Andrew Lunn [Wed, 13 Apr 2016 00:40:41 +0000 (02:40 +0200)]
net: dsa: Remove allocation of driver private memory

The drivers now allocate their own memory for private usage. Remove
the allocation from the core code.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: dsa: Have the switch driver allocate its own private memory
Andrew Lunn [Wed, 13 Apr 2016 00:40:40 +0000 (02:40 +0200)]
net: dsa: Have the switch driver allocate its own private memory

Now that the switch devices have a dev pointer, make use of it to
allocate the driver's private data structures using devm_kzalloc().
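
A hedged sketch of what this enables in a driver's probe path; the
private-state struct, function name and signature below are placeholders
rather than the actual mv88e6xxx code:

#include <linux/device.h>
#include <linux/phy.h>		/* struct mii_bus */
#include <linux/slab.h>

struct sketch_switch_priv {		/* placeholder private state */
	struct mii_bus *bus;		/* MDIO bus the switch sits on */
	int sw_addr;			/* switch address on that bus */
};

static struct sketch_switch_priv *sketch_switch_alloc(struct device *dsa_dev,
						      struct mii_bus *bus,
						      int sw_addr)
{
	struct sketch_switch_priv *ps;

	/* Tied to the DSA device: freed automatically when it goes away. */
	ps = devm_kzalloc(dsa_dev, sizeof(*ps), GFP_KERNEL);
	if (!ps)
		return NULL;

	ps->bus = bus;			/* cache bus + address once, at probe */
	ps->sw_addr = sw_addr;

	return ps;
}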

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: dsa: Pass the dsa device to the switch drivers
Andrew Lunn [Wed, 13 Apr 2016 00:40:39 +0000 (02:40 +0200)]
net: dsa: Pass the dsa device to the switch drivers

Passing a device structure to the switch drivers allows them to use
devm_* methods for resource management.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Acked-by: Florian Fainelli <f.fainelli@gmail.com>
Tested-by: Vivien Didelot <vivien.didelot@savoirfairelinux.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge tag 'mac80211-next-for-davem-2016-04-13' of git://git.kernel.org/pub/scm/linux...
David S. Miller [Wed, 13 Apr 2016 21:58:51 +0000 (17:58 -0400)]
Merge tag 'mac80211-next-for-davem-2016-04-13' of git://git.kernel.org/pub/scm/linux/kernel/git/jberg/mac80211-next

Johannes Berg says:

====================
To synchronize with Kalle, here is just one big change that affects
all drivers: removing the duplicated enum ieee80211_band and replacing
it with enum nl80211_band. On top of that, just a small documentation
update.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agotipc: remove remnants of old broadcast code
Jon Paul Maloy [Wed, 13 Apr 2016 15:45:47 +0000 (11:45 -0400)]
tipc: remove remnants of old broadcast code

We remove a couple of leftover fields in struct tipc_bearer. Those
were used by the old broadcast implementation and are not needed any
longer. There are no functional changes in this commit.

Acked-by: Ying Xue <ying.xue@windriver.com>
Signed-off-by: Jon Maloy <jon.maloy@ericsson.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agoMerge branch 'mediatek-stress-test-fixes'
David S. Miller [Wed, 13 Apr 2016 02:41:33 +0000 (22:41 -0400)]
Merge branch 'mediatek-stress-test-fixes'

John Crispin says:

====================
net: mediatek: make the driver pass stress tests

While testing the driver we managed to get the TX path to stall and fail
to recover. When dual MAC support was added to the driver, the whole queue
stop/wake code was not properly adapted. There was also a regression in the
locking of the xmit function. The fact that watchdog_timeo was not set
and that the tx_timeout code failed to properly reset the DMA, IRQs and
queues just made the mess complete.

This series makes the driver pass stress testing. With this series
applied, the testbed has been running for several days and still has
not locked up. We have a second setup with a small hack patch applied
that randomly stops IRQs and/or one of the queues, and it successfully
recovers from these simulated TX stalls.
====================

Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: mediatek: do not set the QID field in the TX DMA descriptors
John Crispin [Thu, 7 Apr 2016 22:54:11 +0000 (00:54 +0200)]
net: mediatek: do not set the QID field in the TX DMA descriptors

The QID field was set to the MAC id. This made the DMA linked-list
engine queue each MAC's traffic on a different internal queue. However,
during long-term testing we found that this causes traffic stalls, as
the multi-queue setup requires a more complete initialisation which is
not yet part of the upstream driver.

This patch removes the code setting the QID field, resulting in all
traffic ending up in queue 0, which works without any special setup.

Signed-off-by: John Crispin <blogic@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: mediatek: move the pending_work struct to the device generic struct
John Crispin [Thu, 7 Apr 2016 22:54:10 +0000 (00:54 +0200)]
net: mediatek: move the pending_work struct to the device generic struct

The worker always touches both netdevs. It belongs to the ethernet
core, not to a specific MAC. We only need one worker, and it belongs in
the ethernet core's struct.

Signed-off-by: John Crispin <blogic@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: mediatek: fix mtk_pending_work
John Crispin [Thu, 7 Apr 2016 22:54:09 +0000 (00:54 +0200)]
net: mediatek: fix mtk_pending_work

The driver supports 2 MACs. Both run on the same DMA ring. If we hit a
TX timeout we need to stop both netdevs before restarting them again.
If we don't do this, mtk_stop() won't shut down DMA, and the subsequent
call to mtk_open() won't restart DMA and enable IRQs.

Signed-off-by: John Crispin <blogic@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: mediatek: fix TX locking
John Crispin [Thu, 7 Apr 2016 22:54:08 +0000 (00:54 +0200)]
net: mediatek: fix TX locking

Inside the TX path, the lock is currently taken inside the tx_map
function. This is, however, too late. The patch moves the lock to the
start of the xmit function, so that it is held before the free count of
the DMA ring is checked. If we do not do this, the code becomes racy,
leading to TX stalls and dropped packets, because there are 2 netdevs
running on the same physical DMA ring.
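
A hedged sketch of the resulting ordering; the ring structure, lock and
accounting below are placeholders standing in for the driver's shared-ring
state, not the actual mtk_eth_soc code:

#include <linux/netdevice.h>
#include <linux/spinlock.h>

struct sketch_tx_ring {			/* placeholder for the shared ring state */
	spinlock_t lock;		/* serialises both netdevs on one ring */
	int free_desc;			/* free TX descriptors left */
};

static netdev_tx_t sketch_start_xmit(struct sk_buff *skb,
				     struct net_device *dev,
				     struct sketch_tx_ring *ring,
				     int needed)
{
	unsigned long flags;

	/* Take the lock before the free-count check, not after it. */
	spin_lock_irqsave(&ring->lock, flags);

	if (ring->free_desc < needed) {
		netif_stop_queue(dev);
		spin_unlock_irqrestore(&ring->lock, flags);
		return NETDEV_TX_BUSY;
	}

	/* ... map and queue skb while still holding the lock ... */
	ring->free_desc -= needed;

	spin_unlock_irqrestore(&ring->lock, flags);
	return NETDEV_TX_OK;
}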

Signed-off-by: John Crispin <blogic@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: mediatek: fix stop and wakeup of queue
John Crispin [Thu, 7 Apr 2016 22:54:07 +0000 (00:54 +0200)]
net: mediatek: fix stop and wakeup of queue

The driver supports 2 MACs. Both run on the same DMA ring. If we go
above/below the TX ring's threshold value, we always need to wake/stop
the queues of both devices. Not doing so can cause TX stalls and packet
drops on one of the devices.
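
A hedged sketch of that rule as a helper; the struct, array size and
names are placeholders for the driver's core state rather than the
actual code:

#include <linux/netdevice.h>

#define SKETCH_MAC_COUNT 2		/* two MACs share the one DMA ring */

struct sketch_eth {			/* placeholder for the driver core struct */
	struct net_device *netdev[SKETCH_MAC_COUNT];
};

/* Wake the TX queue of every registered MAC, not just the transmitting one. */
static void sketch_wake_all_queues(struct sketch_eth *eth)
{
	int i;

	for (i = 0; i < SKETCH_MAC_COUNT; i++) {
		if (!eth->netdev[i])
			continue;
		netif_wake_queue(eth->netdev[i]);
	}
}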

Signed-off-by: John Crispin <blogic@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
8 years agonet: mediatek: remove superfluous reset call
John Crispin [Thu, 7 Apr 2016 22:54:06 +0000 (00:54 +0200)]
net: mediatek: remove superfluous reset call

HW reset is triggered in the mtk_hw_init() function. There is no need to
also reset the core during probe.

Signed-off-by: John Crispin <blogic@openwrt.org>
Signed-off-by: David S. Miller <davem@davemloft.net>