net: optimize skb_postpull_rcsum()
author Eric Dumazet <edumazet@google.com>
Wed, 24 Nov 2021 20:24:46 +0000 (12:24 -0800)
committer Jakub Kicinski <kuba@kernel.org>
Fri, 26 Nov 2021 05:03:31 +0000 (21:03 -0800)
Remove one pair of add/adc instructions and their dependency
on the carry flag.

We can leverage the third argument to csum_partial():

  X = csum_block_sub(X, csum_partial(start, len, 0), 0);

  -->

  X = csum_block_add(X, ~csum_partial(start, len, 0), 0);

  -->

  X = ~csum_partial(start, len, ~X);

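The rewrite relies on two ones'-complement identities: csum_sub(a, b) is
csum_add(a, ~b), and the end-around-carry sum satisfies ~a + ~b == ~(a + b),
so the complemented seed can be folded into csum_partial() itself. Below is a
minimal userspace sketch (not part of this patch; add32() and partial32() are
toy stand-ins for csum_add() and csum_partial()) showing that the old and new
forms produce the same checksum:

  /*
   * Toy demo, NOT kernel code: compare
   *   old: X = csum_block_sub(X, csum_partial(data, len, 0), 0)
   *   new: X = ~csum_partial(data, len, ~X)
   */
  #include <stdint.h>
  #include <stdio.h>

  static uint32_t add32(uint32_t a, uint32_t b)      /* end-around-carry add */
  {
      uint32_t res = a + b;

      return res + (res < b);                        /* fold carry back in */
  }

  static uint32_t partial32(const uint8_t *p, int len, uint32_t sum)
  {
      for (int i = 0; i + 1 < len; i += 2)
          sum = add32(sum, (uint32_t)p[i] | ((uint32_t)p[i + 1] << 8));
      if (len & 1)
          sum = add32(sum, p[len - 1]);
      return sum;
  }

  int main(void)
  {
      uint8_t data[] = { 0x45, 0x00, 0x12, 0x34, 0xde, 0xad, 0xbe, 0xef };
      uint32_t X = 0x1234abcd;                       /* pretend skb->csum */

      uint32_t old_sum = add32(X, ~partial32(data, sizeof(data), 0));
      uint32_t new_sum = ~partial32(data, sizeof(data), ~X);

      printf("old=%08x new=%08x\n", old_sum, new_sum); /* same value */
      return 0;
  }
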
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Jakub Kicinski <kuba@kernel.org>
include/linux/skbuff.h

index eba256a..eae4bd3 100644
@@ -3485,7 +3485,11 @@ __skb_postpull_rcsum(struct sk_buff *skb, const void *start, unsigned int len,
 static inline void skb_postpull_rcsum(struct sk_buff *skb,
                                      const void *start, unsigned int len)
 {
-       __skb_postpull_rcsum(skb, start, len, 0);
+       if (skb->ip_summed == CHECKSUM_COMPLETE)
+               skb->csum = ~csum_partial(start, len, ~skb->csum);
+       else if (skb->ip_summed == CHECKSUM_PARTIAL &&
+                skb_checksum_start_offset(skb) < 0)
+               skb->ip_summed = CHECKSUM_NONE;
 }
 
 static __always_inline void
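
For reference, the helper is normally called right after bytes are pulled from
the head of the skb. A hedged sketch of the usual caller pattern follows;
example_pull_header() is a hypothetical name, mirroring what existing helpers
such as skb_pull_rcsum() do:

  #include <linux/skbuff.h>

  /* Pull hdrlen bytes and keep a CHECKSUM_COMPLETE skb->csum consistent. */
  static void *example_pull_header(struct sk_buff *skb, unsigned int hdrlen)
  {
      void *start = skb->data;      /* bytes about to leave the checksummed area */

      __skb_pull(skb, hdrlen);      /* caller must ensure skb->len >= hdrlen */
      skb_postpull_rcsum(skb, start, hdrlen); /* remove them from skb->csum */
      return skb->data;
  }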