author	Eric Dumazet <eric.dumazet@gmail.com>	2010-04-27 15:13:20 -0700
committer	David S. Miller <davem@davemloft.net>	2010-04-27 15:13:20 -0700
commit	c377411f2494a931ff7facdbb3a6839b1266bcf6 (patch)
tree	6846cdcec913f50839e3916856f78f7e059ff5fb /net/ipv6
parent	6e7676c1a76aed6e957611d8d7a9e5592e23aeba (diff)
net: sk_add_backlog() take rmem_alloc into account
The current socket backlog limit is not enough to really stop DDoS attacks,
because the user thread spends a long time processing a full backlog each
round, and the user might spin madly on the socket lock.
We should add the backlog size and the receive_queue size (aka rmem_alloc)
together to pace writers, and let the user thread run without being slowed
down too much.
Introduce a sk_rcvqueues_full() helper, to avoid taking the socket lock in
stress situations.
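
The helper itself is defined outside the net/ipv6 part of the change (the
diffstat below is limited to 'net/ipv6'), so it is not visible in this diff.
A minimal sketch of the check described above, adding the backlog length and
rmem_alloc and refusing the packet once that total plus the new skb would
exceed sk_rcvbuf, might look roughly like this (field names assumed from the
struct sock / struct sk_buff of that era):

	static inline bool sk_rcvqueues_full(const struct sock *sk,
					     const struct sk_buff *skb)
	{
		/* bytes sitting in the backlog plus bytes already charged
		 * to the receive queue (rmem_alloc) ...
		 */
		unsigned int qsize = sk->sk_backlog.len +
				     atomic_read(&sk->sk_rmem_alloc);

		/* ... plus this packet must still fit under sk_rcvbuf */
		return qsize + skb->truesize > sk->sk_rcvbuf;
	}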
Under huge stress from a multiqueue/RPS-enabled NIC, a single-flow UDP
receiver can now process ~200,000 pps (instead of ~100 pps before the
patch) on an 8-core machine.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/ipv6')
-rw-r--r--	net/ipv6/udp.c	8
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/net/ipv6/udp.c b/net/ipv6/udp.c
index 2850e35cee3d..3ead20ad9d07 100644
--- a/net/ipv6/udp.c
+++ b/net/ipv6/udp.c
@@ -584,6 +584,10 @@ static void flush_stack(struct sock **stack, unsigned int count,
 
 		sk = stack[i];
 		if (skb1) {
+			if (sk_rcvqueues_full(sk, skb)) {
+				kfree_skb(skb1);
+				goto drop;
+			}
 			bh_lock_sock(sk);
 			if (!sock_owned_by_user(sk))
 				udpv6_queue_rcv_skb(sk, skb1);
@@ -759,6 +763,10 @@ int __udp6_lib_rcv(struct sk_buff *skb, struct udp_table *udptable,
 
 		/* deliver */
 
+		if (sk_rcvqueues_full(sk, skb)) {
+			sock_put(sk);
+			goto discard;
+		}
 		bh_lock_sock(sk);
 		if (!sock_owned_by_user(sk))
 			udpv6_queue_rcv_skb(sk, skb);
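
The commit title also says sk_add_backlog() now takes rmem_alloc into
account; that change, too, falls outside the net/ipv6 diffstat shown here.
Conceptually, the backlog-append path would reuse the same helper instead of
checking sk_backlog.len alone, roughly:

	/* sketch, assuming the caller holds the per-socket spinlock */
	static inline __must_check int sk_add_backlog(struct sock *sk,
						      struct sk_buff *skb)
	{
		if (sk_rcvqueues_full(sk, skb))
			return -ENOBUFS;

		__sk_add_backlog(sk, skb);
		sk->sk_backlog.len += skb->truesize;
		return 0;
	}

Running the sk_rcvqueues_full() test before bh_lock_sock() in the UDP
receive paths above means an overloaded socket is detected and the packet
dropped without ever contending on the socket lock, which is what keeps the
receiver running under stress.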