author     Eric Dumazet <edumazet@google.com>     2015-01-28 05:47:11 -0800
committer  David S. Miller <davem@davemloft.net>  2015-01-28 23:24:47 -0800
commit     811230cd853d62f09ed0addd0ce9a1b9b0e13fb5
tree       788fc77d259b21c55a2b04594824d0c688992385
parent     150ae0e94634714b23919f0c333fee28a5b199d5
tcp: ipv4: initialize unicast_sock sk_pacing_rate
When I added the sk_pacing_rate field, I forgot to initialize its value
in the per-cpu unicast_sock used in ip_send_unicast_reply().

This means that for sch_fq users, RST packets, or ACK packets sent
on behalf of TIME_WAIT sockets, might be sent too slowly or even dropped
once we reach the per-flow limit.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Fixes: 95bd09eb2750 ("tcp: TSO packets automatic sizing")
Signed-off-by: David S. Miller <davem@davemloft.net>
-rw-r--r--  net/ipv4/ip_output.c | 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/net/ipv4/ip_output.c b/net/ipv4/ip_output.c
index b50861b22b6b..38a20a9cca1a 100644
--- a/net/ipv4/ip_output.c
+++ b/net/ipv4/ip_output.c
@@ -1517,6 +1517,7 @@ static DEFINE_PER_CPU(struct inet_sock, unicast_sock) = {
 		.sk_wmem_alloc	= ATOMIC_INIT(1),
 		.sk_allocation	= GFP_ATOMIC,
 		.sk_flags	= (1UL << SOCK_USE_WRITE_QUEUE),
+		.sk_pacing_rate = ~0U,
 	},
 	.pmtudisc	= IP_PMTUDISC_WANT,
 	.uc_ttl		= -1,
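
For context, the sketch below is a simplified userspace model of how sch_fq turns sk_pacing_rate into an inter-packet delay; it is not kernel code, and the helper name pacing_delay_ns, the fixed 1500-byte packet length, and the omission of sch_fq's quantum handling are assumptions made for illustration only. It shows why a zero (uninitialized) rate would throttle kernel-generated RST/ACK replies, while the ~0U value set by this patch is treated as "no pacing limit".

/*
 * Minimal userspace sketch (assumed re-implementation, not kernel code) of
 * the pacing-delay calculation sch_fq performs per packet.
 */
#include <stdint.h>
#include <stdio.h>

#define NSEC_PER_SEC 1000000000ULL

/* Pacing delay in nanoseconds for one packet of 'plen' bytes. */
static uint64_t pacing_delay_ns(uint32_t sk_pacing_rate, uint32_t plen)
{
	uint64_t len;

	if (sk_pacing_rate == ~0U)	/* ~0U means "no pacing limit" */
		return 0;

	len = (uint64_t)plen * NSEC_PER_SEC;
	if (sk_pacing_rate)
		len /= sk_pacing_rate;	/* bytes / (bytes per second) */

	if (len > NSEC_PER_SEC)		/* sch_fq caps the per-packet delay */
		len = NSEC_PER_SEC;

	return len;
}

int main(void)
{
	/* A zero (uninitialized) rate saturates the delay at the cap. */
	printf("rate=0   -> delay %llu ns\n",
	       (unsigned long long)pacing_delay_ns(0, 1500));
	/* The fix: ~0U disables pacing for these kernel-generated replies. */
	printf("rate=~0U -> delay %llu ns\n",
	       (unsigned long long)pacing_delay_ns(~0U, 1500));
	return 0;
}

Built with any standard C compiler, the first line prints a one-second delay (the cap in this model) and the second prints zero, which matches the behaviour the commit message describes: without the initializer, replies from the per-cpu unicast_sock are paced far too slowly and can be dropped once the per-flow backlog limit is reached.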