author | Yuchung Cheng <ycheng@google.com> | 2019-04-29 15:46:18 -0700
---|---|---
committer | David S. Miller <davem@davemloft.net> | 2019-05-01 11:47:54 -0400
commit | 794200d66273cbfa32cab2dbcd59a5db6b57a5d1 |
tree | 3d9c21b1511fb25b416aebb848df76b400405050 |
parent | 8c3cfe19feac41065bb88bc14b36c318b26847a9 |
tcp: undo cwnd on Fast Open spurious SYNACK retransmit
This patch makes passive Fast Open revert the cwnd to the default
initial cwnd (10 packets) if the SYNACK timeout is spurious.
Passive Fast Open uses a full socket during the handshake, so it can
use the existing undo logic to detect a spurious retransmission
by recording the first SYNACK timeout in the key state variable
retrans_stamp. Upon receiving the ACK of the SYNACK, if the socket
has sent some data before the timeout, the spurious timeout
is detected by tcp_try_undo_recovery(), called from tcp_process_loss()
within tcp_ack().
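
The undo test keys off the timestamp recorded at the first retransmission: if the ACK's timestamp echo predates that retransmission, the original SYNACK must have reached the peer, so the cwnd reduction can be reverted. The following is a minimal user-space sketch of that idea only; the struct, field, and function names are illustrative rather than the kernel's, and TCP_INIT_CWND merely stands in for the default initial window of 10 packets.

```c
#include <stdbool.h>
#include <stdint.h>

#define TCP_INIT_CWND 10	/* default initial window, in packets */

/* Illustrative connection state; names do not mirror struct tcp_sock. */
struct conn_state {
	uint32_t retrans_stamp;	/* timestamp of first retransmission, 0 if none */
	uint32_t snd_cwnd;	/* congestion window, in packets */
};

/* True if the ACK's timestamp echo (TSecr) predates the retransmission,
 * meaning the peer acknowledged the original packet, not the retransmit. */
static bool ack_predates_retransmit(const struct conn_state *c, uint32_t tsecr)
{
	if (!c->retrans_stamp)		/* no retransmission recorded */
		return false;
	return (int32_t)(tsecr - c->retrans_stamp) < 0;
}

/* Revert the congestion window if the retransmission was spurious. */
static void maybe_undo_cwnd(struct conn_state *c, uint32_t tsecr)
{
	if (ack_predates_retransmit(c, tsecr)) {
		c->snd_cwnd = TCP_INIT_CWND;	/* restore initial cwnd */
		c->retrans_stamp = 0;		/* undo done, clear the marker */
	}
}
```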
But if the socket has not sent any data yet, tcp_ack() does not
execute the undo code since no data is acknowledged. The fix is to
check this case explicitly after tcp_ack() during the ACK processing
in the SYN_RECV state. In addition, the same check is performed in the
FIN_WAIT_1 state, in case the server closes the socket before the
handshake completes.
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
-rw-r--r-- | net/ipv4/tcp_input.c | 3 |
1 file changed, 3 insertions, 0 deletions
diff --git a/net/ipv4/tcp_input.c b/net/ipv4/tcp_input.c
index 53b4c5a3113b..3a40584cb473 100644
--- a/net/ipv4/tcp_input.c
+++ b/net/ipv4/tcp_input.c
@@ -6089,6 +6089,7 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
 		 * so release it.
 		 */
 		if (req) {
+			tcp_try_undo_loss(sk, false);
 			inet_csk(sk)->icsk_retransmits = 0;
 			reqsk_fastopen_remove(sk, req, false);
 			/* Re-arm the timer because data may have been sent out.
@@ -6143,6 +6144,8 @@ int tcp_rcv_state_process(struct sock *sk, struct sk_buff *skb)
 		 * our SYNACK so stop the SYNACK timer.
 		 */
 		if (req) {
+			tcp_try_undo_loss(sk, false);
+			inet_csk(sk)->icsk_retransmits = 0;
 			/* We no longer need the request sock. */
 			reqsk_fastopen_remove(sk, req, false);
 			tcp_rearm_rto(sk);
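
A minimal sketch of how this path could be exercised and observed from user space, assuming a Linux host with server-side Fast Open enabled (net.ipv4.tcp_fastopen with bit 0x2 set): the listener turns on TCP_FASTOPEN, and after accept() the congestion window can be read back through getsockopt(TCP_INFO) as tcpi_snd_cwnd, which should show the restored initial value of 10 when a SYNACK retransmit was spurious and undone. The port number, queue length, and omitted error handling are illustrative.

```c
#include <stdio.h>
#include <unistd.h>
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

int main(void)
{
	int lfd = socket(AF_INET, SOCK_STREAM, 0);
	struct sockaddr_in addr = {
		.sin_family = AF_INET,
		.sin_port = htons(8080),
		.sin_addr.s_addr = htonl(INADDR_ANY),
	};
	int qlen = 16;	/* queue length for pending Fast Open requests */
	struct tcp_info info;
	socklen_t ilen = sizeof(info);

	bind(lfd, (struct sockaddr *)&addr, sizeof(addr));
	/* Enable passive (server-side) TCP Fast Open on the listener. */
	setsockopt(lfd, IPPROTO_TCP, TCP_FASTOPEN, &qlen, sizeof(qlen));
	listen(lfd, 128);

	int cfd = accept(lfd, NULL, NULL);
	/* Read the congestion window after the handshake completes. */
	getsockopt(cfd, IPPROTO_TCP, TCP_INFO, &info, &ilen);
	printf("snd_cwnd after handshake: %u\n", info.tcpi_snd_cwnd);

	close(cfd);
	close(lfd);
	return 0;
}
```

A client that sends its request with sendto(..., MSG_FASTOPEN, ...) and holds a valid Fast Open cookie puts the server through the passive Fast Open handshake this patch touches; dropping the first SYNACK (e.g. with a netem-style impairment) triggers the retransmit whose undo is being tested.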