author     Eric Dumazet <edumazet@google.com>      2016-06-10 16:41:39 -0700
committer  David S. Miller <davem@davemloft.net>   2016-06-10 23:58:21 -0700
commit     45f50bed1d808794e514e9eed0e579a8756ce2ba (patch)
tree       05d9f30419d6a5ef9b503487661959ef29c2fec5 /net/sched/sch_cbq.c
parent     42117927cab5a13192ecc227bea19da5059ffc6c (diff)
net_sched: remove generic throttled management
__QDISC_STATE_THROTTLED bit manipulation is rather expensive
for HTB and a few others.

I already removed it for sch_fq in commit f2600cf02b5b
("net: sched: avoid costly atomic operation in fq_dequeue()")
and so far nobody complained.

When one or more packets are stuck in one or more throttled
HTB classes, an htb dequeue() performs two atomic operations
to clear/set the __QDISC_STATE_THROTTLED bit while the root qdisc
lock is held.
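
For context, the helpers behind those two atomic operations looked
roughly like the sketch below, modeled on include/net/sch_generic.h
before this change (exact code may differ between kernel versions):

/* Sketch of the pre-patch helpers; set_bit() and clear_bit() are the
 * atomic operations removed from the dequeue hot path.
 */
static inline void qdisc_throttled(struct Qdisc *qdisc)
{
	set_bit(__QDISC_STATE_THROTTLED, &qdisc->state);
}

static inline void qdisc_unthrottled(struct Qdisc *qdisc)
{
	clear_bit(__QDISC_STATE_THROTTLED, &qdisc->state);
}

static inline bool qdisc_is_throttled(const struct Qdisc *qdisc)
{
	return test_bit(__QDISC_STATE_THROTTLED, &qdisc->state);
}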

Removing this pair of atomic operations brings an 8% performance
increase on 200 TCP_RR tests in the presence of throttled classes.

This patch has no side effect, since nothing actually uses
qdisc_is_throttled() anymore.

Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/sched/sch_cbq.c')
 net/sched/sch_cbq.c | 2 --
 1 file changed, 2 deletions(-)
diff --git a/net/sched/sch_cbq.c b/net/sched/sch_cbq.c
index 6e61f9aa8783..a29fd811d7b9 100644
--- a/net/sched/sch_cbq.c
+++ b/net/sched/sch_cbq.c
@@ -513,7 +513,6 @@ static enum hrtimer_restart cbq_undelay(struct hrtimer *timer)
 		hrtimer_start(&q->delay_timer, time, HRTIMER_MODE_ABS_PINNED);
 	}
 
-	qdisc_unthrottled(sch);
 	__netif_schedule(qdisc_root(sch));
 	return HRTIMER_NORESTART;
 }
@@ -819,7 +818,6 @@ cbq_dequeue(struct Qdisc *sch)
 	if (skb) {
 		qdisc_bstats_update(sch, skb);
 		sch->q.qlen--;
-		qdisc_unthrottled(sch);
 		return skb;
 	}
 
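
The cbq_undelay() hunk above shows the pattern this change leaves
behind: a throttled qdisc's timer callback no longer clears a shared
state bit, it simply reschedules the root qdisc. A minimal sketch of
that pattern follows; struct example_sched_data and example_undelay()
are hypothetical names, not code from the patch:

#include <linux/hrtimer.h>
#include <linux/kernel.h>
#include <linux/netdevice.h>
#include <net/sch_generic.h>

/* Hypothetical per-qdisc private data, just enough for the sketch. */
struct example_sched_data {
	struct Qdisc	*sch;
	struct hrtimer	delay_timer;
};

/* Timer callback pattern after this patch: no atomic clear_bit() on
 * __QDISC_STATE_THROTTLED; rescheduling the root qdisc is enough to
 * make __qdisc_run() retry the dequeue.
 */
static enum hrtimer_restart example_undelay(struct hrtimer *timer)
{
	struct example_sched_data *q =
		container_of(timer, struct example_sched_data, delay_timer);

	/* previously also: qdisc_unthrottled(q->sch); */
	__netif_schedule(qdisc_root(q->sch));
	return HRTIMER_NORESTART;
}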