2024-06-06af_unix: Annotate data-races around sk->sk_state in unix_write_space() and poll().Kuniyuki Iwashima1-13/+12
unix_poll() and unix_dgram_poll() read sk->sk_state locklessly and call unix_writable(), which also reads sk->sk_state without holding unix_state_lock(). Let's use READ_ONCE() in unix_poll() and unix_dgram_poll() and pass the value to unix_writable(). While at it, we remove the TCP_SYN_SENT check in unix_dgram_poll(), as that state has never existed for AF_UNIX sockets since the code was added. Fixes: 1586a5877db9 ("af_unix: do not report POLLOUT on listeners") Fixes: 3c73419c09a5 ("af_unix: fix 'poll for write'/ connected DGRAM sockets") Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
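A minimal sketch of the annotation pattern described above; example_poll()/example_writable() are illustrative stand-ins, not the exact af_unix code:
	static __poll_t example_poll(struct file *file, struct socket *sock,
				     poll_table *wait)
	{
		struct sock *sk = sock->sk;
		unsigned char state = READ_ONCE(sk->sk_state); /* one lockless snapshot */
		__poll_t mask = 0;

		sock_poll_wait(file, sock, wait);
		/* pass the snapshot down so the helper doesn't re-read sk_state */
		if (example_writable(sk, state))
			mask |= EPOLLOUT | EPOLLWRNORM;
		return mask;
	}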
2024-06-06af_unix: Annotate data-race of sk->sk_state in unix_inq_len().Kuniyuki Iwashima1-1/+1
ioctl(SIOCINQ) calls unix_inq_len(), which checks sk->sk_state first and returns -EINVAL if it's TCP_LISTEN. Then, for SOCK_STREAM sockets, unix_inq_len() returns the number of bytes in the recvq. However, unix_inq_len() does not hold unix_state_lock(), and a concurrent listen() might change the state after checking sk->sk_state. If the race occurs, 0 is returned for the listener instead of -EINVAL, because the skbs queued on a listener carry embryo sockets and have zero data length. We could hold unix_state_lock() in unix_inq_len(), but that would be overkill given the result is correct for the pre-listen() TCP_CLOSE state. So, let's use READ_ONCE() for sk->sk_state in unix_inq_len(). Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-06-06af_unix: Annotate data-races around sk->sk_state for writers.Kuniyuki Iwashima1-6/+8
sk->sk_state is changed under unix_state_lock(), but it's read locklessly in many places. This patch adds WRITE_ONCE() on the writer side. We will add READ_ONCE() to the lockless readers in the following patches. Fixes: 83301b5367a9 ("af_unix: Set TCP_ESTABLISHED for datagram sockets too") Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-06-06af_unix: Set sk->sk_state under unix_state_lock() for truly disconnected peer.Kuniyuki Iwashima1-2/+8
When a SOCK_DGRAM socket connect()s to another socket, both sockets' sk->sk_state are changed to TCP_ESTABLISHED so that we can register them to BPF SOCKMAP. When the socket disconnects from the peer by connect(AF_UNSPEC), the state is set back to TCP_CLOSE. Then, the peer's state is also set to TCP_CLOSE, but the update is done locklessly and unconditionally. Let's say socket A connect()ed to B, B connect()ed to C, and A disconnects from B. After the first two connect()s, all three sockets' sk->sk_state are TCP_ESTABLISHED:
$ ss -xa
Netid State  Recv-Q Send-Q Local Address:Port  Peer Address:Port  Process
u_dgr ESTAB  0      0      @A 641              * 642
u_dgr ESTAB  0      0      @B 642              * 643
u_dgr ESTAB  0      0      @C 643              * 0
And after the disconnect, B's state is TCP_CLOSE even though it's still connected to C, and C's state is TCP_ESTABLISHED:
$ ss -xa
Netid State  Recv-Q Send-Q Local Address:Port  Peer Address:Port  Process
u_dgr UNCONN 0      0      @A 641              * 0
u_dgr UNCONN 0      0      @B 642              * 643
u_dgr ESTAB  0      0      @C 643              * 0
In this case, we cannot register B to SOCKMAP. So, when a socket disconnects from its peer, we should not set TCP_CLOSE on the peer if the peer is still connected to yet another socket, and this must be done under unix_state_lock(). Note that we use WRITE_ONCE() for sk->sk_state as there are many lockless readers. These data-races will be fixed in the following patches. Fixes: 83301b5367a9 ("af_unix: Set TCP_ESTABLISHED for datagram sockets too") Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
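A hedged sketch of the locked update described above; the helper names follow af_unix conventions, but the exact placement is illustrative:
	/* on connect(AF_UNSPEC), drop the old peer */
	unix_state_lock(old_peer);
	if (!unix_peer(old_peer))	/* peer not connected onward (e.g. to C) */
		WRITE_ONCE(old_peer->sk_state, TCP_CLOSE);
	unix_state_unlock(old_peer);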
2024-06-06inet: remove (struct uncached_list)->quarantineEric Dumazet2-7/+2
This list is used to transfer dsts that are handled by rt_flush_dev() and rt6_uncached_list_flush_dev() out of the per-cpu lists. But the quarantine list is not used afterwards. If we simply use list_del_init(&rt->dst.rt_uncached), this also removes the dst from the per-cpu list. This patch also makes future calls to rt_del_uncached_list() and rt6_uncached_list_del() faster, because no spinlock acquisition is needed anymore. Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20240604165150.726382-1-edumazet@google.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
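A sketch of why the quarantine list becomes unnecessary: list_del_init() leaves the node self-linked, so a later delete can detect it was already unlinked and skip the lock (the field names follow the rtable code, but this is illustrative):
	static void example_del_uncached(struct rtable *rt)
	{
		/* if the flush path already ran list_del_init(), the entry is
		 * self-linked, list_empty() is true and no spinlock is taken */
		if (!list_empty(&rt->dst.rt_uncached)) {
			struct uncached_list *ul = rt->dst.rt_uncached_list;

			spin_lock_bh(&ul->lock);
			list_del_init(&rt->dst.rt_uncached);
			spin_unlock_bh(&ul->lock);
		}
	}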
2024-06-06net: use unrcu_pointer() helperEric Dumazet12-21/+19
Toke mentioned unrcu_pointer() existence, allowing us to remove some of the ugly casts we have when using xchg() for RCU-protected pointers. Also make inet_rcv_compat const. Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Toke Høiland-Jørgensen <toke@redhat.com> Reviewed-by: Toke Høiland-Jørgensen <toke@redhat.com> Link: https://lore.kernel.org/r/20240604111603.45871-1-edumazet@google.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
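For illustration, the before/after shape of such a conversion (struct foo and the field are hypothetical):
	/* before: a __force cast to strip the __rcu address space */
	old = (struct foo __force *)xchg(&head->ptr, RCU_INITIALIZER(new));

	/* after: unrcu_pointer() states the intent and keeps sparse happy */
	old = unrcu_pointer(xchg(&head->ptr, RCU_INITIALIZER(new)));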
2024-06-06net: wwan: iosm: Fix tainted pointer delete in case of region creation failureAleksandr Mishin1-1/+1
In case of region creation failure in ipc_devlink_create_region(), the deletion of previously created regions starts from a tainted pointer which actually holds an error code value. Fix this bug by decreasing the region index before deletion. Found by Linux Verification Center (linuxtesting.org) with SVACE. Fixes: 4dcd183fbd67 ("net: wwan: iosm: devlink registration") Signed-off-by: Aleksandr Mishin <amishin@t-argos.ru> Acked-by: Sergey Ryazanov <ryazanov.s.a@gmail.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20240604082500.20769-1-amishin@t-argos.ru Signed-off-by: Paolo Abeni <pabeni@redhat.com>
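The bug class is the classic rollback off-by-one; a minimal sketch (create_region()/delete_region() and the array are hypothetical stand-ins):
	struct region *regions[MAX_REGIONS];	/* hypothetical */
	int i, err = 0;

	for (i = 0; i < num_regions; i++) {
		regions[i] = create_region(i);
		if (IS_ERR(regions[i])) {
			err = PTR_ERR(regions[i]);
			/* step back first: regions[i] holds an error code,
			 * not a valid pointer, and must not be deleted */
			while (--i >= 0)
				delete_region(regions[i]);
			break;
		}
	}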
2024-06-06Merge branch 'improve-gbeth-performance-on-renesas-rz-g2l-and-related-socs'Paolo Abeni2-221/+270
Paul Barker says:
====================
Improve GbEth performance on Renesas RZ/G2L and related SoCs

This series aims to improve performance of the GbEth IP in the Renesas RZ/G2L SoC family and the RZ/G3S SoC, which use the ravb driver. Along the way, we do some refactoring and ensure that napi_complete_done() is used in accordance with the NAPI documentation for both GbEth and R-Car code paths.
Much of the performance improvement comes from enabling SW IRQ Coalescing for all SoCs using the GbEth IP, and NAPI Threaded mode for single core SoCs using the GbEth IP. These can be enabled/disabled at runtime via sysfs, but our goal is to set sensible defaults which get good performance on the affected SoCs.
The rest of the performance improvement comes from using a page pool to allocate RX buffers, and reducing the allocation size from >8kB to 2kB.
The overall performance impact of this patch series seen in testing with iperf3 is as follows (see patches 5-7 for more detailed results):
* RZ/G2L:
  * TCP TX: +1.8% bandwidth
  * TCP RX: +1% bandwidth at 47% less CPU load
  * UDP RX: +1% bandwidth at 26% less CPU load
* RZ/G2UL:
  * TCP TX: +37% bandwidth
  * TCP RX: +43% bandwidth
  * UDP TX: -8% bandwidth
  * UDP RX: +32500% bandwidth (!)
* RZ/G3S:
  * TCP TX: +25% bandwidth
  * TCP RX: +76% bandwidth
  * UDP TX: -9% bandwidth
  * UDP RX: +37900% bandwidth (!)
* RZ/Five:
  * TCP TX: +18% bandwidth
  * TCP RX: +212% bandwidth
  * UDP TX: +2% bandwidth
  * UDP RX: +inf bandwidth (test no longer crashes)
There is no significant impact on bandwidth or CPU load in testing on RZ/G2H or R-Car M3N.
Fixing the crash in UDP RX testing for RZ/Five is a cumulative effect of patches 1, 2, 5 & 6 so this is very difficult to break out as a bugfix for backporting.
Changes v4->v5:
* Added Sergey's Reviewed-by tags.
* Improved the commit message for patch 2/7.
* Re-wrapped to 80 cols, except where this would significantly impact readability.
* Use lower case `skb` consistently in comments.
* Included <net/page_pool/types.h> in ravb.h.
* Moved rx_buffer_size so it is in the same place in ravb_hw_info as rx_max_desc_use was previously.
* Used reverse xmas tree ordering in variable declarations.
* Split lines after binary operators, instead of before.
* Factor subtraction of sizeof(__sum16) out of the if condition in ravb_rx_csum_gbeth().
* Add blank lines after variable declarations where needed.
* Used goto instead of break to handle napi_build_skb() failure in ravb_rx_gbeth(). Break was incorrectly scoped to the surrounding switch statement, when it's the outer loop we really want to break out of.
* Used continue instead of break to handle NULL priv->rx_1st_skb in ravb_rx_gbeth() as we may still be able to process further descriptors.
* Unconditionally set priv->rx_1st_skb = NULL after processing a packet in ravb_rx_gbeth(). We don't need to check die_dt as this will be a no-op for single descriptor packets.
* Moved napi_build_skb() call after dma_sync_single_for_cpu() in ravb_rx_rcar() to align the order of operations with ravb_rx_gbeth() and ensure the data is sync'd before it is accessed.
* Moved zeroing of rx_buff->page to the end of packet processing in ravb_rx_rcar() to align the order of operations with ravb_rx_gbeth().
Changes v3->v4:
* Dependency patches have merged so this is no longer an RFC.
* Fixed update of stats->rx_packets.
* Simplified refactoring following feedback from Niklas and Sergey.
* Renamed needs_irq_coalesce -> coalesce_irqs.
* Used a separate page pool for each RX queue.
* Passed struct ravb_rx_desc to ravb_alloc_rx_buffer() so that we can simplify the calling function.
* Explained the calculation of rx_desc->ds_cc.
* Added handling of nonlinear SKBs in ravb_rx_csum_gbeth().
* Used Niklas' suggested commit message for patch 2/7.
* Added Sergey's Reviewed-by tags to patches 5/7 and 6/7.
Changes v2->v3:
* Incorporated feedback on RFC v2 from Sergey.
* Split out bugfixes and rebased. This changed the order of what was the first 5 patches of v2 and things look a little different so I've not picked up Reviewed-by tags from v2.
* Further refactoring and tidy up of RX ring refill and ravb_rx_gbeth().
* Switched to using a page pool to allocate RX buffers.
* Re-tested and provided updated performance figures.
Changes v1->v2:
* Marked as RFC as the series depends on unmerged patches.
* Refactored R-Car code paths as well as GbEth code paths.
* Updated references to the patches this series depends on.
====================
Link: https://lore.kernel.org/r/20240604072825.7490-1-paul.barker.ct@bp.renesas.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-06-06net: ravb: Allocate RX buffers via page poolPaul Barker2-96/+175
This patch makes multiple changes that can't be separated:
1) Allocate plain RX buffers via a page pool instead of allocating SKBs, then use build_skb() when a packet is received.
2) For GbEth IP, reduce the RX buffer size to 2kB.
3) For GbEth IP, merge packets which span more than one RX descriptor as SKB fragments instead of copying data.
Implementing (1) without (2) would require the use of an order-1 page pool (instead of an order-0 page pool split into page fragments) for GbEth. Implementing (2) without (3) would leave us no space to re-assemble packets which span more than one RX descriptor. Implementing (3) without (1) would not be possible as the network stack expects to use put_page() or page_pool_put_page() to free SKB fragments after an SKB is consumed.
RX checksum offload support is adjusted to handle both linear and nonlinear (fragmented) packets.
This patch gives the following improvements during testing with iperf3:
* RZ/G2L:
  * TCP RX: same bandwidth at -43% CPU load (70% -> 40%)
  * UDP RX: same bandwidth at -17% CPU load (88% -> 74%)
* RZ/G2UL:
  * TCP RX: +30% bandwidth (726Mbps -> 941Mbps)
  * UDP RX: +417% bandwidth (108Mbps -> 558Mbps)
* RZ/G3S:
  * TCP RX: +64% bandwidth (562Mbps -> 920Mbps)
  * UDP RX: +420% bandwidth (90Mbps -> 468Mbps)
* RZ/Five:
  * TCP RX: +217% bandwidth (145Mbps -> 459Mbps)
  * UDP RX: +470% bandwidth (20Mbps -> 114Mbps)
There is no significant impact on bandwidth or CPU load in testing on RZ/G2H or R-Car M3N.
Signed-off-by: Paul Barker <paul.barker.ct@bp.renesas.com> Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
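A hedged sketch of the page-pool buffer scheme (an order-0 pool split into 2kB fragments, wrapped with napi_build_skb() on receive); the constants and attach points are illustrative, not the driver's actual fields:
	struct page_pool_params params = {
		.order		= 0,			/* order-0 pages, split into fragments */
		.flags		= PP_FLAG_DMA_MAP,
		.pool_size	= RX_RING_SIZE,		/* assumed constant */
		.nid		= NUMA_NO_NODE,
		.dev		= ndev->dev.parent,
		.dma_dir	= DMA_FROM_DEVICE,
	};
	struct page_pool *pool = page_pool_create(&params);
	struct sk_buff *skb;
	struct page *page;
	unsigned int offset;

	/* refill: carve a 2kB fragment out of a pooled page */
	page = page_pool_alloc_frag(pool, &offset, RX_BUF_SZ, GFP_ATOMIC);

	/* receive: wrap the fragment without copying; real code must leave
	 * room for headroom and the skb_shared_info inside RX_BUF_SZ */
	skb = napi_build_skb(page_address(page) + offset, RX_BUF_SZ);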
2024-06-06net: ravb: Use NAPI threaded mode on 1-core CPUs with GbEth IPPaul Barker1-1/+4
NAPI Threaded mode (along with the previously enabled SW IRQ Coalescing) is required to improve network stack performance for single core SoCs using the GbEth IP (currently the RZ/G2L SoC family and the RZ/G3S SoC).
This patch gives the following improvements during testing with iperf3:
* RZ/G2UL:
  * TCP TX: +32% bandwidth (638Mbps -> 841Mbps)
  * TCP RX: +8.8% bandwidth (667Mbps -> 726Mbps)
  * UDP RX: +104% bandwidth (53Mbps -> 108Mbps)
* RZ/G3S:
  * TCP TX: +29% bandwidth (529Mbps -> 681Mbps)
  * UDP RX: +1290% bandwidth (6.46Mbps -> 90Mbps)
* RZ/Five:
  * UDP RX: Test no longer crashes (0 -> 20 Mbps)
This patch gives the following reductions in performance in the same testing:
* RZ/G2UL:
  * UDP TX: -7.5% bandwidth (594Mbps -> 549Mbps)
* RZ/G3S:
  * UDP TX: -5% bandwidth (625Mbps -> 594Mbps)
These losses are considered acceptable given the benefits shown above. If UDP TX bandwidth must be maximised for a particular use case, NAPI threaded mode can be disabled at runtime via sysfs writes. The improvement of UDP RX bandwidth for the single core SoCs (RZ/G2UL & RZ/G3S) is particularly critical. Signed-off-by: Paul Barker <paul.barker.ct@bp.renesas.com> Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
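A sketch of opting into threaded NAPI from the driver; the condition is an assumption about how single-core GbEth SoCs are detected:
	/* single-core GbEth SoCs benefit from polling in a kthread */
	if (info->coalesce_irqs && num_online_cpus() == 1)
		dev_set_threaded(ndev, true);
	/* users can still flip it: echo 0 > /sys/class/net/<dev>/threaded */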
2024-06-06net: ravb: Enable SW IRQ Coalescing for GbEthPaul Barker2-0/+5
Software IRQ Coalescing is required to improve network stack performance in the RZ/G2L SoC family and the RZ/G3S SoC, i.e. the SoCs which use the GbEth IP.
This patch gives the following improvements during testing with iperf3:
* RZ/G2L:
  * TCP RX: same bandwidth with -6% CPU load (76% -> 71%)
  * UDP RX: same bandwidth with -10% CPU load (99% -> 89%)
* RZ/G2UL:
  * UDP RX: +4200% bandwidth (1.23Mbps -> 53Mbps)
* RZ/G3S:
  * UDP RX: +425% bandwidth (1.23Mbps -> 6.46Mbps)
The improvement of UDP RX bandwidth for the single core SoCs (RZ/G2UL & RZ/G3S) is particularly critical. Signed-off-by: Paul Barker <paul.barker.ct@bp.renesas.com> Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
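A minimal sketch, assuming the generic netdev helper is what sets the defaults here (a per-SoC flag like coalesce_irqs is an assumption):
	if (info->coalesce_irqs)	/* hypothetical per-SoC capability flag */
		netdev_sw_irq_coalesce_default_on(ndev);
	/* equivalent runtime knobs: gro_flush_timeout and
	 * napi_defer_hard_irqs under /sys/class/net/<dev>/ */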
2024-06-06net: ravb: Refactor GbEth RX code pathPaul Barker1-27/+32
We can reduce code duplication in ravb_rx_gbeth(). Signed-off-by: Paul Barker <paul.barker.ct@bp.renesas.com> Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-06-06net: ravb: Refactor RX ring refillPaul Barker1-93/+57
To reduce code duplication, we add a new RX ring refill function which can handle both the initial RX ring population (which was split between ravb_ring_init() and ravb_ring_format()) and the RX ring refill after polling (in ravb_rx()). Signed-off-by: Paul Barker <paul.barker.ct@bp.renesas.com> Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-06-06net: ravb: Align poll function with NAPI docsPaul Barker1-15/+11
Align ravb_poll() with the documentation in `Documentation/networking/kapi.rst` and `Documentation/networking/napi.rst`. The documentation says that we should prefer napi_complete_done() over napi_complete(), and using the former allows us to properly support busy polling. We should ensure that napi_complete_done() is only called if the work budget has not been exhausted, and we should only re-arm interrupts if it returns true. Signed-off-by: Paul Barker <paul.barker.ct@bp.renesas.com> Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
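The canonical poll shape from napi.rst, for reference (example_rx()/example_irq_enable() are hypothetical driver helpers):
	static int example_poll(struct napi_struct *napi, int budget)
	{
		int work_done = example_rx(napi, budget);

		/* only claim completion when the budget was not exhausted,
		 * and only re-arm IRQs if napi_complete_done() agrees
		 * (busy polling may keep the NAPI instance active) */
		if (work_done < budget && napi_complete_done(napi, work_done))
			example_irq_enable(napi);

		return work_done;
	}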
2024-06-06net: ravb: Simplify poll & receive functionsPaul Barker2-16/+13
We don't need to pass the work budget to ravb_rx() by reference, it's cleaner to pass this by value and return the amount of work done. This allows us to simplify the ravb_poll() function and use the common `work_done` variable name seen in other network drivers for consistency and ease of understanding. This is a pure refactor and should not affect behaviour. Signed-off-by: Paul Barker <paul.barker.ct@bp.renesas.com> Reviewed-by: Sergey Shtylyov <s.shtylyov@omp.ru> Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2024-06-05Merge branch 'net-mlx5e-shampo-enable-hw-gro-once-more'Jakub Kicinski10-178/+202
Tariq Toukan says:
====================
net/mlx5e: SHAMPO, Enable HW GRO once more

This series enables hardware GRO for ConnectX-7 and newer NICs. SHAMPO stands for Split Header And Merge Payload Offload.
The first part of the series contains important fixes and improvements. The second part reworks the HW GRO counters. Lastly, HW GRO is perf optimized and enabled.
Here are the bandwidth numbers for a simple iperf3 test over a single rq where the application and irq are pinned to the same CPU:
+---------+--------+--------+-----------+-------------+
| streams | SW GRO | HW GRO | Unit      | Improvement |
+---------+--------+--------+-----------+-------------+
| 1       | 36     | 57     | Gbits/sec | 1.6 x       |
| 4       | 34     | 50     | Gbits/sec | 1.5 x       |
| 8       | 31     | 43     | Gbits/sec | 1.4 x       |
+---------+--------+--------+-----------+-------------+
Benchmark details:
VM based setup
CPU: Intel(R) Xeon(R) Platinum 8380 CPU, 24 cores
NIC: ConnectX-7 100GbE
iperf3 and irq running on same CPU over a single receive queue
====================
Link: https://lore.kernel.org/r/20240603212219.1037656-1-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05net/mlx5e: SHAMPO, Coalesce skb fragments to page sizeDragos Tatulea1-6/+13
When doing hardware GRO (SHAMPO), the driver puts each data payload of a packet from the wire into one skb fragment. TCP Zero-Copy expects page sized skb fragments to be able to do its page-flipping magic. With the current way of arranging fragments by the driver, only specific MTUs (page sized multiple + header size) will yield such page sized fragments in a high percentage.
This change improves payload arrangement in the skb for hardware GRO by coalescing payloads into a single skb fragment when possible.
To demonstrate the fix, running tcp_mmap with a MTU of 1500 yields:
- Before: 0 % bytes mmap'ed
- After : 81 % bytes mmap'ed
More importantly, coalescing considerably improves the HW GRO performance. Here are the results for an iperf3 bandwidth benchmark:
+---------+--------+--------+------------------------+-----------+
| streams | SW GRO | HW GRO | HW GRO with coalescing | Unit      |
+---------+--------+--------+------------------------+-----------+
| 1       | 36     | 42     | 57                     | Gbits/sec |
| 4       | 34     | 39     | 50                     | Gbits/sec |
| 8       | 31     | 35     | 43                     | Gbits/sec |
+---------+--------+--------+------------------------+-----------+
Benchmark details:
VM based setup
CPU: Intel(R) Xeon(R) Platinum 8380 CPU, 24 cores
NIC: ConnectX-7 100GbE
iperf3 and irq running on same CPU over a single receive queue
Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240603212219.1037656-15-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
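The coalescing idea in sketch form, using the generic skb helpers (whether mlx5e uses exactly these helpers is an assumption; the principle is the same):
	int nr = skb_shinfo(skb)->nr_frags;

	if (skb_can_coalesce(skb, nr, page, offset))
		/* contiguous with the last fragment: grow it in place,
		 * so fragments tend toward full pages */
		skb_coalesce_rx_frag(skb, nr - 1, len, truesize);
	else
		skb_add_rx_frag(skb, nr, page, offset, len, truesize);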
2024-06-05net/mlx5e: SHAMPO, Re-enable HW-GROYoray Zack2-5/+37
Add back HW-GRO to the reported features. As the current implementation of HW-GRO uses KSMs with a specific fixed buffer size (256B) to map its headers buffer, we report the feature only if the NIC supports KSM and the minimum value for buffer size is below the requested one.
iperf3 bandwidth comparison:
+---------+--------+--------+-----------+
| streams | SW GRO | HW GRO | Unit      |
+---------+--------+--------+-----------+
| 1       | 36     | 42     | Gbits/sec |
| 4       | 34     | 39     | Gbits/sec |
| 8       | 31     | 35     | Gbits/sec |
+---------+--------+--------+-----------+
A downstream patch will add skb fragment coalescing which will improve performance considerably.
Benchmark details:
VM based setup
CPU: Intel(R) Xeon(R) Platinum 8380 CPU, 24 cores
NIC: ConnectX-7 100GbE
iperf3 and irq running on same CPU over a single receive queue
Signed-off-by: Yoray Zack <yorayz@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240603212219.1037656-14-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05net/mlx5e: SHAMPO, Use KSMs instead of KLMsYoray Zack6-67/+71
A KSM Mkey is a KLM Mkey with a fixed buffer size, which makes it a faster mechanism than KLM. The SHAMPO feature used KLM Mkeys for memory mappings of its headers buffer. As it used KLMs with the same buffer size for each entry, we can use KSMs instead. This commit changes the Mkeys that map the SHAMPO headers buffer from KLMs to KSMs. Signed-off-by: Yoray Zack <yorayz@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240603212219.1037656-13-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05net/mlx5e: SHAMPO, Add header-only ethtool counters for header data splitTariq Toukan4-0/+20
Count the number of header-only packets and bytes from SHAMPO. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240603212219.1037656-12-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05net/mlx5e: SHAMPO, Drop rx_gro_match_packets counterDragos Tatulea4-12/+0
After modifying rx_gro_packets to be more accurate, the rx_gro_match_packets counter is redundant. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240603212219.1037656-11-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05net/mlx5e: SHAMPO, Make GRO counters more preciseDragos Tatulea2-9/+14
Don't count non-GRO packets. A non-GRO packet is a packet with a GRO cb count of 1. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240603212219.1037656-10-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05net/mlx5e: SHAMPO, Skip duplicate flush of the same SHAMPO SKBYoray Zack1-1/+1
A SHAMPO SKB can be flushed in mlx5e_shampo_complete_rx_cqe(). If the SKB was flushed, rq->hw_gro_data->skb is also set to NULL. We can skip flushing the SKB again in mlx5e_shampo_flush_skb() if rq->hw_gro_data->skb == NULL. Signed-off-by: Yoray Zack <yorayz@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240603212219.1037656-9-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05net/mlx5e: SHAMPO, Specialize mlx5e_fill_skb_data()Dragos Tatulea1-14/+11
mlx5e_fill_skb_data() used to have multiple callers. But after the XDP multibuf refactoring from commit 2cb0e27d43b4 ("net/mlx5e: RX, Prepare non-linear striding RQ for XDP multi-buffer support") the SHAMPO code path is the only caller. Take advantage of this and specialize the function:
- Drop the redundant check.
- Assume that data_bcnt is > 0. This is needed in a downstream patch.
Rename the function as well to make things clear. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Suggested-by: Tariq Toukan <tariqt@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240603212219.1037656-8-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05net/mlx5e: SHAMPO, Simplify header page release in teardownDragos Tatulea3-58/+17
The function that releases SHAMPO header pages (mlx5e_shampo_dealloc_hd) has some complicated logic that comes from the fact that it is called twice during teardown:
1) To release the posted header pages that didn't get any completions.
2) To release all remaining header pages.
This flow is not necessary: all header pages can be released from the driver side in one go. Furthermore, the above flow is buggy. Taking the 8 headers per page example:
1) Release fragments 5-7. Page will be released.
2) Release remaining fragments 0-4. The bits in the header will indicate that the page needs releasing. But this is incorrect: the page was released in step 1.
This patch releases all header pages in one go. This simplifies the header page cleanup function. For consistency, the datapath header page release API (mlx5e_free_rx_shampo_hd_entry()) is used. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240603212219.1037656-7-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05net/mlx5e: SHAMPO, Disable gso_size for non GRO packetsDragos Tatulea1-0/+2
When HW GRO is enabled, forwarding of packets is broken due to gso_size being set incorrectly on non-GRO packets. Non-GRO packets have a skb GRO count of 1. mlx5 always sets gso_size on the skb, even for non-GRO packets. It leans on the fact that gso_size is normally reset in napi_gro_complete(). But this happens only for packets from GRO'able protocols (TCP/UDP) that have a gro_receive() handler.
The problematic scenarios are:
1) Non-GRO protocol packets are received and validate_xmit_skb() will drop them (see EPROTONOSUPPORT in skb_mac_gso_segment()). The fix for this case would be to not set gso_size at all for SHAMPO packets with header size 0.
2) Packets from a GRO'ed protocol (TCP) are received but immediately flushed because they are not GRO'able (TCP SYN for example). mlx5e_shampo_update_hdr(), which updates the remaining GRO state on the skb, is not called because the skb GRO count is 1. The fix here would be to always call mlx5e_shampo_update_hdr(), regardless of the skb GRO count. But this call is expensive.
The unified fix for both cases is to reset gso_size before calling napi_gro_receive(). It is a change that is more effective (no call to mlx5e_shampo_update_hdr() necessary) and simpler (smallest code footprint). Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240603212219.1037656-6-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
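A hedged sketch of the unified fix; the exact condition and the NAPI_GRO_CB access mirror how the description frames it, but treat them as illustrative:
	/* a GRO cb count of 1 means nothing was merged: not a GRO packet,
	 * so don't leave a stale gso_size for the stack to trip over */
	if (NAPI_GRO_CB(skb)->count == 1)
		skb_shinfo(skb)->gso_size = 0;
	napi_gro_receive(rq->cq.napi, skb);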
2024-06-05net/mlx5e: SHAMPO, Fix FCS config when HW GRO onDragos Tatulea1-3/+9
For the following scenario: ethtool --features eth3 rx-gro-hw on ethtool --features eth3 rx-fcs on ethtool --features eth3 rx-fcs off ... there is a firmware error because the driver enables HW GRO first while FCS is still enabled. This patch fixes this by swapping the order of HW GRO and FCS for this specific case. Take LRO into consideration as well for consistency. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240603212219.1037656-5-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05net/mlx5e: SHAMPO, Fix invalid WQ linked list unlinkDragos Tatulea1-0/+3
When all the strides in a WQE have been consumed, the WQE is unlinked from the WQ linked list (mlx5_wq_ll_pop()). For SHAMPO, it is possible to receive CQEs with 0 consumed strides for the same WQE even after the WQE is fully consumed and unlinked. This triggers an additional unlink for the same WQE, which corrupts the linked list. Fix this scenario by accepting zero-sized consumed strides without unlinking the WQE again. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240603212219.1037656-4-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
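A minimal sketch of the guard, assuming the completion handler can simply ignore such CQEs (the helper name follows mlx5e conventions):
	u16 cstrides = mpwrq_get_cqe_consumed_strides(cqe);

	if (unlikely(!cstrides))
		return;	/* WQE already fully consumed and unlinked: don't pop twice */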
2024-06-05net/mlx5e: SHAMPO, Fix incorrect page releaseDragos Tatulea1-1/+2
Under the following conditions:
1) No skb created yet
2) header_size == 0 (no SHAMPO header)
3) (header_index + 1) % MLX5E_SHAMPO_WQ_HEADER_PER_PAGE == 0 (this is the last page fragment of a SHAMPO header page)
a new skb is formed with a page that is NOT a SHAMPO header page (it is a regular data page). Further down in the same function (mlx5e_handle_rx_cqe_mpwrq_shampo()), a SHAMPO header page from header_index is released. This is wrong and it leads to SHAMPO header pages being released more than once. Signed-off-by: Dragos Tatulea <dtatulea@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240603212219.1037656-3-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05net/mlx5e: SHAMPO, Use net_prefetch APITariq Toukan1-3/+3
Let the SHAMPO functions use the net-specific prefetch API, similar to all other usages. Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://lore.kernel.org/r/20240603212219.1037656-2-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05Merge branch 'intel-wired-lan-driver-updates-2024-05-29-ice-igc'Jakub Kicinski9-104/+244
Jacob Keller says:
====================
Intel Wired LAN Driver Updates 2024-05-29 (ice, igc)

This series includes fixes for the ice driver as well as a fix for the igc driver.
Jacob fixes two issues in the ice driver with reading the NVM for providing firmware data via devlink info. First, fix an off-by-one error when reading the Preserved Fields Area, resolving an infinite loop triggered on some NVMs which lack certain data in the NVM. Second, fix the reading of the NVM Shadow RAM on newer E830 and E825-C devices which have a variable sized CSS header rather than assuming this header is always the same fixed size as in the E810 devices.
Larysa fixes three issues with the ice driver XDP logic that could occur if the number of queues is changed after enabling an XDP program. First, the af_xdp_zc_qps bitmap is removed and replaced by simpler logic to track whether queues are in zero-copy mode. Second, the reset and .ndo_bpf flows are distinguished to avoid potential races with a PF reset occurring simultaneously with the .ndo_bpf callback from userspace. Third, the logic for mapping XDP queues to vectors is fixed so that XDP state is restored for XDP queues after a reconfiguration.
Sasha fixes reporting of Energy Efficient Ethernet support via ethtool in the igc driver.
v1: https://lore.kernel.org/r/20240530-net-2024-05-30-intel-net-fixes-v1-0-8b11c8c9bff8@intel.com
====================
Link: https://lore.kernel.org/r/20240603-net-2024-05-30-intel-net-fixes-v2-0-e3563aa89b0c@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05igc: Fix Energy Efficient Ethernet support declarationSasha Neftin2-2/+11
The commit 01cf893bf0f4 ("net: intel: i40e/igc: Remove setting Autoneg in EEE capabilities") removed the SUPPORTED_Autoneg field but left inappropriate ethtool_keee structure initialization. When "ethtool --show-eee <device>" (get_eee) was invoked, the 'ethtool_keee' structure was accidentally overridden. Remove the 'ethtool_keee' overriding and add an EEE declaration as per the IEEE specification that allows reporting Energy Efficient Ethernet capabilities.
Examples:
Before fix:
$ ethtool --show-eee enp174s0
EEE settings for enp174s0:
        EEE status: not supported
After fix:
EEE settings for enp174s0:
        EEE status: disabled
        Tx LPI: disabled
        Supported EEE link modes: 100baseT/Full 1000baseT/Full 2500baseT/Full
Fixes: 01cf893bf0f4 ("net: intel: i40e/igc: Remove setting Autoneg in EEE capabilities") Suggested-by: Dima Ruinskiy <dima.ruinskiy@intel.com> Signed-off-by: Sasha Neftin <sasha.neftin@intel.com> Tested-by: Naama Meir <naamax.meir@linux.intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20240603-net-2024-05-30-intel-net-fixes-v2-6-e3563aa89b0c@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
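A sketch of the declaration, assuming it lands in the driver's get_eee() path; the keee API carries link-mode masks for exactly this purpose:
	static int example_get_eee(struct net_device *netdev,
				   struct ethtool_keee *edata)
	{
		/* advertise EEE capability per the IEEE specification */
		linkmode_set_bit(ETHTOOL_LINK_MODE_100baseT_Full_BIT,
				 edata->supported);
		linkmode_set_bit(ETHTOOL_LINK_MODE_1000baseT_Full_BIT,
				 edata->supported);
		linkmode_set_bit(ETHTOOL_LINK_MODE_2500baseT_Full_BIT,
				 edata->supported);
		return 0;
	}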
2024-06-05ice: map XDP queues to vectors in ice_vsi_map_rings_to_vectors()Larysa Zaremba4-46/+68
ice_pf_dcb_recfg() re-maps queues to vectors with ice_vsi_map_rings_to_vectors(), which does not restore the previous state for XDP queues. This leads to no AF_XDP traffic after rebuild. Map XDP queues to vectors in ice_vsi_map_rings_to_vectors(). Also, move the code around, so XDP queues are mapped independently only through .ndo_bpf(). Fixes: 6624e780a577 ("ice: split ice_vsi_setup into smaller functions") Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Tested-by: Chandan Kumar Rout <chandanx.rout@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20240603-net-2024-05-30-intel-net-fixes-v2-5-e3563aa89b0c@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05ice: add flag to distinguish reset from .ndo_bpf in XDP rings configLarysa Zaremba3-14/+24
Commit 6624e780a577 ("ice: split ice_vsi_setup into smaller functions") has placed ice_vsi_free_q_vectors() after ice_destroy_xdp_rings() in the rebuild process. The behaviour of the XDP rings config functions is context-dependent, so the change of order has led to ice_destroy_xdp_rings() doing additional work and removing the XDP prog, when it was supposed to be preserved.
Also, dependency on the PF state reset flags creates an additional, fortunately less common, problem:
* PFR is requested e.g. by the tx_timeout handler
* .ndo_bpf() is asked to delete the program, calls ice_destroy_xdp_rings(), but the reset flag is set, so rings are destroyed without deleting the program
* ice_vsi_rebuild tries to delete non-existent XDP rings, because the program is still on the VSI
* system crashes
With a similar race, when requested to attach a program, ice_prepare_xdp_rings() can actually skip setting the program in the VSI and nevertheless report success.
Instead of reverting to the old order of function calls, add an enum argument to both ice_prepare_xdp_rings() and ice_destroy_xdp_rings() in order to distinguish between calls from rebuild and .ndo_bpf(). Fixes: efc2214b6047 ("ice: Add support for XDP") Reviewed-by: Igor Bagnucki <igor.bagnucki@intel.com> Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Tested-by: Chandan Kumar Rout <chandanx.rout@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20240603-net-2024-05-30-intel-net-fixes-v2-4-e3563aa89b0c@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
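A hedged sketch of the disambiguating argument (names are illustrative; the real enum lives in the ice driver):
	enum example_xdp_cfg {
		EXAMPLE_XDP_CFG_FULL,	/* .ndo_bpf(): also detach the program */
		EXAMPLE_XDP_CFG_PART,	/* reset/rebuild: keep the program */
	};

	static void example_destroy_xdp_rings(struct example_vsi *vsi,
					      enum example_xdp_cfg cfg)
	{
		example_free_xdp_rings(vsi);		/* hypothetical helper */
		if (cfg == EXAMPLE_XDP_CFG_FULL)
			example_detach_prog(vsi);	/* only on .ndo_bpf() path */
	}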
2024-06-05ice: remove af_xdp_zc_qps bitmapLarysa Zaremba3-26/+27
The referenced commit has introduced a bitmap to distinguish between ZC and copy-mode AF_XDP queues, because xsk_get_pool_from_qid() does not do this for us. The bitmap would be especially useful when restoring previous state after rebuild, if only it were not reallocated in the process. This leads to e.g. xdpsock dying after changing the number of queues. Instead of preserving the bitmap during the rebuild, remove it completely and distinguish between ZC and copy-mode queues based on the presence of a device associated with the pool. Fixes: e102db780e1c ("ice: track AF_XDP ZC enabled queues in bitmap") Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Signed-off-by: Larysa Zaremba <larysa.zaremba@intel.com> Reviewed-by: Simon Horman <horms@kernel.org> Tested-by: Chandan Kumar Rout <chandanx.rout@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20240603-net-2024-05-30-intel-net-fixes-v2-3-e3563aa89b0c@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
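A sketch of the replacement check; that pool->dev is only populated once the pool is DMA-mapped for zero-copy is the property being relied on (treat it as an assumption):
	static bool example_xsk_zc_qid(struct net_device *netdev, u16 qid)
	{
		struct xsk_buff_pool *pool = xsk_get_pool_from_qid(netdev, qid);

		/* copy-mode pools have no device attached */
		return pool && pool->dev;
	}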
2024-06-05ice: fix reads from NVM Shadow RAM on E830 and E825-C devicesJacob Keller2-9/+93
The ice driver reads data from the Shadow RAM portion of the NVM during initialization, including data used to identify the NVM image and device, such as the ETRACK ID used to populate devlink dev info fw.bundle. Currently it is using a fixed offset defined by ICE_CSS_HEADER_LENGTH to compute the appropriate offset. This worked fine for E810 and E822 devices which both have a CSS header length of 330 words. Other devices, including both E825-C and E830 devices, have different sizes for their CSS header. The use of a hard coded value results in the driver reading from the wrong block in the NVM when attempting to access the Shadow RAM copy. This results in the driver reporting the fw.bundle as 0x0 in both the devlink dev info and ethtool -i output. The first E830 support was introduced by commit ba20ecb1d1bb ("ice: Hook up 4 E830 devices by adding their IDs") and the first E825-C support was introduced by commit f64e18944233 ("ice: introduce new E825C devices family"). The NVM actually contains the CSS header length embedded in it. Remove the hard coded value and replace it with logic to read the length from the NVM directly. This is more resilient against all existing and future hardware, vs looking up the expected values from a table. It ensures the driver will read from the appropriate place when determining the ETRACK ID value used for populating the fw.bundle_id and for reporting in ethtool -i. The CSS header length for both the active and inactive flash bank is stored in the ice_bank_info structure to avoid unnecessary duplicate work when accessing multiple words of the Shadow RAM. Both banks are read in the unlikely event that the header length is different for the NVM in the inactive bank, rather than being different only by the overall device family. Fixes: ba20ecb1d1bb ("ice: Hook up 4 E830 devices by adding their IDs") Co-developed-by: Paul Greenwalt <paul.greenwalt@intel.com> Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20240603-net-2024-05-30-intel-net-fixes-v2-2-e3563aa89b0c@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05ice: fix iteration of TLVs in Preserved Fields AreaJacob Keller1-7/+21
The ice_get_pfa_module_tlv() function iterates over the Type-Length-Value structures in the Preserved Fields Area (PFA) of the NVM. This is used by the driver to access data such as the Part Board Assembly identifier. The function uses simple logic to iterate over the PFA. First, the pointer to the PFA in the NVM is read. Then the total length of the PFA is read from the first word. A pointer to the first TLV is initialized, and a simple loop iterates over each TLV. The pointer is moved forward through the NVM until it exceeds the PFA area. The logic seems sound, but it is missing a key detail. The Preserved Fields Area length includes one additional final word. This is documented in the device data sheet as a dummy word which contains 0xFFFF. All NVMs have this extra word. If the driver tries to scan for a TLV that is not in the PFA, it will read past the size of the PFA. It reads and interprets the last dummy word of the PFA as a TLV with type 0xFFFF. It then reads the word following the PFA as a length. The PFA resides within the Shadow RAM portion of the NVM, which is relatively small. All of its offsets are within a 16-bit size. The PFA pointer and TLV pointer are stored by the driver as 16-bit values. In almost all cases, the word following the PFA will be such that interpreting it as a length will result in 16-bit arithmetic overflow. Once overflowed, the new next_tlv value is now below the maximum offset of the PFA. Thus, the driver will continue to iterate the data as TLVs. In the worst case, the driver hits on a sequence of reads which loop back to reading the same offsets in an endless loop. To fix this, we need to correct the loop iteration check to account for this extra word at the end of the PFA. This alone is sufficient to resolve the known cases of this issue in the field. However, it is plausible that an NVM could be misconfigured or have corrupt data which results in the same kind of overflow. Protect against this by using check_add_overflow when calculating both the maximum offset of the TLVs, and when calculating the next_tlv offset at the end of each loop iteration. This ensures that the driver will not get stuck in an infinite loop when scanning the PFA. Fixes: e961b679fb0b ("ice: add board identifier info to devlink .info_get") Co-developed-by: Paul Greenwalt <paul.greenwalt@intel.com> Signed-off-by: Paul Greenwalt <paul.greenwalt@intel.com> Reviewed-by: Przemek Kitszel <przemyslaw.kitszel@intel.com> Tested-by: Pucha Himasekhar Reddy <himasekharx.reddy.pucha@intel.com> Signed-off-by: Jacob Keller <jacob.e.keller@intel.com> Link: https://lore.kernel.org/r/20240603-net-2024-05-30-intel-net-fixes-v2-1-e3563aa89b0c@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
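A hedged sketch of the corrected iteration (read_word() and the return convention are hypothetical stand-ins for the driver's NVM helpers):
	u16 next_tlv, max_tlv;

	/* pfa_len includes the trailing 0xFFFF dummy word */
	if (check_add_overflow(pfa_ptr, (u16)(pfa_len - 1), &max_tlv))
		return -EINVAL;

	next_tlv = pfa_ptr + 1;	/* first TLV follows the PFA length word */
	while (next_tlv < max_tlv) {
		read_word(next_tlv, &tlv_type);
		read_word(next_tlv + 1, &tlv_len);
		if (tlv_type == wanted_type)
			return 0;
		/* a TLV occupies type + length + data words; reject wrap */
		if (check_add_overflow(next_tlv, (u16)(tlv_len + 2), &next_tlv))
			return -EINVAL;
	}
	return -ENOENT;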
2024-06-05selftests: hsr: Extend the hsr_ping.sh test to use fixed MAC addressesLukasz Majewski1-0/+9
Fixed MAC addresses help with debugging, as the last four bytes identify the network namespace. Signed-off-by: Lukasz Majewski <lukma@denx.de> Link: https://lore.kernel.org/r/20240603093322.3150030-1-lukma@denx.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05selftests: hsr: Extend the hsr_redbox.sh test to use fixed MAC addressesLukasz Majewski1-0/+15
Fixed MAC addresses help with debugging, as the last four bytes identify the network namespace. Moreover, this allows mimicking a real-life setup where, for example, a bridge has the same MAC address on each port. Signed-off-by: Lukasz Majewski <lukma@denx.de> Link: https://lore.kernel.org/r/20240603093322.3150030-2-lukma@denx.de Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05Merge tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpfJakub Kicinski9-35/+34
Daniel Borkmann says:
====================
pull-request: bpf 2024-06-05

We've added 8 non-merge commits during the last 6 day(s) which contain a total of 9 files changed, 34 insertions(+), 35 deletions(-).
The main changes are:
1) Fix a potential use-after-free in bpf_link_free when the link uses dealloc_deferred to free the link object but later still tests for presence of link->ops->dealloc, from Cong Wang.
2) Fix BPF test infra to set the run context for rawtp test_run callback where syzbot reported a crash, from Jiri Olsa.
3) Fix bpf_session_cookie BTF_ID in the special_kfunc_set list to exclude it for the case of !CONFIG_FPROBE, also from Jiri Olsa.
4) Fix a Coverity static analysis report to not close() a link_fd of -1 in the multi-uprobe feature detector, from Andrii Nakryiko.
5) Revert support for redirect to any xsk socket bound to the same umem as it can result in corrupted ring state which can lead to a crash when flushing rings. A different approach will be pursued for bpf-next to address it safely, from Magnus Karlsson.
6) Fix inet_csk_accept prototype in test_sk_storage_tracing.c which caused BPF CI failure after the last tree fast forwarding, from Andrii Nakryiko.
7) Fix a coccicheck warning in BPF devmap that iterator variable cannot be NULL, from Thorsten Blum.
* tag 'for-netdev' of https://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf:
  Revert "xsk: Document ability to redirect to any socket bound to the same umem"
  Revert "xsk: Support redirect to any socket bound to the same umem"
  bpf: Set run context for rawtp test_run callback
  bpf: Fix a potential use-after-free in bpf_link_free()
  bpf, devmap: Remove unnecessary if check in for loop
  libbpf: don't close(-1) in multi-uprobe feature detector
  bpf: Fix bpf_session_cookie BTF_ID in special_kfunc_set list
  selftests/bpf: fix inet_csk_accept prototype in test_sk_storage_tracing.c
====================
Link: https://lore.kernel.org/r/20240605091525.22628-1-daniel@iogearbox.net Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05ptp: Fix error message on failed pin verificationKarol Kolacinski1-1/+2
On failed verification of a PTP clock pin, the error message prints the channel number instead of the pin index after "pin", which is incorrect. Fix the error message by adding the channel number to the message and printing the pin number instead of the channel number. Fixes: 6092315dfdec ("ptp: introduce programmable pins.") Signed-off-by: Karol Kolacinski <karol.kolacinski@intel.com> Acked-by: Richard Cochran <richardcochran@gmail.com> Link: https://lore.kernel.org/r/20240604120555.16643-1-karol.kolacinski@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
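Illustrative shape of the corrected message (the exact format string in ptp_chardev.c may differ):
	if (ops->verify(ops, pin, func, chan)) {
		pr_err("driver cannot use function %u and channel %u on pin %u\n",
		       func, chan, pin);
		return -EOPNOTSUPP;
	}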
2024-06-05Merge branch 'vmxnet3-upgrade-to-version-9'Jakub Kicinski5-49/+266
Ronak Doshi says:
====================
vmxnet3: upgrade to version 9

vmxnet3 emulation has recently added a timestamping feature which allows the hypervisor (ESXi) to calculate latency from the guest virtual NIC driver all the way up to the physical NIC. This patch series extends the vmxnet3 driver to leverage this new feature.
Compatibility is maintained using the existing vmxnet3 versioning mechanism as follows:
- new features added to vmxnet3 emulation are associated with a new vmxnet3 version, viz. vmxnet3 version 9.
- emulation advertises all the versions it supports to the driver.
- during initialization, the vmxnet3 driver picks the highest version number supported by both the emulation and the driver and configures the emulation to run at that version.
In particular, the following changes are introduced:
Patch 1: This patch introduces utility macros for vmxnet3 version 9 comparison and updates Copyright information.
Patch 2: This patch adds support to timestamp the packets so as to allow latency measurement in ESXi.
Patch 3: This patch adds support to disable certain offloads on the device based on the request specified by the user in the VM configuration.
Patch 4: With all vmxnet3 version 9 changes incorporated in the vmxnet3 driver, with this patch, the driver can configure the emulation to run at vmxnet3 version 9.
====================
Link: https://lore.kernel.org/r/20240531193050.4132-1-ronak.doshi@broadcom.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05vmxnet3: update to version 9Ronak Doshi2-39/+11
With all vmxnet3 version 9 changes incorporated in the vmxnet3 driver, the driver can configure emulation to run at vmxnet3 version 9, provided the emulation advertises support for version 9. Signed-off-by: Ronak Doshi <ronak.doshi@broadcom.com> Acked-by: Guolin Yang <guolin.yang@broadcom.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20240531193050.4132-5-ronak.doshi@broadcom.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05vmxnet3: add command to allow disabling of offloadsRonak Doshi3-0/+24
This patch adds a new command to disable certain offloads. It allows the user to specify, using the VM configuration, if certain offloads need to be disabled. Signed-off-by: Ronak Doshi <ronak.doshi@broadcom.com> Acked-by: Guolin Yang <guolin.yang@broadcom.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20240531193050.4132-4-ronak.doshi@broadcom.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05vmxnet3: add latency measurement support in vmxnet3Ronak Doshi3-5/+220
This patch enhances vmxnet3 to support latency measurement. This support will help track the latency in packet processing between the guest virtual NIC driver and the host. For this purpose, we introduce a new timestamp ring in vmxnet3 which will be per Tx/Rx queue. This ring will be used to carry timestamps of the packets, which will be used to calculate the latency. The user can enable latency measurement using the realtime knob in the vNIC settings in vCenter. Signed-off-by: Ronak Doshi <ronak.doshi@broadcom.com> Acked-by: Guolin Yang <guolin.yang@broadcom.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20240531193050.4132-3-ronak.doshi@broadcom.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05vmxnet3: prepare for version 9 changesRonak Doshi5-5/+11
vmxnet3 is currently at version 7, and this patch initiates the preparation to accommodate changes for up to version 9. It introduces utility macros for vmxnet3 version 9 comparison and updates Copyright information. Signed-off-by: Ronak Doshi <ronak.doshi@broadcom.com> Acked-by: Guolin Yang <guolin.yang@broadcom.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://lore.kernel.org/r/20240531193050.4132-2-ronak.doshi@broadcom.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-06-05net/sched: taprio: always validate TCA_TAPRIO_ATTR_PRIOMAPEric Dumazet1-9/+6
If one TCA_TAPRIO_ATTR_PRIOMAP attribute has been provided, taprio_parse_mqprio_opt() must validate it, or userspace can inject arbitrary data to the kernel, the second time taprio_change() is called. First call (with valid attributes) sets dev->num_tc to a non zero value. Second call (with arbitrary mqprio attributes) returns early from taprio_parse_mqprio_opt() and bad things can happen. Fixes: a3d43c0d56f1 ("taprio: Add support adding an admin schedule") Reported-by: Noam Rathaus <noamr@ssd-disclosure.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Vinicius Costa Gomes <vinicius.gomes@intel.com> Reviewed-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://lore.kernel.org/r/20240604181511.769870-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
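Conceptually, the fix removes an early-out so the priomap is validated on every change (a sketch; the surrounding checks are simplified):
	/* before (buggy): a second taprio_change() skipped validation
	 * because the first call had already set dev->num_tc */
	if (dev->num_tc)
		return 0;

	/* after: fall through and validate the supplied
	 * TCA_TAPRIO_ATTR_PRIOMAP every time one is provided */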
2024-06-05ionic: advertise 52-bit addressing limitation for MSI-XDavid Christensen1-0/+5
Current ionic devices only support 52 internal physical address lines. This is sufficient for x86_64 systems which have similar limitations but does not apply to all other architectures, notably IBM POWER (ppc64). To ensure that MSI/MSI-X vectors are not set outside the physical address limits of the NIC, set the no_64bit_msi value of the pci_dev structure during device probe. Signed-off-by: David Christensen <drc@linux.ibm.com> Reviewed-by: Shannon Nelson <shannon.nelson@amd.com> Link: https://lore.kernel.org/r/20240603212747.1079134-1-drc@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
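A sketch of the probe-time flag; no_64bit_msi makes the MSI core place vector addresses below 4GB so they stay within the device's 52-bit reach:
	static int example_probe(struct pci_dev *pdev,
				 const struct pci_device_id *ent)
	{
		/* the NIC decodes only 52 physical address bits */
		pdev->no_64bit_msi = 1;

		return 0;
	}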
2024-06-05Merge tag 'thermal-6.10-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pmLinus Torvalds4-25/+50
Pull thermal control fixes from Rafael Wysocki: "Fix issues related to the handling of invalid trip points in the thermal core and in the thermal debug code that have been overlooked by some recent thermal control core changes"
* tag 'thermal-6.10-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  thermal: trip: Trigger trip down notifications when trips involved in mitigation become invalid
  thermal: core: Introduce thermal_trip_crossed()
  thermal/debugfs: Allow tze_seq_show() to print statistics for invalid trips
  thermal/debugfs: Print initial trip temperature and hysteresis in tze_seq_show()
2024-06-05Merge tag 'acpi-6.10-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pmLinus Torvalds7-11/+21
Pull ACPI fixes from Rafael Wysocki: "These fix the ACPI EC and AC drivers, the ACPI APEI error injection driver and build issues related to the dev_is_pnp() macro referring to pnp_bus_type that is not exported to modules.
Specifics:
- Fix error handling during EC operation region accesses in the ACPI EC driver (Armin Wolf)
- Fix a memory leak in the APEI error injection driver introduced during its conversion to a platform driver (Dan Williams)
- Fix build failures related to the dev_is_pnp() macro by redefining it as a proper function and exporting it to modules as appropriate, and unexport pnp_bus_type, which need not be exported any more (Andy Shevchenko)
- Update the ACPI AC driver to use power_supply_changed() to let the power supply core handle configuration changes properly (Thomas Weißschuh)"
* tag 'acpi-6.10-rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm:
  ACPI: AC: Properly notify powermanagement core about changes
  PNP: Hide pnp_bus_type from the non-PNP code
  PNP: Make dev_is_pnp() to be a function and export it for modules
  ACPI: EC: Avoid returning AE_OK on errors in address space handler
  ACPI: EC: Abort address space access upon error
  ACPI: APEI: EINJ: Fix einj_dev release leak