Age | Commit message | Author | Files changed, lines (-/+)
2024-10-04 | net_sched: sch_fq: add the ability to offload pacing | Jeffrey Ji | 2 files, -6/+29
Some network devices have the ability to offload EDT (Earliest Departure Time), which is the model used by TCP pacing and the FQ packet scheduler. Some of them implement the timing wheel mechanism described in https://saeed.github.io/files/carousel-sigcomm17.pdf with an associated 'timing wheel horizon'. This patch adds the TCA_FQ_OFFLOAD_HORIZON attribute to the FQ packet scheduler. Its value is capped by the device max_pacing_offload_horizon, added in the prior patch. It allows FQ to let packets within the pacing offload horizon be delivered to the device, which will handle the needed delay without host involvement. Signed-off-by: Jeffrey Ji <jeffreyji@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/20241003121219.2396589-3-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
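To make the mechanism concrete, a minimal sketch of the enqueue-side decision (not the actual sch_fq code; deliver_to_device() and enqueue_on_host() are hypothetical helpers):

  /* Let the NIC absorb delays that fit within its timing wheel horizon. */
  u64 now = ktime_get_ns();
  u64 time_to_send = ktime_to_ns(skb->tstamp);

  if (time_to_send <= now + q->offload_horizon)
          return deliver_to_device(skb); /* device applies the remaining delay */

  return enqueue_on_host(skb); /* host FQ handles longer horizons */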
2024-10-04 | net: add IFLA_MAX_PACING_OFFLOAD_HORIZON device attribute | Eric Dumazet | 6 files, -0/+15
Some network devices have the ability to offload EDT (Earliest Departure Time) which is the model used for TCP pacing and FQ packet scheduler. Some of them implement the timing wheel mechanism described in https://saeed.github.io/files/carousel-sigcomm17.pdf with an associated 'timing wheel horizon'. This patch adds dev->max_pacing_offload_horizon expressing this timing wheel horizon in nsec units. This is a read-only attribute. Unless a driver sets it, dev->max_pacing_offload_horizon is zero. v2: addressed Jakub feedback ( https://lore.kernel.org/netdev/20240930152304.472767-2-edumazet@google.com/T/#mf6294d714c41cc459962154cc2580ce3c9693663 ) v3: added yaml doc (also per Jakub feedback) Signed-off-by: Eric Dumazet <edumazet@google.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Link: https://patch.msgid.link/20241003121219.2396589-2-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
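A driver opting in would set the field once at probe time; a hedged sketch (the horizon value here is hypothetical and device-specific):

  /* Advertise the device timing wheel horizon, in nsec. Read-only to
   * userspace via IFLA_MAX_PACING_OFFLOAD_HORIZON; zero (the default)
   * means no pacing offload.
   */
  netdev->max_pacing_offload_horizon = 6ULL * NSEC_PER_SEC;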
2024-10-04 | selftest/ptp: update ptp selftest to exercise the gettimex options | Mahesh Bandewar | 1 file, -5/+57
With the inclusion of commit c259acab839e ("ptp/ioctl: support MONOTONIC{,_RAW} timestamps for PTP_SYS_OFFSET_EXTENDED"), clock_gettime() now allows retrieval of pre/post timestamps for the CLOCK_MONOTONIC and CLOCK_MONOTONIC_RAW timebases along with the previously supported CLOCK_REALTIME. This patch adds a command line option 'y' to the testptp program to choose one of the allowed timebases (realtime aka system, monotonic, and monotonic-raw). Signed-off-by: Mahesh Bandewar <maheshb@google.com> Cc: Shuah Khan <shuah@kernel.org> Acked-by: Richard Cochran <richardcochran@gmail.com> Link: https://patch.msgid.link/20241003101506.769418-1-maheshb@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | Merge branch 'tcp-add-fast-path-in-timer-handlers' | Jakub Kicinski | 8 files, -24/+49
Eric Dumazet says: ==================== tcp: add fast path in timer handlers As mentioned in Netconf 2024: TCP retransmit and delack timers are not stopped from inet_csk_clear_xmit_timer() because we do not define INET_CSK_CLEAR_TIMERS. Enabling INET_CSK_CLEAR_TIMERS leads to lower performance, mainly because del_timer() and mod_timer() happen from different cpus quite often. What we can do instead is to add fast paths to tcp_write_timer() and tcp_delack_timer() to avoid socket spinlock acquisition. ==================== Link: https://patch.msgid.link/20241002173042.917928-1-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | tcp: add a fast path in tcp_delack_timer() | Eric Dumazet | 5 files, -6/+18
delack timer is not stopped from inet_csk_clear_xmit_timer() because we do not define INET_CSK_CLEAR_TIMERS. This is a conscious choice: inet_csk_clear_xmit_timer() is often called from another cpu, and calling del_timer() would cause false sharing and lock contention. This means that very often, tcp_delack_timer() is called at the timer expiration while there is no ACK to transmit. This can be detected very early, avoiding the socket spinlock. Notes:
- the test on tp->compressed_ack is racy, but in the unlikely case there is a race, the dedicated compressed_ack_timer hrtimer would close it.
- even if the fast path is not taken, reading icsk->icsk_ack.pending and tp->compressed_ack before acquiring the socket spinlock reduces acquisition time and the chances of contention.
Signed-off-by: Eric Dumazet <edumazet@google.com> Link: https://patch.msgid.link/20241002173042.917928-4-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | tcp: add a fast path in tcp_write_timer() | Eric Dumazet | 1 file, -0/+5
retransmit timer is not stopped from inet_csk_clear_xmit_timer() because we do not define INET_CSK_CLEAR_TIMERS. This is a conscious choice: for active TCP flows, it is better to only call mod_timer(), because there is a better chance of keeping the timer unchanged. Also, inet_csk_clear_xmit_timer() is often called from another cpu, and calling del_timer() would cause false sharing and lock contention. This means that very often, tcp_write_timer() is called at the timer expiration while there is nothing to retransmit. This can be detected very early, avoiding the socket spinlock. Signed-off-by: Eric Dumazet <edumazet@google.com> Link: https://patch.msgid.link/20241002173042.917928-3-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
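A condensed sketch of the fast-path idea (not the verbatim patch; surrounding logic omitted):

  static void tcp_write_timer(struct timer_list *t)
  {
          struct inet_connection_sock *icsk =
                          from_timer(icsk, t, icsk_retransmit_timer);
          struct sock *sk = &icsk->icsk_inet.sk;

          /* Fast path: nothing pending, so never touch the socket
           * spinlock. Pairs with smp_store_release() on the writers.
           */
          if (!smp_load_acquire(&icsk->icsk_pending))
                  goto out;

          /* slow path: lock the socket and handle the pending event */
          ...
  out:
          sock_put(sk); /* the timer owns a socket reference */
  }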
2024-10-04 | tcp: annotate data-races around icsk->icsk_pending | Eric Dumazet | 7 files, -20/+28
icsk->icsk_pending can already be read locklessly. The following patch in the series will add another lockless read. Add smp_load_acquire() and smp_store_release() annotations because that patch will add a test in tcp_write_timer(), and READ_ONCE()/WRITE_ONCE() alone could lead to races. Signed-off-by: Eric Dumazet <edumazet@google.com> Link: https://patch.msgid.link/20241002173042.917928-2-edumazet@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | Merge branch 'selftests-net-ioam-add-tunsrc-support' | Jakub Kicinski | 2 files, -790/+2129
Justin Iurman says: ==================== selftests: net: ioam: add tunsrc support TL;DR This patch comes from a discussion we had with Jakub and Paolo on aligning the ioam selftests with its new "tunsrc" feature. This patch updates the IOAM selftests to support the new "tunsrc" feature of IOAM. As a consequence, some changes were required. For example, the IPv6 header must be accessed to check some fields (i.e., the source address for the "tunsrc" feature), which is not possible AFAIK with IPv6 raw sockets. The latter is currently used with IPV6_RECVHOPOPTS and was introduced by commit 187bbb6968af ("selftests: ioam: refactoring to align with the fix") to fix an issue. But, we really need packet sockets actually... which is one of the changes in this patch (see the description of the topology at the top of ioam6.sh for explanations). Another change is that all IPv6 addresses used in the topology are now based on the documentation prefix (2001:db8::/32). Also, the tests have been improved and there are now many more of them. Overall, the script is more robust. ==================== Link: https://patch.msgid.link/20241002162731.19847-1-justin.iurman@uliege.be Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | selftests: net: add new ioam tests | Justin Iurman | 4 files, -0/+2787
This patch re-adds the (updated) ioam selftests with support for the tunsrc feature. Signed-off-by: Justin Iurman <justin.iurman@uliege.be> Link: https://patch.msgid.link/20241002162731.19847-3-justin.iurman@uliege.be Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | selftests: net: remove ioam tests | Justin Iurman | 4 files, -1448/+0
This patch entirely removes the ioam selftests to prepare for the next patch in this series, which re-adds the new ioam selftests for better readability. Signed-off-by: Justin Iurman <justin.iurman@uliege.be> Link: https://patch.msgid.link/20241002162731.19847-2-justin.iurman@uliege.be Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | net: dsa: mv88e6xxx: Support LED control | Linus Walleij | 7 files, -2/+1031
This adds control over the hardware LEDs in the Marvell MV88E6xxx DSA switch and enables it for the MV88E6352. This fixes an imminent problem on the Inteno XG6846, which has a WAN LED that simply does not work with hardware defaults: a driver amendment is necessary. The patch is modeled after Christian Marangi's LED support code for the QCA8k DSA switch; I got help with the register definitions from Tim Harvey. After this patch it is possible to activate hardware link indication like this (or with a similar script):

  cd /sys/class/leds/Marvell\ 88E6352:05:00:green:wan/
  echo netdev > trigger
  echo 1 > link

This makes the green link indicator come up at any link speed. It is also possible to be more elaborate, like this:

  cd /sys/class/leds/Marvell\ 88E6352:05:00:green:wan/
  echo netdev > trigger
  echo 1 > link_1000
  cd /sys/class/leds/Marvell\ 88E6352:05:01:amber:wan/
  echo netdev > trigger
  echo 1 > link_100

making the green LED come on for a gigabit link and the amber LED come on for a 100 Mbit link. Each port has 2 LED slots (the hardware may use just one or none) and the hardware triggers are specified in four bits per LED; some of the hardware triggers are only available on the SFP (fiber) uplink. The restrictions are described in the port.h header file where the registers are described. For example, selector 1 set for LED 1 on port 5 or 6 will indicate Fiber 1000 (gigabit) and activity with a blinking LED, but ONLY for an SFP connection. If port 5/6 is used with something that is not SFP, this selector is a noop: something else needs to be selected. After the previous series rewriting the MV88E6xxx DT bindings to use YAML, a "leds" subnode is already valid for each port; in my scratch device tree it looks like this:

  leds {
          #address-cells = <1>;
          #size-cells = <0>;

          led@0 {
                  reg = <0>;
                  color = <LED_COLOR_ID_GREEN>;
                  function = LED_FUNCTION_LAN;
                  default-state = "off";
                  linux,default-trigger = "netdev";
          };

          led@1 {
                  reg = <1>;
                  color = <LED_COLOR_ID_AMBER>;
                  function = LED_FUNCTION_LAN;
                  default-state = "off";
          };
  };

This DT config is not yet configuring everything: when the netdev default trigger is assigned, the hw acceleration callbacks are not called, and there is no way to set the netdev sub-trigger type (such as link_1000) from the device tree, for example if you want a gigabit link indicator. This has to be done from userspace at this point. We add LED operations to all switches in the 6352 family: 6172, 6176, 6240 and 6352. Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20241001-mv88e6xxx-leds-v4-1-cc11c4f49b18@linaro.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | hv_netvsc: Don't assume cpu_possible_mask is dense | Michael Kelley | 1 file, -1/+1
Current code allocates the pcpu_sum array with size num_possible_cpus(). This code assumes the cpu_possible_mask is dense, which is not true in the general case per [1]. If cpu_possible_mask is sparse, the array might be indexed by a value beyond the size of the array. However, the configurations that Hyper-V provides to guest VMs on x86 and ARM64 hardware, in combination with how architecture specific code assigns Linux CPU numbers, *does* always produce a dense cpu_possible_mask. So the dense assumption is not currently causing failures. But for robustness against future changes in how cpu_possible_mask is populated, update the code to no longer assume dense. The correct approach is to allocate and initialize the array using size "nr_cpu_ids". While this leaves unused array entries corresponding to holes in cpu_possible_mask, the holes are assumed to be minimal and hence the amount of memory wasted by unused entries is minimal. [1] https://lore.kernel.org/lkml/SN6PR02MB4157210CC36B2593F8572E5ED4692@SN6PR02MB4157.namprd02.prod.outlook.com/ Signed-off-by: Michael Kelley <mhklinux@outlook.com> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Link: https://patch.msgid.link/20241003035333.49261-6-mhklinux@outlook.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
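The pattern, as a hedged sketch (accumulate_stats() is a placeholder):

  /* Size by nr_cpu_ids so any CPU number in cpu_possible_mask is a
   * valid index, even when the mask is sparse; holes waste only a
   * few unused entries.
   */
  pcpu_sum = kvmalloc_array(nr_cpu_ids, sizeof(*pcpu_sum), GFP_KERNEL);
  if (!pcpu_sum)
          return;

  for_each_possible_cpu(cpu)
          accumulate_stats(&pcpu_sum[cpu], cpu); /* cpu < nr_cpu_ids */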
2024-10-04 | ethtool: rss: fix rss key initialization warning | Daniel Zahka | 1 file, -0/+1
This warning is emitted when a driver does not populate a default RSS key when one is not provided from userspace. Some devices do not support individual RSS keys per context. For these devices, it is OK to leave the key zeroed out in ethtool_rxfh_context. Do not warn on a zeroed key when ethtool_ops.rxfh_per_ctx_key == 0. Signed-off-by: Daniel Zahka <daniel.zahka@gmail.com> Link: https://patch.msgid.link/20241003162310.1310576-1-daniel.zahka@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | Merge branch '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue | Jakub Kicinski | 10 files, -954/+910
Tony Nguyen says: ==================== Intel Wired LAN Driver Updates 2024-10-01 (ice) This series contains updates to ice driver only. Karol cleans up current PTP GPIO pin handling, fixes minor bugs, refactors implementation for all products, introduces SDP (Software Definable Pins) for E825C and implements reading SDP section from NVM for E810 products. Sergey replaces multiple aux buses and devices used in the PTP support code with struct ice_adapter holding the necessary shared data.

* '100GbE' of git://git.kernel.org/pub/scm/linux/kernel/git/tnguy/next-queue:
  ice: Drop auxbus use for PTP to finalize ice_adapter move
  ice: Use ice_adapter for PTP shared data instead of auxdev
  ice: Initial support for E825C hardware in ice_adapter
  ice: Add ice_get_ctrl_ptp() wrapper to simplify the code
  ice: Introduce ice_get_phy_model() wrapper
  ice: Enable 1PPS out from CGU for E825C products
  ice: Read SDP section from NVM for pin definitions
  ice: Disable shared pin on E810 on setfunc
  ice: Cache perout/extts requests and check flags
  ice: Align E810T GPIO to other products
  ice: Add SDPs support for E825C
  ice: Implement ice_ptp_pin_desc
==================== Link: https://patch.msgid.link/20241001201702.3252954-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | Merge branch 'add-option-to-provide-opt_id-value-via-cmsg' | Jakub Kicinski | 20 files, -48/+149
Vadim Fedorenko says: ==================== Add option to provide OPT_ID value via cmsg The SOF_TIMESTAMPING_OPT_ID socket option flag gives a way to correlate TX timestamps and packets sent via socket. Unfortunately, there is no way to reliably predict the socket timestamp ID value in case of an error returned by sendmsg. For UDP sockets it's impossible because of the lockless nature of UDP transmit: several threads may send packets in parallel. In case of RAW sockets, the MSG_MORE option makes things complicated. More details are in the conversation [1]. This patch adds a new control message type to give user-space software an opportunity to control the mapping between packets and values by providing an ID with each sendmsg. The first patch in the series adds all needed definitions and implements the function for UDP sockets. An explicit check of the socket's type is not added because subsequent patches in the series will add support for other types of sockets. The documentation is also included in the first patch. Patch 2/4 adds support for TCP sockets. This part is simple and straightforward. Patch 3/4 adds support for RAW sockets. It's a bit tricky because the sock_tx_timestamp functions have to be refactored to receive full socket cookie information to fill in the ID. The commit b534dc46c8ae ("net_tstamp: add SOF_TIMESTAMPING_OPT_ID_TCP") did the conversion of sk_tsflags to u32 but the sock_tx_timestamp functions were not converted and still receive 16b flags. It wasn't a problem because SOF_TIMESTAMPING_OPT_ID_TCP was not checked in these functions, which is why no backporting is needed. Patch 4/4 adds selftests for the new feature. [1] https://lore.kernel.org/netdev/CALCETrU0jB+kg0mhV6A8mrHfTE1D1pr1SD_B9Eaa9aDPfgHdtA@mail.gmail.com/ ==================== Link: https://patch.msgid.link/20241001125716.2832769-1-vadfed@meta.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | selftests: txtimestamp: add SCM_TS_OPT_ID test | Vadim Fedorenko | 3 files, -15/+43
Extend txtimestamp test to run with fixed tskey using SCM_TS_OPT_ID control message for all types of sockets. Reviewed-by: Jason Xing <kerneljasonxing@gmail.com> Reviewed-by: Willem de Bruijn <willemb@google.com> Signed-off-by: Vadim Fedorenko <vadfed@meta.com> Link: https://patch.msgid.link/20241001125716.2832769-4-vadfed@meta.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | net_tstamp: add SCM_TS_OPT_ID for RAW sockets | Vadim Fedorenko | 9 files, -21/+31
The last type of socket which supports SOF_TIMESTAMPING_OPT_ID is the RAW socket. To add the new option, this patch converts all callers (direct and indirect) of _sock_tx_timestamp to provide a sockcm_cookie instead of tsflags. While at it, fix __sock_tx_timestamp to receive tsflags as __u32 instead of __u16. Reviewed-by: Willem de Bruijn <willemb@google.com> Reviewed-by: Jason Xing <kerneljasonxing@gmail.com> Signed-off-by: Vadim Fedorenko <vadfed@meta.com> Link: https://patch.msgid.link/20241001125716.2832769-3-vadfed@meta.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | net_tstamp: add SCM_TS_OPT_ID to provide OPT_ID in control message | Vadim Fedorenko | 11 files, -12/+75
The SOF_TIMESTAMPING_OPT_ID socket option flag gives a way to correlate TX timestamps and packets sent via socket. Unfortunately, there is no way to reliably predict the socket timestamp ID value in case of an error returned by sendmsg. For UDP sockets it's impossible because of the lockless nature of UDP transmit: several threads may send packets in parallel. In case of RAW sockets, the MSG_MORE option makes things complicated. More details are in the conversation [1]. This patch adds a new control message type to give user-space software an opportunity to control the mapping between packets and values by providing an ID with each sendmsg for UDP sockets. The documentation is also added in this patch. [1] https://lore.kernel.org/netdev/CALCETrU0jB+kg0mhV6A8mrHfTE1D1pr1SD_B9Eaa9aDPfgHdtA@mail.gmail.com/ Reviewed-by: Willem de Bruijn <willemb@google.com> Reviewed-by: Jason Xing <kerneljasonxing@gmail.com> Signed-off-by: Vadim Fedorenko <vadfed@meta.com> Link: https://patch.msgid.link/20241001125716.2832769-2-vadfed@meta.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
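From userspace, the control message can be attached to a send roughly as follows (a sketch assuming SOF_TIMESTAMPING_OPT_ID is already enabled on the socket; error handling omitted):

  char control[CMSG_SPACE(sizeof(__u32))] = {};
  struct msghdr msg = {};
  struct cmsghdr *cmsg;
  __u32 ts_opt_id = 42; /* caller-chosen key to match tx timestamps */

  msg.msg_control = control;
  msg.msg_controllen = sizeof(control);
  cmsg = CMSG_FIRSTHDR(&msg);
  cmsg->cmsg_level = SOL_SOCKET;
  cmsg->cmsg_type = SCM_TS_OPT_ID;
  cmsg->cmsg_len = CMSG_LEN(sizeof(__u32));
  memcpy(CMSG_DATA(cmsg), &ts_opt_id, sizeof(ts_opt_id));
  /* point msg.msg_iov/msg_iovlen at the payload, then: */
  sendmsg(fd, &msg, 0);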
2024-10-04 | Merge branch 'net-mlx5-hw-counters-refactor' | Jakub Kicinski | 4 files, -278/+147
Tariq Toukan says: ==================== net/mlx5: hw counters refactor This is a patchset re-post, see: https://lore.kernel.org/20240815054656.2210494-7-tariqt@nvidia.com In this patchset, Cosmin refactors hw counters and solves a perf scaling issue. Series generated against: commit c824deb1a897 ("cxgb4: clip_tbl: Fix spelling mistake "wont" -> "won't"") HW counters are central to mlx5 driver operations. They are hardware objects created and used alongside most steering operations, and queried from a variety of places. Most counters are queried in bulk from a periodic task in fs_counters.c. Counter performance is important and as such, a variety of improvements have been made over the years. Currently, counters are allocated from pools, which are bulk allocated to amortize the cost of firmware commands. Counters are managed through an IDR, a doubly linked list and two atomic single linked lists. Adding/removing counters is a complex dance between user contexts requesting it and the mlx5_fc_stats_work task which does most of the work. Under high load (e.g. from connection tracking flow insertion/deletion), the counter code becomes a bottleneck, as seen on flame graphs. Whenever a counter is deleted, it gets added to a list and the wq task is scheduled to run immediately to actually delete it. This is done via mod_delayed_work, which uses an internal spinlock. In some tests, waiting for this spinlock took up to 66% of all samples. This series refactors the counter code to use a more straightforward approach, avoiding the mod_delayed_work problem and making the code easier to understand. For that:
- patch #1 moves counters data structs to a more appropriate place.
- patch #2 simplifies the bulk query allocation scheme by using vmalloc.
- patch #3 replaces the IDR+3 lists with an xarray. This is the main patch of the series, solving the spinlock congestion issue.
- patch #4 removes an unnecessary cacheline alignment causing a lot of memory to be wasted.
- patches #5 and #6 are small cleanups enabled by the refactoring.
==================== Link: https://patch.msgid.link/20241001103709.58127-1-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | net/mlx5: hw counters: Remove mlx5_fc_create_ex | Cosmin Ratiu | 3 files, -10/+2
It no longer serves any purpose and is identical to mlx5_fc_create, upon which it was originally based. Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20241001103709.58127-7-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | net/mlx5: hw counters: Don't maintain a counter count | Cosmin Ratiu | 1 file, -22/+18
num_counters is only used for deciding whether to grow the bulk query buffer, which is done once more counters than a small initial threshold are present. After that, maintaining num_counters serves no purpose. This commit replaces that with an actual xarray traversal to count the counters. This appears expensive at first sight, but is only done when the number of counters is less than the initial threshold (8) and only once every sampling interval. Once the number of counters goes above the threshold, the bulk query buffer is grown to max size and the xarray traversal is never done again. Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20241001103709.58127-6-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
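The traversal amounts to something like this (a sketch; names illustrative):

  struct mlx5_fc *counter;
  unsigned long id;
  u32 num_counters = 0;

  /* Only reached while below the initial threshold (8 counters),
   * once per sampling interval, so the walk stays cheap.
   */
  xa_for_each(&fc_stats->counters, id, counter)
          num_counters++;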
2024-10-04 | net/mlx5: hw counters: Drop unneeded cacheline alignment | Cosmin Ratiu | 1 file, -1/+1
The mlx5_fc struct has a cache for values queried from hw, which is cacheline aligned. On x86_64, this results in:

  struct mlx5_fc {
          u32                   id;           /*  0  4 */
          bool                  aging;        /*  4  1 */

          /* XXX 3 bytes hole, try to pack */

          struct mlx5_fc_bulk * bulk;         /*  8  8 */

          /* XXX 48 bytes hole, try to pack */

          /* --- cacheline 1 boundary (64 bytes) --- */
          struct mlx5_fc_cache  cache __attribute__((__aligned__(64))); /* 64 24 */
          u64                   lastpackets;  /* 88  8 */
          u64                   lastbytes;    /* 96  8 */

          /* size: 128, cachelines: 2, members: 6 */
          /* sum members: 53, holes: 2, sum holes: 51 */
          /* padding: 24 */
          /* forced aligns: 1, forced holes: 1, sum forced holes: 48 */
  } __attribute__((__aligned__(64)));

(output from pahole). ...So a 48+24=72 byte waste. As far as I can determine, this serves no purpose other than maybe making sure that the values in the cache do not span two cachelines in the worst case scenario, but that's not a valid enough reason to waste 72 bytes per counter, especially since this code is not performance-critical. There could potentially be hundreds of thousands of counters (e.g. for connection-tracking), so this quickly adds up to multiple MB wasted. This commit removes the alignment, resulting in:

  struct mlx5_fc {
          [...]
          /* size: 56, cachelines: 1, members: 6 */
          /* sum members: 53, holes: 1, sum holes: 3 */
          /* last cacheline: 56 bytes */
  };

Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20241001103709.58127-5-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | net/mlx5: hw counters: Replace IDR+lists with xarray | Cosmin Ratiu | 1 file, -194/+82
Previously, managing counters was a complicated affair involving an IDR, a sorted double linked list, two single linked lists and a complex dance between a non-periodic wq task and users adding/deleting counters. Adding was done by inserting new counters into the IDR and into a single linked list, leaving the wq to process the list and actually add the counters into the double linked list, maintained sorted with the IDR. Deleting involved adding the counter into another single linked list, leaving the wq to actually unlink the counter from the other structures and release it. Dumping the counters is done with the bulk query API, which relies on the counter list being sorted and immutable during querying to efficiently retrieve cached counter values. Finally, the IDR data struct is deprecated. This commit replaces all of that with an xarray. Adding is now done directly, by using xa_lock. Deleting is also done directly, under the xa_lock. Querying is done from a periodic task running every sampling_interval (default 1s) and uses the bulk query API for efficiency. It works by iterating over the xarray:
- when a new bulk needs to be started, the bulk information is computed under the xa_lock.
- the xa iteration state is saved and the xa_lock dropped.
- the HW is queried for bulk counter values.
- the xa_lock is reacquired.
- counter caches with ids covered by the bulk response are updated.
Querying always requests the max bulk length, for simplicity. Counters could be added/deleted while the HW is queried. This is safe, as the HW API simply returns unknown values for counters not in HW, but those values won't be accessed. Only counters present in the xarray before the bulk query will actually read queried cache values. This cuts down the size of mlx5_fc by 4 pointers (88->56 bytes), which amounts to ~3MB / 100K counters. But more importantly, this solves the wq spinlock congestion issue seen happening on high-rate counter insertion+deletion. Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20241001103709.58127-4-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | net/mlx5: hw counters: Use kvmalloc for bulk query buffer | Cosmin Ratiu | 1 file, -37/+22
The bulk query buffer starts out small (see [1]) and as soon as the number of counters goes past the initial threshold grows to max size (32K entries, 512KB) with a retry scheme. This commit switches to using kvmalloc for the buffer, which has a near zero likelihood of failing, and thus the explicit retry scheme becomes superfluous and is taken out. On the low chance the allocation fails, it will still be retried every sampling_interval, when the wq task runs. [1] commit b247f32aecad ("net/mlx5: Dynamically resize flow counters query buffer") Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20241001103709.58127-3-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
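The resulting allocation pattern, roughly (a sketch; names illustrative):

  /* kvmalloc falls back to vmalloc when contiguous pages are scarce,
   * so even a 512KB buffer allocation almost never fails.
   */
  out = kvmalloc(max_bulk_query_size, GFP_KERNEL);
  if (!out)
          return; /* simply retried at the next sampling interval */
  /* ... issue bulk query, update counter caches ... */
  kvfree(out);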
2024-10-04 | net/mlx5: hw counters: Make fc_stats & fc_pool private | Cosmin Ratiu | 2 files, -52/+60
The mlx5_fc_stats and mlx5_fc_pool structs are only used from fs_counters.c. As such, make them private there. mlx5_fc_pool is not used or referenced at all outside fs_counters. mlx5_fc_stats is referenced from mlx5_core_dev, so instead of having it as a direct member (which requires exporting it from fs_counters), store a pointer to it, allocate it on init and clear it on destroy. One caveat is that a simple container_of to get from a 'work' struct to the outermost mlx5_core_dev struct directly no longer works, so an extra pointer had to be added to mlx5_fc_stats back to the parent dev. Signed-off-by: Cosmin Ratiu <cratiu@nvidia.com> Signed-off-by: Tariq Toukan <tariqt@nvidia.com> Link: https://patch.msgid.link/20241001103709.58127-2-tariqt@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | octeontx2-af: Change block parameter to const pointer in get_lf_str_list | Riyan Dhiman | 1 file, -7/+7
Convert struct rvu_block block to const struct rvu_block *block in get_lf_str_list() function parameter. This improves efficiency by avoiding structure copying and reflects the function's read-only access to block. Signed-off-by: Riyan Dhiman <riyandhiman14@gmail.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20241001110542.5404-2-riyandhiman14@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | net: macb: Adding support for Jumbo Frames up to 10240 Bytes in SAMA5D2 | Aleksander Jan Bajkowski | 1 file, -1/+2
As per the SAMA5D2 device specification, the controller supports jumbo frames, but the corresponding flag and supported frame length were not set in this driver's config structure. The maximum jumbo frame size the device supports is 10240 bytes, per the device spec. Changing the MTU to a value greater than 1500 threw an error:

  sudo ifconfig eth1 mtu 9000
  SIOCSIFMTU: Invalid argument

Add this support to the driver so that it works as expected and designed. Signed-off-by: Aleksander Jan Bajkowski <olek2@wp.pl> Reviewed-by: Simon Horman <horms@kernel.org> Acked-by: Nicolas Ferre <nicolas.ferre@microchip.com> Link: https://patch.msgid.link/20241003171941.8814-1-olek2@wp.pl Signed-off-by: Jakub Kicinski <kuba@kernel.org>
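The fix plausibly boils down to flagging jumbo support in the SAMA5D2 config entry, along these lines (a sketch; the exact set of cap flags is assumed, not confirmed from the patch):

  static const struct macb_config sama5d2_config = {
          .caps = MACB_CAPS_USRIO_DEFAULT_IS_MII_GMII | MACB_CAPS_JUMBO,
          .jumbo_max_len = 10240, /* max jumbo frame per the device spec */
          /* remaining fields unchanged */
  };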
2024-10-04 | Merge branch 'net-airoha-fix-pse-memory-configuration' | Jakub Kicinski | 1 file, -6/+14
Lorenzo Bianconi says: ==================== net: airoha: Fix PSE memory configuration Align PSE memory configuration to vendor SDK. Increase initial value of PSE reserved memory in airoha_fe_pse_ports_init() by the value used for the second Packet Processor Engine (PPE2). Do not overwrite the default value for the number of PSE reserved pages in airoha_fe_set_pse_oq_rsv(). These changes fix issues which are not visible to the user. v1: https://lore.kernel.org/20240930-airoha-eth-pse-fix-v1-0-f41f2f35abb9@kernel.org ==================== Link: https://patch.msgid.link/20241001-airoha-eth-pse-fix-v2-0-9a56cdffd074@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | net: airoha: fix PSE memory configuration in airoha_fe_pse_ports_init() | Lorenzo Bianconi | 1 file, -2/+4
Align PSE memory configuration to vendor SDK. In particular, increase initial value of PSE reserved memory in airoha_fe_pse_ports_init() routine by the value used for the second Packet Processor Engine (PPE2) and do not overwrite the default value. Introduced by commit 23020f049327 ("net: airoha: Introduce ethernet support for EN7581 SoC") Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20241001-airoha-eth-pse-fix-v2-2-9a56cdffd074@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | net: airoha: read default PSE reserved pages value before updating | Lorenzo Bianconi | 1 file, -4/+10
Store the default value for the number of PSE reserved pages in orig_val at the beginning of airoha_fe_set_pse_oq_rsv routine, before updating it with airoha_fe_set_pse_queue_rsv_pages(). Introduce airoha_fe_get_pse_all_rsv utility routine. Introduced by commit 23020f049327 ("net: airoha: Introduce ethernet support for EN7581 SoC") Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20241001-airoha-eth-pse-fix-v2-1-9a56cdffd074@kernel.org Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | Merge branch 'net-switch-to-scoped-device_for_each_child_node' | Jakub Kicinski | 2 files, -10/+4
Javier Carrasco says: ==================== net: switch to scoped device_for_each_child_node() This series switches from the device_for_each_child_node() macro to its scoped variant. This makes the code more robust if new early exits are added to the loops, because there is no need for explicit calls to fwnode_handle_put(), which also simplifies existing code. The non-scoped macros to walk over nodes become error-prone as soon as the loop contains early exits (break, goto, return), and patches to fix them show up regularly, sometimes due to new error paths in an existing loop [1]. Note that the child node is now declared in the macro, and therefore the explicit declaration is no longer required. The general functionality should not be affected by this modification. If functional changes are found, please report them back as errors. Link: https://lore.kernel.org/20240901160829.709296395@linuxfoundation.org v1: https://lore.kernel.org/r/20240930-net-device_for_each_child_node_scoped-v1-0-bbdd7f9fd649@gmail.com ==================== Link: https://patch.msgid.link/20240930-net-device_for_each_child_node_scoped-v2-0-35f09333c1d7@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
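For reference, the scoped form reads like this (a minimal sketch; parse_child() is a placeholder):

  int ret;

  device_for_each_child_node_scoped(dev, child) {
          ret = parse_child(child);
          if (ret)
                  return ret; /* child refcount dropped automatically */
  }
  /* no fwnode_handle_put() needed on any exit path */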
2024-10-04 | net: hns: hisilicon: hns_dsaf_mac: switch to scoped device_for_each_child_node() | Javier Carrasco | 1 file, -7/+3
Use device_for_each_child_node_scoped() to simplify the code by removing the need for explicit calls to fwnode_handle_put() in every error path. This approach also accounts for any error path that could be added. Signed-off-by: Javier Carrasco <javier.carrasco.cruz@gmail.com> Link: https://patch.msgid.link/20240930-net-device_for_each_child_node_scoped-v2-2-35f09333c1d7@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | net: mdio: thunder: switch to scoped device_for_each_child_node() | Javier Carrasco | 1 file, -3/+1
There has already been an issue with the handling of early exits from device_for_each_child() in this driver, and it was solved with commit b1de5c78ebe9 ("net: mdio: thunder: Add missing fwnode_handle_put()") by adding a call to fwnode_handle_put() right after the loop. That solution is valid indeed, but if a new error path with a 'return' is added to the loop, this solution will fail. A more secure approach is using the scoped variant of the macro, which automatically decrements the refcount of the child node when it goes out of scope, removing the need for explicit calls to fwnode_handle_put(). Signed-off-by: Javier Carrasco <javier.carrasco.cruz@gmail.com> Link: https://patch.msgid.link/20240930-net-device_for_each_child_node_scoped-v2-1-35f09333c1d7@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | Merge branch 'qed-ethtool-d-faster-less-latency' | Jakub Kicinski | 3 files, -29/+18
Michal Schmidt says: ==================== qed: 'ethtool -d' faster, less latency Here is a patch to make 'ethtool -d' on a qede network device a lot faster and 3 patches to make it cause less latency for other tasks on non-preemptible kernels. ==================== Link: https://patch.msgid.link/20240930201307.330692-1-mschmidt@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | qed: put cond_resched() in qed_dmae_operation_wait() | Michal Schmidt | 1 file, -0/+1
It is OK to sleep in qed_dmae_operation_wait, because it is called only in process context, while holding p_hwfn->dmae_info.mutex from one of the qed_dmae_{host,grc}2{host,grc} functions. The udelay(DMAE_MIN_WAIT_TIME=2) in the function is too short to replace with usleep_range, but at least it's a suitable point for checking if we should give up the CPU with cond_resched(). This lowers the latency caused by 'ethtool -d' from 10 ms to less than 2 ms on my test system with voluntary preemption. Signed-off-by: Michal Schmidt <mschmidt@redhat.com> Link: https://patch.msgid.link/20240930201307.330692-5-mschmidt@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
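The shape of the change, as a sketch (dmae_done() stands in for the real completion check):

  while (!dmae_done(p_hwfn)) {
          udelay(DMAE_MIN_WAIT_TIME); /* 2 us: too short for usleep_range() */
          cond_resched(); /* safe: process context, dmae_info.mutex held */
  }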
2024-10-04 | qed: allow the callee of qed_mcp_nvm_read() to sleep | Michal Schmidt | 1 file, -8/+1
qed_mcp_nvm_read has a loop where it calls qed_mcp_nvm_rd_cmd with the argument b_can_sleep=false. And it sleeps once every 0x1000 bytes read. Simplify this by letting qed_mcp_nvm_rd_cmd itself sleep (b_can_sleep=true). It will have slept at least once when successful (in the "Wait for the MFW response" loop). So the extra sleep once every 0x1000 bytes becomes superfluous. Delete it. On my test system with voluntary preemption, this lowers the latency caused by 'ethtool -d' from 53 ms to 10 ms. Signed-off-by: Michal Schmidt <mschmidt@redhat.com> Link: https://patch.msgid.link/20240930201307.330692-4-mschmidt@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | qed: put cond_resched() in qed_grc_dump_ctx_data() | Michal Schmidt | 1 file, -0/+1
On a kernel with preemption none or voluntary, 'ethtool -d' on a qede network device can cause a big latency spike. The biggest part of it is the loop in qed_grc_dump_ctx_data. The function is called only from the .get_size and .perform_dump callbacks for the "grc" feature defined in qed_features_lookup[]. As far as I can see, they are used in:
- qed's devlink health reporter .dump op
- qede's ethtool get_regs/get_regs_len/get_dump_data ops
- qedf's qedf_get_grc_dump, called from:
  - qedf_sysfs_write_grcdump - "grcdump" sysfs attribute write
  - qedf_wq_grcdump - a workqueue

It is safe to sleep in all of them. Let's insert a cond_resched() in the outer loop to let other tasks run. Measured using this script:

  #!/bin/bash
  DEV=ens3f1
  echo wakeup_rt > /sys/kernel/tracing/current_tracer
  echo 0 > /sys/kernel/tracing/tracing_max_latency
  echo 1 > /sys/kernel/tracing/tracing_on
  echo "Setting the task CPU affinity"
  taskset -p 1 $$ > /dev/null
  echo "Starting the real-time task"
  chrt -f 50 bash -c 'while sleep 0.01; do :; done' &
  sleep 1
  echo "Running: ethtool -d $DEV"
  time ethtool -d $DEV > /dev/null
  kill %1
  echo 0 > /sys/kernel/tracing/tracing_on
  echo "Measured latency: $(</sys/kernel/tracing/tracing_max_latency) us"
  echo "To see the latency trace: less /sys/kernel/tracing/trace"

The patch lowers the latency from 180 ms to 53 ms on my test system with voluntary preemption. Signed-off-by: Michal Schmidt <mschmidt@redhat.com> Link: https://patch.msgid.link/20240930201307.330692-3-mschmidt@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | qed: make 'ethtool -d' 10 times faster | Michal Schmidt | 1 file, -21/+15
As a side effect of commit 5401c3e09928 ("qed: allow sleep in qed_mcp_trace_dump()"), 'ethtool -d' became much slower. Almost all the time is spent collecting the "mcp_trace". It is caused by sleeping too long in _qed_mcp_cmd_and_union. When called with sleeping not allowed, the function delays for 10 µs between firmware polls. But if sleeping is allowed, it sleeps for 10 ms instead. The sleeps in _qed_mcp_cmd_and_union are unnecessarily long. Replace msleep with usleep_range, which achieves a polling interval similar to the no-sleeping mode (10 - 20 µs). The only caller, qed_mcp_cmd_and_union, can stop doing the multiplication/division of the usecs/max_retries. The polling interval and the number of retries do not need to be parameters at all. On my test system, 'ethtool -d' now takes 4 seconds instead of 44. Signed-off-by: Michal Schmidt <mschmidt@redhat.com> Link: https://patch.msgid.link/20240930201307.330692-2-mschmidt@redhat.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
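The core of the change, sketched (mfw_done() stands in for the real response check):

  /* Before: msleep(10) per poll when sleeping was allowed, i.e. 10 ms
   * granularity. After: the same ~10-20 us granularity as the atomic path.
   */
  while (!mfw_done(p_hwfn) && retries--)
          usleep_range(10, 20);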
2024-10-04 | Merge branch 'net-mv643xx-devm-fixes' | Jakub Kicinski | 1 file, -21/+7
Rosen Penev says: ==================== net: mv643xx: devm fixes Small simplification and a fix for a seemingly wrong function usage. ==================== Link: https://patch.msgid.link/20240930202951.297737-1-rosenp@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | net: mv643xx: fix wrong devm_clk_get usage | Rosen Penev | 1 file, -13/+4
This clock should be optional. In addition, PTR_ERR can be -EPROBE_DEFER, in which case the driver should return it. devm_clk_get_optional_enabled also allows removing the explicit clock enable and disable calls. Signed-off-by: Rosen Penev <rosenp@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20240930202951.297737-3-rosenp@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
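The idiomatic form this converts to (a sketch):

  clk = devm_clk_get_optional_enabled(&pdev->dev, NULL);
  if (IS_ERR(clk))
          return PTR_ERR(clk); /* correctly propagates -EPROBE_DEFER */
  /* clk may be NULL when the DT omits it; the clk API accepts NULL */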
2024-10-04 | net: mv643xx: use devm_platform_ioremap_resource | Rosen Penev | 1 file, -8/+3
This combines multiple steps in one function. Signed-off-by: Rosen Penev <rosenp@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20240930202951.297737-2-rosenp@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
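For reference, the helper collapses platform_get_resource() plus devm_ioremap_resource() into a single call (a sketch):

  base = devm_platform_ioremap_resource(pdev, 0);
  if (IS_ERR(base))
          return PTR_ERR(base);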
2024-10-04 | Merge branch 'net-ag71xx-small-cleanups' | Jakub Kicinski | 1 file, -23/+14
Rosen Penev says: ==================== net: ag71xx: small cleanups More devm and some loose ends. ==================== Link: https://patch.msgid.link/20240930181823.288892-1-rosenp@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | net: ag71xx: move assignment into main loop | Rosen Penev | 1 file, -2/+1
Effectively, there is a main loop and an identical one below it that contains a single extra assignment. It is simpler to move that assignment into the main loop. Signed-off-by: Rosen Penev <rosenp@gmail.com> Link: https://patch.msgid.link/20240930181823.288892-6-rosenp@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | net: ag71xx: replace INIT_LIST_HEAD | Rosen Penev | 1 file, -3/+1
LIST_HEAD is a shorter macro. No real difference. Signed-off-by: Rosen Penev <rosenp@gmail.com> Link: https://patch.msgid.link/20240930181823.288892-5-rosenp@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
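The two spellings, for comparison (list name illustrative):

  /* before: declare, then initialize at runtime */
  static struct list_head ag71xx_list;
  INIT_LIST_HEAD(&ag71xx_list);

  /* after: declared and initialized in one step */
  static LIST_HEAD(ag71xx_list);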
2024-10-04 | net: ag71xx: remove platform_set_drvdata | Rosen Penev | 1 file, -3/+0
platform_get_drvdata is never called as a result of all the devm conversions. Signed-off-by: Rosen Penev <rosenp@gmail.com> Link: https://patch.msgid.link/20240930181823.288892-4-rosenp@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | net: ag71xx: use some dev_err_probe | Rosen Penev | 1 file, -12/+9
These functions can return EPROBE_DEFER. Don't warn in such a case. Signed-off-by: Rosen Penev <rosenp@gmail.com> Link: https://patch.msgid.link/20240930181823.288892-3-rosenp@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
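A typical conversion (a sketch; the clock name is illustrative):

  clk = devm_clk_get(&pdev->dev, "eth");
  if (IS_ERR(clk))
          /* logs real errors, stays silent on -EPROBE_DEFER */
          return dev_err_probe(&pdev->dev, PTR_ERR(clk),
                               "Failed to get eth clock\n");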
2024-10-04 | net: ag71xx: use devm_ioremap_resource | Rosen Penev | 1 file, -3/+3
We can just use res directly. Signed-off-by: Rosen Penev <rosenp@gmail.com> Link: https://patch.msgid.link/20240930181823.288892-2-rosenp@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-04 | appletalk: Remove deadcode | Dr. David Alan Gilbert | 4 files, -56/+2
alloc_ltalkdev in net/appletalk/dev.c has been dead since commit 00f3696f7555 ("net: appletalk: remove cops support"). Removing it (and its helper) leaves dev.c and if_ltalk.h empty; remove them and the Makefile entry. tun.c was including that if_ltalk.h but actually wanted the uapi version for LTALK_ALEN; fix up the path. Signed-off-by: Dr. David Alan Gilbert <linux@treblig.org> Reviewed-by: Willem de Bruijn <willemb@google.com> Acked-by: Jason Wang <jasowang@redhat.com> Signed-off-by: David S. Miller <davem@davemloft.net>
2024-10-03 | ibmvnic: Add stat for tx direct vs tx batched | Nick Child | 2 files, -8/+18
Allow tracking of packets sent with send_subcrq direct vs indirect. `ethtool -S <dev>` will now provide a counter of the number of uses of each xmit method. This metric will be useful in performance debugging. Signed-off-by: Nick Child <nnac123@linux.ibm.com> Reviewed-by: Simon Horman <horms@kernel.org> Link: https://patch.msgid.link/20241001163531.1803152-1-nnac123@linux.ibm.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2024-10-03 | net: marvell: mvmdio: use clk_get_optional | Rosen Penev | 1 file, -7/+4
The code seems to be handling EPROBE_DEFER explicitly and if there's no error, enables the clock. clk_get_optional exists for that. Signed-off-by: Rosen Penev <rosenp@gmail.com> Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://patch.msgid.link/20240930211628.330703-1-rosenp@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>