author    | Linus Torvalds <torvalds@linux-foundation.org> | 2024-09-16 06:02:27 +0200
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2024-09-16 06:02:27 +0200
commit    | 9410645520e9b820069761f3450ef6661418e279 (patch)
tree      | 597a1047e43d27aceeeaf34ffefc1326aaedd0fe /drivers/net/ethernet/xilinx/xilinx_axienet_main.c
parent    | 98f7e32f20d28ec452afb208f9cffc08448a2652 (diff)
parent    | 3561373114c8b3359114e2da27259317dc51145a (diff)
Merge tag 'net-next-6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
Pull networking updates from Jakub Kicinski:
"The zero-copy changes are relatively significant, but regression risk
should be contained. The feature needs to be used to cause trouble.
Also it feels like we got an order of magnitude more semi-automated
"refactoring" chaff than usual, I wonder if it's just us.
Core & protocols:
- Support Device Memory TCP, the ability to zero-copy receive TCP
payloads into a DMABUF region of memory, while packet headers land
separately in normal kernel buffers and TCP processes them as
usual.
- The ability to read the PTP Hardware Clock (PHC) alongside
MONOTONIC_RAW timestamps with PTP_SYS_OFFSET_EXTENDED. Previously
only CLOCK_REALTIME was supported (a userspace sketch follows this
list).
- Allow matching on all bits of IP DSCP for routing decisions.
Previously we only supported matching on the TOS bits in IPv4,
which are a narrower interpretation of the same header field.
- Increase the range of weights used for multi-path routing from
8 bits to 16 bits.
- Add support for the IPv6 Prefix Information Option (PIO) P flag,
per draft-ietf-6man-pio-pflag.
- IPv6 IOAM6: support the new tunsrc encap mode for better
performance.
- Detect destinations which blackhole MPTCP traffic and avoid
initiating MPTCP connections to them for a certain period of time,
1h by default.
- Improve IPsec control path performance by removing the inexact
policies list.
- AF_VSOCK: add support for the SIOCOUTQ ioctl (a usage sketch follows this list).
- Add an enum of the reasons a TCP reset was sent, for easier tracing.
- Add SMC ringbufs usage statistics.
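
A minimal userspace sketch of the PTP_SYS_OFFSET_EXTENDED item above,
sampling a PHC bracketed by CLOCK_MONOTONIC_RAW readings. The clockid
field name in struct ptp_sys_offset_extended is an assumption here
(check include/uapi/linux/ptp_clock.h in your tree), and /dev/ptp0 is
an example device:

    /* Hedged sketch: PHC sampling against CLOCK_MONOTONIC_RAW. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <time.h>
    #include <linux/ptp_clock.h>

    int main(void)
    {
            struct ptp_sys_offset_extended req = {
                    .n_samples = 5,
                    .clockid = CLOCK_MONOTONIC_RAW, /* assumed field name */
            };
            int fd = open("/dev/ptp0", O_RDWR); /* example device */

            if (fd < 0 || ioctl(fd, PTP_SYS_OFFSET_EXTENDED, &req))
                    return 1;

            /* ts[i][0] and ts[i][2] are system timestamps taken just
             * before and after the PHC reading in ts[i][1].
             */
            for (unsigned int i = 0; i < req.n_samples; i++)
                    printf("sys %lld.%09u phc %lld.%09u\n",
                           (long long)req.ts[i][0].sec, req.ts[i][0].nsec,
                           (long long)req.ts[i][1].sec, req.ts[i][1].nsec);
            return 0;
    }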
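
Similarly, a small sketch of the AF_VSOCK SIOCOUTQ support: after
writing, the ioctl reports how many bytes are still sitting unsent in
the send queue. The CID and port are placeholders:

    /* Hedged sketch: SIOCOUTQ on a vsock stream socket. */
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/sockios.h>
    #include <linux/vm_sockets.h>

    int main(void)
    {
            struct sockaddr_vm addr = {
                    .svm_family = AF_VSOCK,
                    .svm_cid = 2,     /* example: host CID */
                    .svm_port = 1234, /* example port */
            };
            int unsent, fd = socket(AF_VSOCK, SOCK_STREAM, 0);

            if (fd < 0 || connect(fd, (struct sockaddr *)&addr, sizeof(addr)))
                    return 1;
            send(fd, "ping", 4, 0);
            if (ioctl(fd, SIOCOUTQ, &unsent)) /* bytes not yet sent */
                    return 1;
            printf("unsent: %d\n", unsent);
            return 0;
    }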
Drivers:
- Handle netconsole setup failures more gracefully: don't fail
loading, and retain the specified target as disabled.
- Extend bonding's IPsec offload pass thru capabilities (ESN, stats).
Filtering:
- Add TCP_BPF_SOCK_OPS_CB_FLAGS to bpf_*sockopt() to address the case
where long-lived sockets miss the chance to set additional callbacks
because no sockops program was attached early in their lifetime
(a sketch follows this list).
- Support using BPF skb helpers in tracepoints.
- Conntrack Netlink: support CTA_FILTER for flush.
- Improve SCTP support in nfnetlink_queue.
- Improve performance of large nftables flush transactions.
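
A hedged sketch of the TCP_BPF_SOCK_OPS_CB_FLAGS item above: the shape
of the bpf_setsockopt() call that turns extra sock_ops callbacks on for
an existing socket. The callback-flag constants are existing uapi ones
and the optname is defined in uapi linux/bpf.h on kernels with this
feature; which program types may use it should be verified against the
tree:

    /* Hedged sketch; standard libbpf headers assumed. */
    #include <linux/bpf.h>
    #include <bpf/bpf_helpers.h>

    #define SOL_TCP 6 /* value from netinet/tcp.h */

    SEC("sockops")
    int enable_extra_cbs(struct bpf_sock_ops *skops)
    {
            int flags = BPF_SOCK_OPS_RTT_CB_FLAG |
                        BPF_SOCK_OPS_RETRANS_CB_FLAG;

            /* TCP_BPF_SOCK_OPS_CB_FLAGS is the new optname; spelling
             * per the pull message above.
             */
            bpf_setsockopt(skops, SOL_TCP, TCP_BPF_SOCK_OPS_CB_FLAGS,
                           &flags, sizeof(flags));
            return 1;
    }

    char _license[] SEC("license") = "GPL";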
Things we sprinkled into general kernel code:
- selftests: support setting an "interpreter" for script files; make
it easy to run, as separate test cases, tests where one
"interpreter" is fed various test descriptions (in our case, packet
sequences).
Driver API:
- Extend core and ethtool APIs to support many PHYs connected to a
single interface (PHY topologies).
- Extend cable diagnostics to specify whether Time Domain
Reflectometry (TDR) or Active Link Cable Diagnostic (ALCD) was
used.
- Add library for implementing MAC-PHY Ethernet drivers for SPI
devices compatible with Open Alliance 10BASE-T1x MAC-PHY Serial
Interface (TC6) standard.
- Add helpers to the PHY framework, for PHYs following the Open
Alliance standards:
- 1000BaseT1 link settings
- cable test and diagnostics
- Support listing / dumping all allocated RSS contexts.
- Add configuration of the frequency of Embedded SYNC in DPLL, which
magically embeds sync pulses into Ethernet signaling.
Device drivers:
- Ethernet high-speed NICs:
- Broadcom (bnxt):
- use better FW APIs for queue reset
- support QOS and TPID settings for the SR-IOV VLAN
- support dynamic MSI-X allocation
- Intel (100G, ice, idpf):
- ice: support PCIe subfunctions
- iavf: add support for TC U32 filters on VFs
- ice: support Embedded SYNC in DPLL
- nVidia/Mellanox (mlx5):
- support HW managed steering tables
- support PCIe PTM cross timestamping
- AMD/Pensando:
- ionic: use page_pool to increase Rx performance
- Cisco (enic):
- report per-queue statistics
- Ethernet virtual:
- Microsoft vNIC:
- mana: support configuring ring length
- netvsc: enable more channels on systems with many CPUs
- IBM veth:
- optimize polling to improve TCP_RR performance
- optimize performance of Tx handling
- VirtIO net:
- synchronize the operstate with the admin state to allow a
lower virtio-net to propagate the link status to an upper
device like macvlan
- Ethernet NICs, consumer and embedded:
- Add driver for Realtek automotive PCIe devices (RTL9054,
RTL9068, RTL9072, RTL9075, RTL9071)
- Add driver for Microchip LAN8650/1 10BASE-T1S MAC-PHY.
- Microchip:
- lan743x: use phylink - support WOL, EEE, pause, link settings
- add Wake-on-LAN support for KSZ87xx family
- add KSZ8895/KSZ8864 switch support
- factor out FDMA code and use it in sparx5 and lan966x
(including DCB support in both)
- Synopsys (stmmac):
- support frame preemption (configured using TC and ethtool)
- support Loongson DWMAC (GMAC v3.73)
- support RockChips RK3576 DWMAC
- TI:
- am65-cpsw: add multi queue RX support
- icssg-prueth: HSR offload support
- Cadence (macb):
- enable software (hrtimer based) IRQ coalescing by default
- Xilinx (axienet):
- expose HW statistics
- improve multicast filtering
- relax Rx checksum offload constraints
- MediaTek:
- mt7530: add EN7581 support
- Aspeed (ftgmac100):
- report link speed and duplex
- Intel:
- igc: add mqprio offload
- igc: report EEE configuration
- RealTek (r8169):
- add support for RTL8126A rev.b
- Vitesse (vsc73xx):
- implement FDB add/del/dump operations
- Freescale (fs_enet):
- use phylink
- Ethernet PHYs:
- vitesse: implement downshift and MDI-X in vsc73xx PHYs
- microchip: support LAN887x, supporting IEEE 802.3bw (100BASE-T1)
and IEEE 802.3bp (1000BASE-T1) specifications
- add Applied Micro QT2025 PHY driver (in Rust)
- add Motorcomm yt8821 2.5G Ethernet PHY driver
- CAN:
- add driver for Rockchip RK3568 CAN-FD controller
- flexcan: add wakeup support for imx95
- kvaser_usb: set hardware timestamp on transmitted packets
- WiFi:
- mac80211/cfg80211:
- EHT rate support in AQL airtime fairness
- handle DFS (radar detection) per link in Multi-Link Operation
- RealTek (rtw89):
- support RTL8852BT and 8852BE-VT (WiFi 6)
- support hardware rfkill
- support HW encryption in unicast management frames
- support Wake-on-WLAN with supported network detection
- RealTek (rtw88):
- improve Rx performance by using USB frame aggregation
- support USB 3 with RTL8822CU/RTL8822BU
- Intel (iwlwifi/mvm):
- offload RLC/SMPS functionality to firmware
- Marvell (mwifiex):
- add host based MLME to enable WPA3
- Bluetooth:
- add support for Amlogic HCI UART protocol
- add support for ISO data/packets to Intel and NXP drivers"
* tag 'net-next-6.12' of git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next: (1303 commits)
net/mlx5: HWS, check the correct variable in hws_send_ring_alloc_sq()
netfilter: nft_socket: Fix a NULL vs IS_ERR() bug in nft_socket_cgroup_subtree_level()
ice: Fix a NULL vs IS_ERR() check in probe()
ice: Fix a couple NULL vs IS_ERR() bugs
net: ethernet: fs_enet: Make the per clock optional
net: ti: icssg-prueth: Add multicast filtering support in HSR mode
net: ti: icssg-prueth: Enable HSR Tx duplication, Tx Tag and Rx Tag offload
net: ti: icssg-prueth: Add support for HSR frame forward offload
net: ti: icssg-prueth: Stop hardcoding def_inc
net: ti: icss-iep: Move icss_iep structure
net: ibm: emac: get rid of wol_irq
net: ibm: emac: remove all waiting code
net: ibm: emac: replace of_get_property
net: ibm: emac: use netdev's phydev directly
net: ibm: emac: use devm for register_netdev
net: ibm: emac: remove mii_bus with devm
net: ibm: emac: use devm for of_iomap
net: ibm: emac: manage emac_irq with devm
net: ibm: emac: use devm for alloc_etherdev
octeontx2-af: debugfs: Add Channel info to RPM map
...
Diffstat (limited to 'drivers/net/ethernet/xilinx/xilinx_axienet_main.c')
-rw-r--r-- | drivers/net/ethernet/xilinx/xilinx_axienet_main.c | 401
1 file changed, 352 insertions, 49 deletions
diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
index 9eb300fc3590..ea7d7c03f48e 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
@@ -415,6 +415,7 @@ static void axienet_set_mac_address(struct net_device *ndev,
 static int netdev_set_mac_address(struct net_device *ndev, void *p)
 {
 	struct sockaddr *addr = p;
+
 	axienet_set_mac_address(ndev, addr->sa_data);
 	return 0;
 }
@@ -436,24 +437,27 @@ static void axienet_set_multicast_list(struct net_device *ndev)
 	u32 reg, af0reg, af1reg;
 	struct axienet_local *lp = netdev_priv(ndev);
 
-	if (ndev->flags & (IFF_ALLMULTI | IFF_PROMISC) ||
-	    netdev_mc_count(ndev) > XAE_MULTICAST_CAM_TABLE_NUM) {
-		/* We must make the kernel realize we had to move into
-		 * promiscuous mode. If it was a promiscuous mode request
-		 * the flag is already set. If not we set it.
-		 */
-		ndev->flags |= IFF_PROMISC;
-		reg = axienet_ior(lp, XAE_FMI_OFFSET);
+	reg = axienet_ior(lp, XAE_FMI_OFFSET);
+	reg &= ~XAE_FMI_PM_MASK;
+	if (ndev->flags & IFF_PROMISC)
 		reg |= XAE_FMI_PM_MASK;
+	else
+		reg &= ~XAE_FMI_PM_MASK;
+	axienet_iow(lp, XAE_FMI_OFFSET, reg);
+
+	if (ndev->flags & IFF_ALLMULTI ||
+	    netdev_mc_count(ndev) > XAE_MULTICAST_CAM_TABLE_NUM) {
+		reg &= 0xFFFFFF00;
 		axienet_iow(lp, XAE_FMI_OFFSET, reg);
-		dev_info(&ndev->dev, "Promiscuous mode enabled.\n");
+		axienet_iow(lp, XAE_AF0_OFFSET, 1); /* Multicast bit */
+		axienet_iow(lp, XAE_AF1_OFFSET, 0);
+		axienet_iow(lp, XAE_AM0_OFFSET, 1); /* ditto */
+		axienet_iow(lp, XAE_AM1_OFFSET, 0);
+		axienet_iow(lp, XAE_FFE_OFFSET, 1);
+		i = 1;
 	} else if (!netdev_mc_empty(ndev)) {
 		struct netdev_hw_addr *ha;
 
-		reg = axienet_ior(lp, XAE_FMI_OFFSET);
-		reg &= ~XAE_FMI_PM_MASK;
-		axienet_iow(lp, XAE_FMI_OFFSET, reg);
-
 		netdev_for_each_mc_addr(ha, ndev) {
 			if (i >= XAE_MULTICAST_CAM_TABLE_NUM)
 				break;
@@ -466,25 +470,21 @@ static void axienet_set_multicast_list(struct net_device *ndev)
 			af1reg = (ha->addr[4]);
 			af1reg |= (ha->addr[5] << 8);
 
-			reg = axienet_ior(lp, XAE_FMI_OFFSET) & 0xFFFFFF00;
+			reg &= 0xFFFFFF00;
 			reg |= i;
 
 			axienet_iow(lp, XAE_FMI_OFFSET, reg);
 			axienet_iow(lp, XAE_AF0_OFFSET, af0reg);
 			axienet_iow(lp, XAE_AF1_OFFSET, af1reg);
+			axienet_iow(lp, XAE_AM0_OFFSET, 0xffffffff);
+			axienet_iow(lp, XAE_AM1_OFFSET, 0x0000ffff);
 			axienet_iow(lp, XAE_FFE_OFFSET, 1);
 			i++;
 		}
-	} else {
-		reg = axienet_ior(lp, XAE_FMI_OFFSET);
-		reg &= ~XAE_FMI_PM_MASK;
-
-		axienet_iow(lp, XAE_FMI_OFFSET, reg);
-		dev_info(&ndev->dev, "Promiscuous mode disabled.\n");
 	}
 
 	for (; i < XAE_MULTICAST_CAM_TABLE_NUM; i++) {
-		reg = axienet_ior(lp, XAE_FMI_OFFSET) & 0xFFFFFF00;
+		reg &= 0xFFFFFF00;
 		reg |= i;
 		axienet_iow(lp, XAE_FMI_OFFSET, reg);
 		axienet_iow(lp, XAE_FFE_OFFSET, 0);
@@ -519,11 +519,55 @@ static void axienet_setoptions(struct net_device *ndev, u32 options)
 	lp->options |= options;
 }
 
+static u64 axienet_stat(struct axienet_local *lp, enum temac_stat stat)
+{
+	u32 counter;
+
+	if (lp->reset_in_progress)
+		return lp->hw_stat_base[stat];
+
+	counter = axienet_ior(lp, XAE_STATS_OFFSET + stat * 8);
+	return lp->hw_stat_base[stat] + (counter - lp->hw_last_counter[stat]);
+}
+
+static void axienet_stats_update(struct axienet_local *lp, bool reset)
+{
+	enum temac_stat stat;
+
+	write_seqcount_begin(&lp->hw_stats_seqcount);
+	lp->reset_in_progress = reset;
+	for (stat = 0; stat < STAT_COUNT; stat++) {
+		u32 counter = axienet_ior(lp, XAE_STATS_OFFSET + stat * 8);
+
+		lp->hw_stat_base[stat] += counter - lp->hw_last_counter[stat];
+		lp->hw_last_counter[stat] = counter;
+	}
+	write_seqcount_end(&lp->hw_stats_seqcount);
+}
+
+static void axienet_refresh_stats(struct work_struct *work)
+{
+	struct axienet_local *lp = container_of(work, struct axienet_local,
+						stats_work.work);
+
+	mutex_lock(&lp->stats_lock);
+	axienet_stats_update(lp, false);
+	mutex_unlock(&lp->stats_lock);
+
+	/* Just less than 2^32 bytes at 2.5 GBit/s */
+	schedule_delayed_work(&lp->stats_work, 13 * HZ);
+}
+
 static int __axienet_device_reset(struct axienet_local *lp)
 {
 	u32 value;
 	int ret;
 
+	/* Save statistics counters in case they will be reset */
+	mutex_lock(&lp->stats_lock);
+	if (lp->features & XAE_FEATURE_STATS)
+		axienet_stats_update(lp, true);
+
 	/* Reset Axi DMA. This would reset Axi Ethernet core as well. The reset
 	 * process of Axi DMA takes a while to complete as all pending
 	 * commands/transfers will be flushed or completed during this
@@ -538,7 +582,7 @@ static int __axienet_device_reset(struct axienet_local *lp)
 				XAXIDMA_TX_CR_OFFSET);
 	if (ret) {
 		dev_err(lp->dev, "%s: DMA reset timeout!\n", __func__);
-		return ret;
+		goto out;
 	}
 
 	/* Wait for PhyRstCmplt bit to be set, indicating the PHY reset has finished */
@@ -548,10 +592,29 @@ static int __axienet_device_reset(struct axienet_local *lp)
 				     XAE_IS_OFFSET);
 	if (ret) {
 		dev_err(lp->dev, "%s: timeout waiting for PhyRstCmplt\n", __func__);
-		return ret;
+		goto out;
 	}
 
-	return 0;
+	/* Update statistics counters with new values */
+	if (lp->features & XAE_FEATURE_STATS) {
+		enum temac_stat stat;
+
+		write_seqcount_begin(&lp->hw_stats_seqcount);
+		lp->reset_in_progress = false;
+		for (stat = 0; stat < STAT_COUNT; stat++) {
+			u32 counter =
+				axienet_ior(lp, XAE_STATS_OFFSET + stat * 8);
+
+			lp->hw_stat_base[stat] +=
+				lp->hw_last_counter[stat] - counter;
+			lp->hw_last_counter[stat] = counter;
+		}
+		write_seqcount_end(&lp->hw_stats_seqcount);
+	}
+
+out:
+	mutex_unlock(&lp->stats_lock);
+	return ret;
 }
 
 /**
@@ -614,8 +677,7 @@ static int axienet_device_reset(struct net_device *ndev)
 	lp->options |= XAE_OPTION_VLAN;
 	lp->options &= (~XAE_OPTION_JUMBO);
 
-	if ((ndev->mtu > XAE_MTU) &&
-	    (ndev->mtu <= XAE_JUMBO_MTU)) {
+	if (ndev->mtu > XAE_MTU && ndev->mtu <= XAE_JUMBO_MTU) {
 		lp->max_frm_size = ndev->mtu + VLAN_ETH_HLEN +
 					XAE_TRL_SIZE;
 
@@ -1126,9 +1188,7 @@ static int axienet_rx_poll(struct napi_struct *napi, int budget)
 			    csumstatus == XAE_IP_UDP_CSUM_VALIDATED) {
 				skb->ip_summed = CHECKSUM_UNNECESSARY;
 			}
-		} else if ((lp->features & XAE_FEATURE_PARTIAL_RX_CSUM) != 0 &&
-			   skb->protocol == htons(ETH_P_IP) &&
-			   skb->len > 64) {
+		} else if (lp->features & XAE_FEATURE_PARTIAL_RX_CSUM) {
 			skb->csum = be32_to_cpu(cur_p->app3 & 0xFFFF);
 			skb->ip_summed = CHECKSUM_COMPLETE;
 		}
@@ -1297,7 +1357,7 @@ static irqreturn_t axienet_eth_irq(int irq, void *_ndev)
 		ndev->stats.rx_missed_errors++;
 
 	if (pending & XAE_INT_RXRJECT_MASK)
-		ndev->stats.rx_frame_errors++;
+		ndev->stats.rx_dropped++;
 
 	axienet_iow(lp, XAE_IS_OFFSET, pending);
 	return IRQ_HANDLED;
@@ -1516,8 +1576,6 @@ static int axienet_open(struct net_device *ndev)
 	int ret;
 	struct axienet_local *lp = netdev_priv(ndev);
 
-	dev_dbg(&ndev->dev, "%s\n", __func__);
-
 	/* When we do an Axi Ethernet reset, it resets the complete core
 	 * including the MDIO. MDIO must be disabled before resetting.
 	 * Hold MDIO bus lock to avoid MDIO accesses during the reset.
@@ -1534,6 +1592,9 @@ static int axienet_open(struct net_device *ndev)
 
 	phylink_start(lp->phylink);
 
+	/* Start the statistics refresh work */
+	schedule_delayed_work(&lp->stats_work, 0);
+
 	if (lp->use_dmaengine) {
 		/* Enable interrupts for Axi Ethernet core (if defined) */
 		if (lp->eth_irq > 0) {
@@ -1558,6 +1619,7 @@ err_free_eth_irq:
 	if (lp->eth_irq > 0)
 		free_irq(lp->eth_irq, ndev);
 err_phy:
+	cancel_delayed_work_sync(&lp->stats_work);
 	phylink_stop(lp->phylink);
 	phylink_disconnect_phy(lp->phylink);
 	return ret;
@@ -1578,8 +1640,6 @@ static int axienet_stop(struct net_device *ndev)
 	struct axienet_local *lp = netdev_priv(ndev);
 	int i;
 
-	dev_dbg(&ndev->dev, "axienet_close()\n");
-
 	if (!lp->use_dmaengine) {
 		WRITE_ONCE(lp->stopping, true);
 		flush_work(&lp->dma_err_task);
@@ -1588,6 +1648,8 @@ static int axienet_stop(struct net_device *ndev)
 		napi_disable(&lp->napi_rx);
 	}
 
+	cancel_delayed_work_sync(&lp->stats_work);
+
 	phylink_stop(lp->phylink);
 	phylink_disconnect_phy(lp->phylink);
 
@@ -1662,6 +1724,7 @@ static int axienet_change_mtu(struct net_device *ndev, int new_mtu)
 static void axienet_poll_controller(struct net_device *ndev)
 {
 	struct axienet_local *lp = netdev_priv(ndev);
+
 	disable_irq(lp->tx_irq);
 	disable_irq(lp->rx_irq);
 	axienet_rx_irq(lp->tx_irq, ndev);
@@ -1700,6 +1763,35 @@ axienet_get_stats64(struct net_device *dev, struct rtnl_link_stats64 *stats)
 		stats->tx_packets = u64_stats_read(&lp->tx_packets);
 		stats->tx_bytes = u64_stats_read(&lp->tx_bytes);
 	} while (u64_stats_fetch_retry(&lp->tx_stat_sync, start));
+
+	if (!(lp->features & XAE_FEATURE_STATS))
+		return;
+
+	do {
+		start = read_seqcount_begin(&lp->hw_stats_seqcount);
+		stats->rx_length_errors =
+			axienet_stat(lp, STAT_RX_LENGTH_ERRORS);
+		stats->rx_crc_errors = axienet_stat(lp, STAT_RX_FCS_ERRORS);
+		stats->rx_frame_errors =
+			axienet_stat(lp, STAT_RX_ALIGNMENT_ERRORS);
+		stats->rx_errors = axienet_stat(lp, STAT_UNDERSIZE_FRAMES) +
+				   axienet_stat(lp, STAT_FRAGMENT_FRAMES) +
+				   stats->rx_length_errors +
+				   stats->rx_crc_errors +
+				   stats->rx_frame_errors;
+		stats->multicast = axienet_stat(lp, STAT_RX_MULTICAST_FRAMES);
+
+		stats->tx_aborted_errors =
+			axienet_stat(lp, STAT_TX_EXCESS_COLLISIONS);
+		stats->tx_fifo_errors =
+			axienet_stat(lp, STAT_TX_UNDERRUN_ERRORS);
+		stats->tx_window_errors =
+			axienet_stat(lp, STAT_TX_LATE_COLLISIONS);
+		stats->tx_errors = axienet_stat(lp, STAT_TX_EXCESS_DEFERRAL) +
+				   stats->tx_aborted_errors +
+				   stats->tx_fifo_errors +
+				   stats->tx_window_errors;
+	} while (read_seqcount_retry(&lp->hw_stats_seqcount, start));
 }
 
 static const struct net_device_ops axienet_netdev_ops = {
@@ -1992,6 +2084,213 @@ static int axienet_ethtools_nway_reset(struct net_device *dev)
 	return phylink_ethtool_nway_reset(lp->phylink);
 }
 
+static void axienet_ethtools_get_ethtool_stats(struct net_device *dev,
+					       struct ethtool_stats *stats,
+					       u64 *data)
+{
+	struct axienet_local *lp = netdev_priv(dev);
+	unsigned int start;
+
+	do {
+		start = read_seqcount_begin(&lp->hw_stats_seqcount);
+		data[0] = axienet_stat(lp, STAT_RX_BYTES);
+		data[1] = axienet_stat(lp, STAT_TX_BYTES);
+		data[2] = axienet_stat(lp, STAT_RX_VLAN_FRAMES);
+		data[3] = axienet_stat(lp, STAT_TX_VLAN_FRAMES);
+		data[6] = axienet_stat(lp, STAT_TX_PFC_FRAMES);
+		data[7] = axienet_stat(lp, STAT_RX_PFC_FRAMES);
+		data[8] = axienet_stat(lp, STAT_USER_DEFINED0);
+		data[9] = axienet_stat(lp, STAT_USER_DEFINED1);
+		data[10] = axienet_stat(lp, STAT_USER_DEFINED2);
+	} while (read_seqcount_retry(&lp->hw_stats_seqcount, start));
+}
+
+static const char axienet_ethtool_stats_strings[][ETH_GSTRING_LEN] = {
+	"Received bytes",
+	"Transmitted bytes",
+	"RX Good VLAN Tagged Frames",
+	"TX Good VLAN Tagged Frames",
+	"TX Good PFC Frames",
+	"RX Good PFC Frames",
+	"User Defined Counter 0",
+	"User Defined Counter 1",
+	"User Defined Counter 2",
+};
+
+static void axienet_ethtools_get_strings(struct net_device *dev, u32 stringset, u8 *data)
+{
+	switch (stringset) {
+	case ETH_SS_STATS:
+		memcpy(data, axienet_ethtool_stats_strings,
+		       sizeof(axienet_ethtool_stats_strings));
+		break;
+	}
+}
+
+static int axienet_ethtools_get_sset_count(struct net_device *dev, int sset)
+{
+	struct axienet_local *lp = netdev_priv(dev);
+
+	switch (sset) {
+	case ETH_SS_STATS:
+		if (lp->features & XAE_FEATURE_STATS)
+			return ARRAY_SIZE(axienet_ethtool_stats_strings);
+		fallthrough;
+	default:
+		return -EOPNOTSUPP;
+	}
+}
+
+static void
+axienet_ethtools_get_pause_stats(struct net_device *dev,
+				 struct ethtool_pause_stats *pause_stats)
+{
+	struct axienet_local *lp = netdev_priv(dev);
+	unsigned int start;
+
+	if (!(lp->features & XAE_FEATURE_STATS))
+		return;
+
+	do {
+		start = read_seqcount_begin(&lp->hw_stats_seqcount);
+		pause_stats->tx_pause_frames =
+			axienet_stat(lp, STAT_TX_PAUSE_FRAMES);
+		pause_stats->rx_pause_frames =
+			axienet_stat(lp, STAT_RX_PAUSE_FRAMES);
+	} while (read_seqcount_retry(&lp->hw_stats_seqcount, start));
+}
+
+static void
+axienet_ethtool_get_eth_mac_stats(struct net_device *dev,
+				  struct ethtool_eth_mac_stats *mac_stats)
+{
+	struct axienet_local *lp = netdev_priv(dev);
+	unsigned int start;
+
+	if (!(lp->features & XAE_FEATURE_STATS))
+		return;
+
+	do {
+		start = read_seqcount_begin(&lp->hw_stats_seqcount);
+		mac_stats->FramesTransmittedOK =
+			axienet_stat(lp, STAT_TX_GOOD_FRAMES);
+		mac_stats->SingleCollisionFrames =
+			axienet_stat(lp, STAT_TX_SINGLE_COLLISION_FRAMES);
+		mac_stats->MultipleCollisionFrames =
+			axienet_stat(lp, STAT_TX_MULTIPLE_COLLISION_FRAMES);
+		mac_stats->FramesReceivedOK =
+			axienet_stat(lp, STAT_RX_GOOD_FRAMES);
+		mac_stats->FrameCheckSequenceErrors =
+			axienet_stat(lp, STAT_RX_FCS_ERRORS);
+		mac_stats->AlignmentErrors =
+			axienet_stat(lp, STAT_RX_ALIGNMENT_ERRORS);
+		mac_stats->FramesWithDeferredXmissions =
+			axienet_stat(lp, STAT_TX_DEFERRED_FRAMES);
+		mac_stats->LateCollisions =
+			axienet_stat(lp, STAT_TX_LATE_COLLISIONS);
+		mac_stats->FramesAbortedDueToXSColls =
+			axienet_stat(lp, STAT_TX_EXCESS_COLLISIONS);
+		mac_stats->MulticastFramesXmittedOK =
+			axienet_stat(lp, STAT_TX_MULTICAST_FRAMES);
+		mac_stats->BroadcastFramesXmittedOK =
+			axienet_stat(lp, STAT_TX_BROADCAST_FRAMES);
+		mac_stats->FramesWithExcessiveDeferral =
+			axienet_stat(lp, STAT_TX_EXCESS_DEFERRAL);
+		mac_stats->MulticastFramesReceivedOK =
+			axienet_stat(lp, STAT_RX_MULTICAST_FRAMES);
+		mac_stats->BroadcastFramesReceivedOK =
+			axienet_stat(lp, STAT_RX_BROADCAST_FRAMES);
+		mac_stats->InRangeLengthErrors =
+			axienet_stat(lp, STAT_RX_LENGTH_ERRORS);
+	} while (read_seqcount_retry(&lp->hw_stats_seqcount, start));
+}
+
+static void
+axienet_ethtool_get_eth_ctrl_stats(struct net_device *dev,
+				   struct ethtool_eth_ctrl_stats *ctrl_stats)
+{
+	struct axienet_local *lp = netdev_priv(dev);
+	unsigned int start;
+
+	if (!(lp->features & XAE_FEATURE_STATS))
+		return;
+
+	do {
+		start = read_seqcount_begin(&lp->hw_stats_seqcount);
+		ctrl_stats->MACControlFramesTransmitted =
+			axienet_stat(lp, STAT_TX_CONTROL_FRAMES);
+		ctrl_stats->MACControlFramesReceived =
+			axienet_stat(lp, STAT_RX_CONTROL_FRAMES);
+		ctrl_stats->UnsupportedOpcodesReceived =
+			axienet_stat(lp, STAT_RX_CONTROL_OPCODE_ERRORS);
+	} while (read_seqcount_retry(&lp->hw_stats_seqcount, start));
+}
+
+static const struct ethtool_rmon_hist_range axienet_rmon_ranges[] = {
+	{   64,    64 },
+	{   65,   127 },
+	{  128,   255 },
+	{  256,   511 },
+	{  512,  1023 },
+	{ 1024,  1518 },
+	{ 1519, 16384 },
+	{ },
+};
+
+static void
+axienet_ethtool_get_rmon_stats(struct net_device *dev,
+			       struct ethtool_rmon_stats *rmon_stats,
+			       const struct ethtool_rmon_hist_range **ranges)
+{
+	struct axienet_local *lp = netdev_priv(dev);
+	unsigned int start;
+
+	if (!(lp->features & XAE_FEATURE_STATS))
+		return;
+
+	do {
+		start = read_seqcount_begin(&lp->hw_stats_seqcount);
+		rmon_stats->undersize_pkts =
+			axienet_stat(lp, STAT_UNDERSIZE_FRAMES);
+		rmon_stats->oversize_pkts =
+			axienet_stat(lp, STAT_RX_OVERSIZE_FRAMES);
+		rmon_stats->fragments =
+			axienet_stat(lp, STAT_FRAGMENT_FRAMES);
+
+		rmon_stats->hist[0] =
+			axienet_stat(lp, STAT_RX_64_BYTE_FRAMES);
+		rmon_stats->hist[1] =
+			axienet_stat(lp, STAT_RX_65_127_BYTE_FRAMES);
+		rmon_stats->hist[2] =
+			axienet_stat(lp, STAT_RX_128_255_BYTE_FRAMES);
+		rmon_stats->hist[3] =
+			axienet_stat(lp, STAT_RX_256_511_BYTE_FRAMES);
+		rmon_stats->hist[4] =
+			axienet_stat(lp, STAT_RX_512_1023_BYTE_FRAMES);
+		rmon_stats->hist[5] =
+			axienet_stat(lp, STAT_RX_1024_MAX_BYTE_FRAMES);
+		rmon_stats->hist[6] =
+			rmon_stats->oversize_pkts;
+
+		rmon_stats->hist_tx[0] =
+			axienet_stat(lp, STAT_TX_64_BYTE_FRAMES);
+		rmon_stats->hist_tx[1] =
+			axienet_stat(lp, STAT_TX_65_127_BYTE_FRAMES);
+		rmon_stats->hist_tx[2] =
+			axienet_stat(lp, STAT_TX_128_255_BYTE_FRAMES);
+		rmon_stats->hist_tx[3] =
+			axienet_stat(lp, STAT_TX_256_511_BYTE_FRAMES);
+		rmon_stats->hist_tx[4] =
+			axienet_stat(lp, STAT_TX_512_1023_BYTE_FRAMES);
+		rmon_stats->hist_tx[5] =
+			axienet_stat(lp, STAT_TX_1024_MAX_BYTE_FRAMES);
+		rmon_stats->hist_tx[6] =
+			axienet_stat(lp, STAT_TX_OVERSIZE_FRAMES);
+	} while (read_seqcount_retry(&lp->hw_stats_seqcount, start));
+
+	*ranges = axienet_rmon_ranges;
+}
+
 static const struct ethtool_ops axienet_ethtool_ops = {
 	.supported_coalesce_params = ETHTOOL_COALESCE_MAX_FRAMES |
 				     ETHTOOL_COALESCE_USECS,
@@ -2008,6 +2307,13 @@ static const struct ethtool_ops axienet_ethtool_ops = {
 	.get_link_ksettings = axienet_ethtools_get_link_ksettings,
 	.set_link_ksettings = axienet_ethtools_set_link_ksettings,
 	.nway_reset = axienet_ethtools_nway_reset,
+	.get_ethtool_stats = axienet_ethtools_get_ethtool_stats,
+	.get_strings = axienet_ethtools_get_strings,
+	.get_sset_count = axienet_ethtools_get_sset_count,
+	.get_pause_stats = axienet_ethtools_get_pause_stats,
+	.get_eth_mac_stats = axienet_ethtool_get_eth_mac_stats,
+	.get_eth_ctrl_stats = axienet_ethtool_get_eth_ctrl_stats,
+	.get_rmon_stats = axienet_ethtool_get_rmon_stats,
 };
 
 static struct axienet_local *pcs_to_axienet_local(struct phylink_pcs *pcs)
@@ -2280,6 +2586,10 @@ static int axienet_probe(struct platform_device *pdev)
 	u64_stats_init(&lp->rx_stat_sync);
 	u64_stats_init(&lp->tx_stat_sync);
 
+	mutex_init(&lp->stats_lock);
+	seqcount_mutex_init(&lp->hw_stats_seqcount, &lp->stats_lock);
+	INIT_DEFERRABLE_WORK(&lp->stats_work, axienet_refresh_stats);
+
 	lp->axi_clk = devm_clk_get_optional(&pdev->dev, "s_axi_lite_clk");
 	if (!lp->axi_clk) {
 		/* For backward compatibility, if named AXI clock is not present,
@@ -2320,42 +2630,35 @@ static int axienet_probe(struct platform_device *pdev)
 	/* Setup checksum offload, but default to off if not specified */
 	lp->features = 0;
 
+	if (axienet_ior(lp, XAE_ABILITY_OFFSET) & XAE_ABILITY_STATS)
+		lp->features |= XAE_FEATURE_STATS;
+
 	ret = of_property_read_u32(pdev->dev.of_node, "xlnx,txcsum", &value);
 	if (!ret) {
 		switch (value) {
 		case 1:
-			lp->csum_offload_on_tx_path =
-				XAE_FEATURE_PARTIAL_TX_CSUM;
 			lp->features |= XAE_FEATURE_PARTIAL_TX_CSUM;
-			/* Can checksum TCP/UDP over IPv4. */
-			ndev->features |= NETIF_F_IP_CSUM;
+			/* Can checksum any contiguous range */
+			ndev->features |= NETIF_F_HW_CSUM;
 			break;
 		case 2:
-			lp->csum_offload_on_tx_path =
-				XAE_FEATURE_FULL_TX_CSUM;
 			lp->features |= XAE_FEATURE_FULL_TX_CSUM;
 			/* Can checksum TCP/UDP over IPv4. */
 			ndev->features |= NETIF_F_IP_CSUM;
 			break;
-		default:
-			lp->csum_offload_on_tx_path = XAE_NO_CSUM_OFFLOAD;
 		}
 	}
 	ret = of_property_read_u32(pdev->dev.of_node, "xlnx,rxcsum", &value);
 	if (!ret) {
 		switch (value) {
 		case 1:
-			lp->csum_offload_on_rx_path =
-				XAE_FEATURE_PARTIAL_RX_CSUM;
 			lp->features |= XAE_FEATURE_PARTIAL_RX_CSUM;
+			ndev->features |= NETIF_F_RXCSUM;
 			break;
 		case 2:
-			lp->csum_offload_on_rx_path =
-				XAE_FEATURE_FULL_RX_CSUM;
 			lp->features |= XAE_FEATURE_FULL_RX_CSUM;
+			ndev->features |= NETIF_F_RXCSUM;
 			break;
-		default:
-			lp->csum_offload_on_rx_path = XAE_NO_CSUM_OFFLOAD;
 		}
 	}
 	/* For supporting jumbo frames, the Axi Ethernet hardware must have
@@ -2405,7 +2708,7 @@ static int axienet_probe(struct platform_device *pdev)
 		goto cleanup_clk;
 	}
 
-	if (!of_find_property(pdev->dev.of_node, "dmas", NULL)) {
+	if (!of_property_present(pdev->dev.of_node, "dmas")) {
 		/* Find the DMA node, map the DMA registers, and decode the DMA IRQs */
 		np = of_parse_phandle(pdev->dev.of_node, "axistream-connected", 0);
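
The statistics plumbing in the diff above combines three pieces: a
mutex so only one writer folds hardware deltas into 64-bit totals, a
seqcount_mutex_t so readers can sample without locking and simply retry
if they race a writer, and u32 subtraction so a wrapped hardware
counter still yields the correct delta. The 13 * HZ refresh interval
follows from arithmetic: at 2.5 Gbit/s (about 312.5 MB/s) a 32-bit byte
counter wraps after 2^32 bytes, roughly 13.7 seconds, so refreshing
every 13 s guarantees at most one wrap between snapshots. A condensed,
hedged sketch of the same pattern, where read_u32_counter() is a
hypothetical stand-in for axienet_ior():

    #include <linux/mutex.h>
    #include <linux/seqlock.h>
    #include <linux/types.h>

    u32 read_u32_counter(void); /* hypothetical hardware accessor */

    struct stats_ctx {
            struct mutex lock;    /* serializes writers */
            seqcount_mutex_t seq; /* lets readers detect races */
            u64 base;             /* accumulated 64-bit total */
            u32 last;             /* last raw hardware snapshot */
    };

    /* Writer: fold the delta since the last snapshot into the total.
     * Called with ctx->lock held, at least once per counter wrap period.
     */
    static void stats_update(struct stats_ctx *ctx)
    {
            u32 now = read_u32_counter();

            write_seqcount_begin(&ctx->seq);
            ctx->base += now - ctx->last; /* u32 math absorbs one wrap */
            ctx->last = now;
            write_seqcount_end(&ctx->seq);
    }

    /* Reader: lockless; retries if a writer updated base/last meanwhile */
    static u64 stats_read(struct stats_ctx *ctx)
    {
            unsigned int start;
            u64 total;

            do {
                    start = read_seqcount_begin(&ctx->seq);
                    total = ctx->base + (read_u32_counter() - ctx->last);
            } while (read_seqcount_retry(&ctx->seq, start));

            return total;
    }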
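
On the userspace side, the new counters surface through the standard
ethtool stats interfaces. A hedged sketch of reading them with the
classic SIOCETHTOOL/ETHTOOL_GSTATS ioctl pair ("eth0" is an example
interface name; "ethtool -S eth0" does the same with names attached):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <linux/ethtool.h>
    #include <linux/sockios.h>

    int main(void)
    {
            struct ethtool_drvinfo drvinfo = { .cmd = ETHTOOL_GDRVINFO };
            struct ethtool_stats *stats;
            struct ifreq ifr = {0};
            int fd = socket(AF_INET, SOCK_DGRAM, 0);
            unsigned int i;

            strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1); /* example name */
            ifr.ifr_data = (void *)&drvinfo;
            if (fd < 0 || ioctl(fd, SIOCETHTOOL, &ifr))
                    return 1;

            /* drvinfo.n_stats tells us how many u64 slots to allocate */
            stats = calloc(1, sizeof(*stats) + drvinfo.n_stats * sizeof(__u64));
            if (!stats)
                    return 1;
            stats->cmd = ETHTOOL_GSTATS;
            stats->n_stats = drvinfo.n_stats;
            ifr.ifr_data = (void *)stats;
            if (ioctl(fd, SIOCETHTOOL, &ifr))
                    return 1;

            for (i = 0; i < stats->n_stats; i++)
                    printf("stat[%u] = %llu\n", i,
                           (unsigned long long)stats->data[i]);
            free(stats);
            return 0;
    }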