author     Daniel Borkmann <daniel@iogearbox.net>    2020-11-14 02:29:00 +0100
committer  Daniel Borkmann <daniel@iogearbox.net>    2020-11-14 02:30:03 +0100
commit     c14d61fca0d10498bf267c0ab1f381dd0b35d96b (patch)
tree       710de0411e07d3ebe0233c8eeb5f80245cf3b63b /tools/include
parent     6f100640ca5b2a2ff67b001c9fd3de21f7b12cf2 (diff)
parent     b87c57ae12dbecd50471b437e09e3f7dc916d8bc (diff)
Merge branch 'xdp-redirect-bulk'
Lorenzo Bianconi says:
====================
The XDP bulk APIs introduce a defer/flush mechanism to return
pages belonging to the same xdp_mem_allocator object
(identified via the mem.id field) in bulk, in order to optimize
I-cache and D-cache usage, since xdp_return_frame is usually
run inside the driver's NAPI TX completion loop.
Convert the mvneta, mvpp2 and mlx5 drivers to the xdp_return_frame_bulk APIs.
More details on benchmarks run on mlx5 can be found here:
https://github.com/xdp-project/xdp-project/blob/master/areas/mem/xdp_bulk_return01.org
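As a rough illustration of the intended driver-side usage, here is a minimal
sketch of a NAPI TX completion handler built around the new helpers; the
my_txq_complete() function, the my_tx_queue structure and its xdpf[] array are
hypothetical placeholders, only xdp_frame_bulk_init(), xdp_return_frame_bulk()
and xdp_flush_frame_bulk() come from this series:

        /* Hypothetical TX completion path using the bulk return API */
        static void my_txq_complete(struct my_tx_queue *txq, int done)
        {
                struct xdp_frame_bulk bq;
                int i;

                xdp_frame_bulk_init(&bq);

                rcu_read_lock(); /* needed around xdp_return_frame_bulk() */

                for (i = 0; i < done; i++) {
                        struct xdp_frame *xdpf = txq->xdpf[i];

                        /* Frames sharing the same mem.id are queued in bq
                         * and returned to their allocator in one bulk step.
                         */
                        xdp_return_frame_bulk(xdpf, &bq);
                }

                /* Release whatever is still queued in the bulk structure */
                xdp_flush_frame_bulk(&bq);

                rcu_read_unlock();
        }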
Changes since v5:
- do not keep looping over the ptr_ring if the cache is full; instead, release
  the leftover pages via page_pool_return_page (see the sketch after this list)
Changes since v4:
- fix comments
- introduce xdp_frame_bulk_init utility routine
- compiler annotations for I-cache code layout
- move rcu_read_lock outside fast-path
- mlx5 xdp bulking code optimization
Changes since v3:
- align DEV_MAP_BULK_SIZE to XDP_BULK_QUEUE_SIZE
- refactor page_pool_put_page_bulk to avoid code duplication
Changes since v2:
- move the mvneta changes into a dedicated patch
Changes since v1:
- improve comments
- rework xdp_return_frame_bulk routine logic
- move the count and xa fields to the beginning of the xdp_frame_bulk struct
- invert the logic in the page_pool_put_page_bulk for loop
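The v5 change above roughly corresponds to the flush path sketched below
(simplified and approximate, not the exact mainline code; pool, data[] and
bulk_len are assumed to be the page_pool, the array of pages already approved
for recycling, and their count):

        /* Bulk-produce into the ptr_ring cache; stop at the first failure */
        page_pool_ring_lock(pool);
        for (i = 0; i < bulk_len; i++) {
                if (__ptr_ring_produce(&pool->ring, data[i]))
                        break; /* ring full */
        }
        page_pool_ring_unlock(pool);

        if (likely(i == bulk_len))
                return;

        /* ptr_ring cache is full: do not keep looping over it, release
         * the leftover pages one by one outside the producer lock.
         */
        for (; i < bulk_len; i++)
                page_pool_return_page(pool, data[i]);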
====================
Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
Acked-by: Jesper Dangaard Brouer <brouer@redhat.com>