Merge tag 'mlx5-XDP-100Mpps' of git://git.kernel.org/pub/scm/linux/kernel/git/saeed/linux
Saeed Mahameed says:
====================
mlx5-XDP-100Mpps
This series from Tariq mainly adds support for the mlx5 Multi-Packet WQE
(TX descriptor) - ConnectX-5 and above - for XDP TX, which allows us to
overcome the 70 Mpps PCIe bottleneck of conventional TX queues (a single
TX descriptor per packet) and reach the 100 Mpps milestone with the MPWQE
approach.
In the first five patches, Tariq makes minor improvements to the mlx5 TX
path for better debuggability and code structuring.
The next two patches lay the foundation for the MPWQE implementation:
storing the in-flight XDP TX information for the multiple packets of a
single descriptor (WQE).
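For illustration only, here is a minimal, hypothetical C sketch of such
bookkeeping - a FIFO of per-frame TX info from which one completion can
release several entries. The names and fields are assumptions, not the
actual mlx5 structures:

  #include <stdint.h>

  /* Hypothetical per-frame record kept until the completion (CQE) of
   * its WQE arrives, so the frame's DMA mapping/page can be released.
   */
  struct xdp_tx_info {
          void     *frame;    /* XDP frame or page being transmitted */
          uint64_t  dma_addr; /* DMA address to unmap on completion  */
          uint32_t  len;      /* frame length in bytes               */
  };

  /* Hypothetical FIFO of in-flight frames.  With MPWQE a single TX
   * descriptor (WQE) carries several packets, so one completion must
   * pop a variable number of entries.
   */
  struct xdp_tx_info_fifo {
          struct xdp_tx_info *xi;   /* ring of in-flight entries    */
          uint32_t mask;            /* ring size - 1 (power of two) */
          uint32_t pc;              /* producer counter             */
          uint32_t cc;              /* consumer counter             */
  };

  static inline void xdp_fifo_push(struct xdp_tx_info_fifo *f,
                                   struct xdp_tx_info xi)
  {
          f->xi[f->pc++ & f->mask] = xi;
  }

  static inline struct xdp_tx_info xdp_fifo_pop(struct xdp_tx_info_fifo *f)
  {
          return f->xi[f->cc++ & f->mask];
  }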
The next patch, "Support Enhanced Multi-Packet TX WQE for XDP", adds
support for this HW feature, which is available starting from ConnectX-5.
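As a rough illustration of the batching idea only - the names, the limit
and the omitted descriptor writes below are assumptions, not the real
mlx5 code:

  #include <stdbool.h>
  #include <stdint.h>

  #define MAX_PKTS_PER_MPWQE 16  /* assumed per-WQE packet budget */

  /* Hypothetical state of the Multi-Packet WQE currently being built. */
  struct mpwqe_session {
          void    *wqe;       /* TX descriptor being filled   */
          uint8_t  ds_count;  /* data segments written so far */
          uint8_t  pkt_count; /* packets packed into this WQE */
  };

  /* Try to append one XDP frame to the open session; returns false
   * when the WQE is full and must be posted (closed) first.
   */
  static bool mpwqe_add_frame(struct mpwqe_session *s,
                              uint64_t dma_addr, uint32_t len)
  {
          if (s->pkt_count >= MAX_PKTS_PER_MPWQE)
                  return false;

          /* ... write a data segment (dma_addr, len) into s->wqe ... */
          (void)dma_addr;
          (void)len;
          s->ds_count++;
          s->pkt_count++;
          return true;
  }

In such a scheme one descriptor describes many packets, so the device
fetches far fewer descriptors over PCIe - which is what relieves the
bottleneck mentioned above.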
Performance:
Tested packet rate for 64-byte UDP multi-stream traffic over ConnectX-5 NICs.
CPU: Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
XDP_TX:
We see a huge gain on a single-port ConnectX-5 and reach the 100 Mpps
milestone.
* Single-port HCA:
Before: 70 Mpps
After: 100 Mpps (+42.8%)
* Dual-port HCA:
Before: 51.7 Mpps
After: 57.3 Mpps (+10.8%)
* In both cases we tested traffic on one port. For now we see only a
small gain on dual-port HCAs; we are working to overcome this
bottleneck, but at the moment the numbers seen on single-port HCAs can
be reached on dual-port HCAs only with experimental firmware.
XDP_REDIRECT:
Redirect from (A) ConnectX-5 to (B) ConnectX-5.
Due to a setup limitation, (A) and (B) are on different NUMA nodes,
so absolute performance numbers are not optimal.
- Note:
The numbers below are the transmit rate of (B), not the redirect rate
of (A), which is in some cases higher.
* (B) is single-port:
Before: 77 Mpps
After: 90 Mpps (+16.8%)
* (B) is dual-port:
Before: 61 Mpps
After: 72 Mpps (+18%)
The last patch adds a knob, an mlx5 ethtool private flag, to turn
XDP TX MPWQE on/off.
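For reference, a hedged user-space sketch of flipping an ethtool private
flag with the standard ETHTOOL_GPFLAGS/ETHTOOL_SPFLAGS ioctls. In practice
one would simply run "ethtool --set-priv-flags <dev> <flag> on|off"; the
bit index below is a placeholder assumption and would really have to be
resolved by looking the flag name up in the ETH_SS_PRIV_FLAGS string set:

  #include <string.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <sys/socket.h>
  #include <net/if.h>
  #include <linux/ethtool.h>
  #include <linux/sockios.h>

  /* Set or clear private-flag bit 'bit' on interface 'ifname'. */
  static int set_priv_flag(const char *ifname, unsigned int bit, int on)
  {
          struct ethtool_value eval = { .cmd = ETHTOOL_GPFLAGS };
          struct ifreq ifr;
          int fd, ret;

          fd = socket(AF_INET, SOCK_DGRAM, 0);
          if (fd < 0)
                  return -1;

          memset(&ifr, 0, sizeof(ifr));
          strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
          ifr.ifr_data = (char *)&eval;

          ret = ioctl(fd, SIOCETHTOOL, &ifr);       /* read current flags */
          if (!ret) {
                  eval.cmd = ETHTOOL_SPFLAGS;
                  if (on)
                          eval.data |= 1u << bit;
                  else
                          eval.data &= ~(1u << bit);
                  ret = ioctl(fd, SIOCETHTOOL, &ifr);  /* write back */
          }

          close(fd);
          return ret;
  }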
====================
Signed-off-by: David S. Miller <davem@davemloft.net>