TestPMD is a lightweight user-space application, distributed as part of DPDK, that can be used for testing DPDK in packet forwarding mode.
In this example we want to set up TestPMD on a RHEL VM running in our SR-IOV-capable Red Hat OpenStack Platform 10 overcloud. Our passthrough adapters are Intel X520s. Our plan here is to run performance tests via an external load generator.
Before we can get started we need to build a test VM.
VM Details
- RHEL 7.x
- Two VFs
- eth0 – ssh access via admin network
- eth1 – load generator private network
- 4 vCPUs
- 4096 MB Mem
- 150 GB Disk
Deploy RHEL VM
Your first step is to deploy your RHEL VM and configure your primary network interface (eth0, for ssh) via the VM console. eth1 needs to be up and configured to start at boot, but do not assign it an IP address. Next, register your VM with your local Satellite server or with the Red Hat CDN.
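For reference, a minimal ifcfg file for eth1 might look like the sketch below. This assumes the image uses the traditional network scripts and that the guest names the interface eth1; adjust to your environment.

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1 (sketch)
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes        # start the interface at boot
BOOTPROTO=none    # bring it up without assigning an IP address
```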
Download DPDK
Use the link below to download the "Latest Major" version of DPDK. Place the tar file in /root on the VM and untar it.
Install Prerequisites
Before we can compile DPDK, we need to install a few prereqs.
Install gcc
# yum -y install gcc
Install glibc-devel
# yum -y install glibc-devel
Install Kernel Headers and Devel
# yum -y install kernel-headers.x86_64 kernel-devel.x86_64
Install NUMA Packages
# yum -y install numad.x86_64 numactl-libs.x86_64 numactl-devel.x86_64
Install libpcap
# yum -y install libpcap.x86_64 libpcap-devel.x86_64
Install Tuned Profiles
# yum -y install tuned-profiles-cpu-partitioning.noarch
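Installing the profile does not activate it. A typical activation sequence is sketched below; the isolated core list is an assumption for the 4-vCPU VM described above, so adjust it to your own layout.

```shell
# Keep core 0 for housekeeping and isolate cores 1-3 for packet processing
# (hypothetical core layout for a 4-vCPU guest)
echo "isolated_cores=1-3" >> /etc/tuned/cpu-partitioning-variables.conf
tuned-adm profile cpu-partitioning
reboot   # the profile adds kernel boot parameters, so a reboot is required
```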
Install DPDK Tools Package
# yum -y install dpdk-tools.x86_64
Compile DPDK
Compile using the “Quick Start” guide below
http://dpdk.org/doc/quick-start
Determine the PCI address of your test interface using ethtool.
# ethtool -i eth1
driver: ixgbevf
version: 1.5.10-k
firmware-version: 5.02 0x80002390 1.1313.0
expansion-rom-version:
bus-info: 0000:00:05.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
Here the PCI address for eth1 is 0000:00:05.0
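If you would rather capture that address in a script than read it off by hand, the bus-info field can be extracted with awk. The helper name below is hypothetical, not part of DPDK:

```shell
# Print the PCI bus address of an interface, parsed from "ethtool -i" output
# (pci_of is a hypothetical helper name)
pci_of() {
    ethtool -i "$1" | awk '/^bus-info:/ {print $2}'
}
```

With the ethtool output shown above, `pci_of eth1` prints 0000:00:05.0.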
Load DPDK Driver for Testing Interface
In this test we are using the Intel X520 NIC, which is directly accessible to our VM via SR-IOV passthrough. If you are passing through a different NIC, your process will differ.
# modprobe vfio-pci
# dpdk-devbind --bind=vfio-pci 0000:00:05.0
Verify Driver using dpdk-devbind
# dpdk-devbind --status
Network devices using DPDK-compatible driver
============================================
0000:00:05.0 '82599 Ethernet Controller Virtual Function' drv=vfio-pci unused=ixgbevf,igb_uio

Network devices using kernel driver
===================================
0000:00:03.0 'Virtio network device' if=eth0 drv=virtio-pci unused=virtio_pci,igb_uio,vfio-pci *Active*
Example TestPMD Start Script
In the example script below, we are going to start TestPMD. By default, TestPMD will forward any packets received on eth1 back to the sending MAC address.
#!/bin/bash
#VARS
NICADDRESS1='0000:00:05.0'

/root/dpdk-17.08/build/app/testpmd -l 0,1,2,3 --socket-mem 512 -n 4 --proc-type auto --file-prefix pg -w $NICADDRESS1 -- --disable-rss --nb-cores=2 --portmask=1 --rxq=1 --txq=1 --rxd=256 --txd=256 --port-topology=chained --forward-mode=macswap -i --auto-start
The example below works for testing an MTU up to 9200 bytes.
#works for 9200 byte packet
/root/dpdk-17.08/x86_64-native-linuxapp-gcc/app/testpmd --log-level 8 --huge-dir=/mnt/huge -l 0,1,2,3 -n 4 --proc-type auto --file-prefix pg -w $NICADDRESS1 -- --disable-rss --nb-cores=2 --portmask=1 --rxq=2 --txq=2 --rxd=256 --txd=256 --port-topology=chained --forward-mode=mac --eth-peer=0,00:10:94:00:00:06 --mbuf-size=10240 --total-num-mbufs=32768 --max-pkt-len=9200 -i --auto-start
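The second invocation points --huge-dir at /mnt/huge, so hugepages must already be allocated and mounted there. One way to set that up is sketched below; the page count is an assumption, so size it to your VM's memory.

```shell
# Allocate 2 MB hugepages and mount hugetlbfs where testpmd expects it
echo 1024 > /proc/sys/vm/nr_hugepages   # 1024 x 2 MB = 2 GB of hugepages
mkdir -p /mnt/huge
mount -t hugetlbfs nodev /mnt/huge
```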
TestPMD has a plethora of options.
For additional information on the options used above, refer to the user guide.
Additional Resources