How to virtualise with SR-IOV and FPGA

22/06/17

SR-IOV is a technology that significantly accelerates communication between virtual machines and the network card.

What is SR-IOV?

Single-Root Input/Output Virtualization is a technology which allows a PCI device (such as Netcope’s NFB/NPC/NSF) to be seen by the operating system as multiple independent devices. This allows higher performance in virtualized environments, where virtual machines (VMs, guests) are created on a physical machine (host) and need to communicate with the device.

Without SR-IOV, the host cannot safely give guests direct access to the device, because they would disrupt each other’s operations. Consider, for example, one guest turning the device off while another is transmitting data. As a result, if guests need access to a device, they must either be granted exclusive access to it (that is, the device must be assigned to a single guest, which can then use it), or the host must emulate the device, providing each guest with a virtual device, managing the individual guests’ requests, and communicating with the real device on the guests’ behalf.

With SR-IOV, the device manifests in the system as a physical function (PF) and a number of virtual functions (VFs). Note that we could say “physical device” and “virtual device”, but in the world of PCI devices, multiple devices on a single board are called “functions”, and these terms are retained in the SR-IOV context as well. Each VF can be exclusively assigned to a VM, allowing the VM to communicate with the hardware directly. This enables native-like I/O performance for the VMs, as the hypervisor and the host are completely bypassed. A VF usually supports only a limited set of operations, in order to prevent collisions with the other VMs; for example, it cannot turn the whole device off. The PF usually supports the full set of operations (plus configuration of SR-IOV itself) and usually stays assigned to the host.

SR-IOV in Netcope

Netcope products can utilize the SR-IOV technology to allow independent data transfers from multiple VMs with high performance.

For data transfers, Netcope products use several DMA channels (RX/TX queues) and are able to distribute the traffic among the channels. With the straightforward SR-IOV mapping, each VF is assigned one DMA channel in each direction (RX and TX). Arbitrary mappings between DMA channels and VFs require a DeviceTree driver implementation. At all times, though, a DMA channel must be assigned to at most one VF.
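The straightforward mapping above can be sketched as follows. This is an illustrative model only, not a Netcope API: the function name and error handling are our assumptions; the one rule taken from the text is that a DMA channel must never be shared between two VFs.

```python
def map_channels_to_vfs(num_vfs, num_rx, num_tx):
    """Return {vf_index: (rx_channel, tx_channel)} for the 1:1 mapping.

    In the straightforward mapping, VF i is simply assigned RX channel i
    and TX channel i. Raises ValueError if there are not enough channels,
    since a channel must be assigned to at most one VF.
    """
    if num_vfs > min(num_rx, num_tx):
        raise ValueError("not enough DMA channels for the requested VFs")
    return {vf: (vf, vf) for vf in range(num_vfs)}

# Example: 4 VFs on firmware with 8 RX and 8 TX channels.
mapping = map_channels_to_vfs(num_vfs=4, num_rx=8, num_tx=8)
# mapping[3] == (3, 3): VF 3 owns RX channel 3 and TX channel 3
```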

The number of available VFs is limited by the number of supported DMA channels and by the hardware properties of the FPGA chip used in the cards. The table at the end of this document summarizes the supported PF/VF configurations for Xilinx and Altera chips.
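Both limits apply at once, so the number of VFs you can actually populate is the smaller of the two. A minimal sketch, using the chip limits from the table at the end of this document (the dictionary and function name are ours, not part of any Netcope software):

```python
# Per-chip SR-IOV VF limits, mirroring the table in this document.
CHIP_VF_LIMITS = {
    "Virtex 7": 6,
    "UltraScale": 6,
    "UltraScale+": 252,
    "Stratix V": 128,
    "Arria 10": 2048,
}

def usable_vfs(chip, dma_channels):
    """VFs that can actually carry traffic: bounded by the chip's
    SR-IOV limit and by the DMA channels the firmware provides."""
    return min(CHIP_VF_LIMITS[chip], dma_channels)

usable_vfs("Virtex 7", 8)   # 6: the chip's SR-IOV limit dominates
usable_vfs("Stratix V", 8)  # 8: the DMA channel count dominates
```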

The configuration available to individual VFs depends on the behavior of the product. The basic implementation of the NIC firmware contains 8 physical interfaces and 8+8 DMA channels, making it possible to have each VF control its own physical interface (RX BUF and TX BUF). In products like NPC, where data distribution is performed by a centralized filter, the configuration shall be done from the PF only. In general, the extent of configuration possible from a VF depends on the particular use case the product fulfills.
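The PF/VF privilege split described above can be modeled as two operation sets, where a VF only gets the subset that cannot disturb other VMs. The operation names below are illustrative assumptions; real products expose different controls depending on the use case.

```python
# Hypothetical operation sets illustrating the PF/VF split.
# The PF keeps full control, including SR-IOV configuration itself.
PF_OPERATIONS = {
    "configure_sriov", "reset_device", "configure_filter",
    "configure_interface", "transfer_data",
}
# A VF only gets operations that cannot disrupt the other VMs.
VF_OPERATIONS = {"configure_interface", "transfer_data"}

def allowed(is_pf, operation):
    """Return True if the given function may perform the operation."""
    return operation in (PF_OPERATIONS if is_pf else VF_OPERATIONS)

allowed(is_pf=False, operation="reset_device")    # False: a VF cannot reset the card
allowed(is_pf=True, operation="configure_sriov")  # True: SR-IOV config stays with the PF
```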
 

Manufacturer | Chip Name   | Supported PFs | Supported VFs | Information Source
Xilinx       | Virtex 7    | 2             | 6             | XAPP 1177
Xilinx       | UltraScale  | 2             | 6             | UltraScale Devices
Xilinx       | UltraScale+ | 4             | 252           | UltraScale Devices
Altera       | Stratix V   | 2             | 128           | Stratix V User Guide
Altera       | Arria 10    | 4             | 2048          | Arria 10 User Guide

