Accelerating Virtual Network Functions


The era of NFV is likely to be brought about by FPGA chips mounted on PCIe network cards.

In a recent blog post, SDxCentral’s Scott Raynovich attempts to cool down the excitement about the rapid rise of NFV. Even so, he states: “If we argued that NFV really started a couple years ago (which is aggressive, since the technology is only now coming to market), it’s not unreasonable to expect NFV to start hitting full stride around 2019.” IHS estimates that the NFV market will reach $15 billion by 2020. Intel’s executive vice president Diane Bryant is similarly optimistic: “By 2020, a third of all servers inside all the major cloud computing companies will include FPGAs.” FPGA technology is precisely what will carry us into the NFV era.

With the rise of NFV, large telcos and data centers are starting to recognize a unique opportunity to drive a revolution in the networking industry. The number of physical machines connected to a network cannot grow indefinitely: each new box means higher power consumption, more rack space, and an increased risk of crashes and downtime. Proprietary hardware also lacks scalability — upgrading the infrastructure usually means buying a new physical machine, not just new software.

Separating network functions from hardware and moving them into software makes it possible to swap, fix or update individual functions independently of one another. VNF software runs on commodity hardware, so there is no need to pay a premium for proprietary appliances. This not only reduces CAPEX and OPEX, but also opens the field to innovators, since it is much easier to develop new functions in software alone.

The problem commodity hardware faces is performance, especially in high-speed networks. Arguably the best solution today is to accelerate commodity hardware with FPGA chips mounted on PCIe network cards. FPGA technology offers a high degree of parallelism, an excellent ratio of performance to power consumption, and full programmability. The VNF itself is accelerated by offloading repetitive data-plane processing tasks to the FPGA NIC.

The most common use cases of FPGA NICs for NFV acceleration are:

- forwarding packets based on rules defined by software, to accelerate a virtualized router;
- filtering packets, to accelerate a virtual firewall, IDS, IPS or DDoS mitigation appliance;
- encrypting and decrypting packets, for the needs of an SSL terminator;
- adding and removing tags such as VLAN, MPLS and VXLAN, to create virtual networks and network overlays.

Several companies have already recognized the performance potential and flexibility of FPGAs. Microsoft has allocated a non-trivial budget to Project Catapult, which aims to accelerate Bing search algorithms by running them on FPGAs. Intel’s acquisition of Altera, an FPGA chip manufacturer, also speaks volumes about the potential of this technology.

But the main driver for mass adoption of FPGAs is their programmability. One way to approach this is P4, a programming language dedicated to the abstract description of packet processing, independent of the underlying network infrastructure. P4 was defined at Stanford University by Martin Izzard, Nick McKeown and other academics who also came up with OpenFlow, the most common way of deploying SDN. Today they run Barefoot Networks, a startup whose mission is to implement a P4-enabled switch chip. The startup enjoys financial backing from a number of venture capitalists, including Andreessen Horowitz.
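To give a flavour of the abstraction, the sketch below is a minimal P4_16 program written against the open-source v1model architecture (not any particular vendor toolchain — all names here are illustrative). A match-action table, populated from control-plane software, decides whether each packet is forwarded or dropped: exactly the kind of rule-based filtering that an FPGA NIC can then execute at line rate.

```p4
// Minimal P4_16 sketch for the v1model architecture: an ACL-style
// match-action table whose rules are installed by control-plane
// software, while the data plane matches and acts at line rate.
#include <core.p4>
#include <v1model.p4>

header ethernet_t {
    bit<48> dstAddr;
    bit<48> srcAddr;
    bit<16> etherType;
}

struct headers_t  { ethernet_t ethernet; }
struct metadata_t { }

parser EthParser(packet_in pkt, out headers_t hdr,
                 inout metadata_t meta,
                 inout standard_metadata_t std_meta) {
    state start {
        pkt.extract(hdr.ethernet);
        transition accept;
    }
}

control Ingress(inout headers_t hdr, inout metadata_t meta,
                inout standard_metadata_t std_meta) {
    action drop() { mark_to_drop(std_meta); }
    action forward(bit<9> port) { std_meta.egress_spec = port; }

    table acl {
        key     = { hdr.ethernet.dstAddr : exact; }
        actions = { forward; drop; }
        default_action = drop();   // unknown destinations are filtered out
    }
    apply { acl.apply(); }
}

// Empty stages required by the v1model pipeline.
control VerifyCk(inout headers_t hdr, inout metadata_t meta) { apply { } }
control Egress(inout headers_t hdr, inout metadata_t meta,
               inout standard_metadata_t std_meta) { apply { } }
control ComputeCk(inout headers_t hdr, inout metadata_t meta) { apply { } }
control Deparser(packet_out pkt, in headers_t hdr) {
    apply { pkt.emit(hdr.ethernet); }
}

V1Switch(EthParser(), VerifyCk(), Ingress(), Egress(),
         ComputeCk(), Deparser()) main;
```

Note that the program says nothing about wires, clocks or logic gates: the same description can, in principle, be compiled to a software switch, a switch ASIC, or — as in Netcope’s case — VHDL for an FPGA, while the rule table is filled in at runtime through the target’s control-plane API.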

What Netcope offers is a translation of P4 into VHDL, so that anyone who is not an FPGA expert can easily develop their own hardware-accelerated packet processing for NFV.
