EE Times-India

Multi-protocol comms enhanced by PCIe-based fabrics

Posted: 14 Jan 2015

Keywords: direct-memory access, DMA, PCIe, RDMA, HPC

Integrating intelligent direct-memory access (DMA) engines with PCI Express (PCIe) switches can give embedded systems and other complex designs a low-latency, high-performance communication transport for small- to medium-sized clusters. Such a PCIe transport can be defined to tunnel several protocols, forming the basis for a converged fabric. While tunnelling a software protocol over any fabric is relatively straightforward, tunnelling a hardware protocol such as Ethernet poses new challenges, including broadcast and multicast addressing, VLANs, and priority. This article looks at an implementation of multi-protocol tunnelling over PCIe, including Ethernet and remote direct memory access (RDMA), and explains how the technique can be extended to application-specific, high-performance computing (HPC), storage, and proprietary protocols.
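To illustrate one of those challenges: PCIe has no native broadcast, so an Ethernet tunnel driver must map group-addressed frames onto explicit replication across the fabric's nodes. The C sketch below is a hypothetical forwarding decision, not ExpressFabric's actual implementation; the table layout, names, and sizes are all assumptions.

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* A frame whose destination MAC has the group bit set (LSB of the first
 * octet, per IEEE 802.3) is multicast or broadcast and must be replicated
 * to every node on the PCIe fabric, since PCIe itself has no broadcast. */
static bool eth_dest_needs_replication(const uint8_t dst_mac[6])
{
    return (dst_mac[0] & 0x01) != 0;
}

/* Toy table mapping unicast MACs to fabric destination IDs; a real
 * tunnel driver would learn these dynamically, like an Ethernet switch. */
#define MAX_NODES 16
struct mac_fabric_entry { uint8_t mac[6]; uint16_t fabric_id; bool valid; };
static struct mac_fabric_entry mac_table[MAX_NODES];

/* Returns the fabric ID for a known unicast MAC, or -1 if unknown
 * (unknown unicast would be flooded, again like a learning switch). */
static int lookup_fabric_id(const uint8_t dst_mac[6])
{
    for (int i = 0; i < MAX_NODES; i++)
        if (mac_table[i].valid && memcmp(mac_table[i].mac, dst_mac, 6) == 0)
            return mac_table[i].fabric_id;
    return -1;
}
```

The same decision point is where a driver would also apply VLAN filtering and priority mapping before queuing the frame to a DMA engine.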

PCIe is the de-facto standard for connecting devices in today's embedded, storage, communications, and server platforms. Leveraging standards-based extensions to PCIe allows for a converged, scalable, rack-level fabric. A PCIe-based fabric not only provides connectivity and sharing of devices across the fabric; it also exposes built-in, intelligent, virtualized DMA engines to the connected computing nodes. Here, these DMA engines serve as the transport for multi-protocol, high-performance, host-to-host communications over PCIe.

Each computing node connected to the PCIe-based fabric sees a device tree hierarchy, as shown in figure 1.

Figure 1: Host view of DMA engines.

The computing nodes see a configurable number of DMA engines as full-function, PCIe networking-class endpoints.

The current generation of networking endpoints supports the following basic features:
 • Multiple transmit queues with an efficient doorbell interface
 • Multiple completion queues with MSI-X vectors
 • Interrupt moderation and CPU/core affinity for completion queues through MSI-X vectors
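The transmit-queue/doorbell pattern in the list above can be sketched minimally in C. The ring layout, register names, and sizes here are assumptions for illustration, not the actual hardware interface: software fills descriptors into a ring, then makes a single memory-mapped doorbell write to hand a whole batch to the DMA engine, amortizing the relatively expensive MMIO access.

```c
#include <stdint.h>

#define RING_SIZE 256u  /* power of two, so indices wrap with a mask */

/* Hypothetical transmit ring: the producer (software) advances head,
 * the consumer (DMA engine) advances tail as it fetches descriptors. */
struct tx_ring {
    uint32_t head;               /* next free slot (producer index)   */
    uint32_t tail;               /* next slot hardware will consume   */
    volatile uint32_t *doorbell; /* memory-mapped doorbell register   */
};

/* Post a batch of n descriptors with one doorbell write.
 * Returns 0 on success, -1 if the ring does not have room. */
static int tx_ring_post(struct tx_ring *r, uint32_t n)
{
    uint32_t used = r->head - r->tail;  /* unsigned math wraps correctly */
    if (used + n > RING_SIZE)
        return -1;                      /* ring full */
    r->head += n;
    /* One MMIO write tells the engine how far it may fetch. */
    *r->doorbell = r->head & (RING_SIZE - 1);
    return 0;
}
```

Completion queues work in the mirror direction: the engine writes completion records and raises an MSI-X interrupt, which moderation and core affinity then steer to the right CPU.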

DMA engines in a PCIe-based fabric, such as ExpressFabric, add a few more advanced features to this set, enabling multi-protocol transport capability:
 • Per-DMA request priority
 • Per-transmit queue priority
 • Native, RDMA-like memory registration with direct application memory access
 • Hardware-enforced security on RDMA and connection-oriented operations
 • Software-defined Upper Layer Protocol (ULP) IDs and protocol-specific header fields in DMA requests
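The ULP ID is what lets one DMA transport carry several protocols at once: the receive side reads the ID from each completed request and hands the payload to the matching protocol handler. A hypothetical dispatch table in C (the IDs, names, and table size are all assumptions):

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative ULP identifiers carried in each DMA request. */
enum ulp_id {
    ULP_ETHERNET    = 1,
    ULP_RDMA        = 2,
    ULP_PROPRIETARY = 3,
};

#define MAX_ULPS 16

typedef void (*ulp_handler)(const void *payload, uint32_t len);
static ulp_handler ulp_handlers[MAX_ULPS];

/* Register a receive handler for a protocol ID. */
static int ulp_register(uint32_t id, ulp_handler h)
{
    if (id >= MAX_ULPS || h == NULL)
        return -1;
    ulp_handlers[id] = h;
    return 0;
}

/* Demultiplex a completed DMA request by its ULP ID. */
static int ulp_dispatch(uint32_t id, const void *payload, uint32_t len)
{
    if (id >= MAX_ULPS || ulp_handlers[id] == NULL)
        return -1;  /* unknown protocol: drop or count the error */
    ulp_handlers[id](payload, len);
    return 0;
}

/* Example handler used below: just records the last payload length. */
static uint32_t last_eth_len;
static void eth_rx(const void *payload, uint32_t len)
{
    (void)payload;
    last_eth_len = len;
}
```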

These DMA engines use a 128-byte descriptor to describe a transmit work request. Two types of descriptor are used: one for short messages and one for long messages. Figure 2 shows a simplified diagram of a short-message DMA descriptor.

Figure 2: Short message DMA descriptor.
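As a rough illustration, a 128-byte short-message descriptor could be laid out as below. The field names and offsets are assumptions, not the actual ExpressFabric format; what matters is the defining property of the short-message type, namely that the payload travels inline in the descriptor, so a small transfer needs no separate DMA read of a source buffer.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical 128-byte short-message descriptor layout. */
struct short_msg_desc {
    uint8_t  desc_type;    /* short vs. long message               */
    uint8_t  priority;     /* per-request priority                 */
    uint16_t ulp_id;       /* upper-layer-protocol demux tag       */
    uint16_t dest_id;      /* fabric destination (target host)     */
    uint16_t payload_len;  /* valid bytes in the inline payload    */
    uint8_t  payload[120]; /* inline data, padding out to 128 bytes */
};

/* The descriptor must fill its hardware slot exactly. */
_Static_assert(sizeof(struct short_msg_desc) == 128,
               "descriptor must match the 128-byte slot");
```

A long-message descriptor would instead carry scatter-gather pointers to buffers in host memory, trading the extra DMA read for the ability to move large payloads.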
