
Jumping over the 100GbE memory wall

Posted: 16 Apr 2014

Keywords: MoSys, memory wall, 100GbE, OEM, SoC

Almost all technology advances are hampered by a set of limitations. Interrelated system-level tradeoffs, such as performance, pin count and area, are ultimately driven by power consumption considerations. At 100GbE and 400GbE, network chip vendors must consider end-to-end solutions for equipment OEMs. To remain competitive, OEMs plan to introduce multi-terabit systems that aggregate multiple 100Gb/s ports on each line card.

Two current technology trends, 100Gb/s line speeds in network appliances and the transition to IPv6, compound design complexity. At both the network SoC and OEM appliance levels, solutions have to deliver performance, network management and quality of service. Crucial parameters include absolute delay, delay jitter, minimum delivered bandwidth and packet loss. Network engineers monitor and manage networks based on these parameters, which also serve as the basis of contractual service-level agreements.
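As a rough illustration, these parameters reduce to a simple record plus a per-interval compliance check. The sketch below is only that, a sketch: the field names and the check itself are illustrative assumptions, not any standard's or vendor's API.

```python
from dataclasses import dataclass

# Hypothetical record of the four parameters named above; the field names
# and bounds are illustrative assumptions, not from any standard.
@dataclass
class ServiceParams:
    delay_us: float        # absolute delay
    jitter_us: float       # delay variation
    bandwidth_gbps: float  # delivered bandwidth
    packet_loss: float     # fraction of packets lost

def meets_sla(measured: ServiceParams, target: ServiceParams) -> bool:
    """True if a measurement interval satisfies every contractual bound."""
    return (measured.delay_us <= target.delay_us
            and measured.jitter_us <= target.jitter_us
            and measured.bandwidth_gbps >= target.bandwidth_gbps
            and measured.packet_loss <= target.packet_loss)
```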

The IPv6 standard emerged to address the rapidly diminishing pool of addresses available in IPv4. The move to 128-bit addresses requires more complex processing, including the IP search functions in network appliances, which in turn require significantly larger address tables. Interestingly, a recent survey of the global Regional Internet Registry community identified "Vendor Support" as the biggest hurdle to IPv6 adoption.
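To put the move to 128-bit addressing in perspective, a short back-of-the-envelope calculation (plain Python, assuming nothing beyond the address widths):

```python
# The jump from 32-bit to 128-bit addressing in plain numbers.
ipv4_space = 2 ** 32   # about 4.3 billion addresses
ipv6_space = 2 ** 128  # about 3.4e38 addresses

print(f"IPv4 addresses: {ipv4_space:.2e}")
print(f"IPv6 addresses: {ipv6_space:.2e}")
print(f"Growth factor:  {ipv6_space / ipv4_space:.2e}")  # 2**96, ~7.9e28
```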

At 100Gb/s line speeds, packets arrive every 6.7ns. The challenge for packet processors, then, is to sustain a high rate of read/write interface transactions (at least two per packet arrival) without adding delay to the system. Unfortunately, every off-chip transaction consumes roughly an order of magnitude more power than an on-chip access.
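The 6.7ns figure follows from the minimum Ethernet frame size on the wire; a quick check of the arithmetic (the two-transactions-per-packet factor is the one quoted above):

```python
# Worked check of the 6.7ns figure: minimum-size Ethernet frames at 100Gb/s.
# A 64-byte frame plus preamble (8B) and inter-frame gap (12B) occupies
# 84 bytes = 672 bits on the wire.
LINE_RATE_BPS = 100e9
MIN_FRAME_BITS = (64 + 8 + 12) * 8  # 672 bits including overhead

arrival_ns = MIN_FRAME_BITS / LINE_RATE_BPS * 1e9
print(f"Packet arrival interval: {arrival_ns:.2f} ns")  # ~6.72ns
print(f"Memory transactions/s at two per packet: "
      f"{2 * LINE_RATE_BPS / MIN_FRAME_BITS:.2e}")      # ~3e8
```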

Because of all the above factors, it's time to consider an architectural approach that shifts the traditional relationship between memory and the packet processor. Prior system architectures optimised individual read/write accesses. With an intelligent serial-memory architecture, the processor transmits instructions, and the intelligent serial memory transmits results in return. The processor therefore makes far fewer read/write interactions with off-chip memory, which significantly reduces processor power consumption per bit. In addition, it eases the persistent challenge of I/O as a source of delay.
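A minimal sketch of that shift, under assumed interfaces (the class and method names below are hypothetical, not the MoSys API), shows how a simple counter update drops from two interface crossings to one:

```python
# With a conventional memory, a counter update costs two off-chip
# transactions (read, then write); an intelligent serial memory accepts
# one command and does the read-modify-write internally.

class ConventionalMemory:
    def __init__(self):
        self.cells = {}

    def read(self, addr):           # off-chip transaction 1
        return self.cells.get(addr, 0)

    def write(self, addr, value):   # off-chip transaction 2
        self.cells[addr] = value

class IntelligentSerialMemory:
    def __init__(self):
        self.cells = {}

    def execute(self, op, addr, operand=0):
        """One serial command in, one result back: the read-modify-write
        happens inside the memory device, not across the interface."""
        if op == "add":
            self.cells[addr] = self.cells.get(addr, 0) + operand
            return self.cells[addr]
        raise ValueError(f"unknown command: {op}")

# Updating a packet counter at address 0x10:
plain = ConventionalMemory()
plain.write(0x10, plain.read(0x10) + 1)  # two interface crossings

smart = IntelligentSerialMemory()
smart.execute("add", 0x10, 1)            # one interface crossing
```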

This approach addresses the frequency gap between processor and DRAM. While processor frequency has increased 75 per cent per year, DRAM has increased at only seven per cent per year. At 100GbE and above, this frequency gap between processor and memory is known as the "memory wall." Traditionally, designers simply added external memory to overcome this inherent latency. At 100GbE, there simply aren't enough pins to handle the parallel interface with DRAM.
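Those growth rates compound quickly; a short calculation using the 75 per cent and seven per cent figures above makes the wall concrete:

```python
# Compounding the growth rates quoted above: after N years the relative
# processor/DRAM frequency gap grows as (1.75 / 1.07) ** N.
for years in (5, 10, 15):
    gap = (1.75 / 1.07) ** years
    print(f"After {years:2d} years, the relative gap is ~{gap:,.0f}x")
```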

Serial interfaces transfer more data per pin and per watt than parallel I/O, resulting in higher interconnect and energy efficiency. When purpose-built for the task, serial transmission need not incur a latency penalty. At MoSys, we've developed an intelligent memory architecture to address the power, I/O and latency issues above. Our approach combines three elements in a serial memory: in-order request queues, a weighted-round-robin scheduler and multi-cycle macro offload. This architecture can streamline bandwidth- and latency-intensive functions such as buffering and table indexing, while offloading recursive/iterative functions such as exact match and longest prefix match.
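To make one of those offload targets concrete, here is longest prefix match in its simplest host-side form. This is a sketch only: the route entries and port names are invented, and the point of the architecture is that this kind of loop runs inside the memory device in response to a single serial command, not on the processor.

```python
import ipaddress

# Illustrative IPv4 route table (invented entries).
ROUTES = {
    "10.0.0.0/8":  "port-1",
    "10.1.0.0/16": "port-2",
    "10.1.2.0/24": "port-3",
}

def longest_prefix_match(dst: str):
    """Return the port of the most specific route covering dst, or None."""
    addr = ipaddress.ip_address(dst)
    best_len, best_port = -1, None
    for prefix, port in ROUTES.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and net.prefixlen > best_len:
            best_len, best_port = net.prefixlen, port
    return best_port

print(longest_prefix_match("10.1.2.3"))  # port-3 (most specific match)
print(longest_prefix_match("10.9.9.9"))  # port-1
```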

- Michael Sporer
  EE Times
