Facebook opens single-system-powered data centre

Posted: 17 Nov 2014

Keywords: Altoona

Facebook's new datacentre is now online, and it runs as a single warehouse-sized system. The achievement rests on using smaller, cheaper aggregation switches rather than relying on, and being constrained by, the biggest, fastest boxes.

The company described the fabric architecture of its new Altoona, Iowa, datacenter in a Web post. It said the datacenter uses 10G networking to servers and 40G between all top-of-rack and aggregation switches.

(Figure: Facebook's datacentre architecture)

The new Facebook fabric groups server racks into so-called pods of 48, interconnected by switching planes that carry traffic inside the network, with a smaller set of 40G links handling traffic headed outside it.
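As a rough, back-of-the-envelope illustration of how those link speeds add up inside one pod, the short Python sketch below works through the per-rack and per-pod numbers. The 10G server links, 40G uplinks and 48-rack pod size come from Facebook's description; the servers-per-rack and uplinks-per-TOR counts are assumptions chosen purely for illustration.

# Back-of-the-envelope pod bandwidth, using the figures in the article:
# 10G links to servers, 40G TOR-to-aggregation links, 48 racks per pod.
# SERVERS_PER_RACK and UPLINKS_PER_TOR are assumptions for illustration only.

SERVER_LINK_GBPS = 10    # 10G networking to each server
UPLINK_GBPS = 40         # 40G between top-of-rack and aggregation switches
RACKS_PER_POD = 48       # pod size described by Facebook

SERVERS_PER_RACK = 40    # assumption, not stated in the article
UPLINKS_PER_TOR = 4      # assumption, not stated in the article


def rack_bandwidth():
    """Server-facing Gbps, uplink Gbps and oversubscription for one rack."""
    down = SERVERS_PER_RACK * SERVER_LINK_GBPS
    up = UPLINKS_PER_TOR * UPLINK_GBPS
    return down, up, down / up


def pod_uplink_gbps():
    """Total aggregation-facing bandwidth of one 48-rack pod, in Gbps."""
    return RACKS_PER_POD * UPLINKS_PER_TOR * UPLINK_GBPS


down, up, ratio = rack_bandwidth()
print(f"Per rack: {down}G down, {up}G up ({ratio:.1f}:1 oversubscribed)")
print(f"Per pod uplink capacity: {pod_uplink_gbps() / 1000:.2f} Tbit/s")

Under those assumed counts, each rack presents 400G of server-facing bandwidth against 160G of uplink, and a pod's aggregation-facing capacity works out to roughly 7.7Tbit/s.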

The news comes just weeks after rival Microsoft announced it is starting to migrate all its servers to 40G links and its switches to 100G. Microsoft suggested it might use FPGAs in future systems to extend bandwidth, given its needs are outpacing what current and expected Ethernet chips will deliver.

Big datacenters have long been pushing the edge of networking, which is their chief bottleneck. The new Facebook datacenter appears to solve the problem with a novel topology rather than with more expensive hardware.

Chip and systems vendors hurriedly launched efforts on 25G Ethernet earlier this year as another approach for bandwidth-starved datacenters. They hope some datacenters will migrate from 10G to 25G at the server, with road maps to 50G and possibly 200G for switches.

Facebook suggested its approach opens up more bandwidth and provides an easier way to scale networks while still tolerating expected component and system failures. It said its 40G fabric could quickly scale to 100G, for which chips and systems are now available, although they remain rather expensive.

Switch makers such as Arista Networks that serve these datacenters have also felt they are bumping up against the limits of what is affordable in 100G systems. With the new fabric, Facebook is not likely to be writing purchase orders for such big, silicon-rich systems.

"To build the biggest clusters [Facebook's former approach] we needed the biggest networking devices, and those devices are available only from a limited set of vendors. Additionally, the need for so many ports in a box is orthogonal to the desire to provide the highest bandwidth infrastructure possible. Evolutionary transitions to the next interface speed do not come at the same XXL densities quickly. Operationally, the bigger bleeding-edge boxes are not better for us either."


Facebook showed a physical layout for its new datacenter fabric.

Facebook said its new design provides 10x more bandwidth between servers inside the datacenter, where traffic growth rates are highest. It said it could tune the approach to a 50x bandwidth increase using the same 10/40G links. The fabric operates at Layer 3, using BGP4 as its only routing protocol with minimal features enabled.

"Our current starting point is 4:1 fabric oversubscription from rack to rack, with only 12 spines per plane, out of 48 possible. This level allows us to achieve the same forwarding capacity building-wide as what we previously had intra-cluster. When the need comes, we can increase this capacity in granular steps, or we can quickly jump to 2:1 oversubscription, or even full 1:1 non-oversubscribed state at once. All we need to do is add more spine devices to each of the planes."

Separately, Facebook said its Altoona datacenter runs entirely on renewable energy, thanks mainly to a new 140MW wind farm nearby.

In a video, a Facebook networking engineer describes the new fabric.

- Rick Merritt
  EE Times




