Partitioning in network, storage systems (Part 1)

Posted: 25 Sep 2014


The larger number of end-points also drives an increase in the number of layer-2 networks, which has put pressure on the fundamental way networks are partitioned today (i.e. VLANs). Because the 802.1Q VLAN ID is a 12-bit field, the number of VLANs is limited to 4096 on most switches, and this represents a real scalability concern.
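For context, that 4096 ceiling comes from the 802.1Q tag itself: the VLAN ID is a 12-bit field, so only 2^12 identifiers exist (and IDs 0 and 4095 are reserved). A minimal Python sketch of reading that field from a tagged Ethernet frame:

# Minimal sketch: the 802.1Q tag carries a 12-bit VLAN ID, which is where
# the 4096-network ceiling comes from (IDs 0 and 4095 are reserved).
import struct

def vlan_id_from_8021q_frame(frame: bytes) -> int | None:
    """Return the VLAN ID of an 802.1Q-tagged Ethernet frame, or None if untagged."""
    # Bytes 12-13 hold the EtherType; 0x8100 marks an 802.1Q tag.
    ethertype = struct.unpack_from("!H", frame, 12)[0]
    if ethertype != 0x8100:
        return None
    # The next two bytes are the Tag Control Information:
    # 3 bits priority, 1 bit DEI, 12 bits VLAN ID.
    tci = struct.unpack_from("!H", frame, 14)[0]
    return tci & 0x0FFF          # only 2^12 = 4096 possible values

# A tagged frame: dst MAC, src MAC, 0x8100 tag, TCI with VLAN 100, inner EtherType
frame = bytes(6) + bytes(6) + struct.pack("!HHH", 0x8100, 100, 0x0800)
print(vlan_id_from_8021q_frame(frame))   # -> 100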

The other interesting impact on the network is the dynamic nature of guest operating systems. Virtualised environments support fluid mobility of a guest operating system from one host server to another, to help balance workloads across the cluster and to ease server maintenance.

To support this capability, data centre administrators have had to provide open access to the network for every host in the cluster. This raises concerns not only about security, but also about the large increase in the amount of switch configuration needed for each host.
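As a rough illustration of that configuration growth (the numbers below are made up), if any guest may migrate to any host, then every host-facing switch port has to carry every VLAN the cluster uses, so the trunk configuration that must be kept consistent grows with hosts times VLANs:

# Rough sketch (illustrative numbers only): if any guest can migrate to any
# host, every host-facing switch port must carry every VLAN the cluster uses,
# so per-port trunk configuration grows with hosts * VLANs.
def trunk_entries(hosts: int, vlans: int) -> int:
    """VLAN-to-port allowances to maintain when every host can receive every guest."""
    return hosts * vlans

print(trunk_entries(hosts=64, vlans=200))    # 12,800 entries to keep consistent
print(trunk_entries(hosts=256, vlans=1000))  # 256,000 entries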

On the storage side, the Fibre Channel (FC) industry's response has been fairly analogous to the network industry's – both in terms of benefit and approach. Server virtualisation drove the need for isolation and performance optimisation in the shared storage networks used in these environments.

To address those problems, storage adapter vendors leveraged partitioning technology to allow a single host bus adapter (HBA) port to appear as multiple HBA ports to the operating system. This allows each guest operating system to have an independent HBA, which provides a secure, isolated storage end-point. And, much as with the network, this had consequences for the wider data centre environment.

Figure 2: Over the next five years, partitioned 10 Gb Ethernet will constitute the majority of network interface design implementations.

The need for HBA partitioning caused the FC storage industry to extend the FC specification with N_Port ID Virtualisation (NPIV), so that FC fabrics could deal with a single physical port presenting multiple storage names (i.e. World Wide Names, or WWNs). The sheer number of storage end-points, in turn, puts pressure on the scalability of FC switches, both in terms of management and internal resources.
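As an illustration of what this asks of the fabric (the WWPNs below are fabricated, not a real adapter's numbering scheme), each guest's virtual HBA adds another named end-point that the switch must log in, zone and track on a single physical link:

# Illustrative sketch of what NPIV asks of the fabric: one physical port,
# many worldwide port names (WWPNs), each needing its own fabric login and
# zoning/state on the switch. All names below are made up.
from dataclasses import dataclass, field

@dataclass
class PhysicalHbaPort:
    base_wwpn: str
    virtual_wwpns: list[str] = field(default_factory=list)

    def add_virtual_port(self, guest_name: str) -> str:
        # Real adapters use vendor-assigned WWPNs; this just fabricates one.
        vwwpn = f"20:{len(self.virtual_wwpns) + 1:02x}:00:25:b5:aa:bb:{len(self.virtual_wwpns):02x}"
        self.virtual_wwpns.append(vwwpn)
        print(f"guest {guest_name} gets virtual HBA {vwwpn}")
        return vwwpn

port = PhysicalHbaPort(base_wwpn="20:00:00:25:b5:aa:bb:00")
for guest in ("vm-01", "vm-02", "vm-03"):
    port.add_virtual_port(guest)

# The fabric now has to manage 1 physical + 3 virtual end-points on one link.
print(f"fabric sees {1 + len(port.virtual_wwpns)} logins on one physical port")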

Another place where the industry has leveraged partitioning is in converged network adapters. Leading NIC providers have figured out how to partition a network interface port into a combination of virtual network and storage ports. This is interesting because it offers a major simplification of the data centre environment by reducing the infrastructure needed to run applications.

This technology has become particularly important with the mainstream adoption of 10Gb Ethernet networks (Figure 2). Servers are now being built with 10Gb network interfaces directly on the motherboard, which provides the bandwidth needed to share both network and storage traffic on the same physical port.
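A sketch of the idea, using hypothetical partition names and bandwidth shares rather than any vendor's actual configuration interface: the single 10Gb port is carved into partitions that the operating system sees as separate Ethernet and storage devices, each with a guaranteed slice of the link:

# Illustrative sketch (not a vendor API): one 10Gb port split into partitions
# that the operating system sees as separate network and storage devices.
from dataclasses import dataclass

@dataclass
class Partition:
    name: str
    kind: str          # "ethernet" or "fcoe"
    min_bw_pct: int    # guaranteed share of the 10Gb link
    max_bw_pct: int    # ceiling when the link is otherwise idle

PORT_SPEED_GBPS = 10

partitions = [
    Partition("lan0",  "ethernet", min_bw_pct=30, max_bw_pct=100),
    Partition("lan1",  "ethernet", min_bw_pct=20, max_bw_pct=100),
    Partition("fcoe0", "fcoe",     min_bw_pct=50, max_bw_pct=100),
]

assert sum(p.min_bw_pct for p in partitions) <= 100, "guarantees oversubscribed"

for p in partitions:
    guaranteed = PORT_SPEED_GBPS * p.min_bw_pct / 100
    print(f"{p.name:6} ({p.kind:8}) guaranteed {guaranteed:.1f} Gb/s")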

This kind of advanced partitioning has also influenced the broader technology ecosystem. To take advantage of this capability, the switches in your data centre need to be able to handle the convergence of both network and storage protocols on the same physical wire. To support this, the industry created new specifications that allow Fibre Channel traffic to flow over standard Ethernet networks (i.e. FCoE).
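In simplified terms (the real encapsulation adds its own header, start/end-of-frame markers and lossless-Ethernet requirements, all omitted here), an FCoE frame is a Fibre Channel frame carried as the payload of an Ethernet frame marked with the FCoE EtherType:

# Simplified sketch of the FCoE idea: a Fibre Channel frame rides as the
# payload of an ordinary Ethernet frame identified by the FCoE EtherType
# (0x8906). Real FCoE adds an encapsulation header, SOF/EOF markers and
# lossless-Ethernet requirements, which are omitted here.
import struct

FCOE_ETHERTYPE = 0x8906

def encapsulate(fc_frame: bytes, dst_mac: bytes, src_mac: bytes) -> bytes:
    """Wrap a raw FC frame in an Ethernet frame marked as FCoE."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame

fc_frame = b"\x00" * 36                    # placeholder for a real FC frame
wire_frame = encapsulate(fc_frame, dst_mac=b"\x0e" * 6, src_mac=b"\x02" * 6)
print(len(wire_frame), hex(struct.unpack_from("!H", wire_frame, 12)[0]))  # 50 0x8906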

This is important because storage networks typically have different requirements than traditional Ethernet networks. Storage traffic tends to be much more sensitive to data loss and to latency (the time it takes for data to reach the storage array), so new capabilities had to be added to the Ethernet network to let storage traffic be prioritised with a guarantee against data loss. This technology has even made its way into the storage arrays themselves – the major storage array vendors now provide FCoE ports on their arrays.
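One commonly cited example of such a capability is per-priority flow control, where only the traffic class carrying storage is paused when its receive queue fills, so storage frames are never dropped while other traffic keeps flowing. A toy sketch of that decision logic (thresholds and priority numbers are arbitrary):

# Illustrative sketch of per-priority flow control: when the receive queue for
# the storage priority class crosses a threshold, only that class is paused,
# so storage frames are never dropped while ordinary traffic keeps flowing.
STORAGE_PRIORITY = 3                 # example priority dedicated to storage
PAUSE_THRESHOLD = 80                 # frames queued before asking the sender to pause
RESUME_THRESHOLD = 40

queue_depth = {prio: 0 for prio in range(8)}
paused = {prio: False for prio in range(8)}

def on_frame_received(priority: int) -> None:
    queue_depth[priority] += 1
    if priority == STORAGE_PRIORITY and queue_depth[priority] >= PAUSE_THRESHOLD:
        paused[priority] = True      # request a pause for this priority only

def on_frame_drained(priority: int) -> None:
    queue_depth[priority] = max(0, queue_depth[priority] - 1)
    if paused[priority] and queue_depth[priority] <= RESUME_THRESHOLD:
        paused[priority] = False     # resume: the queue has drained enough

for _ in range(85):
    on_frame_received(STORAGE_PRIORITY)
print(paused[STORAGE_PRIORITY])      # True: storage class paused, others unaffected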

One of the hidden gems of this partitioning technology is its ability to provide IO virtualisation for bare-metal operating systems. This is a powerful building block for creating an extremely flexible infrastructure environment – one where all the servers can be anonymous resources that can run any application and operating system, simply by dynamically programming the IO personality needed for that application.

This is useful in both virtualised and native environments. It allows a host server to be provisioned through software and enables extremely efficient use of compute resources, with capabilities such as N+1 hardware failover, capacity on demand, day/night sharing, and dramatically simplified disaster recovery. For enterprises and cloud providers in their quest for the most flexible infrastructure, these partitioning technologies can help drive the operational efficiency needed to manage the most complex environments.
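As a sketch of the concept rather than any particular product's interface (all names and identifiers below are invented), an IO personality is simply the identity an application's server needs (MAC addresses, WWPNs, VLANs, boot target), kept as data that can be applied to whichever anonymous server is available:

# Illustrative sketch of the "IO personality" idea: the identity an application
# needs (MACs, WWPNs, VLANs, boot target) is kept as data and can be applied to
# whichever anonymous server is free, e.g. for N+1 failover. Names are made up.
from dataclasses import dataclass

@dataclass
class IoPersonality:
    name: str
    macs: list[str]
    wwpns: list[str]
    vlans: list[int]
    boot_lun: str

@dataclass
class BareMetalServer:
    asset_tag: str
    personality: IoPersonality | None = None

def fail_over(failed: BareMetalServer, spare: BareMetalServer) -> None:
    """Move the failed server's IO identity to a spare; network and storage follow."""
    spare.personality, failed.personality = failed.personality, None
    print(f"{spare.asset_tag} now presents {spare.personality.name}")

app = IoPersonality("erp-db", ["02:00:00:00:00:01"], ["20:00:00:25:b5:00:00:01"],
                    [100, 200], boot_lun="array-a:lun-7")
primary = BareMetalServer("rack1-slot3", personality=app)
spare = BareMetalServer("rack1-slot8")
fail_over(primary, spare)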

About the author
Scott Geng is CTO and Executive Vice President of Engineering at Egenera, where he has been instrumental in the design and development of the company's Processing Area Network (PAN). Prior to joining Egenera, Geng managed the development of leading-edge operating systems and middleware products for Hitachi Computer Products and was a consulting engineer for the OSF/1 1.3 micro-kernel release at the Open Software Foundation. He holds Bachelor of Arts and Master of Science degrees in computer science from Boston University.


