Embedded multi-core enables greener networks

Posted: 25 Sep 2008

Keywords: energy efficiency, networking equipment, embedded multi-core, network traffic

In the past, networking equipment manufacturers normally pushed the power envelopes of their systems to maximise performance within power budgets defined by the end system's target environment, and the amount of cooling the chassis and market could afford.

Cole: Going green is unavoidable for system vendors, and those who embrace it now will be ahead when it becomes a standard requirement.

While these practices have yielded innovative cooling solutions over the past decade, customers now demand greater energy efficiency, and trading off performance is not considered an acceptable solution.

System manufacturers are taking a new look at ways to decrease the power consumption of their equipment and lower the overall power requirements of their end customers' networks.

The demand for energy-efficient, greener networks is not diminishing demand for better network performance. Availability of high-quality video and audio content has fuelled this performance demand for both wired and wireless applications.

This added real-time network bandwidth inevitably translates into requirements for higher-performing processors within the networking equipment. Demand for greater performance is expected to keep growing even as consumer and regulatory requirements for green, maximally energy-efficient networking equipment grow more stringent.

These seemingly contradictory demands are now a reality, and system vendors must adapt or suffer the consequences of failing to act as environmentally responsible vendors.

What's driving the demand for 'green'

This is not a new concept. Consumers of household appliances and computer equipment such as printers, monitors, computers and laptops have been demanding greater energy efficiency for years.

Today, networking equipment vendors are expected to improve energy efficiency. Such requirements stem not only from potential operating expense savings for network operators—they are also driven by government regulation, compliance requirements and consumer demand for environmentally friendly products.

Service providers and enterprises are demanding higher-performance and more differentiated services to increase the revenue generation of their networks. Security, intrusion detection and prevention, Quality of Service (QoS), content awareness and filtering are just a few examples of the services currently offered on general-purpose CPUs and ASICs in today's networking equipment.

However, general-purpose CPU vendors have hit the frequency wall: the electrical power and silicon cost of driving CPUs to ever-higher frequencies grow non-linearly with the additional performance, making that option economically unfeasible. Processor vendors have been forced to think outside the box to offer solutions that deliver more linear performance per unit of power.

Enter embedded multi-core. While not a new concept, placing multiple processor cores on a single chip allows the processor vendor to back off from ultra-aggressive, high-power transistors by pulling the core design's frequency back into the sweet spot of the process technology. Using lower-power transistors yields a lower-power processor.

It also results in a smaller processor, as pushing frequency commonly means adding pipeline stages and heavy node buffering, which quickly add to the die area. Thus, by sizing the processor frequency to the technology's natural capabilities, we create a highly efficient processor.

What is given away in frequency and single-core performance is gained back by including more than one instance of that core. This approach can quadruple or better the processor's MIPS-per-watt.

Managing traffic highs and lows

Fixed and mobile telecom and datacom networks must be designed to handle the traffic of the busiest periods. These nodes are required to be capable of that routing performance at all times, which means a lot of idle hardware resources during low-traffic periods.

Power reduction modes are an option to reduce the power during lower-usage periods, but these modes cannot impact the services offered by the network node. Since many network nodes are unable to monitor, measure and log usage of resources and traffic load, temporary power-reduction modes are not easy to implement within the system requirements promised to the end customer.

There are a couple of options to address these challenges within the processor itself, and they fall into two broad categories: non-intrusive power management and TDP (total dissipated power) enforcement.

Non-intrusive mechanisms include powering off IP blocks within a processor that are not used in an application. They also include choosing the operating point for the application (voltage, temperature, frequency).

Finally, they include intelligent dynamic IP that recognises periods of idleness and powers itself off, either under software control or direct hardware control. TDP enforcement takes a different approach, assuming that performance can suffer during periods when the TDP is exceeded. With this approach, we can be more lenient with the thermal design, knowing that the processor will throttle itself during periods of thermal overload.

Non-intrusive power management

The TDP concept is not of much interest to the networking community, because users will not tolerate performance variations that depend on how hot the wiring closet gets. Non-intrusive power management, however, is of interest.
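The sketch below illustrates the non-intrusive mechanisms listed earlier: permanently gating IP blocks an application never uses, and opportunistically gating blocks that report themselves idle. The register addresses, bit layout and block names are hypothetical, not a real device's register map.

```c
/* Minimal sketch of software-controlled, non-intrusive power management.
 * PWR_GATE_CTRL, IDLE_STATUS and the block IDs are hypothetical; a real SoC
 * would define these in its reference manual. */
#include <stdint.h>

#define PWR_GATE_CTRL  ((volatile uint32_t *)0xFFE00100u)  /* hypothetical gate-control register */
#define IDLE_STATUS    ((volatile uint32_t *)0xFFE00104u)  /* hypothetical per-block idle flags  */

enum ip_block {
    BLK_TDM     = 1u << 0,   /* e.g. TDM interface unused on a pure packet node */
    BLK_PATTERN = 1u << 1,   /* pattern-matching engine */
    BLK_CRYPTO  = 1u << 2    /* security engine */
};

/* Power off blocks the application will never use. */
static void power_off_unused(uint32_t blocks)
{
    *PWR_GATE_CTRL |= blocks;                    /* setting a bit removes clock and power */
}

/* Gate only the requested blocks that are currently idle. */
static void gate_if_idle(uint32_t blocks)
{
    *PWR_GATE_CTRL |= (*IDLE_STATUS & blocks);   /* dynamic and static power both saved */
}
```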

P = C × V² × f is an equation well known to most electrical engineers; it defines dynamic power for CMOS-based technology. There is not much we can do about the capacitance, as it is inherent to the physics of the technology. Frequency and voltage, on the other hand, can be made flexible when designing a multi-core processor, allowing the user to optimise those values to deliver the necessary performance at the lowest power.
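A toy calculation makes the trade-off concrete. The capacitance, voltage, frequency and MIPS figures below are invented for illustration, not vendor data; with these numbers, four slower cores at a lower voltage roughly double the MIPS-per-watt of one fast core at similar total power, and more aggressive voltage scaling widens the gap further.

```c
/* Illustrative comparison of one fast core versus four slower cores using the
 * dynamic-power relation P = C * V^2 * f. All figures are invented. */
#include <stdio.h>

int main(void)
{
    const double C = 1.0e-9;             /* switched capacitance (F), illustrative */

    /* One core pushed to 2.0 GHz at 1.3 V */
    double p_single    = C * 1.3 * 1.3 * 2.0e9;
    double mips_single = 4000.0;          /* assumed throughput of the fast core */

    /* Four cores at 1.0 GHz and 0.9 V */
    double p_quad    = 4.0 * C * 0.9 * 0.9 * 1.0e9;
    double mips_quad = 4.0 * 2000.0;      /* assumed per-core throughput scales with f */

    printf("single core: %.2f W, %.0f MIPS/W\n", p_single, mips_single / p_single);
    printf("quad core:   %.2f W, %.0f MIPS/W\n", p_quad, mips_quad / p_quad);
    return 0;
}
```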

So sometimes running more cores at a lower frequency can be a better trade-off when trying to maximise performance within a fixed power budget. Hardware accelerators can also offload the cores from operations that are better handled by specialised hardware than by general-purpose hardware.

Security encryption/decryption is a good example. It takes a large number of core cycles to encrypt or decrypt a packet, but far fewer for an accelerator specifically designed to do just that task, and this saves system power.
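As a sketch of what that offload looks like in software, the fragment below routes encryption to a dedicated accelerator when one is present and falls back to a software path otherwise. The crypto_accel_* and sw_encrypt functions are hypothetical stand-ins for a vendor's security-engine driver, not a real API.

```c
/* Sketch of dispatching crypto work to an accelerator when available. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

extern bool crypto_accel_present(void);                                   /* assumed */
extern int  crypto_accel_encrypt(const uint8_t *in, uint8_t *out, size_t len);
extern int  sw_encrypt(const uint8_t *in, uint8_t *out, size_t len);      /* burns core cycles */

int encrypt_packet(const uint8_t *in, uint8_t *out, size_t len)
{
    /* Offloading keeps the general-purpose cores free (or idle) and typically
     * costs far less energy per packet than the software path. */
    if (crypto_accel_present())
        return crypto_accel_encrypt(in, out, len);
    return sw_encrypt(in, out, len);
}
```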

In addition to optimising the core for frequency per watt, it is also key to address the other major variable in the power equation: voltage. In the equation above, voltage is squared, and thus has the larger impact.

Many times frequency and voltage go hand in hand when optimising for power, as lowering voltage usually indirectly means you lower frequency to compensate. In a good multi-core solution, the platform architecture should support multiple power planes to offer the ability to run the cores at the ideal voltage for the frequency required to get the necessary performance.

For example, in an eight-core device the architecture could support three separate power planes with four cores on one, and two cores each on the other two planes. This would allow the designer to run each subset of cores at the voltage necessary for the desired frequency of that subset of cores. It would also allow the designer to shut down the power planes for unused cores, saving both dynamic and static power to those cores.
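One way to picture the eight-core, three-plane partition described above is as a small configuration table. The structure, the 4/2/2 core split and the voltage and frequency values are assumptions for illustration, not a real device's settings.

```c
/* Sketch of a power-plane configuration for a hypothetical eight-core device
 * with three planes: four data-plane cores, two control-plane cores, and two
 * spare cores whose plane is shut off entirely until traffic demands it. */
#include <stdbool.h>
#include <stdint.h>

struct power_plane {
    uint8_t  core_mask;     /* which cores sit on this plane */
    uint16_t millivolts;    /* supply voltage for the plane  */
    uint16_t mhz;           /* operating frequency of its cores */
    bool     enabled;       /* false = plane powered down: no dynamic or static power */
};

static struct power_plane planes[3] = {
    { .core_mask = 0x0F, .millivolts = 1000, .mhz = 1200, .enabled = true  },
    { .core_mask = 0x30, .millivolts =  900, .mhz =  800, .enabled = true  },
    { .core_mask = 0xC0, .millivolts =    0, .mhz =    0, .enabled = false },
};
```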

Minimising leakage

In the latest high-performance thin-gate technologies, leakage current (the static current that flows between source and drain when a cell is not switching) has become a dominant component of total power, and reducing this static power will only grow in importance. With a platform architected in this way, the voltage on each power plane can be configured individually, allowing the designer to optimise power and performance for the given application.

Choosing the optimal core for the multi-core processor, and a platform that supports tuning the processor's voltages and frequencies, and therefore its power, to the targeted application, is still not the whole story: there is more to be done to ensure the architected solution maximises the system designer's ability to build a greener network node. With a flexible processor as defined above, system designers can dynamically lower the frequency of some cores when traffic is slow.

When traffic increases, the application can notice that as well and react quickly, raising the core frequencies to handle the added load. The same design would also allow a system designer to boost the frequency of a subset of cores, which is advantageous when the network needs extra processing power to reach a steady state during a network crisis, for instance when routers go down and the rest of the network is flooded with control traffic trying to establish new routes.
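A minimal sketch of that control loop follows. The load metric, the watermark thresholds and set_core_group_mhz() are hypothetical placeholders for whatever traffic-monitoring and clock-control facilities the platform actually provides; the hysteresis simply keeps the node from oscillating between operating points.

```c
/* Sketch of traffic-driven frequency scaling with hysteresis. */
#include <stdint.h>

extern uint32_t packets_per_second(void);                      /* assumed traffic monitor */
extern void     set_core_group_mhz(int group, uint32_t mhz);   /* assumed clock control   */

#define LOW_WATERMARK   100000u   /* pps below which the data-plane cores slow down */
#define HIGH_WATERMARK  400000u   /* pps above which they return to full speed      */

void adjust_for_traffic(void)
{
    static uint32_t current_mhz = 1200;
    uint32_t pps = packets_per_second();

    if (pps < LOW_WATERMARK && current_mhz > 600) {
        current_mhz = 600;                  /* quiet period: drop frequency (and voltage) */
        set_core_group_mhz(0, current_mhz);
    } else if (pps > HIGH_WATERMARK && current_mhz < 1200) {
        current_mhz = 1200;                 /* traffic surge: restore full performance */
        set_core_group_mhz(0, current_mhz);
    }
}
```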

All this makes the processor design flexible enough to offer system vendors multiple options when trying to design next-generation systems that must be green. Going green is unavoidable for system vendors in the future, and those who embrace it now will be ahead when it becomes a standard requirement for networking equipment.

This will not only reduce operating expense for service providers and enterprises, but also reduce the CO2 emitted to generate that power. These power-friendly multi-core processors also allow system vendors to consolidate services within a single chassis or system and, with additional virtualisation resources, offer the same solution in a single system with increased performance and much lower power.

Decreasing the number of systems in the network lowers the power used within the enterprise or the service provider. We all must be committed to providing greener solutions in the future and investing in the necessary research, technology, and architecture to help deliver these power-efficient solutions to the market.

-Stephen Cole
Senior Systems Architect
Networking Systems Division
Freescale Semiconductor




