EE Times-India

GPUs vs. CPUs for EDA simulations

Posted: 15 Apr 2009

Keywords: simulation, processor clocks, CPU, GPU

Simulation has always been about speed. A program that forecasts tomorrow's weather but takes 26 hours to complete is useless, but one that takes 26 minutes is invaluable. It's the same with EDA. If you can get simulation results faster than spinning a board or a chip, you add value. If you don't, you don't.

There are basically three ways to make the simulation go faster: better algorithms, faster processor clocks and parallelism. As David A. Patterson, professor of computer science at the University of California at Berkeley, says: "No one knows how to design a 15-GHz processor, so the other option is to retrain all the software developers" to program parallel machines.

We agree. Processor speeds have topped out. Clock them much faster, and you wouldn't be able to get the heat out fast enough to keep the chip from burning up.

As for algorithms, there is predictable, incremental improvement in algorithms, and sometimes there are breakthroughs. But you can't write a business plan based on that kind of breakthrough.

So the new trend is clearly towards parallel machines. The obvious target is multi-core central processor units (CPUs).
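The appeal of multi-core parallelism is easiest to see when the workload splits into independent pieces, as many simulation jobs do. A minimal sketch in Python, using the standard library's process pool to spread runs across cores (the `simulate` function and its arithmetic workload are invented stand-ins, not any real EDA kernel):

```python
from concurrent.futures import ProcessPoolExecutor

def simulate(seed):
    # Stand-in for one independent simulation run:
    # a deterministic arithmetic workload keyed by the seed.
    total = 0
    for i in range(100_000):
        total = (total + seed * i) % 1_000_003
    return total

if __name__ == "__main__":
    seeds = range(8)
    # Each run is independent, so the pool can execute
    # them concurrently on separate CPU cores.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(simulate, seeds))
    print(len(results))
```

Because the runs share no state, the speedup is limited mainly by the number of cores available, not by coordination between workers.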

We are also seeing a trend to leverage graphics processor units (GPUs). These chips originated in the video game industry for high-performance graphics calculations. They have hundreds of cores. And it turns out they can do tasks unrelated to their original target market of rendering a moving 3-D scene onto streaming 2-D screen images.
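The GPU programming model is data-parallel: one small kernel is applied across many elements at once, with each of the hundreds of cores handling a slice of the data. A rough sketch of that style, emulated serially in plain Python (real GPU code would be written in CUDA or OpenCL; `saxpy` is the classic teaching example, chosen here for illustration only):

```python
def saxpy_kernel(i, a, x, y):
    # On a GPU, this body would run once per hardware thread,
    # with i supplied by the thread's index.
    return a * x[i] + y[i]

def launch(kernel, n, *args):
    # Emulates launching n GPU threads; here they run one
    # after another on the CPU instead of in parallel.
    return [kernel(i, *args) for i in range(n)]

a = 2.0
x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]
print(launch(saxpy_kernel, len(x), a, x, y))
```

The key point is that the kernel body has no loop over the data: the "loop" is the launch itself, which is what lets a GPU fan the work out across its cores.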

If you've been in the industry awhile, you may be getting a feeling of déjà vu these days. This configuration is a bit like the vector supercomputers of decades ago, and you may be wondering, "Well, if supercomputers didn't go mainstream, then why GPUs now?"

It's different this time around because vector machines started at the top of the price/performance curve with Pentagon funding and didn't migrate down. GPUs, on the other hand, have a different economic model. They started at the bottom, were sold by the millions to gamers, and now have a terrific price/performance ratio.

Why are GPUs different this time around? Moore's Law states that the number of transistors on a chip doubles every two years. (For decades, smaller transistors meant not only more per chip, but also faster transistors. We got faster CPUs at the same time we got more sophisticated CPUs.)

CPU architecture has also changed, from complex, highly pipelined designs to simpler cores cloned across a multi-core CPU. Moore's Law can therefore be interpreted as "the number of cores on a chip will double every two years."
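On that reading, core counts compound the way clock speeds once did. A quick arithmetic sketch, starting from a hypothetical 4-core part (the starting count and doubling period are illustrative assumptions, not a roadmap):

```python
def projected_cores(start_cores, years, doubling_period=2):
    # "Cores double every two years" is geometric growth:
    # one doubling per completed period.
    return start_cores * 2 ** (years // doubling_period)

for years in (0, 2, 4, 8):
    print(years, projected_cores(4, years))
```

Eight years of doubling turns 4 cores into 64, which is why software that scales with core count matters more than software tuned to any one clock speed.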





