Optimising SoCs with deep learning

Posted: 01 Apr 2015

Keywords: deep learning, CNN, Opus, Zeroth, neural networks

The way computers see, hear and identify objects in the real world is being transformed by deep learning.

However, the bigger—and perhaps more pertinent—issues for the semiconductor industry are: Will "deep learning" ever migrate into smartphones, wearable devices or the tiny computer vision SoCs used in highly automated cars? Has anybody come up with SoC architecture optimised for neural networks? If so, what does it look like?

"There is no question that deep learning is a game-changer," said Jeff Bier, a founder of the Embedded Vision Alliance. In computer vision, for example, deep learning is very powerful. "The caveat is that it's still an empirical field. People are trying different things," he said.

There's ample evidence to support chip vendors' growing enthusiasm for deep learning and, more specifically, convolutional neural networks (CNNs), which are widely used models for image and video recognition.
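At its core, a CNN stacks convolution, nonlinearity and pooling stages that turn raw pixels into progressively more abstract feature maps. The sketch below is not from the article; it is a minimal NumPy illustration of one such stage, with purely illustrative sizes.

```python
# Minimal sketch of one CNN feature-extraction stage (convolution, ReLU,
# max-pooling) in plain NumPy. Shapes and values are illustrative only.
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max-pooling over size x size windows."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    x = x[:h, :w].reshape(h // size, size, w // size, size)
    return x.max(axis=(1, 3))

image = np.random.rand(28, 28)       # a small grayscale image
kernel = np.random.randn(3, 3)       # in practice, learned during training
feature_map = max_pool(relu(conv2d(image, kernel)))
print(feature_map.shape)             # (13, 13)
```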

Earlier this month, Qualcomm introduced its "Zeroth platform," a cognitive-capable platform that's said to "mimic the brain." It will be used for future mobile chips, including its forthcoming Snapdragon 820, according to Qualcomm.

Cognivue is another company vocal about deep learning. The company claims that its new embedded vision SoC architecture, called Opus, will take advantage of deep learning advancements to increase detection rates dramatically. Cognivue is collaborating with the University of Ottawa.

If the presentations at Nvidia's recent GPU Technology Conference (GTC) were any indication, Nvidia is banking on all aspects of deep learning, with the GPU holding the key.

China's Baidu, a giant in search technology, has been training deep neural network models to recognise general classes of objects at data centres. It plans to move such models into embedded systems.

Zeroing in on this topic during a recent interview with EE Times, Ren Wu, a distinguished scientist at Baidu Research, said, "Consider the dramatic increase of smartphones' processing power. Super intelligent models—extracted from the deep learning at data centres—can be running inside our handset." A handset so equipped can run models in place without having to send and retrieve data from the cloud. Wu, however, added, "The biggest challenge is if we can do it at very low power."
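The article does not say how Baidu would fit data-centre-trained models into a handset's power budget. One common approach is to compress a trained model before deploying it, for example by quantizing its 32-bit floating-point weights to 8-bit integers, which shrinks memory traffic, usually the dominant energy cost in embedded inference. The sketch below is a hypothetical NumPy illustration of that idea for a single weight matrix, not a description of Baidu's method.

```python
# Hypothetical sketch of 8-bit post-training quantization for one layer's
# weights; a common compression technique, not Baidu's actual method.
import numpy as np

def quantize(weights, bits=8):
    """Map float32 weights to signed integers plus a single scale factor."""
    qmax = 2 ** (bits - 1) - 1                        # 127 for int8
    scale = np.max(np.abs(weights)) / qmax
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)      # an illustrative layer
q, scale = quantize(w)
print(q.nbytes / w.nbytes)                            # 0.25: four times smaller
print(np.max(np.abs(w - dequantize(q, scale))))       # small reconstruction error
```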

AI to deep learning

One thing is clear. Gone are the frustration and disillusionment over artificial intelligence (AI) that marked the late 1980s and early '90s. In the new "big data" era, massive data sets and powerful computing have combined to train neural networks to distinguish objects. Deep learning is now considered a new field moving towards AI.

Some claim machines are gaining the ability to recognise objects as accurately as humans. According to a paper recently published by a team of Microsoft researchers in Beijing, their computer vision system based on deep CNNs had for the first time eclipsed human ability to classify objects defined in the ImageNet 1000 challenge. Only five days after Microsoft announced it had beaten the human benchmark of 5.1 per cent errors with a neural network that achieved a 4.94 per cent error rate, Google announced it had one-upped Microsoft by 0.04 per cent.
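The figures quoted here are top-5 error rates on the ImageNet 1000-class task: a prediction counts as correct if the true label appears among a model's five highest-scoring classes. The short sketch below shows how that metric is computed, using random scores purely for illustration rather than real ImageNet results.

```python
# Illustrative computation of top-5 error; scores and labels are random,
# not real ImageNet outputs.
import numpy as np

def top5_error(scores, labels):
    """scores: (n_images, n_classes) model outputs; labels: (n_images,) true classes."""
    top5 = np.argsort(scores, axis=1)[:, -5:]         # five best-scoring classes
    hits = np.any(top5 == labels[:, None], axis=1)
    return 1.0 - hits.mean()

rng = np.random.default_rng(0)
scores = rng.random((1000, 1000))                     # 1000 images, 1000 classes
labels = rng.integers(0, 1000, size=1000)
print(top5_error(scores, labels))                     # ~0.995 for random guessing
```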

Different players in the electronics industry are tackling deep learning in different ways, however.

