Embedded Vision Summit highlights car safety rules
Keywords: Berkeley Design Technology, Embedded Vision Summit, ADAS, automotive safety
This year's Embedded Vision Summit showcased programming tools, trainable algorithms, growing data sets and powerful processors for developing vision systems with deep learning intelligence. Point an intelligent camera at an unfamiliar object and your vision system will tell you exactly what you're looking at. This is potentially handy in automotive safety applications such as ADAS, where front-view cameras will recognise, with remarkable fidelity, the cyclists and pedestrians on the road with you.
While many presenters at the three-day conference could offer quotable opinions on ADAS, very few would say what might displace automotive safety as the vision industry's "next big thing." For now, automotive safety rules.
The Embedded Vision Summit itself serves as a signpost for the image processing industry and for the related applications that depend on vision and artificial intelligence. Drawing over 1,000 engineers, product and software developers (as well as venture capitalists), the Summit's speakers catalogued the modern hardware and software tools that "make sense" of captured visual information. Some analysts and VCs even offered clues for monetising the collected data.
Automotive safety has emerged as the most visible (and potentially lucrative) application for embedded vision. With about 100 million new cars coming onto the road each year, automotive safety should not be taken lightly, reminded Jeff Bier, president of consultancy Berkeley Design Technology Inc. (BDTI) and organiser of the Embedded Vision Summit. Coupled with automatic braking, active suspensions and road surface mapping, advanced driver assistance systems (ADAS) have the potential to prevent accidents and save lives. Future applications of embedded vision, like "watching your 'stuff'," are likely waiting in the wings, Bier said.

Figure 1: Deep learning software revenue by industry, worldwide 2015-2024 (Source: Tractica)
In addition to pedestrian recognition, vision will enable autonomous driving. ADAS demand will grow at a 19.2% CAGR from 2015 to 2020 (roughly 2.4 times over the period), claimed Qualcomm SVP Raj Talluri, citing projections by Strategy Analytics. The vision software and hardware that supports autonomous vehicles will also support mobile robots and drones (a projected 29 million units in 2021). What other applications will embedded vision enable? Conference attendees wanted to know.
Bruce Daley, principal analyst at Tractica, cited factory automation and agriculture applications in his presentation. These applications, like ADAS, will likely depend on neural network intelligence.
The market driver, he said, is a virtual flood of data. Like the Internet of Things (IoT), embedded vision will generate truckloads of it. Apart from the issues of privacy and security, users will want to know: what do you do with The Data? Do you sell it? Do you analyse it...and sell the analysis? Do you sell analysis tools? Trainable deep learning algorithms are required to analyse the data, Daley reminded.

The likely neural network analysis steps include image capture and digitisation, DSP (including convolutional) analysis of the data, and the construction of sellable maps, which serve as the foundation for a viable business. In one particular use case, agriculture, satellite images are rendered into maps that tell farmers where on their land they will find moisture and plant-friendly soil.
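To make those steps concrete, here is a minimal sketch of that capture-analyse-map flow in Python. Everything in it is illustrative: the stand-in frame, the smoothing kernel and the threshold are assumptions for the example, not any vendor's actual algorithm.

# Minimal sketch of the capture -> convolutional analysis -> map pipeline
# described above. Purely illustrative; not any vendor's actual algorithm.
import numpy as np

def convolve2d(image, kernel):
    """Naive 'valid' 2D convolution, the core DSP step."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# 1. Image capture and digitisation (here: a random stand-in frame)
frame = np.random.rand(64, 64)

# 2. Convolutional analysis: a smoothing kernel stands in for a learned filter
kernel = np.ones((3, 3)) / 9.0
response = convolve2d(frame, kernel)

# 3. Map construction: threshold the response into a binary "moisture map"
moisture_map = response > response.mean()
print(f"{moisture_map.sum()} of {moisture_map.size} cells flagged as moist")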
Other deep learning markets identified by Tractica include static image processing, in which the content of photos is recognised, tagged, stored and retrieved on request. Companies like Google and Facebook, Daley estimates, use deep learning to identify objects within some 350 million new photos a day. An index of these photos would also support personalised ad servers.
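The indexing half of that idea is simple to sketch. The snippet below shows a hypothetical tag index; the photo IDs and tags are invented for the example, and in practice a recognition model would supply the tags.

# Illustrative sketch of the recognise -> tag -> index -> retrieve flow for
# static images; photo IDs and tags are invented for the example.
from collections import defaultdict

index = defaultdict(set)  # tag -> set of photo IDs

def tag_photo(photo_id, tags):
    """Store the tags a recognition model assigned to one photo."""
    for tag in tags:
        index[tag].add(photo_id)

def retrieve(tag):
    """Return every photo ID indexed under a tag."""
    return sorted(index[tag])

tag_photo("img_001", ["dog", "beach"])
tag_photo("img_002", ["dog", "park"])
print(retrieve("dog"))  # ['img_001', 'img_002'] -- the basis for ad serving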
Software and hardware required to facilitate deep learning
Deep learning techniques have dramatically improved the performance of computer vision tasks such as object recognition, confirmed Qualcomm's Raj Talluri. That intelligence in mobile and embedded devices enables what he calls "the ubiquitous use of vision."