EE Times-India

Enable spatial sensing in robotic systems

Posted: 14 Oct 2013

Keywords: robotic systems, algorithms, Adaboost classifiers, DSPs, Embedded Vision Alliance

As often shown both in science fiction and shipping-product documentation, robots promise to free humans from boring, monotonous and otherwise undesirable tasks, as well as to improve the quality of those tasks' outcomes through high speed and high precision. Consider, for example, the initial wave of autonomous consumer robotics systems that tackle vacuuming, carpet scrubbing, and even gutter-cleaning chores. Or consider the ever-increasing prevalence of robots in a diverse range of manufacturing-line environments (figure 1).

First-generation autonomous consumer robots, however, employ relatively crude schemes for learning about and navigating their surroundings. These elementary techniques include human-erected barriers composed of infrared transmitters, which coordinate with infrared sensors built into the robot to prevent it from tumbling down a set of stairs or wandering into another room.

Figure 1: Autonomous consumer-tailored products (a) and industrial manufacturing systems (b) are among the many classes of robots that can be functionally enhanced by vision processing capabilities.

A built-in shock sensor can inform the autonomous robot that it has collided with an object and shouldn't attempt to continue forward, and in more advanced mapping-capable designs, also shouldn't revisit this location. And while first-generation manufacturing robots may work more tirelessly, faster, and more exactly than do their human forebears, their success is predicated on incoming parts arriving in fixed orientations and locations, thereby increasing the complexity of the manufacturing process. Any deviation in part position and/or orientation will result in assembly failures.

Humans use their eyes and other senses to discern the world around them and navigate through it. Theoretically, robotic systems should be able to do the same thing, leveraging camera assemblies, vision processors, and various software algorithms. Until recently, such technology has been found only in complex, expensive systems. However, cost, performance, and power consumption advances in digital integrated circuits are now paving the way for the proliferation of 'vision' into diverse and high-volume applications, including robot implementations. Challenges remain, but they're more easily, rapidly, and cost-effectively solved than has been possible before.

Software techniques
Developing robotic systems capable of adapting to their environments requires computer vision algorithms that can convert the data from image sensors into actionable information about the environment. Two common tasks for robots are identifying external objects and their orientations, and determining the robot's own location and orientation. Many robots are designed to interact with one or more specific objects. Situation-adaptive robots must be able to detect these objects when they appear in unknown locations and orientations, and to account for the possibility that the objects are moving.
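As a concrete illustration of locating a known object at an unknown position, the sketch below implements normalized cross-correlation template matching in plain NumPy. This is a minimal, generic example (not a technique prescribed by the article, whose algorithms are discussed only at a high level): a known template is slid over the scene, and the window with the highest correlation score is reported as the object's location.

```python
import numpy as np

def match_template(img, tmpl):
    """Slide tmpl over img and return the (row, col) of the window with
    the highest normalized cross-correlation score, plus that score.
    A basic way to find a known object at an unknown position."""
    ih, iw = img.shape
    th, tw = tmpl.shape
    t = tmpl - tmpl.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = img[y:y + th, x:x + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc ** 2).sum()) * tnorm
            score = (wc * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score

# Synthetic test: embed a 5x5 patch at (7, 3) in a noisy 20x20 scene
rng = np.random.default_rng(0)
scene = rng.normal(0.0, 0.1, (20, 20))
patch = rng.normal(1.0, 0.3, (5, 5))   # non-uniform so NCC is well-defined
scene[7:12, 3:8] += patch
pos, score = match_template(scene, patch)
```

Real systems would use an optimized library routine and more robust features (template matching alone is sensitive to rotation and scale changes, which is why orientation-invariant methods matter for the unknown-orientation case the text describes), but the brute-force loop makes the underlying correlation idea explicit.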

Cameras produce millions of pixels of data per second, which creates a heavy processing burden. One way to resolve this challenge is to detect multi-pixel features, such as corners, blobs, edges, or lines, in each frame of video data (figure 2).
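One classic corner detector that fits the data-reduction strategy just described is the Harris response, which flags pixels where image intensity changes sharply in two directions at once. The NumPy sketch below is an illustrative assumption on our part (the article does not name a specific detector): it computes gradients, smooths the structure-tensor terms with a 3x3 box filter, and evaluates the Harris score, so that only a handful of high-response feature points need further processing instead of every pixel.

```python
import numpy as np

def box3(a):
    """3x3 box filter built from shifted copies (edges wrap, which is
    harmless here because our features sit in the image interior)."""
    out = np.zeros_like(a)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    return out / 9.0

def harris_response(img, k=0.05):
    """Harris corner response: positive at corners, negative along
    edges, near zero in flat regions."""
    Iy, Ix = np.gradient(img.astype(float))  # central-difference gradients
    Sxx = box3(Ix * Ix)                       # smoothed structure tensor
    Syy = box3(Iy * Iy)
    Sxy = box3(Ix * Iy)
    # det(M) - k * trace(M)^2
    return Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2

# Synthetic test image: a bright square whose corners should dominate
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
peak = np.unravel_index(np.argmax(R), R.shape)  # lands at a square corner
```

The payoff is exactly the data reduction the text describes: a 20x20 frame collapses to four strong corner responses, and a production pipeline would likewise pass only such feature points (typically from an optimized implementation such as OpenCV's `cornerHarris`) to the later processing stages.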

Figure 2: Four primary stages are involved in fully processing the raw output of a 2D or 3D sensor for robotic vision, with each stage exhibiting unique characteristics and constraints in its processing requirements.
