EE Times-India

Emerging tech lights up Siggraph

Posted: 08 Aug 2005

Keywords: computer graphics, Siggraph

Star Wars and NASA's Mars rover landings have one thing in common: killer graphics. The two worlds they represent, of fantasy and fact, came together here at the annual Siggraph conference.

Siggraph's popular Emerging Technologies pavilion offered a taste of how computer graphics and imaging will one day be used in interfaces, visualization and the presentation of content.

Presentations from Japanese teams explored the use of graphics technology to represent facial emotions and techniques for embedding cameras into a moving ball, offering a whole new angle on baseball or other sports. A team at Chiba University in Japan detailed the Color-Enhanced Emotion system, which recognizes facial expressions in computer graphics content and controls skin-pigment components using a real-time processor to enhance them. The result is an "emotional facial expression."

"This is very important for Japanese people," said head researcher Toshiya Nakaguchi. "They tend to show little emotion in face-to-face meetings."

With video phones and video chat becoming more commonplace, the researchers said, it will be increasingly important to control image quality in a limited-bandwidth environment by applying emotion effects in real-time at a reasonable cost. The technique could also be applied to movie editing, the team maintained.

The Color-Enhanced Emotion system uses computer vision techniques to recognize feelings expressed in facial images, and then implements a hardware-accelerated real-time processing system to control the pigment components of the skin by replicating a broad range of conditions with color enhancements: fair, suntanned, pale, red-faced and so on. Accurate registration cameras decompose the surface reflection of the face to enhance it with the colors associated with commonly observed emotions.
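The Chiba team's actual system decomposes surface reflection into pigment components; as a rough illustration of the idea, a much simpler sketch can shift the color of masked skin pixels toward a tint associated with an emotion. The emotion-to-tint table and function names below are hypothetical, not the researchers' model.

```python
import numpy as np

# Hypothetical emotion-to-tint table: per-channel RGB offsets.
# (The Chiba system instead controls skin-pigment components.)
EMOTION_TINTS = {
    "red-faced": np.array([25.0, -5.0, -5.0]),
    "pale":      np.array([-10.0, -10.0, 5.0]),
    "suntanned": np.array([10.0, 0.0, -10.0]),
}

def enhance_emotion(frame, skin_mask, emotion, strength=1.0):
    """Shift skin-pixel color toward the tint for `emotion`.

    frame:     H x W x 3 float RGB image, values in [0, 255]
    skin_mask: H x W boolean array marking skin pixels
    """
    tint = EMOTION_TINTS[emotion] * strength
    out = frame.astype(float).copy()
    out[skin_mask] = np.clip(out[skin_mask] + tint, 0.0, 255.0)
    return out

# Toy 2 x 2 "face": top row is skin, bottom row is background.
frame = np.full((2, 2, 3), 128.0)
mask = np.array([[True, True], [False, False]])
blushed = enhance_emotion(frame, mask, "red-faced")
```

A production system would first locate skin via face detection and run the enhancement per frame on dedicated hardware, as the team's real-time processor does.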

Elsewhere, Andy Wilson at Microsoft Research wants to let users control objects in displays by movement and gestures. In a demonstration of the company's TouchLight technology, a transparent acrylic-plastic 4 x 3-foot board - actually an advanced optical-lens screen from dnpDenmark, a company near Copenhagen - was mounted vertically on a jig. Three off-the-shelf cameras and a projector were placed in the back. Like the futuristic displays in the film Minority Report, an otherwise normal-looking sheet of plastic was transformed into a high-bandwidth input/output surface suitable for gesture-based interaction.

"Our current goals include exploration of interaction techniques, signal-processing algorithms and artistic installations that are idiomatic to this configuration," said Wilson.

Graphics boundaries continue to expand

TouchLight has implications for a future of ubiquitous computing in which potentially any surface is a site of input and computation, and the very displays are aware of people's presence. In the future, "we will always be in touch with our data via wall-sized displays, which, coupled with the appropriate sensing systems, will accommodate a variety of interaction styles," Wilson said.

Another project, concocted by German researchers at Bauhaus-Universität Weimar, is a fully automatic image-correction technique that supports view-dependent stereoscopic projections of real-time graphics or ordinary video content onto everyday surfaces. While the actual surfaces can be geometrically complex, arbitrarily textured and colored, it appears to the viewer that the output has been projected onto artificial (white and planar) canvases. The technique will enable upcoming portable projectors and ad hoc stereoscopic visualizations, said head researcher Oliver Bimber. As future projectors become small and portable enough to find their way into mobile devices such as laptops, cell phones and PDAs, such smart-projector techniques could make it possible to display multimedia and other content on arbitrary, everyday surfaces.

With digital cameras getting progressively smaller and smarter, it may one day be possible to embed them in baseballs or basketballs to give spectators a ball's-eye view of their favorite sport. The goal of the MotionSphere project is to combine stabilization and object tracking - technologies that today work fine separately, but not together - in real-time, interactively.
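The core of the Weimar smart-projector idea, correcting the projected image so a textured wall looks like a white canvas, can be sketched as per-pixel radiometric compensation: divide the desired image by the camera-measured surface reflectance before projecting. This is a simplified, hypothetical sketch; the actual system also handles geometric warping, view dependence, and stereo.

```python
import numpy as np

def compensate(desired, albedo, eps=1e-3, max_val=1.0):
    """Radiometric-compensation sketch: scale the desired image by the
    inverse of the per-pixel surface reflectance (albedo) so the light
    reflected off a textured surface approximates the desired image.
    Values are linear intensities in [0, 1]; eps guards against division
    by near-zero (very dark) surface patches, and the clip models the
    projector's limited output range."""
    out = desired / np.maximum(albedo, eps)
    return np.clip(out, 0.0, max_val)

# Wall twice as dark on the right half; we want uniform gray 0.4.
albedo = np.array([[1.0, 0.5]])
desired = np.full((1, 2), 0.4)
proj = compensate(desired, albedo)
# Light reaching the viewer is (projected image) x (surface albedo):
reflected = proj * albedo
```

The compensated image is brighter where the wall is darker, so the reflected result is uniform, which is why the viewer perceives a white, planar canvas.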

MotionSphere, developed by a team from ViewPlus Inc. (Tokyo) in collaboration with the University of Tokyo and Japan's University of Electro-Communications, is built around a very fast, robust algorithm that processes multicamera images in real-time. The image-processing technology stabilizes the shaking in images captured by a rotating camera, such as one inside a curveball.

To stabilize the image, optical flow is measured at more than 100 locations and the rotation parameters are estimated with a least-squares (minimum-square-error) fit. Object tracking combines background extraction and color extraction: the static part of the sphere image is filtered out with background extraction, dramatically decreasing processing requirements and enabling real-time interaction.
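The least-squares step above can be illustrated in a toy 2D form. For a small in-plane rotation by angle w about the image center, the flow at offset r = (x, y) is approximately w * (-y, x); minimizing the squared error over the tracked points gives a closed-form estimate. This is a simplified stand-in, not the MotionSphere multicamera algorithm.

```python
import numpy as np

def estimate_rotation(points, flows):
    """Least-squares estimate of a small in-plane rotation angle
    (radians) from sparse optical flow.

    points: N x 2 array of (x, y) offsets from the image center
    flows:  N x 2 array of measured flow vectors at those points
    """
    r = np.asarray(points, float)
    d = np.asarray(flows, float)
    # Model: flow = w * perp(r), where perp(x, y) = (-y, x).
    perp = np.stack([-r[:, 1], r[:, 0]], axis=1)
    # Closed-form least squares: w = sum(d . perp) / sum(|perp|^2)
    return float(np.sum(d * perp) / np.sum(perp * perp))

# Synthetic check: generate flow for a 0.02 rad rotation, recover it.
pts = np.array([[10.0, 0.0], [0.0, 20.0], [-15.0, 5.0]])
true_w = 0.02
flow = true_w * np.stack([-pts[:, 1], pts[:, 0]], axis=1)
w = estimate_rotation(pts, flow)
```

Once w is known, each frame can be counter-rotated by -w to cancel the spin; the real system does this for full 3D rotation across multiple cameras at more than 100 flow locations.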

MotionSphere compensates for both the ball and the camera spinning simultaneously, and will produce a steady image from inside the ball, the team said.

- Nicolas Mokhoff

EE Times




