NEW 4D CAMERA TO BOOST ROBOT VISION, VIRTUAL REALITY

BOSTON (TIP): Stanford scientists have developed a first-of-its-kind camera that can generate the information-rich, four-dimensional (4D) images robots need to navigate the world.

The camera, which captures a nearly 140-degree field of view, could be better than current options for close-up robotic vision and augmented reality, the researchers said.

“We want to consider what would be the right camera for a robot that drives or delivers packages by air. We are great at making cameras for humans, but do robots need to see the way humans do? Probably not,” said Donald Dansereau, a postdoctoral fellow at Stanford University in the US.

Robots currently have to move around, gathering different perspectives, to understand certain aspects of their environment, such as the movement and material composition of different objects.

The new camera could allow them to gather much the same information in a single image. The researchers also see this being used in autonomous vehicles and augmented and virtual reality technologies.

The difference between looking through a normal camera and the new design is like the difference between looking through a peephole and a window, the scientists said.

“A 2D photo is like a peephole because you cannot move your head around to gain more information about depth, translucency or light scattering,” Dansereau said.

“Looking through a window, you can move and, as a result, identify features like shape, transparency and shininess,” he said. That additional information comes from a type of photography called light field photography, first described in 1996 by Stanford researchers.

Light field photography captures the same image as a conventional 2D camera plus information about the direction and distance of the light hitting the lens, creating what is known as a 4D image.

A well-known feature of light field photography is that it allows users to refocus images after they are taken because the images include information about the light position and direction.
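For readers curious how after-the-fact refocusing can work in practice, the sketch below shows one common approach: treat the 4D light field as a grid of sub-aperture views, shift each view in proportion to its position in the aperture, and average. The array layout, the refocus helper and the focus parameter here are illustrative assumptions for a minimal example, not a description of the Stanford team's own code.

```python
import numpy as np
from scipy.ndimage import shift

def refocus(light_field, alpha):
    """Synthetically refocus a 4D light field by shift-and-add.

    light_field: array of shape (U, V, H, W) -- a U x V grid of
        sub-aperture views, each an H x W grayscale image
        (hypothetical layout, for illustration only).
    alpha: focus parameter; changing it moves the synthetic focal plane.
    """
    U, V, H, W = light_field.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0   # centre of the aperture grid
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Shift each view according to its offset from the aperture
            # centre, then average: objects at the chosen depth line up
            # and appear sharp, while everything else blurs.
            dy, dx = alpha * (u - u0), alpha * (v - v0)
            out += shift(light_field[u, v], (dy, dx), order=1)
    return out / (U * V)

# Example: a random stand-in light field, refocused at two depths.
lf = np.random.rand(5, 5, 64, 64)
near = refocus(lf, alpha=1.0)
far = refocus(lf, alpha=-1.0)
```

Because every sub-aperture view records the scene from a slightly different direction, choosing how the views are shifted before averaging is equivalent to choosing which depth ends up in focus, which is why the focus can be changed after the picture is taken.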

Robots might use this to see through rain and other things that could obscure their vision. This camera system’s wide field of view, detailed depth information and potential compact size are all desirable features for imaging systems incorporated in wearables, robotics, autonomous vehicles and augmented and virtual reality.

“It could enable various types of artificially intelligent technology to understand how far away objects are, whether they are moving and what they are made of,” said Gordon Wetzstein, assistant professor at Stanford.

“This system could be helpful in any situation where you have limited space and you want the computer to understand the entire world around it,” Wetzstein said.

Source: PTI
