With the megapixel race already past the point of noticeable benefit to consumers, it looks like the next camera arms race will be the number of lenses your rig sports -- a team at Stanford is working on a 3D camera that uses 12,616 micro-lenses to generate high-quality 3 megapixel images with self-contained "depth maps" recording the distance to every object in the frame. The system works by placing each lens above four different overlapping sensor arrays, which work in concert to determine depth -- just like your eyes. Unlike similar systems, the Stanford rig can build that depth map without lasers, prisms, or even complex calibration, which should allow the team to shrink the tech down to compact and cellphone camera size. Once it's ubiquitous, the team says, depth-map information could be used for anything from enhancing facial recognition systems to improving robot vision, but there's still a long way to go -- the team has only just started working out how to manufacture the system.
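The "overlapping views" trick is the same parallax geometry your eyes use: a nearby object shifts more between two slightly offset views than a distant one does. Here's a minimal sketch of that disparity-to-depth math -- the function, numbers, and units are illustrative assumptions, not details from the Stanford work:

```python
# Illustrative only: the post doesn't describe Stanford's actual processing.
# This is the standard triangulation behind any multi-view depth map:
# depth = focal_length * baseline / disparity (similar triangles).

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth in meters from a focal length in pixels, the spacing
    between two lenses in meters, and the pixel shift (disparity)
    of the same object between their overlapping views."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: object at infinity or no overlap")
    return focal_px * baseline_m / disparity_px

# With a 500 px focal length and lenses 0.5 mm apart, a 5 px shift
# puts the object roughly 5 cm away.
print(depth_from_disparity(500, 0.0005, 5))
```

Run this per pixel across the frame and the result is exactly the kind of "depth map" the article describes -- larger shifts mean closer objects.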
Thursday, March 20, 2008