Optics without optics

fredymac

This is imaging using a ton of math. They don't go into much detail, but I would guess each "pixel" in the chip is tagged with time and position relative to all the others. How the light is spectrally segregated isn't clear.

Given that IC chips are built to submicron tolerances, the optical phase across the chip is uniform. But the light reaching it can be randomly phased due to atmospheric effects and the target line of sight, so some way is needed to reference the phase of the light in each pixel to determine relative tilt. A Hartmann sensor does this by grouping four detector elements in a 2x2 array and putting them behind a tiny lenslet; the intensity imbalance across the four elements gives the local wavefront slope (see the sketch after the video link).

Light-gathering power (the ability to see dim targets) still depends on collecting area, so a huge number of these chips would be needed. For a very bright target you could get away with fewer, but for resolution the chips would need to be spread over a large area, and to be phased to each other they would have to be bonded to a common optically flat substrate. If your physical envelope is thin, this might let you package an imaging system equivalent to a much larger sensor. Sounds messy, but electronics is the one area of science that consistently beats science fiction.

https://www.youtube.com/watch?v=ryxt4gKt1vY
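
For anyone curious what that quad-cell trick looks like numerically, here is a minimal sketch (my own illustration, nothing from the video): the four signals behind one lenslet are compared, and the left/right and top/bottom imbalances give the focal-spot centroid, which is proportional to the average wavefront tilt over that sub-aperture.

import numpy as np

def quad_cell_tilt(intensities):
    """Estimate local wavefront tilt from one 2x2 quad cell.

    intensities: 2x2 array of the four detector signals behind a
    single lenslet, laid out as [[top_left, top_right],
                                 [bottom_left, bottom_right]].
    Returns (tilt_x, tilt_y) in normalized units: the centroid
    displacement of the focal spot, proportional to the average
    wavefront slope over this sub-aperture.
    """
    tl, tr = intensities[0]
    bl, br = intensities[1]
    total = tl + tr + bl + br
    if total == 0:
        return 0.0, 0.0  # no light on this sub-aperture
    # Left/right and top/bottom imbalances locate the spot centroid.
    tilt_x = ((tr + br) - (tl + bl)) / total
    tilt_y = ((tl + tr) - (bl + br)) / total
    return tilt_x, tilt_y

# A spot displaced toward the upper-right quadrant reads as a
# positive tilt in both axes for this sub-aperture.
print(quad_cell_tilt(np.array([[1.0, 3.0], [0.5, 1.5]])))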
 
The Lytro "point, shoot, focus" camera, which lets you shift the image focus anywhere after you take the picture, is a standard wavefront capture system where a traditional lens is used to generate a focus and a wavefront sensor (either a Hartmann lenslet array or a focus-dither phase-curvature type) collects the phase data. This new system throws away the lens entirely, and the Hartmann lenslets become pixels in themselves, collecting amplitude and phase information at each XY grid location. A Fourier-transform computation is then needed to convert that information into an actual image.
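
To make that last step concrete, here's a toy sketch of the Fourier computation. This assumes (my assumption, since the actual pipeline isn't published) that the chip hands you a complex field, i.e. an amplitude and a phase per XY grid point, and that the scene sits in the far field, so a single 2D FFT recovers the image:

import numpy as np

def reconstruct_image(amplitude, phase):
    """Recover an intensity image from per-pixel amplitude and
    phase samples, assuming a simple far-field (Fraunhofer)
    relationship between the captured field and the scene.
    """
    # Combine the two measurements into one complex field.
    field = amplitude * np.exp(1j * phase)
    # In the Fraunhofer approximation the scene and the aperture
    # field are a Fourier-transform pair, so one 2D FFT (with the
    # zero frequency shifted to the center) yields the image.
    image_field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field)))
    return np.abs(image_field) ** 2

# Toy usage: flat amplitude plus a linear phase ramp is what an
# off-axis point source produces, so the reconstruction should be
# a single bright spot displaced from the center.
ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]
img = reconstruct_image(np.ones((ny, nx)), 2 * np.pi * 4 * xx / nx)
print(np.unravel_index(np.argmax(img), img.shape))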
 
It looks like a nice technology but it's being overhyped. This might be able to resolve the image on its own, but it does nothing to increase the sensor's light-collecting capacity. Plain old optics will still be needed in all but the most trivial applications, like cellphone cameras.

It's not particularly unique either; at the moment, micro-lenses like this, angle-sensitive pixels, and coded masks are all being pushed as possible solutions for computational imaging (a coded-mask sketch follows below). It remains to be seen which technology will ultimately prove the most successful commercially.
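
For a flavor of the coded-mask approach: the detector records the scene convolved with the shadow of a binary mask, and correlating with a matched decoding pattern undoes it. The sketch below is purely illustrative and uses a random mask; real systems use patterns such as MURAs whose periodic autocorrelation is an exact delta.

import numpy as np

rng = np.random.default_rng(0)
n = 64

# Random 50% open binary mask (a stand-in for a MURA pattern).
mask = (rng.random((n, n)) < 0.5).astype(float)

# Toy scene: two point sources of different brightness.
scene = np.zeros((n, n))
scene[20, 20] = 1.0
scene[40, 45] = 0.7

# The detector sees the scene circularly convolved with the mask:
# each source casts a shifted shadow of the whole pattern.
detector = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(mask)))

# Decode by correlating with a balanced version of the mask
# (2*mask - 1), which cancels the flat background term.
decoder = 2 * mask - 1
recon = np.real(np.fft.ifft2(np.fft.fft2(detector) *
                             np.conj(np.fft.fft2(decoder))))

# The brightest reconstructed pixel should land on the brighter
# of the two original source positions.
print(np.unravel_index(np.argmax(recon), recon.shape))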
 
