First-photon imaging

The compressive sensing blog Nuit-Blanche reports this publication: First-photon imaging. The technique makes it possible to capture depth and (limited) reflectivity information using only a small number of photons (virtually in the dark).

Abstract:

Imagers that use their own illumination can capture 3D structure and reflectivity information. With photon-counting detectors, images can be acquired at extremely low photon fluxes. To suppress the Poisson noise inherent in low-flux operation, such imagers typically require hundreds of detected photons per pixel for accurate range and reflectivity determination. We introduce a low-flux imaging technique, called first-photon imaging, which is a computational imager that exploits spatial correlations found in real-world scenes and the physics of low-flux measurements. Our technique recovers 3D structure and reflectivity from the first detected photon at each pixel. We demonstrate simultaneous acquisition of sub-pulse duration range and 4-bit reflectivity information in the presence of high background noise. First-photon imaging may be of considerable value to both microscopy and remote sensing.
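
The reflectivity part rests on a simple statistical fact: if each laser pulse has some small probability of producing a detection, then the number of pulses fired before the first photon arrives is geometrically distributed, so that count alone carries reflectivity information. Here is a minimal sketch of that idea in Python; the detection probability and pixel count are made-up illustration values, not the authors' setup or code.

```python
# Sketch: why the first detected photon estimates reflectivity.
# The pulses-until-first-detection count is geometric with success
# probability proportional to reflectivity. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

p_true = 0.02  # assumed per-pulse detection probability (~ reflectivity)

# One geometric draw per pixel: pulses fired until the first photon arrives.
pulses_to_first_photon = rng.geometric(p_true, size=100_000)

# A single pixel's estimate (1 / n) is very noisy -- which is exactly why
# the paper leans on spatial correlations in real scenes to denoise.
per_pixel_estimate = 1.0 / pulses_to_first_photon
print(per_pixel_estimate.std())             # large spread per pixel

# Pooling many observations recovers the underlying probability,
# since the mean of a geometric variable is 1 / p.
print(1.0 / pulses_to_first_photon.mean())  # ~0.02
```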

Making the subtle obvious, follow-up

A couple of months ago I posted here about this SIGGRAPH publication on amplification of details in a video. Yesterday the New York Times published a story, along with a video, on the topic, with explanations from the authors and some new examples.

The science gap

I mentioned before the video by cartoonist Jorge Cham, illustrating a CERN researcher's explanations of the Large Hadron Collider and the Higgs boson. This video was great and explained the matter (pun intended) of this huge research project in layman's terms.

Today I watched a TEDx talk by Jorge Cham, tackling what he refers to as the science gap between the people who do science and the general public. Part of his talk tells the story behind the Higgs boson animation, and this story alone makes the talk worth watching.

Making the subtle obvious

Take a video, decompose it into several frequency components, filter and amplify each one, recompose them into an output video, profit. Nuit-Blanche mentioned this paper, presented earlier this year at SIGGRAPH. I never thought you could actually detect blood flow from a simple video…
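
To make the pipeline concrete, here is a minimal sketch, assuming a grayscale frame stack and made-up band and gain parameters. It keeps only the temporal band-pass-and-amplify step and skips the spatial pyramid decomposition the paper uses.

```python
# Minimal sketch of the amplify step: band-pass each pixel's intensity over
# time, scale the filtered band, and add it back to the original frames.
# All parameters are illustrative, not the paper's values.
import numpy as np
from scipy.signal import butter, filtfilt

def magnify(frames, fps, f_lo=0.8, f_hi=3.0, alpha=50.0):
    """frames: float array of shape (T, H, W) with values in [0, 1]."""
    nyquist = fps / 2.0
    b, a = butter(2, [f_lo / nyquist, f_hi / nyquist], btype="band")
    band = filtfilt(b, a, frames, axis=0)   # temporal filter, per pixel
    return np.clip(frames + alpha * band, 0.0, 1.0)

# A faint 1.2 Hz flicker (roughly heart-rate frequency) becomes visible:
t = np.arange(300) / 30.0                   # 10 s of video at 30 fps
flicker = 0.001 * np.sin(2 * np.pi * 1.2 * t)
frames = 0.5 + flicker[:, None, None] * np.ones((1, 16, 16))
out = magnify(frames, fps=30)
print(np.ptp(out))  # swing after amplification, vs. 0.002 before
```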

Update: more to see in this follow-up post.

TED talk about femto photography

I already mentioned the camera built by a team at the MIT Media Lab, which, with its trillion frames per second, can capture the propagation of light or see around corners.

TED published a video of the talk given by Ramesh Raskar, where he presents this work and the new possibilities it opens.

State of the art in real-time realistic skin rendering

Jorge Jimenez posted yesterday the latest results of his research on skin rendering: Separable Subsurface Scattering. He provides a very impressive real-time demo which, as some point out, does run on actual current hardware (it ran, slowly, on my low-end laptop). So even though he provides the following video of it, you should definitely try the actual binary. Oh, and the source code is available too. :-)
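
The "separable" in the name is the key trick: instead of convolving the irradiance with a full 2D diffusion profile, the scattering is approximated by two 1D passes, one horizontal and one vertical. Here is a rough sketch of that cost saving in Python; the kernel and image are placeholders of my own, not Jimenez's fitted skin profile or shader code.

```python
# Sketch of a separable screen-space blur: two 1D convolutions approximate a
# 2D one, dropping the per-pixel cost from O(k^2) to O(2k). The kernel below
# is an illustrative Gaussian-like profile, not a fitted skin profile.
import numpy as np
from scipy.ndimage import convolve1d

def separable_blur(image, kernel):
    """image: (H, W) irradiance; kernel: 1D weights summing to 1."""
    horizontal = convolve1d(image, kernel, axis=1, mode="nearest")
    return convolve1d(horizontal, kernel, axis=0, mode="nearest")

kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
kernel /= kernel.sum()
diffuse = np.random.default_rng(0).random((64, 64))
scattered = separable_blur(diffuse, kernel)
```

In the actual technique this kind of filtering runs per color channel as a screen-space post-process in a shader, which is what keeps it fast enough for current GPUs.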