This eight-minute video shows the first 30 seconds after ignition of the Saturn V rocket carrying the Apollo 11 mission, launched on July 16th, 1969.
Apollo 11 Saturn V Launch (HD) Camera E-8 from Spacecraft Films on Vimeo.
This five-minute video is Anthony Cerniello's attempt to show a person's aging process as a timelapse. I'd recommend watching the video before reading anything about how it was done.
Danielle from Anthony Cerniello on Vimeo.
Here comes the spoiler: according to this article, it was created from photos of the subject and of the family relatives who most closely resemble her. The photos were then animated and morphed together. As the article points out, the animation still falls within the uncanny valley, but pause at any moment and all you see is a real face.
This video shows designer and calligrapher Frank Ortmann's process of creating a lettering artwork.
A couple of months ago I posted here about this SIGGRAPH publication on amplifying details in a video. Yesterday the New York Times published a story and a video on the topic, with explanations from the authors and some new examples.
In this very short video, Iñigo Quilez of Pixar explains in layman’s terms how they used mathematics to create the moss in the CG film Brave.
From the video description: “In her final days as Commander of the International Space Station, Sunita Williams of NASA recorded an extensive tour of the orbital laboratory […]. The tour includes scenes of each of the station’s modules and research facilities with a running narrative by Williams of the work that has taken place and which is ongoing aboard the orbital outpost.”
Take a video, decompose it into several frequency components, filter and amplify each one, recompose them back into an output video, profit. Nuit-Blanche mentioned this paper presented earlier this year at SIGGRAPH. I never thought you could actually detect blood flow from a simple video…
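The decompose-filter-amplify-recompose idea can be sketched in a few lines of NumPy. This is only a toy, single-scale illustration of the temporal-filtering step, assuming a grayscale video stored as a `(frames, height, width)` array; the actual paper works on a spatial pyramid and filters each spatial band separately, which this sketch skips entirely. The function name and parameters are mine, not the authors'.

```python
import numpy as np

def amplify_band(frames, fps, f_lo, f_hi, gain):
    """Amplify temporal variations whose frequency lies in [f_lo, f_hi] Hz.

    frames: float array of shape (T, H, W), a grayscale video.
    Each pixel's time series is transformed with a real FFT, the bins
    inside the band are boosted by (1 + gain), and the video is
    reconstructed with the inverse transform.
    """
    T = frames.shape[0]
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)          # bin frequencies in Hz
    spectrum = np.fft.rfft(frames, axis=0)           # FFT along time axis
    band = (freqs >= f_lo) & (freqs <= f_hi)         # boolean mask of bins
    spectrum[band] *= (1.0 + gain)                   # boost the chosen band
    return np.fft.irfft(spectrum, n=T, axis=0)       # back to a video

# Synthetic "video": a static 4x4 scene plus a faint 1 Hz brightness
# oscillation, standing in for a subtle pulse-like signal.
fps = 30
t = np.arange(90) / fps
frames = 100.0 + 0.1 * np.sin(2 * np.pi * 1.0 * t)[:, None, None] * np.ones((1, 4, 4))

# Amplify the 0.5-1.5 Hz band (roughly heart-rate frequencies).
out = amplify_band(frames, fps, 0.5, 1.5, gain=50.0)
```

After amplification the previously invisible 0.1-unit flicker dominates the output, while the DC bin (the static scene, at 0 Hz) is left untouched, so the average brightness is preserved.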
Update: more to see in this follow-up post.