About a year ago, at SIGGRAPH Asia 2021 (which took place as a hybrid conference, both online and on site at the Tokyo International Forum), one of the technical papers that caught my attention was the publication by Šárka Sochorová and Ondřej Jamriška on color mixing.
This time, the core idea is to model colors as pigments: estimate pigment concentrations from the color (moving, in a way, from RGB space to a “pigment space”), interpolate those concentrations, then convert back to RGB space.
The paper uses the Kubelka-Munk model to estimate colors from pigment concentrations. The difficulty, however, is finding a transformation between the two spaces. A first assumption is made about the available pigments, essentially restricting them to CMYK. Two problems are then addressed: RGB colors that cannot be represented with those pigments, and conversely pigment colors that cannot be represented in RGB. The paper proposes a remapping that enables both the transform and its inverse, making it possible to move from RGB space to pigment space, interpolate in pigment space, and move back to RGB space.
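To get a feel for the mixing step, here is a minimal sketch of the single-constant Kubelka-Munk model on a single wavelength channel (the function names are mine, and the paper’s actual contribution, the remapping that makes the RGB round trip possible, is not shown):

```python
import math

def ks_from_reflectance(r):
    # Kubelka-Munk absorption/scattering ratio for an opaque layer:
    # K/S = (1 - R)^2 / (2R)
    return (1.0 - r) ** 2 / (2.0 * r)

def reflectance_from_ks(ks):
    # Inverse relation: R = 1 + K/S - sqrt((K/S)^2 + 2 K/S)
    return 1.0 + ks - math.sqrt(ks * ks + 2.0 * ks)

def mix(reflectances, concentrations):
    # Single-constant KM mixing: K/S blends linearly with concentration.
    ks = sum(c * ks_from_reflectance(r)
             for r, c in zip(reflectances, concentrations))
    return reflectance_from_ks(ks)

# Mixing equal parts of a dark (0.05) and a bright (0.9) reflectance
# gives roughly 0.09, far darker than the linear average of 0.475,
# which is how real paints behave.
print(mix([0.05, 0.9], [0.5, 0.5]))
```

In practice this is applied per wavelength (or per channel) for each pigment, which is why a mix of blue and yellow darkens toward green instead of averaging to gray.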
You could argue this is, therefore, physically based diffuse color mixing.
Recently a short video from dark magic programmer Tomasz Stachowiak made the rounds in the graphics programming community, to the sound of jaws hitting the floor in its wake. It shows his recent progress on his pet renderer project: beautiful real-time global illumination with fast convergence and barely any noise, in a static environment with dynamic lighting.
Rearchitecting Spatiotemporal Resampling for Production (video, slides)

Both presentations explain the same thing, but with small differences that sometimes make one clearer than the other. They explain again the foundations of the technique, then detail where the improvements lie (using fewer but more relevant samples, avoiding wasted work, and adopting a more cache-friendly approach).
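At the heart of the technique is weighted reservoir sampling: each pixel streams candidate light samples through a tiny fixed-size “reservoir”, keeping one with probability proportional to its weight, and reservoirs can then be merged across neighboring pixels and across frames. Here is a minimal CPU-side sketch to fix the idea (the names and toy weights are mine; real implementations run on the GPU and track extra terms to keep the estimator unbiased):

```python
import random

class Reservoir:
    # Keeps a single sample from a weighted stream.
    def __init__(self):
        self.sample = None  # the surviving candidate
        self.w_sum = 0.0    # running sum of candidate weights
        self.m = 0          # number of candidates seen

    def update(self, sample, weight):
        self.w_sum += weight
        self.m += 1
        # Each candidate replaces the survivor with probability
        # weight / w_sum, so overall it is kept proportionally
        # to its weight.
        if self.w_sum > 0.0 and random.random() < weight / self.w_sum:
            self.sample = sample

def merge(a, b):
    # Spatiotemporal reuse: a neighbor's (or last frame's) reservoir
    # is folded in as one candidate carrying its accumulated weight.
    out = Reservoir()
    out.update(a.sample, a.w_sum)
    out.update(b.sample, b.w_sum)
    out.m = a.m + b.m
    return out

# Toy usage: stream 32 candidate lights through one pixel's reservoir.
r = Reservoir()
for light_index in range(32):
    r.update(light_index, random.random())  # weight ~ target pdf / source pdf
```

Merging being this cheap is what enables the massive sample reuse; the improvements above are largely about being smarter in which candidates and reservoirs get that work.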
To be clear, right now, ReSTIR is a box of razor blades without handles (or a box of unlabeled knobs). It’s extremely powerful, but you have to know what you’re doing. It is not intuitive if your existing perspective is traditional Monte Carlo (or real-time) sampling techniques.
People sometimes think SIGGRAPH paper = solved. Nope. We’ve learned a lot since the first paper, and our direct lighting is a lot more stable with that knowledge. We’re still learning how to do it well on full-length paths.
And there’s a bunch of edge cases, even in direct lighting, that we know how to solve but haven’t had time to write up, polish, and demo.
We haven’t actually tried to solve the extra noise at disocclusions in (what I think of as) a very principled way. Right now a world-space structure is probably the best way. I’m pretty sure it can be done without a (formal) world-space structure, just “more ReSTIR.”
Following yesterday’s post about a music video featuring modern dance and computer visual effects, here is a video featuring classical dance and a robot-controlled camera.
Francesca Da Rimini was a historical figure portrayed in the Divine Comedy and numerous works of art, including a symphonic poem by Tchaikovsky. In 2014 the director Tarik Abdel-Gawad and his team recorded a performance by two dancers of the San Francisco Ballet, Maria Kochetkova and Joan Boada, using a robot-controlled camera. Tarik was also the technical and creative director of “Box”, the demonstration video featuring the same Bot&Dolly robots (a company acquired by Google in 2013), which went viral.
In the accompanying backstage video, he explains how seeing dancers rehearse over and over gave him the idea of experimenting with a pre-programmed robot, in order to make the camera part of the choreography and let the viewer have a closer, more intimate view of the performance.
So that’s my job in a sense: search other worlds for alien life.
So when I’m on a long plane flight, like coming over here, and the guy sitting next to me says: “So what do you do?” Chatty fellow. I say: “Well, I search other worlds for alien life.” And then he leaves me alone for the rest of the flight, and I can get some sleep. It’s a great job description, I like it.
Ascent is a commented montage of carefully selected videos of space shuttle launches, made by the Glenn Research Center. A DVD and a Blu-ray were produced but apparently have yet to be distributed reliably, so in the meantime the DVD ISO can be downloaded from this unofficial website.
The documentary is 45 minutes long and presents outstanding footage taken during the launches of missions STS-114, STS-117, and STS-124, from some of the 125 cameras used to ensure vehicle safety. Views range from close-ups of the ignition and of the launchpad at 400 fps, through mid-range footage, up to shots taken from over 30 km away (with the equivalent of a 4000 mm lens). The commentary gives abundant detail about what is happening on screen as well as about the camera involved (lens, film, speed…).
As mentioned, this video is 45 minutes long, but I found it so captivating that I hardly noticed the length. If you only have 8 minutes available though, this other montage shows the launch from the cameras attached to the solid rocket boosters (SRB), with the recorded sound, from ignition up until separation, then down to splashdown in the sea.