Practical Pigment Mixing for Digital Painting

About a year ago, at SIGGRAPH Asia 2021 (which took place as a hybrid conference, both online and on site at the Tokyo International Forum), one of the technical papers that caught my attention was the publication by Šárka Sochorová and Ondřej Jamriška on color mixing.

Color mixing in most digital painting tools is infamously unsatisfying, often limited to a linear interpolation in RGB space, resulting in displeasing gradients very different from what one would expect: blending blue and yellow halfway, for instance, lands on a neutral gray instead of the green a painter would get. Ten years ago I mentioned this article, which presented the color mixing of the application Paper, an attempt at solving this very problem.

This time, the core idea is to model colors as pigments: estimate the pigment concentration based on the color, so in a way, move from RGB space to “pigment space”, and interpolate the pigment concentration, before converting back to RGB space.

The paper uses the Kubelka-Munk model for estimating colors from pigment concentrations. The problem, however, is to find a transformation between the two spaces. A first assumption is made about the available pigments, essentially restricting them to CMYK. Then two problems are addressed: RGB colors that cannot be represented with those pigments, and likewise pigment colors that cannot be represented in RGB.
The paper proposes a remapping that enables a transform and its inverse, making it possible to move from RGB space to pigment space, interpolate in pigment space, and move back to RGB space.
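
For reference, the standard Kubelka-Munk relations this kind of approach builds on (in their usual single-constant, infinitely-thick-layer form; the notation here is mine, not the paper's): the absorption and scattering coefficients of a mixture are concentration-weighted sums of the pigments' coefficients, and the reflectance follows from their ratio.

$$K_{\text{mix}} = \sum_i c_i K_i, \qquad S_{\text{mix}} = \sum_i c_i S_i, \qquad R_\infty = 1 + \frac{K}{S} - \sqrt{\left(\frac{K}{S}\right)^2 + 2\,\frac{K}{S}}$$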

You could argue this is therefore a physically based model of diffuse color mixing.
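
To make the pipeline concrete, here is a minimal sketch of the idea, not the paper's actual implementation; rgb_to_concentrations and concentrations_to_rgb are hypothetical placeholders for the transform and its inverse described above.

```python
import numpy as np

def naive_mix(rgb1, rgb2, t):
    """What most painting tools do: per-channel linear interpolation in RGB."""
    return (1.0 - t) * np.asarray(rgb1, dtype=float) + t * np.asarray(rgb2, dtype=float)

def pigment_mix(rgb1, rgb2, t, rgb_to_concentrations, concentrations_to_rgb):
    """The idea of the paper: interpolate pigment concentrations, not RGB values.

    rgb_to_concentrations / concentrations_to_rgb stand in for the remapped
    transform between RGB and "pigment space" and its inverse.
    """
    c1 = rgb_to_concentrations(rgb1)   # estimate pigment concentrations of each input
    c2 = rgb_to_concentrations(rgb2)
    c = (1.0 - t) * c1 + t * c2        # interpolate in pigment space
    return concentrations_to_rgb(c)    # back to RGB (e.g. via Kubelka-Munk)
```

With blue and yellow as inputs, naive_mix passes through gray while the pigment route passes through green, which is exactly the behavior the paper is after.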

Finally, the implementation of the proposed model, Mixbox, is available under a CC BY-NC license:
https://github.com/scrtwpns/mixbox

Two Minute Papers did a video on this paper as well:

https://youtube.com/watch?v=b2D_5G_npVI

Reading list on ReSTIR

Recently a short video from dark magic programmer Tomasz Stachowiak made the rounds in the graphics programming community, to the sound of jaws hitting the floor in its wake. It shows his recent progress on his pet-project renderer: beautiful real-time global illumination with fast convergence and barely any noise, in a static environment with dynamic lighting.

In a Twitter thread where he discussed some details, one keyword in particular caught my attention: ReSTIR.

ReSTIR stands for “Reservoir-based Spatio-Temporal Importance Resampling”; it is a sampling technique that was published at SIGGRAPH 2020 and has been refined ever since.
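
At its core is resampled importance sampling done with weighted reservoir sampling: stream through many candidate light samples, keep a single one with probability proportional to its resampling weight, and carry an unbiased contribution weight along. Here is a minimal sketch of just that building block (the spatial and temporal reuse that gives ReSTIR its name then amounts to merging such reservoirs across pixels and frames); source_pdf and target_pdf are assumed, hypothetical callables.

```python
import random

class Reservoir:
    """Weighted reservoir holding a single selected sample."""
    def __init__(self):
        self.y = None        # currently selected sample
        self.w_sum = 0.0     # running sum of resampling weights
        self.M = 0           # number of candidates seen so far

    def update(self, x, w):
        self.w_sum += w
        self.M += 1
        # Keep the new candidate with probability w / w_sum.
        if self.w_sum > 0.0 and random.random() < w / self.w_sum:
            self.y = x

def ris_select(candidates, source_pdf, target_pdf):
    """Pick one candidate with probability proportional to target_pdf/source_pdf
    and return it together with its unbiased contribution weight W."""
    r = Reservoir()
    for x in candidates:
        r.update(x, target_pdf(x) / source_pdf(x))
    if r.y is None or target_pdf(r.y) == 0.0:
        return None, 0.0
    # f(y) * W is then an unbiased estimate of the integral of f.
    W = r.w_sum / (r.M * target_pdf(r.y))
    return r.y, W
```

In the direct-lighting setting of the paper, the target distribution is typically the unshadowed contribution of a light sample (BSDF times emitted radiance times the geometry term), which is cheap to evaluate because it skips the shadow ray.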

The original publication

Spatiotemporal reservoir resampling for real-time ray tracing with dynamic direct lighting
The publication page includes the recording of the SIGGRAPH presentation, with a well-articulated explanation of the technique by main author Benedikt Bitterli.
(same publication hosted on the NVidia website).

Explanations of ReSTIR

Improvements over the original publication

After the initial publication, NVidia published a refined version producing images with less noise at a lower cost, which they call “RTXDI” (for RTX Direct Illumination).

Other limitations

When discussing some of the limitations of ReSTIR on Twitter, Chris Wyman made the following remarks:

To be clear, right now, ReSTIR is a box of razor blades without handles (or a box of unlabeled knobs). It’s extremely powerful, but you have to know what you’re doing. It is not intuitive, if your existing perspective is traditional Monte Carlo (or real-time) sampling techniques.

People sometimes think SIGGRAPH paper = solved. Nope. We’ve learned a lot since the first paper, and our direct lighting is a lot more stable with that knowledge. We’re still learning how to do it well on full-length paths.

And there’s a bunch of edge cases, even in direct lighting, that we know how to solve but haven’t had time to write them up, polish, and demo.

We haven’t actually tried to solve the extra noise at disocclusions in (what I think of as) a very principled way. Right now a world-space structure is probably the best way. I’m pretty sure it can be done without a (formal) world-space structure, just “more ReSTIR.”

From Maxwell’s equations to Fresnel’s equations

This series of short videos shows how to go from Maxwell’s equations all the way to Fresnel’s equations. Each video is about 10 to 15 minutes long.

The first four videos show how to use boundary conditions to deduce the relationship between the electromagnetic field on both sides of a surface (or interface between two different media).
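
For reference, and assuming the usual case of no free surface charges or currents at the interface, those boundary conditions amount to the continuity of the tangential components of E and H and of the normal components of D and B:

$$\mathbf{n}\times(\mathbf{E}_2-\mathbf{E}_1)=\mathbf{0},\quad \mathbf{n}\times(\mathbf{H}_2-\mathbf{H}_1)=\mathbf{0},\quad \mathbf{n}\cdot(\mathbf{D}_2-\mathbf{D}_1)=0,\quad \mathbf{n}\cdot(\mathbf{B}_2-\mathbf{B}_1)=0$$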

The next four videos use the previous results to obtain the Fresnel equations, for the S-polarized and P-polarized cases.
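
For reference, the end result of that derivation, in the usual amplitude-coefficient convention, with $\theta_i$ and $\theta_t$ the incident and transmitted angles related by Snell’s law $n_1\sin\theta_i = n_2\sin\theta_t$:

$$r_s = \frac{n_1\cos\theta_i - n_2\cos\theta_t}{n_1\cos\theta_i + n_2\cos\theta_t}, \qquad r_p = \frac{n_2\cos\theta_i - n_1\cos\theta_t}{n_2\cos\theta_i + n_1\cos\theta_t}$$

The reflectances are then $R_s = |r_s|^2$ and $R_p = |r_p|^2$.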

The rest of the series then dives into other topics like thin film interference.

The series assumes the viewer is already familiar with Maxwell’s equations, so it can be helpful to first watch the explanation of Maxwell’s equations by Grant Sanderson of 3Blue1Brown.

Francesca Da Rimini

Following yesterday’s post about a music video featuring modern dance and computer visual effects, here is a video featuring classical dance and a robot-controlled camera.

Francesca Da Rimini was a historical figure portrayed in the Divine Comedy and numerous works of art, including a symphonic poem by Tchaikovsky. In 2014 the director Tarik Abdel-Gawad and his team recorded a performance by two dancers of the San Francisco Ballet, Maria Kochetkova and Joan Boada, using a robot-controlled camera. Tarik was also the technical and creative director of “Box”, the demonstration video that went viral and featured the same robots from Bot&Dolly (a company acquired by Google in 2013).

In the accompanying backstage video, he explains how seeing dancers rehearse over and over gave him the idea of experimenting with a pre-programmed robot, in order to make the camera part of the choreography and give the viewer a closer, more intimate view of the performance.

Christopher McKay – Life beyond the Earth

So that’s my job in a sense: search other worlds for alien life.

So when I’m on a long plane flight, like coming over here, and the guy sitting next to me says: “So what do you do?”. Chatty fellow. I say: “Well I search other worlds for alien life.”. And then, he leaves me alone for the rest of the flight, I can get some sleep. It’s a great job description, I like it.

This excerpt is part of the introduction of the following lecture by NASA scientist Dr. Christopher McKay on the search for life beyond Earth. He talks about Mars exploration, a potential mission to Enceladus, the challenges of field research in such environments, how to detect life (without accidentally destroying it in the attempt), how to avoid contamination, and what some practical consequences of finding life would be. It’s a great insight into the current state of the field, delivered with an entertaining tone.

Commented footage of the space shuttle launch

Ascent is a commented montage of carefully selected videos of Space Shuttle launches, made by the Glenn Research Center. A DVD and a Blu-ray were produced but apparently never reliably distributed, so in the meantime the DVD ISO can be downloaded from this unofficial website.

The documentary is 45 minutes long and presents outstanding footage taken during the launches of missions STS-114, STS-117, and STS-124, from some of the 125 cameras used to ensure vehicle safety. Views include close-ups of the ignition and of the launchpad at 400 fps, mid-range footage, and footage taken from over 30 km away (with the equivalent of a 4000 mm lens). The commentary gives abundant detail about what is happening in the picture as well as the camera involved (lens, film, speed…).

As mentioned, this video is 45 minutes long, but I found it so captivating that I hardly noticed the length. If you only have 8 minutes available though, this other montage shows the launch from the cameras attached to the solid rocket boosters (SRBs), with the recorded sound, from ignition through separation and down to splashdown in the sea.