Practical Pigment Mixing for Digital Painting

About a year ago, at SIGGRAPH Asia 2021 (which took place as a hybrid conference, both online and on site at the Tokyo International Forum), one of the technical papers that caught my attention was the publication by Šárka Sochorová and Ondřej Jamriška on color mixing.

Color mixing in most digital painting tools is infamously unsatisfying, often limited to a linear interpolation in RGB space, resulting in unpleasant gradients very different from what one would expect. Ten years ago I mentioned an article presenting the color mixing of the application Paper, which tried to solve this very problem.

This time, the core idea is to model colors as pigments: estimate pigment concentrations from the color (moving, in a way, from RGB space to “pigment space”), interpolate those concentrations, then convert back to RGB space.

The paper uses the Kubelka-Munk model to estimate colors from pigment concentrations. The problem, however, is to find a transformation between the two spaces. A first assumption is made on the available pigments, essentially restricting them to CMYK. Two problems are then addressed: RGB colors that cannot be represented with those pigments, and conversely pigment mixtures that cannot be represented in RGB.
The paper proposes a remapping that enables a transform and its inverse, making it possible to move from RGB space to pigment space, interpolate in pigment space, and move back to RGB space.
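
To illustrate why interpolating pigments differs from interpolating RGB values, here is a minimal sketch of single-constant Kubelka-Munk mixing (my own simplification, not the remapping proposed in the paper): reflectance is converted to an absorption/scattering ratio K/S, concentrations are interpolated in that space, and the result is converted back to reflectance. The colors and weights below are made up for illustration.

```python
import numpy as np

def reflectance_to_ks(r):
    """Kubelka-Munk: reflectance -> absorption/scattering ratio K/S."""
    r = np.clip(r, 1e-4, 1.0 - 1e-4)  # avoid division by zero
    return (1.0 - r) ** 2 / (2.0 * r)

def ks_to_reflectance(ks):
    """Inverse relation: K/S -> reflectance."""
    return 1.0 + ks - np.sqrt(ks ** 2 + 2.0 * ks)

def mix_pigments(r1, r2, t):
    """Mix two 'pigments' given by their RGB reflectance, with concentration t."""
    ks = (1.0 - t) * reflectance_to_ks(r1) + t * reflectance_to_ks(r2)
    return ks_to_reflectance(ks)

blue   = np.array([0.10, 0.20, 0.70])
yellow = np.array([0.90, 0.85, 0.05])

print(mix_pigments(blue, yellow, 0.5))    # dark green, like mixing paints
print(0.5 * blue + 0.5 * yellow)          # washed-out gray-green, typical RGB lerp
```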

You could therefore argue this is physically based diffuse color mixing.

Finally, the implementation of the proposed model, Mixbox, is available under a CC BY-NC license:
https://github.com/scrtwpns/mixbox
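
Mixing two colors with the library is then a one-liner; the snippet below is a hypothetical usage of the Python bindings, so check the repository README for the exact function names and color format.

```python
# Hypothetical usage of the Mixbox Python bindings (names and color format
# may differ from the actual API; see the repository README).
import mixbox

blue   = (0, 33, 133)
yellow = (252, 211, 0)

mixed = mixbox.lerp(blue, yellow, 0.5)  # pigment-based mix, expected to be green
print(mixed)
```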

Two Minute Papers did a video on this paper as well:

https://youtube.com/watch?v=b2D_5G_npVI

Reading list on ReSTIR

Recently a short video from dark magic programmer Tomasz Stachowiak made the rounds in the graphics programming community, to the sound of jaws hitting the floor in its wake. It shows his recent progress on his pet renderer project: beautiful real-time global illumination with fast convergence and barely any noise, in a static environment with dynamic lighting.

In a Twitter thread where he discussed some details, one keyword in particular caught my attention: ReSTIR.

ReSTIR stands for “Reservoir-based Spatio-Temporal Importance Resampling”; it is a sampling technique first published at SIGGRAPH 2020 and refined since.
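
To give a rough idea of the building block behind the name, here is a minimal sketch (my own simplification, not code from any of the papers) of a weighted reservoir performing resampled importance sampling for direct lighting: candidate light samples are streamed through the reservoir, which keeps a single one with probability proportional to its resampling weight. The spatial and temporal reuse that gives ReSTIR its power combines such reservoirs across neighboring pixels and previous frames; `target_pdf` here is a placeholder for something like the unshadowed light contribution.

```python
import random

class Reservoir:
    """Keeps one sample out of a weighted stream (weighted reservoir sampling)."""
    def __init__(self):
        self.sample = None
        self.w_sum = 0.0
        self.count = 0

    def update(self, candidate, weight):
        self.w_sum += weight
        self.count += 1
        if self.w_sum > 0.0 and random.random() < weight / self.w_sum:
            self.sample = candidate

def ris_pick_light(lights, target_pdf, num_candidates=32):
    """Resampled importance sampling: draw candidates uniformly, keep one with
    probability proportional to target_pdf, return it with its RIS weight."""
    r = Reservoir()
    source_pdf = 1.0 / len(lights)  # candidates are drawn uniformly
    for _ in range(num_candidates):
        light = random.choice(lights)
        w = target_pdf(light) / source_pdf  # resampling weight p_hat / p
        r.update(light, w)
    if r.sample is None:
        return None, 0.0
    # Unbiased contribution weight: (1 / p_hat(y)) * (w_sum / M)
    return r.sample, r.w_sum / (r.count * target_pdf(r.sample))
```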

The original publication

Spatiotemporal reservoir resampling for real-time ray tracing with dynamic direct lighting
The publication page includes the recording of the SIGGRAPH presentation, with a well-articulated explanation of the technique by lead author Benedikt Bitterli.
(same publication hosted on the NVIDIA website).

Explanations of ReSTIR

Improvements over the original publication

After the initial publication, NVIDIA published a refined version that produces images with less noise at a lower cost, which they call “RTXDI” (for RTX Direct Illumination).

Other limitations

When discussing some of the limitations of ReSTIR on Twitter, Chris Wyman made the following remarks:

To be clear, right now, ReSTIR is a box of razor blades without handles (or a box of unlabeled knobs). It’s extremely powerful, but you have to know what you’re doing. It is not intuitive, if your existing perspective is traditional Monte Carlo (or real-time) sampling techniques.

People sometimes think SIGGRAPH paper = solved. Nope. We’ve learned a lot since the first paper, and our direct lighting is a lot more stable with that knowledge. We’re still learning how to do it well on full-length paths.

And there’s a bunch of edge cases, even in direct lighting, that we know how to solve but haven’t had time to write them up, polish, and demo.

We haven’t actually tried to solve the extra noise at disocclusions in (what I think of as) a very principled way. Right now a world-space structure is probably the best way. I’m pretty sure it can be done without a (formal) world-space structure, just “more ReSTIR.”

Implementing a Physically Based Shading model without locking yourself in

Over the last few months I have been trying to push my understanding of Physically Based Shading, actively exploring every corner and turning over every stone to uncover any area where I lack knowledge. Although this is still an ongoing process and I have a lot left to do, I thought I could already share some of what I have learned along the way.

Last weekend the Easter demoparty event Revision took place, as an online version due to the current pandemic situation. There, I presented a talk on Physically Based Shading, in which I covered electromagnetism and existing models, and gave a brief overview of a prototype I am working on.

The presentation goes into a lot of detail about the interaction of light with matter from a physics point of view, then builds its way up to the Cook-Torrance specular BRDF model. The diffuse BRDF and Image Based Lighting were skipped due to time constraints. I am considering doing a Part 2 to address those topics, but I haven’t decided anything yet.
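
For reference, here is a minimal sketch of the Cook-Torrance specular BRDF with term choices commonly used in real time (GGX distribution, Smith masking-shadowing, Schlick Fresnel); the talk goes over the terms in more detail, and other choices are possible.

```python
import math

def ggx_distribution(n_dot_h, roughness):
    """GGX / Trowbridge-Reitz normal distribution function D."""
    a2 = roughness ** 4  # alpha = roughness^2, following the common remapping
    d = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * d * d)

def smith_ggx_g1(n_dot_x, roughness):
    """Smith masking-shadowing term for a single direction."""
    a2 = roughness ** 4
    return 2.0 * n_dot_x / (n_dot_x + math.sqrt(a2 + (1.0 - a2) * n_dot_x * n_dot_x))

def fresnel_schlick(v_dot_h, f0):
    """Schlick approximation of the Fresnel reflectance."""
    return f0 + (1.0 - f0) * (1.0 - v_dot_h) ** 5

def cook_torrance_specular(n_dot_l, n_dot_v, n_dot_h, v_dot_h, roughness, f0):
    """Cook-Torrance specular BRDF: D * F * G / (4 * NdotL * NdotV)."""
    d = ggx_distribution(n_dot_h, roughness)
    g = smith_ggx_g1(n_dot_l, roughness) * smith_ggx_g1(n_dot_v, roughness)
    f = fresnel_schlick(v_dot_h, f0)
    return d * f * g / (4.0 * n_dot_l * n_dot_v + 1e-7)
```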

In the meantime, please leave a comment or contact me if you notice any mistake or inaccuracy.

Abstract

How do you implement Physically Based Shading for your demos, yet keep the possibility to try something completely different without having to rewrite everything?
In this talk we will first get an intuitive understanding of what makes matter look the way it does, with as much detail as we can given the time we have. We will then see how this is modeled by a BRDF (Bidirectional Reflectance Distribution Function) and review some of the available models.
We will also see what makes it challenging for design and for real-time implementation. Finally we will discuss a possible implementation that allows experimenting with different models, can work in a variety of cases, and remains compatible with size coding constraints.

Slides

Here are the slides, together with the text of the talk and the link to the references:
Implementing a Physically Based Shading model without locking yourself in.

Video

And finally here is the recording of the talk, including a quick demonstration of the prototype:

Interference shader

Here is the shader used during the presentation to illustrate light interaction at the interface between two media:

Acknowledgements

Thanks again to Alan Wolfe for reviewing the text, Alkama for the motivation and questions upfront and help in the video department, Scoup and the Revision crew for organizing the seminars, Ronny and Siana for the help in the sound department, and everyone who provided feedback on my previous article on Physically Based Shading.

Addendum

Following the publication of this article, Nathan Reed made several comments on Twitter:

FWIW – I think the model of refraction by the electromagnetic field causing electrons to oscillate is the better one. This explains not only refraction but reflection as well, and even total internal reflection. Feynman does out the wave calculations: https://feynmanlectures.caltech.edu/II_33.html

It also explains better IMO why a light wave keeps its direction in a material. If an atom absorbs and re-emits the photon there is no reason why it should be going in the same direction as before (conservation of momentum is maintained if the atom recoils). Besides which, the lifetime of an excited atomic state is many orders of magnitude longer than the time needed for a light wave to propagate across the diameter of the atom (even at an IOR-reduced speed).

Moreover, in the comments of the shader above, CG researcher Fabrice Neyret mentioned a presentation of his from 2019, which lists interactions of light with matter: Colors of the universe.
Quoting his summarized comment:

In short: the notion of photons (and their speed) in matter is a macroscopic deceiving representation, since it’s about interference between incident and reactive fields (reemitted by the dipoles, at least for dielectrics).


Long hiatus

Last week I was lucky enough to attend SIGGRAPH 2018, in Vancouver. My colleagues and I were presenting at a booth the work we had done, a VR story with a distinctive comic book look. I was also invited to participate in a panel session on the demoscene, where I shared some lessons learned while making the 64k intro H – Immersion. The event brought a certain sense of conclusion to this work, aside from filling me with inspiration and motivation to try new things.

It has been a long time since I last posted anything here. For the last two years the majority of my spare time went into making that 64k intro. In fact the last post, “Intersection of a ray and a cone”, was related to it. I was implementing volumetric lighting for the underwater scenes, and wanted to resolve cones of light with ray tracing, before marching inside those cones. LLB and I have talked about the creation process in two making-of articles: “A dive into the making of Immersion”, and “Texturing in a 64kB intro”.

During that time, a lot of new things have happened in the computer graphics community. It has been difficult to keep track of everything. The last topic I started experimenting with is point cloud and mesh capture from photos; I might expand on it here in the future. I also want to experiment with DIY motion capture. Anyway, it’s time to resume posting here.

Sophie Wilson – The future of microprocessors

In this 2014 talk, one of the designers of the original ARM processor, Sophie Wilson, gives an overview of the history of processors and what to expect in the future.

The presentation covers in layman’s terms topics like Moore’s law (obviously), pipelining, parallelism, power consumption, heat dissipation, processor specialization and cost of production, among other things. As explained, all those aspects are facing difficult challenges that are likely to shape the future of microprocessors, which in turn impacts both hardware and software engineers.

Here is the same presentation, given again in 2016:

Christopher McKay – Life beyond the Earth

So that’s my job in a sense: search other worlds for alien life.

So when I’m on a long plane flight, like coming over here, and the guy sitting next to me says: “So what do you do?”. Chatty fellow. I say: “Well I search other worlds for alien life.”. And then, he leaves me alone for the rest of the flight, I can get some sleep. It’s a great job description, I like it.

This excerpt is part of the introduction of the following lecture by NASA scientist Dr. Christopher McKay, on the search for life beyond Earth. He talks about Mars exploration, a potential mission to Enceladus, the challenges of field research in such environments, how to detect life (without accidentally destroying it in the attempt), how to avoid contamination, and some practical consequences of finding life. It’s a great insight into the current state of the field, delivered with an entertaining tone.

The physics of cloaking

In this seminar, Dr. Greg Gbur presents the current state of research on cloaking devices, the differences between science fiction and what actually seems possible, and various applications beyond invisibility, like protection from thermal radiation or earthquakes.