Practical Pigment Mixing for Digital Painting

About a year ago at SIGGRAPH Asia 2021 (which took place as a hybrid conference both online and on site at the Tokyo International Forum) one of the technical papers that caught my attention was the publication by Šárka Sochorová and Ondřej Jamriška on color mixing.

Color mixing in most digital painting tools is infamously unsatisfying, often limited to a linear interpolation in RGB space, resulting in unpleasant gradients very different from what one would expect. Ten years ago I mentioned this article that presented the color mixing of the application Paper, which tried to solve this very problem.

This time, the core idea is to model colors as pigments: estimate pigment concentrations from the color (in effect moving from RGB space to a “pigment space”), interpolate those concentrations, then convert back to RGB space.

The paper uses the Kubelka-Munk model to estimate colors from pigment concentrations. The problem, however, is to find a transformation between the two spaces. A first assumption is made on the available pigments, essentially restricting them to CMYK. Two problems are then addressed: RGB colors that cannot be represented with those pigments, and conversely pigment mixtures that cannot be represented in RGB.
The paper proposes a remapping that enables a transform and its inverse, making it possible to move from RGB space to pigment space, interpolate in pigment space, and move back to RGB space.

You could argue this therefore amounts to physically based diffuse color mixing.
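
To give an idea of why this behaves so differently from an RGB lerp, here is a minimal Python sketch of Kubelka-Munk mixing. All coefficients are made up for illustration; the paper works with carefully fitted pigment data and a latent representation, not these toy values:

```python
import numpy as np

# Hypothetical absorption (K) and scattering (S) coefficients for two
# pigments, reduced to 3 spectral bands (R, G, B). Real pigments need
# many more bands and measured data; these values are invented.
K = np.array([
    [0.9,  0.05, 0.05],   # cyan-ish pigment: absorbs mostly red
    [0.05, 0.05, 0.9],    # yellow-ish pigment: absorbs mostly blue
])
S = np.array([
    [0.5, 0.5, 0.5],
    [0.5, 0.5, 0.5],
])

def km_reflectance(k, s):
    """Kubelka-Munk reflectance of an opaque pigment layer, per band."""
    ks = k / s
    return 1.0 + ks - np.sqrt(ks * ks + 2.0 * ks)

def mix_pigments(concentrations):
    """Mix by interpolating pigment concentrations, not colors."""
    c = np.asarray(concentrations)[:, None]
    return km_reflectance((c * K).sum(axis=0), (c * S).sum(axis=0))

print(mix_pigments([1.0, 0.0]))  # cyan alone
print(mix_pigments([0.5, 0.5]))  # cyan + yellow: a saturated green,
                                 # where a plain RGB average washes out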

Finally, the implementation of the proposed model, Mixbox, is available under a CC BY-NC license:
https://github.com/scrtwpns/mixbox
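
Going by the repository’s README from memory (so treat the exact package and function names as assumptions to double-check), using it from Python looks roughly like this:

```python
import mixbox  # Python flavor of the library; the install name may be pymixbox

rgb1 = (0, 33, 133)    # a deep blue
rgb2 = (252, 211, 0)   # a bright yellow

# Interpolating halfway in pigment space: blue and yellow give green,
# as real paints would, instead of the grayish result of an RGB lerp.
rgb_mix = mixbox.lerp(rgb1, rgb2, 0.5)
print(rgb_mix)
```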

Two Minute Papers did a video on this paper as well:

https://youtube.com/watch?v=b2D_5G_npVI

Reading list on ReSTIR

Recently a short video from dark magic programmer Tomasz Stachowiak made the rounds in the graphics programming community, to the sound of jaws hitting the floor in its wake. It shows the recent progress on his pet renderer project: beautiful real-time global illumination with fast convergence and barely any noise, in a static environment with dynamic lighting.

In a Twitter thread where he discussed some details, one keyword in particular caught my attention: ReSTIR.

ReSTIR stands for “Reservoir-based Spatio-Temporal Importance Resampling”; it is a sampling technique published at SIGGRAPH 2020 that has been refined since.
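
At the heart of the technique is weighted reservoir sampling, which lets you pick a sample from a stream with probability proportional to its weight while storing only a single candidate. Here is a minimal Python sketch following the structure described in the paper; the target function p_hat (for example the unshadowed light contribution) and the source distribution are placeholders you would supply:

```python
import random

class Reservoir:
    """Weighted reservoir keeping a single sample from a stream."""
    def __init__(self):
        self.y = None        # currently selected sample
        self.w_sum = 0.0     # running sum of weights
        self.M = 0           # number of candidates seen

    def update(self, sample, weight):
        self.w_sum += weight
        self.M += 1
        # Each sample ends up selected with probability proportional
        # to its weight, while only one candidate is ever stored.
        if random.random() * self.w_sum < weight:
            self.y = sample

def ris_sample(candidates, source_pdf, p_hat):
    """Resampled importance sampling: draw candidates from a cheap
    source distribution, reweigh them toward the target p_hat, and
    keep one via the reservoir."""
    r = Reservoir()
    for x in candidates:
        r.update(x, p_hat(x) / source_pdf(x))
    if r.y is None:
        return None, 0.0
    # Contribution weight for the selected sample, as used by RIS.
    return r.y, r.w_sum / (r.M * p_hat(r.y))
```

The spatio-temporal part comes from merging reservoirs across neighboring pixels and across frames, which is cheap because merging two reservoirs is essentially just another update call.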

The original publication

Spatiotemporal reservoir resampling for real-time ray tracing with dynamic direct lighting
The publication page includes the recording of the SIGGRAPH presentation, with a well-articulated explanation of the technique by main author Benedikt Bitterli.
(The same publication is also hosted on the NVIDIA website.)

Explanations of ReSTIR

Improvements over the original publication

After the initial publication, NVIDIA published a refined version producing images with less noise at a lower cost, which they call “RTXDI” (for RTX Direct Illumination).

Other limitations

Discussing some of the limitations of ReSTIR on Twitter, Chris Wyman made the following remarks:

To be clear, right now, ReSTIR is a box of razor blades without handles (or a box of unlabeled knobs). It’s extremely powerful, but you have to know what you’re doing. It is not intuitive, if your existing perspective is traditional Monte Carlo (or real-time) sampling techniques.

People sometimes think SIGGRAPH paper = solved. Nope. We’ve learned a lot since the first paper, and our direct lighting is a lot more stable with that knowledge. We’re still learning how to do it well on full-length paths.

And there’s a bunch of edge cases, even in direct lighting, that we know how to solve but haven’t had time to write them up, polish, and demo.

We haven’t actually tried to solve the extra noise at disocclusions in (what I think of as) a very principled way. Right now a world-space structure is probably the best way. I’m pretty sure it can be done without a (formal) world-space structure, just “more ReSTIR.”

Long hiatus

Last week I was lucky enough to attend SIGGRAPH 2018, in Vancouver. My colleagues and I were presenting at a booth the work we had done: a VR story with a distinctive comic book look. I was also invited to participate in a panel session on the demoscene, where I shared some lessons learned while making the 64k intro H – Immersion. The event brought a certain sense of conclusion to this work, besides filling me with inspiration and motivation to try new things.

It has been a long time since I last posted anything here. For the last two years the majority of my spare time went into making that 64k intro. In fact the last post, “Intersection of a ray and a cone”, was related to it. I was implementing volumetric lighting for the underwater scenes, and wanted to resolve cones of light with ray tracing, before marching inside those cones. LLB and I have talked about the creation process in two making-of articles: “A dive into the making of Immersion”, and “Texturing in a 64kB intro”.

During that time, a lot of new things have happened in the computer graphics community, and it has been difficult to keep track of everything. The last topic I started experimenting with is point cloud and mesh capture from photos; I might expand on it here in the future. I also want to experiment with DIY motion capture. Anyway, it’s time to resume posting here.

Volumetric light scattering

Here are a couple of links on how to render the light scattering effect (a.k.a. volumetric shadows):

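The common idea behind these techniques is to ray march the view ray through the shadowed volume: step along the ray, test each sample point against the shadow map, and accumulate in-scattered light attenuated by the medium. Here is a minimal single-scattering sketch in Python; the shadow test is a hypothetical callback standing in for a shadow map lookup, and attenuation between the light and the sample point is ignored for brevity:

```python
import math

def scattered_light(ray_origin, ray_dir, max_dist, in_shadow,
                    sigma_s=0.05, sigma_t=0.06, light_intensity=1.0,
                    num_steps=64):
    """Single scattering along a view ray in a homogeneous medium.

    in_shadow(p) is a hypothetical callback, e.g. a shadow map lookup,
    returning True when the light is blocked at point p. sigma_s and
    sigma_t are the scattering and extinction coefficients.
    """
    step = max_dist / num_steps
    radiance = 0.0
    for i in range(num_steps):
        t = (i + 0.5) * step            # sample in the segment middle
        p = tuple(o + d * t for o, d in zip(ray_origin, ray_dir))
        if not in_shadow(p):
            # Light scattered toward the eye from this sample is
            # attenuated by the medium along the view path.
            radiance += light_intensity * sigma_s * math.exp(-sigma_t * t) * step
    return radiance

# Toy usage: the light is blocked everywhere below the z = 0 plane.
L = scattered_light((0, 0, 1), (0, 0, -1), 2.0, lambda p: p[2] < 0.0)
```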

The rendering tools in the film industry

Here is a list of articles published by fxguide, giving fascinating insight into the rendering tools used by the film industry.

  • Ben Snow: the evolution of ILM’s lighting tools (January 2011)
    A presentation of the evolution of the technology and tools used at Industrial Light and Magic over the years and movies, from the mid-90s to today.
  • Monsters University: rendering physically based monsters (June 2013)
  • The Art of Rendering (April 2012)
    A description of the different techniques used in high end rendering and the major engines.
  • The State of Rendering (July 2013): part 1, part 2
    A lengthy overview of the state of the art in high end rendering, comparing the different tools and rendering solutions available, their approach and design choices, strengths and weaknesses as well as the consequences in terms of quality, scalability and render time.


A list of important graphics research papers

This is an announcement that caught my full attention. Since Twitter makes it a mess to find anything older than a day, here is the list so far:

  1. A Characterization of Ten Hidden-Surface Algorithms, Sutherland et al., ACM Computing Surveys, 1974
  2. Survey of Texture Mapping, Paul Heckbert, IEEE Computer Graphics and Applications, 1986
  3. Rendering Complex Scenes with Memory-Coherent Ray Tracing, Matt Pharr et al., proceedings of SIGGRAPH, 1997
  4. An Efficient Representation for Irradiance Environment Maps, Ramamoorthi & Hanrahan, proceedings of SIGGRAPH, 2001
  5. Decoupled Sampling for Graphics Pipelines, Ragan-Kelley et al., ACM Transactions on Graphics, 2011
  6. The Aliasing Problem in Computer-Generated Shaded Images, Franklin C. Crow, Communications of the ACM, 1977
  7. Ray Tracing Complex Scenes, Kay & Kajiya, proceedings of SIGGRAPH, 1986
  8. Hierarchical Z-buffer Visibility, Greene et al., proceedings of SIGGRAPH, 1993
  9. Geometry Images, Gu et al., ACM Transactions on Graphics, 2002
  10. A Hidden-Surface Algorithm with Anti-Aliasing, Edwin Catmull, proceedings of SIGGRAPH, 1978
  11. Modeling the Interaction of Light Between Diffuse Surfaces, Goral et al., proceedings of SIGGRAPH, 1984
    “The first radiosity paper, with the real physical Cornell box (which I’ve actually seen in real life!)”
  12. Pyramidal Parametrics, Lance Williams, proceedings of SIGGRAPH, 1983
  13. Rendering synthetic objects into real scenes: bridging traditional and image-based graphics with global illumination and high dynamic range photography, Paul Debevec, proceedings of SIGGRAPH, 1998
    “Influence on gfx proportional to title length!”
  14. A parallel algorithm for polygon rasterization, Juan Pineda, proceedings of SIGGRAPH, 1988
  15. Rendering from compressed textures, Beers et al., proceedings of SIGGRAPH, 1996
    “This is one (out of 3) of the 1st texture compression papers ever! Uses VQ so probably not something you want today, but major eye opener!”
  16. A general version of Crow’s shadow volumes, P. Bergeron, IEEE Computer Graphics and Applications, 1986
    “Generalized SV. Nice trick”
  17. Reality engine graphics, Kurt Akeley, proceedings of SIGGRAPH, 1993
    “Paper describes MSAA, guard bands, etc etc”
  18. The design and analysis of a cache architecture for texture mapping, Hakura and Gupta, proceedings of ISCA, 1997
    “Classic texture $ paper!”
  19. Deep shadow maps, Lokovic and Veach, proceedings of SIGGRAPH, 2000
    “Lots of inspiration here!”
  20. The Reyes image rendering architecture, Cook et al., proceedings of SIGGRAPH, 1987
    “Sooo good & mega-influential!”
  21. A practical model for subsurface light transport, Jensen et al., proceedings of SIGGRAPH, 2001
  22. Casting curved shadows on curved surfaces, Lance Williams, proceedings of SIGGRAPH, 1978
    “*the* shadow map paper!”
  23. On the design of display processors, Myer and Sutherland, Communications of the ACM, 1968
    “Wheel of reincarnation”
  24. Ray tracing Jell-O brand gelatin, Paul S. Heckbert, Communications of the ACM, 1988
  25. Talisman: Commodity realtime 3D graphics for the PC, Torborg and Kajiya, proceedings of SIGGRAPH, 1996
  26. A Frequency Analysis of Light Transport, Durand et al., proceedings of SIGGRAPH, 2005
    “Very influential!!”
  27. An Ambient Light Illumination Model (behind a paywall), S. Zhukov, A. Iones, G. Kronin, Eurographics, 1998
    “First paper on ambient occlusion, AFAIK. Not that old…”

Ambient shadows in The Last of Us

Last month at SIGGRAPH, Michał Iwanicki of Naughty Dog presented his talk “Lighting technology in The Last of Us”, in which he focused on the technique they used for ambient shadows. In short: light maps and analytic occlusion with ellipsoid approximations of objects. Clever!
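
To give a rough idea of what analytic occlusion from such proxies can look like, here is a sketch of the sphere case in Python (an ellipsoid reduces to it by transforming the receiver point, and its normal via the inverse transpose, into a space where the ellipsoid becomes a unit sphere). This is the common solid-angle-times-cosine approximation, not necessarily the exact formula from the talk:

```python
import math

def sphere_occlusion(p, n, center, radius):
    """Approximate ambient occlusion at point p with unit normal n,
    cast by a spherical occluder. A common approximation, not
    necessarily the exact formula used in the talk."""
    to_c = [c - q for c, q in zip(center, p)]
    d2 = sum(v * v for v in to_c)
    cos_theta = sum(a * b for a, b in zip(to_c, n)) / math.sqrt(d2)
    # The solid angle subtended by the sphere falls off as r^2 / d^2;
    # weight it by the cosine with the receiver normal, clamped.
    return max(cos_theta, 0.0) * radius * radius / d2

# Usage sketch: darken the ambient term by the occlusion of one proxy.
ambient = 1.0
occ = sphere_occlusion((0, 0, 0), (0, 0, 1), (0, 0, 2), 0.5)
ambient *= max(1.0 - occ, 0.0)
```

Occlusion from several proxies can then be accumulated and used to attenuate the ambient or light map contribution.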