The white furnace test

The white furnace test is one of my favourite rendering debug tools. But before it became one, it was rather mysterious and abstract to me. Why would a publication proudly show what seemed like empty renders? What does it mean, and why would they care?

Slide from the presentation Revisiting Physically Based Shading at Imageworks, in which a white furnace test of the diffuse term is shown.
What’s up with the empty grey rectangle? The fact that it looks empty is the point.
Revisiting Physically Based Shading at Imageworks, presented at the SIGGRAPH 2017 course: Physically Based Shading in Theory and Practice.

The idea is the following: if you have a 100% reflective object that is lit by a uniform environment, it becomes indistinguishable from the environment. It doesn’t matter if the object is matte or mirror-like, or anything in between: it just “disappears”.
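In equation form: with a uniform environment, the incoming radiance is a constant that can be moved out of the rendering integral, so any BRDF that reflects 100% of the energy returns exactly the environment radiance, whatever its shape.

```latex
L_o(\omega_o)
  = \int_{\Omega} f(\omega_i, \omega_o) \, L_{env} \cos\theta_i \, d\omega_i
  = L_{env} \int_{\Omega} f(\omega_i, \omega_o) \cos\theta_i \, d\omega_i
  = L_{env}
```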

Accepting this idea took me a while, but there is a real-life situation in which you can experience this effect. Fresh snow can have an albedo as high as 90% to 98%, i.e. nearly perfect white. Combined with overcast weather or fog, it can sometimes appear featureless and become completely indistinguishable from the sky, to the point that you’re left skiing by feel because you can’t even tell the slope two steps in front of you. Everything is just a uniform white in all directions: the whiteout.

Photo taken on a ski track. The ground appears almost uniformly white.
Last time I visited a white furnace test. Note how the snow surface slope and details are almost invisible, and the sign in the background seems to be floating in the air.

With the knowledge that a 100% reflective object is supposed to look invisible when uniformly lit, verifying that it does is a good sanity test for a physically based renderer, and the reason why you sometimes see those curious illustrations in publications. It’s showing that the math checks out.

Those tests are usually intended to verify that a BRDF is energy preserving: making sure that it is neither losing nor adding energy. A typical concern, for example, is making sure materials don’t look darker as roughness increases and inter-reflections become too significant to be neglected. Missing energy is not the only concern though, and a grey environment (as opposed to a white one) is convenient, as any excess of reflected energy will appear brighter than the background.
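In other words, the test checks that the directional albedo of the BRDF is exactly 1 for every viewing direction: at most 1 would only mean no energy is invented (conservation), while exactly 1 also means none is lost (preservation).

```latex
\forall \omega_o : \quad \int_{\Omega} f(\omega_i, \omega_o) \cos\theta_i \, d\omega_i = 1
```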

Demonstration of the white furnace test on Shadertoy, or an expensive way to render an empty image. Press the play button to see the scene revealed.

But verifying the energy conservation of a BRDF is just one of the cases where the white furnace test is useful. Since a Lambertian BRDF with an albedo of 100% is perfectly energy preserving and completely trivial to implement, the white furnace test with such a white Lambert material can be used to reveal bugs in the renderer implementation itself.

There are so many aspects of the implementation that can go wrong: the sampling distribution, the proper weighting of the samples, a mistake in the PDF, a pi or a 2 factor forgotten somewhere… Those errors tend to be subtle and can result in a render that still looks reasonable. Nothing looks more like a correct shading than a slightly incorrect one.
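To make those failure points concrete, here is a minimal sketch of a white furnace estimator with cosine-weighted sampling; sampleCosineHemisphere and rand2 are hypothetical helpers standing in for whatever sampler and RNG the renderer uses. Every suspect listed above (the PDF, the sample weighting, the pi factors) appears explicitly:

```glsl
const float PI = 3.14159265359;

// Hypothetical helpers: any cosine-weighted hemisphere sampler and
// 2D random source will do.
vec3 sampleCosineHemisphere(vec3 n, vec2 u);
vec2 rand2(int i);

// Uniform grey environment: the expected result of the test.
vec3 envRadiance(vec3 dir) { return vec3(0.5); }

vec3 whiteFurnace(vec3 n, int sampleCount)
{
    vec3 albedo = vec3(1.0); // 100% reflective Lambertian
    vec3 sum = vec3(0.0);
    for (int i = 0; i < sampleCount; ++i)
    {
        vec3 wi   = sampleCosineHemisphere(n, rand2(i));
        vec3 f    = albedo / PI;            // Lambertian BRDF
        float c   = max(dot(n, wi), 0.0);   // cos(theta_i)
        float pdf = c / PI;                 // cosine-weighted pdf
        // f * cos / pdf simplifies to albedo: any stray or missing
        // pi factor here makes the sphere visible in the test.
        if (pdf > 0.0)
            sum += f * envRadiance(wi) * c / pdf;
    }
    return sum / float(sampleCount); // should equal envRadiance()
}
```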

So whether I’m writing a path tracer or one of its variants, generating a pre-convolved environment map, or trying different sampling distributions, my first sanity check is to make sure it passes the white furnace test with a pure white Lambertian BRDF. Once that is done (and as writing the demonstration shader above showed me once again, that can take a few iterations), I can have confidence in my implementation and test the BRDFs themselves.

Takeaway: the white furnace test is a very useful debugging tool to validate both the integration part and the BRDF part of your rendering.

Update: A comment on Hacker News mentioned that it would be useful to see an example of what failing the test looks like. So I’ve added a macro SIMULATE_INCORRECT_INTEGRATION to the shader above, to introduce a “bug”: the kind of mistake like forgetting that the integration over a hemisphere amounts to 2π, or forgetting to take the sampling distribution into account. When the “bug” is active, the sphere becomes visible because it doesn’t reflect the correct amount of energy.
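Against the sketch above, that kind of “bug” looks like this: with uniform hemisphere sampling the PDF is 1/2π, and forgetting to divide by it loses a factor 2π of energy (sampleUniformHemisphere is again a hypothetical helper; PI, rand2 and envRadiance are as in the earlier sketch):

```glsl
// Uniform hemisphere sampling: pdf = 1 / (2 * PI).
vec3 sampleUniformHemisphere(vec3 n, vec2 u); // hypothetical helper

vec3 whiteFurnaceBugged(vec3 n, int sampleCount)
{
    vec3 albedo = vec3(1.0);
    vec3 sum = vec3(0.0);
    for (int i = 0; i < sampleCount; ++i)
    {
        vec3 wi  = sampleUniformHemisphere(n, rand2(i));
        float c  = max(dot(n, wi), 0.0);
        vec3 f   = albedo / PI;
        // Correct: sum += f * envRadiance(wi) * c * 2.0 * PI;
        sum += f * envRadiance(wi) * c; // missing 1/pdf = 2 * PI
    }
    // Averages to envRadiance / (2 * PI) instead of envRadiance,
    // so the sphere shows up darker than the environment.
    return sum / float(sampleCount);
}
```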

Reading list on ReSTIR

Recently a short video from dark magic programmer Tomasz Stachowiak made the rounds in the graphics programming community, at the sound of jaws hitting the floor in its wake. It shows his recent progress on his pet renderer project: beautiful real-time global illumination with fast convergence and barely any noise, in a static environment with dynamic lighting.

In a Twitter thread where he discussed some details, one keyword in particular caught my attention: ReSTIR.

ReSTIR stands for “Reservoir-based Spatio-Temporal Importance Resampling” and is a sampling technique published at SIGGRAPH 2020 that has been getting refined since.
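The building block is weighted reservoir sampling: a tiny piece of state streams over many light-sample candidates and keeps one of them with probability proportional to its weight. A minimal sketch of the update step, following the structure described in the paper (the field names are mine):

```glsl
// Weighted reservoir sampling, the building block of ReSTIR:
// stream over candidates, keep one with probability w / wSum.
struct Reservoir
{
    int   selected; // the light sample currently held
    float wSum;     // running sum of candidate weights
    float M;        // number of candidates seen so far
};

void updateReservoir(inout Reservoir r, int candidate, float w, float u)
{
    // u is a fresh uniform random number in [0, 1).
    r.wSum += w;
    r.M    += 1.0;
    if (u * r.wSum < w)
        r.selected = candidate;
}
```

The “spatio-temporal” part comes from merging reservoirs across neighbouring pixels and across frames, which boils down to feeding the selection of one reservoir into another as a new candidate.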

The original publication

Spatiotemporal reservoir resampling for real-time ray tracing with dynamic direct lighting
The publication page includes the recording of the SIGGRAPH presentation, with a well-articulated explanation of the technique by lead author Benedikt Bitterli.
(same publication hosted on the NVIDIA website).

Explanations of ReSTIR

Improvements over the original publication

After the initial publication, NVIDIA published a refined version producing images with less noise at a lower cost, which they call “RTXDI” (for RTX Direct Illumination).

Other limitations

While discussing some of the limitations of ReSTIR on Twitter, Chris Wyman made the following remarks:

To be clear, right now, ReSTIR is a box of razor blades without handles (or a box of unlabeled knobs). It’s extremely powerful, but you have to know what you’re doing. It is not intuitive, if your existing perspective is traditional Monte Carlo (or real-time) sampling techniques.

People sometimes think SIGGRAPH paper = solved. Nope. We’ve learned a lot since the first paper, and our direct lighting is a lot more stable with that knowledge. We’re still learning how to do it well on full-length paths.

And there’s a bunch of edge cases, even in direct lighting, that we know how to solve but haven’t had time to write them up, polish, and demo.

We haven’t actually tried to solve the extra noise at disocclusions in (what I think of as) a very principled way. Right now a world-space structure is probably the best way. I’m pretty sure it can be done without a (formal) world-space structure, just “more ReSTIR.”

Various links on ray tracing

Here are some links related to ray tracing, and more specifically, path tracing.

Some ray tracing related projects or blogs:

Some major publications:

  • The rendering equation, SIGGRAPH 1986, James T. Kajiya. From the paper:

    We present an integral equation which generalizes a variety of known rendering algorithms.
    […]
    We mention that the idea behind the rendering equation is hardly new.
    […]
    However, the form in which we present this equation is well suited for computer graphics, and we believe that this form has not appeared before.

  • Bi-directional path tracing, Compugraphics 1993, Eric P. Lafortune and Yves D. Willems. From the paper:

    The basic idea is that particles are shot at the same time from a selected light source and from the viewing point, in much the same way. All hit points on respective particle paths are then connected using shadow rays and the appropriate contributions are added to the flux of the pixel in question.

  • Optimally Combining Sampling Techniques for Monte Carlo Rendering, SIGGRAPH 1995, Eric Veach and Leonidas J. Guibas. From the abstract:

    We present a powerful alternative for constructing robust Monte Carlo estimators, by combining samples from several distributions in a way that is provably good.

  • Metropolis Light Transport, SIGGRAPH 1997, Eric Veach and Leonidas J. Guibas. From the abstract:

    To render an image, we generate a sequence of light transport paths by randomly mutating a single current path (e.g. adding a new vertex to the path).

  • Robust Monte Carlo methods for light transport simulation, 1998, Eric Veach PhD thesis (432-page PDF): it presents bidirectional path tracing, and introduces Metropolis Light Transport and Multiple Importance Sampling. From the abstract:

    Our statistical contributions include a new technique called multiple importance sampling, which can greatly increase the robustness of Monte Carlo integration. It uses more than one sampling technique to evaluate an integral, and then combines these samples in a way that is provably close to optimal. This leads to estimators that have low variance for a broad class of integrands. We also describe a new variance reduction technique called efficiency-optimized Russian roulette.

    […]

    The second algorithm we describe is Metropolis light transport, inspired by the Metropolis sampling method from computational physics. Paths are generated by following a random walk through path space, such that the probability density of visiting each path is proportional to the contribution it makes to the ideal image.

Other:

On a slightly different topic, fxguide had a great series of articles on the state of rendering in the film industry, which I previously mentioned.

A GLSL version of smallpt

smallpt is a bare minimum path tracer written in under 100 lines of C++, featuring diffuse and specular reflection, as well as refraction. Using the detailed explanation slides by David Cline, I experimented with porting it to GLSL on Shadertoy.

This proved to be an interesting experiment that brought a few lessons.

You can see the shader and tweak it here. By default it uses 6 samples per pixel and 3 bounces, which allows it to run smoothly on average hardware. I found 40 samples per pixel and 5 bounces to give nice results while maintaining an interactive framerate.
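One of those lessons: smallpt’s radiance() function is recursive, and GLSL forbids recursion, so the port has to flatten it into a loop that carries the path throughput along. A minimal sketch of the shape it takes, with intersectScene and sampleBRDF as hypothetical stand-ins for the ported scene and material code:

```glsl
struct Ray { vec3 origin; vec3 dir; };
struct Hit { bool found; vec3 emission; vec3 albedo; };

// Hypothetical helpers standing in for the ported smallpt code.
Hit intersectScene(Ray ray);
Ray sampleBRDF(Hit hit, Ray ray, int bounce);

const int BOUNCES = 5;

vec3 radiance(Ray ray)
{
    vec3 accum = vec3(0.0);      // radiance gathered along the path
    vec3 throughput = vec3(1.0); // product of surface weights so far
    for (int bounce = 0; bounce < BOUNCES; ++bounce)
    {
        Hit hit = intersectScene(ray);
        if (!hit.found)
            break;
        accum += throughput * hit.emission;
        throughput *= hit.albedo;
        ray = sampleBRDF(hit, ray, bounce); // diffuse, specular or refractive
    }
    return accum;
}
```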

Path tracing, 40 samples per pixel, 5 bounces

Update: since GLSL Sandbox has a feature that Shadertoy is missing at the moment, reading from the previous frame buffer, I thought it’d be interesting to try it and have the image converge over time. A little hacking later, a minute or so worth of rendering got me the result below. Given the effort, I am really pleased with it.

Path tracing, unknown number of samples per pixel, 7 bounces
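The trick itself fits in one line: the previous frame buffer holds the running average, and each new frame blends one more sample into it. A sketch, assuming a frame counter is available (on GLSL Sandbox, the backbuffer texture plays the role of previousAverage):

```glsl
// Progressive accumulation: update the running mean stored in the
// previous frame buffer with the freshly path-traced sample.
// avg_n = avg_{n-1} + (x_n - avg_{n-1}) / n
vec3 accumulate(vec3 newSample, vec3 previousAverage, float frameIndex)
{
    return mix(previousAverage, newSample, 1.0 / (frameIndex + 1.0));
}
```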

A raytracer under a hundred lines of C++

On his website Kevin Beason presents a Monte Carlo ray tracer written in 99 lines of C++, generating a picture of a Cornell box with global illumination. Beyond the interesting experiment and the fact that it can compile to a 4 kB binary, I find the accompanying slides explaining all the code very valuable.