The white furnace test

The white furnace test is one of my favourite rendering debug tools. But before it became one, it seemed rather mysterious and abstract to me. Why would a publication proudly show what seemed like empty renders? What does it mean, and why would they care?

Slide from the presentation Revisiting Physically Based Shading at Imageworks, in which a white furnace test of the diffuse term is shown.
What’s up with the empty grey rectangle? The fact that it looks empty is the point.
Revisiting Physically Based Shading at Imageworks, presented at the SIGGRAPH 2017 course: Physically Based Shading in Theory and Practice.

The idea is the following: if you have a 100% reflective object that is lit by a uniform environment, it becomes indistinguishable from the environment. It doesn’t matter if the object is matte or mirror-like, or anything in between: it just “disappears”.
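In equation form (a sketch, using standard rendering equation notation: $f$ is the BRDF, $\theta_i$ the incident angle, $L_{env}$ the uniform environment radiance), the outgoing radiance in any direction is:

```latex
L_o(\omega_o)
  = \int_{\Omega} f(\omega_i, \omega_o)\, L_{env} \cos\theta_i \,\mathrm{d}\omega_i
  = L_{env} \int_{\Omega} f(\omega_i, \omega_o) \cos\theta_i \,\mathrm{d}\omega_i
  = L_{env}
```

The last step holds because the remaining integral, the directional albedo, is exactly 1 for a 100% reflective material: the object radiates exactly what the environment does, whatever the shape of $f$.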

Accepting this idea took me a while, but there is a real-life situation in which you can experience this effect. Fresh snow can have an albedo as high as 90% to 98%, i.e. nearly perfect white. Combined with overcast weather or fog, it can sometimes appear featureless and become completely indistinguishable from the sky, to the point where you’re left skiing by feel because you can’t even tell the slope two steps in front of you. Everything is just a uniform white in all directions: the whiteout.

Photo taken on a ski track. The ground appears almost uniformly white.
Last time I visited a white furnace test. Note how the snow surface slope and details are almost invisible, and the sign in the background seems to be floating in the air.

With the knowledge that a 100% reflective object is supposed to look invisible when uniformly lit, verifying that it does is a good sanity test for a physically based renderer, and is the reason you sometimes see those curious illustrations in publications: they show that the math checks out.

Those tests are usually intended to verify that a BRDF is energy preserving: that it neither loses nor adds energy. A typical example is making sure materials don’t look darker as roughness increases and inter-reflections become too significant to be neglected. Missing energy is not the only concern though, and a grey environment (as opposed to a white one) is convenient because any excess of reflected energy will appear brighter than the background.
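Stated as a condition on the BRDF (same notation as in the sketch above), energy conservation requires, for every outgoing direction:

```latex
\int_{\Omega} f(\omega_i, \omega_o) \cos\theta_i \,\mathrm{d}\omega_i \;\le\; 1
\qquad \text{for all } \omega_o
```

In a furnace test against a grey background, the object renders darker than the background where this integral falls below 1, and brighter where it exceeds 1.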

Demonstration of the white furnace test on ShaderToy, or an expensive way to render an empty image. Press the play button to see the scene revealed.

But verifying the energy conservation of a BRDF is just one of the cases where the white furnace test is useful. Since a Lambertian BRDF with an albedo of 100% is perfectly energy preserving and completely trivial to implement, the white furnace test with such a white Lambert material can be used to reveal bugs in the renderer implementation itself.
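That claim is quick to verify (same notation as above): with a Lambertian BRDF $f = \rho/\pi$, the cosine-weighted integral over the hemisphere is exactly $\pi$, so

```latex
\int_{\Omega} \frac{\rho}{\pi} \cos\theta_i \,\mathrm{d}\omega_i
  = \frac{\rho}{\pi} \cdot \pi
  = \rho = 1 \quad \text{when the albedo } \rho = 1
```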

There are so many aspects of the implementation that can go wrong: the sampling distribution, the proper weighting of the samples, a mistake in the PDF, a π or a factor of 2 forgotten somewhere… Those errors tend to be subtle and can result in a render that still looks reasonable. Nothing looks more like correct shading than slightly incorrect shading.
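To make this concrete, here is a minimal Monte Carlo version of the test, a sketch of my own in Python rather than the actual ShaderToy code (the function names and the grey level of 0.5 are arbitrary choices): it evaluates a pure white Lambertian BRDF, f = 1/π, with cosine-weighted importance sampling, and the estimate must come out equal to the environment radiance.

```python
import math
import random

def cosine_sample_hemisphere():
    """Draw cos(theta) for a direction sampled proportionally to cos(theta)
    over the hemisphere; the corresponding PDF is cos(theta) / pi."""
    u = random.random()
    cos_theta = math.sqrt(1.0 - u)
    pdf = cos_theta / math.pi
    return cos_theta, pdf

def furnace_test(num_samples=100_000, env_radiance=0.5):
    """Estimate the radiance reflected by a 100% white Lambertian surface
    lit by a uniform environment; a correct implementation returns
    env_radiance, making the object indistinguishable from the background."""
    brdf = 1.0 / math.pi  # Lambertian BRDF with albedo 1
    total = 0.0
    for _ in range(num_samples):
        cos_theta, pdf = cosine_sample_hemisphere()
        # Monte Carlo estimator: f(wi, wo) * L_env * cos(theta) / pdf(wi)
        total += brdf * env_radiance * cos_theta / pdf
    return total / num_samples

print(furnace_test())  # 0.5: the surface matches the grey environment
```

With this perfect importance sampling, every sample contributes exactly the environment radiance, so the estimator has zero variance; any mistake in the PDF or a forgotten π immediately shows up as a sphere that no longer matches the background.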

So whenever I’m writing a path tracer or one of its variants, generating a pre-convolved environment map, or trying different sampling distributions, my first sanity check is to make sure it passes the white furnace test with a pure white Lambertian BRDF. Once that is done (and as writing the demonstration shader above showed me once again, that can take a few iterations), I can have confidence in my implementation and test the BRDFs themselves.

Takeaway: the white furnace test is a very useful debugging tool to validate both the integration part and the BRDF part of your renderer.

Update: A comment on Hacker News mentioned that it would be useful to see an example of what failing the test looks like. So I’ve added a SIMULATE_INCORRECT_INTEGRATION macro to the shader above to introduce a “bug”: the kind you get by forgetting that the solid angle of a hemisphere amounts to 2π, or by forgetting to take the sampling distribution into account, for example. When the “bug” is active, the sphere becomes visible because it doesn’t reflect the correct amount of energy.
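For comparison, here is what such a “bug” could look like in the Python sketch above (a hypothetical variant of my sketch, not the actual SIMULATE_INCORRECT_INTEGRATION code): uniform hemisphere sampling where the division by the sample PDF, 1/(2π), is forgotten.

```python
import math
import random

def broken_furnace_test(num_samples=100_000, env_radiance=0.5):
    """Same integral as furnace_test above, but with uniform hemisphere
    sampling and a forgotten division by the sample PDF, 1 / (2*pi)."""
    brdf = 1.0 / math.pi  # Lambertian BRDF with albedo 1
    total = 0.0
    for _ in range(num_samples):
        cos_theta = random.random()  # uniform hemisphere: cos(theta) ~ U(0, 1)
        # BUG: the estimator should be brdf * L * cos_theta / pdf with
        # pdf = 1 / (2*pi); omitting the division loses a factor of 2*pi.
        total += brdf * env_radiance * cos_theta
    return total / num_samples

print(broken_furnace_test())  # ~0.08 instead of 0.5: the sphere shows up dark
```

The result is the environment radiance divided by the missing 2π factor, so in a render the sphere appears clearly darker than the background instead of vanishing.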

A list of important graphics research papers

This announcement got all my attention. Since Twitter makes it a mess to find anything older than a day, here is the list so far:

  1. A Characterization of Ten Hidden-Surface Algorithms, Sutherland et al., ACM Computing Surveys, 1974
  2. Survey of Texture Mapping, Paul Heckbert, IEEE Computer Graphics and Applications, 1986
  3. Rendering Complex Scenes with Memory-Coherent Ray Tracing, Matt Pharr et al., proceedings of SIGGRAPH, 1997
  4. An Efficient Representation for Irradiance Environment Maps, Ramamoorthi & Hanrahan, proceedings of SIGGRAPH, 2001
  5. Decoupled Sampling for Graphics Pipelines, Ragan-Kelley et al., ACM Transactions on Graphics, 2011
  6. The Aliasing Problem in Computer-Generated Shaded Images, Franklin C. Crow, Communications of the ACM, 1977
  7. Ray Tracing Complex Scenes, Kay & Kajiya, proceedings of SIGGRAPH, 1986
  8. Hierarchical Z-buffer Visibility, Greene et al., proceedings of SIGGRAPH, 1993
  9. Geometry Images, Gu et al., ACM Transactions on Graphics, 2002
  10. A Hidden-Surface Algorithm with Anti-Aliasing, Edwin Catmull, proceedings of SIGGRAPH, 1978
  11. Modeling the Interaction of Light Between Diffuse Surfaces, Goral et al., proceedings of SIGGRAPH, 1984
    “The first radiosity paper, with the real physical Cornell box (which I’ve actually seen in real life!)”
  12. Pyramidal Parametrics, Lance Williams, proceedings of SIGGRAPH, 1983
  13. Rendering synthetic objects into real scenes: bridging traditional and image-based graphics with global illumination and high dynamic range photography, Paul Debevec, proceedings of SIGGRAPH, 1998
    “Influence on gfx proportional to title length!”
  14. A parallel algorithm for polygon rasterization, Juan Pineda, proceedings of SIGGRAPH, 1988
  15. Rendering from compressed textures, Beers et al., proceedings of SIGGRAPH, 1996
    “This is one (out of 3) of the 1st texture compression papers ever! Uses VQ so probably not something you want today, but major eye opener!”
  16. A general version of Crow’s shadow volumes, P. Bergeron, IEEE Computer Graphics and Applications, 1986
    “Generalized SV. Nice trick”
  17. RealityEngine graphics, Kurt Akeley, proceedings of SIGGRAPH, 1993
    “Paper describes MSAA, guard bands, etc etc”
  18. The design and analysis of a cache architecture for texture mapping, Hakura and Gupta, proceedings of ISCA, 1997
    “Classic texture $ paper!”
  19. Deep shadow maps, Lokovic and Veach, proceedings of SIGGRAPH, 2000
    “Lots of inspiration here!”
  20. The Reyes image rendering architecture, Cook et al., proceedings of SIGGRAPH, 1987
    “Sooo good & mega-influential!”
  21. A practical model for subsurface light transport, Jensen et al., proceedings of SIGGRAPH, 2001
  22. Casting curved shadows on curved surfaces, Lance Williams, proceedings of SIGGRAPH, 1978
    “*the* shadow map paper!”
  23. On the design of display processors, Myer and Sutherland, Communications of the ACM, 1968
    “Wheel of reincarnation”
  24. Ray tracing Jell-O brand gelatin, Paul S. Heckbert, Communications of the ACM, 1988
  25. Talisman: Commodity realtime 3D graphics for the PC, Torborg and Kajiya, proceedings of SIGGRAPH, 1996
  26. A Frequency Analysis of Light Transport, Durand et al., proceedings of SIGGRAPH, 2005
    “Very influential!!”
  27. An Ambient Light Illumination Model (behind a paywall), S. Zhukov, A. Iones, G. Kronin, Eurographics, 1998
    “First paper on ambient occlusion, AFAIK. Not that old…”