A real-time post-processing crash course

Revision 2015 took place last month, over the Easter weekend as usual. I was lucky enough to attend and experience the great competitions that took place this year; I can’t recommend enough that you check out all the good stuff that came out of it.

Like the previous times, I shared some insights in a seminar, as an opportunity to practice public speaking. Since our post-processing improved quite a bit with our last demo (Ctrl-Alt-Test: G – Level One), the topic was the implementation of a few post-processing effects in a real-time renderer: glow, lens flare, light streaks, motion blur…

Having been fairly busy over the last few months though, with work and the organisation of Tokyo Demo Fest among other things, I unfortunately couldn’t spend as much time on the presentation as I wanted. An hour before the presentation I was still working on the slides, but all in all it went better than expected. I also experimented with doing a live demonstration, which is hopefully more engaging than screenshots or even a video capture can be.

Here is the video recording made by the team at Revision (kudos to you guys for the fantastic work this year). I will provide the slides later on, once I have properly finished the credits and references part.

Abstract:
Over the decades, photographers, then filmmakers, have learned to take advantage of optical phenomena, and have perfected the recipes of the chemicals used in film, to affect the visual appeal of their images. Transposed to rendering, those lessons can make your image more pleasant to the eye, change its realism, affect its mood, or make it easier to read. In this course we will present different effects that can be implemented in a real-time rendering pipeline, the theory behind them, the implementation details in practice, and how they can fit into your workflow.

The rendering tools in the film industry

Here is a list of articles published by fxguide, giving fascinating insights into the rendering tools used by the film industry.

  • Ben Snow: the evolution of ILM’s lighting tools (January 2011)
    A presentation of the evolution of the technology and tools used at Industrial Light and Magic over the years and movies, from the mid-90s to today.
  • Monsters University: rendering physically based monsters (June 2013)
  • The Art of Rendering (April 2012)
    A description of the different techniques used in high-end rendering and the major engines.
  • The State of Rendering (July 2013): part 1, part 2
    A lengthy overview of the state of the art in high-end rendering, comparing the different tools and rendering solutions available, their approaches and design choices, their strengths and weaknesses, as well as the consequences in terms of quality, scalability and render time.


A list of important graphics research papers

This announcement caught my full attention. Since Twitter makes it a mess to find anything older than a day, here is the list so far:

  1. A Characterization of Ten Hidden-Surface Algorithms, Sutherland et al., ACM Computing Surveys, 1974
  2. Survey of Texture Mapping, Paul Heckbert, IEEE Computer Graphics and Applications, 1986
  3. Rendering Complex Scenes with Memory-Coherent Ray Tracing, Matt Pharr et al., proceedings of SIGGRAPH, 1997
  4. An Efficient Representation for Irradiance Environment Maps, Ramamoorthi & Hanrahan, proceedings of SIGGRAPH, 2001
  5. Decoupled Sampling for Graphics Pipelines, Ragan-Kelley et al., ACM Transactions on Graphics, 2011
  6. The Aliasing Problem in Computer-Generated Shaded Images, Franklin C. Crow, Communications of the ACM, 1977
  7. Ray Tracing Complex Scenes, Kay & Kajiya, proceedings of SIGGRAPH, 1986
  8. Hierarchical Z-Buffer Visibility, Greene et al., proceedings of SIGGRAPH, 1993
  9. Geometry Images, Gu et al., ACM Transactions on Graphics, 2002
  10. A Hidden-Surface Algorithm with Anti-Aliasing, Edwin Catmull, proceedings of SIGGRAPH, 1978
  11. Modeling the Interaction of Light Between Diffuse Surfaces, Goral et al., proceedings of SIGGRAPH, 1984
    “The first radiosity paper, with the real physical Cornell box (which I’ve actually have seen in real life!)”
  12. Pyramidal Parametrics, Lance Williams, proceedings of SIGGRAPH, 1983
  13. Rendering Synthetic Objects into Real Scenes: Bridging Traditional and Image-Based Graphics with Global Illumination and High Dynamic Range Photography, Paul Debevec, proceedings of SIGGRAPH, 1998
    “Influence on gfx proportional to title length!”
  14. A Parallel Algorithm for Polygon Rasterization, Juan Pineda, proceedings of SIGGRAPH, 1988
  15. Rendering from Compressed Textures, Beers et al., proceedings of SIGGRAPH, 1996
    “This one (out of 3) of the 1st texture compression papers ever! Uses VQ so probably not something you want today, but major eye opener!”
  16. A General Version of Crow’s Shadow Volumes, P. Bergeron, IEEE Computer Graphics and Applications, 1986
    “Generalized SV. Nice trick”
  17. RealityEngine Graphics, Kurt Akeley, proceedings of SIGGRAPH, 1993
    “Paper describes MSAA, guard bands, etc etc”
  18. The Design and Analysis of a Cache Architecture for Texture Mapping, Hakura and Gupta, proceedings of ISCA, 1997
    “Classic texture $ paper!”
  19. Deep Shadow Maps, Lokovic and Veach, proceedings of SIGGRAPH, 2000
    “Lots of inspiration here!”
  20. The Reyes Image Rendering Architecture, Cook et al., proceedings of SIGGRAPH, 1987
    “Sooo good & mega-influential!”
  21. A Practical Model for Subsurface Light Transport, Jensen et al., proceedings of SIGGRAPH, 2001
  22. Casting Curved Shadows on Curved Surfaces, Lance Williams, proceedings of SIGGRAPH, 1978
    “*the* shadow map paper!”
  23. On the Design of Display Processors, Myer and Sutherland, Communications of the ACM, 1968
    “Wheel of reincarnation”
  24. Ray Tracing Jell-O Brand Gelatin, Paul S. Heckbert, Communications of the ACM, 1988
  25. Talisman: Commodity Realtime 3D Graphics for the PC, Torborg and Kajiya, proceedings of SIGGRAPH, 1996
  26. A Frequency Analysis of Light Transport, Durand et al., proceedings of SIGGRAPH, 2005
    “Very influential!!”
  27. An Ambient Light Illumination Model (behind a paywall), S. Zhukov, A. Iones, G. Kronin, Eurographics, 1998
    “First paper on ambient occlusion, AFAIK. Not that old…”

Gamma-correct and HDR rendering in a 32-bit buffer

Recently I have been looking into the options available for doing gamma-correct and/or HDR rendering in a 32-bit buffer. Gamma-correct means you need higher precision for low values (this article by Benjamin Supnik demonstrates why). HDR means you may have values greater than 1, and since your range is getting wider, you want higher precision everywhere. The approach recommended everywhere is to use 16-bit floats (RGBA16F), or even higher precision. But suppose you don’t want your buffer to exceed 32 bits per pixel: what tools are available?

Note: the article has been reworked as I gathered more information; I thought reorganizing it was better than merely adding an update notice at the end.

RGBM

My first thought was to use a standard RGBA8 buffer, store the maximum of the RGB channels in the alpha channel, and store the RGB vector divided by that scale. One back-of-the-envelope test later, I had dismissed it, convinced it wouldn’t go very far: since values are limited to the [0, 1] range, it requires defining the maximum value meant when alpha is 1. More importantly, interpolation would give incorrect results.

Or so I thought. It turns out this approach is known as RGBM (M for shared multiplier), and while the interpolation is indeed incorrect, this article argues the artefacts are barely noticeable and the other advantages outweigh them (see RGBD hereafter for another article worth reading).

There are also variations of this approach, as shown in this online Unity demo. Here is the code.
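
To make the scheme concrete, here is a minimal GLSL sketch of RGBM packing; the function names and the range value of 6 are arbitrary choices of mine, not taken from the articles above:

    const float RGBM_RANGE = 6.0; // arbitrary maximum intensity, mapped to alpha = 1

    vec4 encodeRGBM(vec3 color)
    {
        color /= RGBM_RANGE;
        // Use the largest channel as the shared multiplier, quantized upward
        // so that color / m still fits in [0, 1] once stored in 8 bits.
        float m = clamp(max(color.r, max(color.g, color.b)), 1e-4, 1.0);
        m = ceil(m * 255.0) / 255.0;
        return vec4(color / m, m);
    }

    vec3 decodeRGBM(vec4 rgbm)
    {
        return rgbm.rgb * rgbm.a * RGBM_RANGE;
    }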

RGBD

Searching the web, I first found this solution, which consists in storing the inverse of the scale in the alpha channel. Known as RGBD (D for shared divider), it doesn’t suffer from having to define a maximum value, and plotting the function seems to show an acceptable precision across the range. Unfortunately it doesn’t interpolate correctly either.

This article gives a good comparison of RGBM and RGBD, and addresses the question of interpolation. Interestingly, it notes that while neither interpolates correctly, whether that is acceptable or not depends on the distribution of the colors.
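
Following the same hedged-sketch format as above (not the exact code from those articles), RGBD packing could look like this; the 8-bit quantization of the divider is ignored for simplicity:

    vec4 encodeRGBD(vec3 color)
    {
        // Store the inverse of the scale in alpha; the divider is at most 1,
        // so the brightest representable intensity is 255 (alpha = 1/255).
        float maxChannel = max(1.0, max(color.r, max(color.g, color.b)));
        float d = 1.0 / maxChannel;
        return vec4(color * d, d);
    }

    vec3 decodeRGBD(vec4 rgbd)
    {
        return rgbd.rgb / rgbd.a;
    }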

RGBE

Then you have RGBE (E for shared exponent): RGB plus an exponent. Here is a shader implementation using an RGBA8 buffer. But then again, because the exponent is stored in the alpha channel, interpolation is going to be an issue.
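
A minimal sketch of the idea, assuming an exponent bias of 128 (the linked implementation may make different choices):

    vec4 encodeRGBE(vec3 color)
    {
        // Shared exponent chosen so that all channels fall in [0, 1];
        // biased by 128 to store exponents from -128 to 127 in 8 bits.
        float maxChannel = max(color.r, max(color.g, color.b));
        float e = ceil(log2(max(maxChannel, 1e-6)));
        return vec4(color / exp2(e), (e + 128.0) / 255.0);
    }

    vec3 decodeRGBE(vec4 rgbe)
    {
        return rgbe.rgb * exp2(rgbe.a * 255.0 - 128.0);
    }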

RGB9_E5

Searching further, I stumbled upon the OpenGL EXT_texture_shared_exponent extension, which defines a GL_RGB9_E5 texture format with three 9-bit components for the color channels, and an additional 5-bit exponent shared by the channels. This sounded nice: 9 bits of precision already means twice as many shades, and the exponent gives precision everywhere, as long as the channel values have the same order of magnitude. Because it is a standard format, I assume interpolation is going to be a non-issue. Unfortunately, as can be read on the OpenGL wiki, while this is a required texture format, it is not required for renderbuffers. In other words: chances are it’s not going to be implemented.

LogLUV

Since we really want a wide range of light intensities, a different approach is to use a different color space. Several people mentioned LogLUV, which I hear gives good results, at the expense of a high instruction cost for both packing and unpacking. Here is a detailed explanation.
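
For reference, here is a GLSL adaptation of the often quoted HLSL implementation (the function names are mine; GLSL matrices being column-major, the constants are transposed). The log luminance is spread over two 8-bit channels, the chrominance stored in the other two:

    // RGB to a modified CIE XYZ space, rows transposed into columns.
    const mat3 RGB2XpYXYZp = mat3(
        0.2209, 0.1138, 0.0102,
        0.3390, 0.6780, 0.1130,
        0.4184, 0.7319, 0.2969);

    const mat3 XpYXYZp2RGB = mat3(
         6.0014, -1.3320,  0.3008,
        -2.7008,  3.1029, -1.0882,
        -1.7996, -5.7721,  5.6268);

    vec4 encodeLogLuv(vec3 rgb)
    {
        vec3 XpYXYZp = max(RGB2XpYXYZp * rgb, vec3(1e-6));
        vec4 result;
        result.xy = XpYXYZp.xy / XpYXYZp.z;       // chrominance
        float Le = 2.0 * log2(XpYXYZp.y) + 127.0; // log luminance
        result.w = fract(Le);                     // low 8 bits
        result.z = (Le - floor(result.w * 255.0) / 255.0) / 255.0; // high bits
        return result;
    }

    vec3 decodeLogLuv(vec4 logLuv)
    {
        float Le = logLuv.z * 255.0 + logLuv.w;
        vec3 XpYXYZp;
        XpYXYZp.y = exp2((Le - 127.0) / 2.0);
        XpYXYZp.z = XpYXYZp.y / logLuv.y;
        XpYXYZp.x = logLuv.x * XpYXYZp.z;
        return max(XpYXYZp2RGB * XpYXYZp, vec3(0.0));
    }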

R11G11B10

There is still the R11F_G11F_B10F format (DXGI_FORMAT_R11G11B10_FLOAT in DirectX), where the R and G channels have a 6-bit mantissa and a 5-bit exponent, and B has a 5-bit mantissa and a 5-bit exponent. Since floats have higher precision for low values, this seems very well suited to gamma-correct rendering. And since this is a standard format, interpolation should be a non-issue.

Conclusion

I haven’t tested any of this in practice yet, but from these readings it seems to me the sensible solution would be to use the R11G11B10 float format when available, and otherwise (for example on mobile platforms) to choose between RGBM and RGBD depending on the kind of image being rendered. Unless the format is standard, interpolation is always going to be an issue, and the best you can do is mitigate it by choosing the solution that best fits your use case.

Did I miss something?