The rendering tools in the film industry

Here is a list of articles published by fxguide, giving fascinating insights into the rendering tools used by the film industry.

  • Ben Snow: the evolution of ILM’s lighting tools (January 2011)
    A presentation of how the technology and tools used at Industrial Light and Magic evolved over the years and movies, from the mid-90s to today.
  • Monsters University: rendering physically based monsters (June 2013)
  • The Art of Rendering (April 2012)
    A description of the different techniques used in high-end rendering, and of the major engines.
  • The State of Rendering (July 2013): part 1, part 2
    A lengthy overview of the state of the art in high-end rendering, comparing the different tools and rendering solutions available: their approaches and design choices, their strengths and weaknesses, as well as the consequences in terms of quality, scalability and render time.

(Brace yourselves for the massive tag list hereafter.)

A list of important graphics research papers

This is an announcement that got my full attention. Since finding anything older than a day on Twitter is a mess, here is the list so far:

  1. A Characterization of Ten Hidden-Surface Algorithms, Sutherland et al., ACM Computing Surveys, 1974
  2. Survey of Texture Mapping, Paul Heckbert, IEEE Computer Graphics and Applications, 1986
  3. Rendering Complex Scenes with Memory-Coherent Ray Tracing, Matt Pharr et al., proceedings of SIGGRAPH, 1997
  4. An Efficient Representation for Irradiance Environment Maps, Ramamoorthi & Hanrahan, proceedings of SIGGRAPH, 2001
  5. Decoupled Sampling for Graphics Pipelines, Ragan-Kelley et al., ACM Transactions on Graphics, 2011
  6. The Aliasing Problem in Computer-Generated Shaded Images, Franklin C. Crow, Communications of the ACM, 1977
  7. Ray Tracing Complex Scenes, Kay & Kajiya, proceedings of SIGGRAPH, 1986
  8. Hierarchical Z-buffer Visibility, Greene et al., proceedings of SIGGRAPH, 1993
  9. Geometry Images, Gu et al., ACM Transactions on Graphics, 2002
  10. A Hidden-Surface Algorithm with Anti-Aliasing, Edwin Catmull, proceedings of SIGGRAPH, 1978
  11. Modeling the Interaction of Light Between Diffuse Surfaces, Goral et al., proceedings of SIGGRAPH, 1984
    “The first radiosity paper, with the real physical Cornell box (which I’ve actually have seen in real life!)”
  12. Pyramidal Parametrics, Lance Williams, proceedings of SIGGRAPH, 1983
  13. Rendering synthetic objects into real scenes: bridging traditional and image-based graphics with global illumination and high dynamic range photography, Paul Debevec, proceedings of SIGGRAPH 1998
    “Influence on gfx proportional to title length!”
  14. A parallel algorithm for polygon rasterization, Juan Pineda, proceedings of SIGGRAPH, 1988
  15. Rendering from compressed textures, Beers et al., proceedings of SIGGRAPH 1996
    “This one (out of 3) of the 1st texture compression papers ever! Uses VQ so probably not something you want today, but major eye opener!”
  16. A general version of Crow’s shadow volumes, P. Bergeron, IEEE Computer Graphics and Applications, 1986
    “Generalized SV. Nice trick”
  17. RealityEngine graphics, Kurt Akeley, proceedings of SIGGRAPH 1993
    “Paper describes MSAA, guard bands, etc etc”
  18. The design and analysis of a cache architecture for texture mapping, Hakura and Gupta, proceedings of ISCA 1997
    “Classic texture $ paper!”
  19. Deep shadow maps, Lokovic and Veach, proceedings of SIGGRAPH 2000
    “Lots of inspiration here!”
  20. The Reyes image rendering architecture, Cook et al., proceedings of SIGGRAPH 1987
    “Sooo good & mega-influential!”
  21. A practical model for subsurface light transport, Jensen et al., proceedings of SIGGRAPH 2001
  22. Casting curved shadows on curved surfaces, Lance Williams, proceedings of SIGGRAPH 1978
    “*the* shadow map paper!”
  23. On the design of display processors, Myer and Sutherland, Communications of the ACM 1968
    “Wheel of reincarnation”
  24. Ray tracing Jell-O brand gelatin, Paul S. Heckbert, Communications of the ACM 1988
  25. Talisman: Commodity realtime 3D graphics for the PC, Torborg and Kajiya, Proceedings of SIGGRAPH 1996
  26. A Frequency Analysis of Light Transport, Durand et al., Proceedings of SIGGRAPH 2005
    “Very influential!!”
  27. An Ambient Light Illumination Model (behind a paywall), S. Zhukov, A. Iones, G. Kronin, Eurographics 1998
    “First paper on ambient occlusion, AFAIK. Not that old…”

Is tiled forward rendering the new hot thing?

I see more and more people talking about tiled forward rendering, and it seems to be the new hot thing everyone wants to try. AMD recently released a tech demo using such a technique: Leo.

Aras Pranckevičius, rendering architect at Unity, discussed modern forward rendering in an article, 2012 Theory for Forward Rendering, and later dropped a bunch of Tiled Forward Shading Links (which I won't duplicate here, so just click). Wolfgang Engel argues tile-based approaches don't pay off when many lights cast shadows, compared to deferred lighting. Finally, Brian Karis discusses Tiled Light Culling, for the diffuse and specular contributions.
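To make the idea more concrete, here is a rough CPU-side sketch of the light culling step at the heart of these tiled approaches; the structure names, the plane convention and the culling test are mine rather than taken from any of the linked articles, and a real implementation would typically do this in a compute shader, one thread group per screen tile.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Plane       { float nx, ny, nz, d; };   // points inside satisfy n.p + d >= 0
struct TileFrustum { Plane sides[4]; };        // four side planes of one screen tile (view space)
struct Light       { float x, y, z, radius; }; // point light as a bounding sphere (view space)

// Conservative sphere/frustum test: reject only if the sphere lies entirely
// outside one of the side planes.
static bool sphereInFrustum(const Light& l, const TileFrustum& f)
{
    for (const Plane& p : f.sides)
        if (p.nx * l.x + p.ny * l.y + p.nz * l.z + p.d < -l.radius)
            return false;
    return true;
}

// Build, for each tile, the list of indices into the global light array.
// The forward pass then shades each pixel using only the lights of its tile.
std::vector<std::vector<uint32_t>> cullLights(const std::vector<Light>& lights,
                                              const std::vector<TileFrustum>& tiles)
{
    std::vector<std::vector<uint32_t>> perTile(tiles.size());
    for (std::size_t t = 0; t < tiles.size(); ++t)
        for (uint32_t i = 0; i < lights.size(); ++i)
            if (sphereInFrustum(lights[i], tiles[t]))
                perTile[t].push_back(i);
    return perTile;
}
```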

Readings on vector class optimization

Now that Revision has passed, we feel tempted to grab the ax and happily chop into the parts of our code base we wanted to change but couldn't, since we had other priorities. One tempting part is the linear algebra: the vector, quaternion and matrix data structures. Let's start with vectors. Not that it's really necessary, but transformations are the most time-consuming part after the rendering itself, and the problem is somewhat interesting.

After a little googling, I basically found three approaches to this problem:

Here and there, people seem to think of SSE instructions as a silver bullet and propose various examples of code, snippets or full implementations. The idea is to use dedicated processor instructions to apply an operation to four components at a time instead of one after another.
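To give an idea of what those proposals look like, here is a minimal sketch of a 4-float vector wrapped around an SSE register (the names are mine, this is not lifted from any particular implementation):

```cpp
#include <xmmintrin.h>  // SSE intrinsics

// One vector = one SSE register; every operation below maps to a single
// instruction working on all four components at once.
struct alignas(16) Vec4
{
    __m128 v;

    Vec4() : v(_mm_setzero_ps()) {}
    explicit Vec4(__m128 m) : v(m) {}
    Vec4(float x, float y, float z, float w) : v(_mm_set_ps(w, z, y, x)) {}
};

inline Vec4 operator+(Vec4 a, Vec4 b)  { return Vec4(_mm_add_ps(a.v, b.v)); }
inline Vec4 operator-(Vec4 a, Vec4 b)  { return Vec4(_mm_sub_ps(a.v, b.v)); }
inline Vec4 operator*(Vec4 a, Vec4 b)  { return Vec4(_mm_mul_ps(a.v, b.v)); }
inline Vec4 operator*(Vec4 a, float s) { return Vec4(_mm_mul_ps(a.v, _mm_set1_ps(s))); }
```

The 16-byte alignment requirement visible here is precisely one of the points raised against this approach in the 2016 update further down.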

Quite the opposite, Fabian Giesen argued some years ago that it was not such a good idea. A quick look at the Farbrausch codebase, recently released to the public, shows they indeed used purely conventional C++ code for it.

Finally, this rather dated article (with regard to hardware evolution) by Tomas Arce takes a completely orthogonal approach: using C++ templates to evaluate a full expression component after component, thus avoiding wasted time moving and copying things around.
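For reference, here is a bare-bones sketch of that idea, commonly known as expression templates; it only handles addition and deliberately skips all the safeguards a real implementation would need:

```cpp
#include <cstddef>

struct Vec3
{
    float data[3];
    float operator[](std::size_t i) const { return data[i]; }
};

// operator+ computes nothing: it only records the two operands.
template <class L, class R>
struct Add
{
    const L& l;
    const R& r;
    float operator[](std::size_t i) const { return l[i] + r[i]; }  // lazy, per component
};

// Note: a real library would constrain this template so it only matches
// vector-like types instead of greedily matching everything.
template <class L, class R>
Add<L, R> operator+(const L& l, const R& r) { return {l, r}; }

// Assignment walks the whole expression once, component after component,
// so a + b + c never materializes a temporary Vec3.
template <class E>
void assign(Vec3& out, const E& e)
{
    for (std::size_t i = 0; i < 3; ++i)
        out.data[i] = e[i];
}

// Usage: Vec3 a, b, c, d;  assign(d, a + b + c);
```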

I am curious to implement and compare them on today's hardware.


Update: it is now 2016, and the topic was brought back recently when someone wrote the article How to write a math library in 2016.

The point of the article is that the old advice to not bother with SSE and stick with floats doesn't apply anymore, and it goes on to show results and sample code. This sparked a few discussions on Twitter, with strongly voiced opinions, to put it mildly.

It seemed the consensus was still against the use of SSE for the following reasons:

  • Implementation is tedious.
  • For 3-dimensional vectors, which are the most common case, there is a 25% waste.
  • For 4-dimensional vectors, like homogeneous coordinates and RGBA colors, it doesn't work so well either, since the fourth component is treated differently from the others.
  • Even if the implementation detail is hidden behind a nice interface, the alignment requirements will leak and become constraints on the rest of the code.
  • Compilers like clang are smart enough to generate SSE code from usual float operations (see the sketch after this list).
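For comparison, the "plain floats" side of the argument is simply an ordinary struct: no intrinsics, no alignment constraint, no wasted lane in the 3-component case. The claim in the last point above is that a hot loop over such a type can be auto-vectorized by the compiler anyway (again, names and code are mine, purely for illustration):

```cpp
struct Vec3
{
    float x, y, z;
};

inline Vec3 operator+(Vec3 a, Vec3 b)  { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
inline Vec3 operator*(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// The kind of loop a compiler such as clang can vectorize on its own
// when optimizations are enabled.
void scaleAll(Vec3* v, int count, float s)
{
    for (int i = 0; i < count; ++i)
        v[i] = v[i] * s;
}
```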

Capturing and rendering tiny details

Not so long ago I discovered this article presenting the CLEAN mapping technique used in Civilization V to manage how detail gets filtered with distance. I found the side effect of emerging anisotropic surfaces very appealing.

This week Angelo Pesce wrote some thoughts on the problem of how to represent and render detail in general, why normal maps are not so well suited to it, and what can be done instead.

Variance Shadow Maps

Shadow mapping is a popular way of getting dynamic shadows, but suffers from aliasing artifacts that cannot be addressed by usual texture filtering. The reason boils down to the fact that the average of depth test results (which is what we want) is not the same as the result of a test on the average of depths (which is what hardware does).
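A toy numerical example makes the difference obvious; the depths, the receiver and the test convention below are made up purely for illustration:

```cpp
// Convention: a point is lit (1.0) when its depth is not greater than the
// occluder depth stored in the shadow map.
float shadowTest(float receiver, float occluder) { return receiver <= occluder ? 1.0f : 0.0f; }

void example()
{
    float d0 = 1.0f, d1 = 5.0f; // depths stored in two neighbouring shadow-map texels
    float t  = 3.0f;            // depth of the point being shaded

    // Average of the test results: 0.5, half shadowed -- what we want.
    float averageOfTests = 0.5f * (shadowTest(t, d0) + shadowTest(t, d1));

    // Test against the averaged depth: 1.0, fully lit -- what filtering the depths gives.
    float testOfAverage = shadowTest(t, 0.5f * (d0 + d1));
}
```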

The trivial way to do it anyway is the Percentage Closer Filtering (PCF) technique, which usually stands in papers as the expensive upper bound.

Variance Shadow Maps are a simple technique that allows filtering, including some Gaussian blur for example, thus giving soft shadows (the blur does not depend on the distance to the occluder though). The main drawback of the algorithm is the light bleeding artifact that appears as soon as the depth complexity of the scene gets too high. I also found it to be fairly expensive in terms of texture memory, since it requires twice as much as regular shadow maps, and another factor of two for blurring.
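The core of the technique fits in a few lines: the shadow map stores the first two moments of depth (hence the times-two memory cost), and Chebyshev's inequality turns them into an upper bound on the visibility. This is only a sketch of the standard formulation, with a made-up minimum-variance constant:

```cpp
#include <algorithm>

struct Moments { float m1, m2; };  // E[depth] and E[depth^2], both filtered/blurred beforehand

float vsmVisibility(Moments m, float receiverDepth, float minVariance = 1e-4f)
{
    if (receiverDepth <= m.m1)
        return 1.0f;  // in front of the mean occluder: fully lit

    float variance = std::max(m.m2 - m.m1 * m.m1, minVariance);
    float delta    = receiverDepth - m.m1;

    // Chebyshev's inequality: upper bound on the fraction of the filter
    // footprint lying beyond receiverDepth, i.e. on the light received.
    return variance / (variance + delta * delta);
}
```

The usual way to fight the light bleeding mentioned above is to remap this bound, clamping values below some threshold to zero and rescaling the rest, at the price of slightly over-darkened penumbrae.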

One could argue VSMs are pretty old stuff already, but because of the elegance of the trick they rely on and their ease of implementation, I really like them.