A real-time post-processing crash course

Revision 2015 took place last month, over the Easter weekend as usual. I was lucky enough to attend and experience the great competitions that took place this year; I can’t recommend enough that you check out all the good stuff that came out of it.

Like the previous times, I shared some insights in a seminar, as an opportunity to practice public speaking. Since our post-processing had improved quite a bit with our latest demo (Ctrl-Alt-Test : G – Level One), the topic was the implementation of a few post-processing effects in a real-time renderer: glow, lens flare, light streaks, motion blur…

Having been fairly busy over the last months though, with work and the organising of Tokyo Demo Fest among other things, I unfortunately couldn’t spend as much time on the presentation as I wanted. An hour before the talk I was still working on the slides, but all in all it went better than expected. I also experimented with giving a live demonstration, hopefully more engaging than screenshots or even a video capture can be.

Here is the video recording made by the team at Revision (kudos to you guys for the fantastic work this year). I will share the slides later, once I have properly finished the credits and references section.

Abstract:
Over the decades, photographers, then filmmakers, have learned to take advantage of optical phenomena, and have perfected the recipes of the chemicals used in film, to affect the visual appeal of their images. Transposed to rendering, those lessons can make your image more pleasant to the eye, change its realism, affect its mood, or make it easier to read. In this course we will present different effects that can be implemented in a real-time rendering pipeline, the theory behind them, their implementation details in practice, and how they can fit into your workflow.

The effect of quantization in gamma space and linear space

I have already mentioned (here and here) that one problem with gamma-correct rendering is that we lose precision for small values, and may run out of it if we don’t have enough bits to begin with. I wrote a quick shader to demonstrate this problem and see how severe it is depending on the number of bits.

Thanks to BeautyPi‘s fantastic tool, ShaderToy, I could put it online. Here is the live demo with an absurdly low precision format (R5G6B5), so you cannot miss the banding; just press the play button. It displays colors at maximum and at low precision, in linear space and in gamma space. The lighter vertical line marks the 50% intensity position. You can see the shader and play with the values here.
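For reference, here is a sketch of the core of the demonstration. This is not the actual ShaderToy code; the function and parameter names are mine, and gamma 2.2 is used as an approximation of the sRGB curve:

    // Quantize a value to a given number of bits, either directly in
    // linear space or after conversion to gamma space, to compare the
    // resulting banding.
    float quantize(float x, float bits)
    {
        float levels = exp2(bits) - 1.0;
        return floor(x * levels + 0.5) / levels;
    }

    vec3 shade(float intensity, float bits, bool gammaSpace)
    {
        float x = intensity;
        if (gammaSpace)
            x = pow(x, 1.0 / 2.2); // encode to gamma space
        x = quantize(x, bits);     // storage at limited precision
        if (gammaSpace)
            x = pow(x, 2.2);       // decode back to linear
        return vec3(x);
    }

With few bits, the gamma space version distributes its levels far more evenly across perceived brightness, while the linear version wastes most of them in the highlights and bands visibly in the darks.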

Gamma correct and HDR rendering in a 32-bit buffer

Recently I have been looking into the options available for doing gamma-correct and/or HDR rendering in a 32-bit buffer. Gamma correct means you need higher precision for low values (this article by Benjamin Supnik demonstrates why). HDR means you may have values greater than 1, and since your range is getting wider, you want higher precision everywhere. The approach recommended everywhere is to use 16-bit floats, as in RGBA16F, or even higher precision. But suppose you don’t want your buffer to exceed 32 bits per pixel; what tools are available?

Note: the article has been reworked as I gathered more information; I thought reorganizing the content was better than merely adding an update notice at the end.

RGBM

My first thought was to use standard RGBA8, store the maximum of the RGB channels in the alpha channel, and store the RGB vector divided by that scale. A back-of-the-envelope test later, I had dismissed the idea, convinced it wouldn’t go very far: since values are limited to the [0, 1] range, it would require defining what maximum value is meant when alpha is 1. More importantly, interpolation would give incorrect results.

Or so I thought. It turns out this is known as RGBM (M for shared multiplier), and while the interpolation is indeed incorrect, this article argues the errors are barely noticeable and the other advantages outweigh them (see RGBD below for another article worth reading).

There are also variations of this approach, as shown on this online Unity demo. Here is the code.
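To make the idea concrete, here is a minimal GLSL sketch of RGBM packing and unpacking; the maximum value of 6.0 and the function names are arbitrary choices of mine, not taken from the articles linked above:

    const float RGBM_MAX = 6.0; // arbitrary: the value meant when alpha is 1

    vec4 encodeRGBM(vec3 color)
    {
        color /= RGBM_MAX;
        float m = clamp(max(max(color.r, color.g), max(color.b, 1e-6)), 0.0, 1.0);
        m = ceil(m * 255.0) / 255.0; // round up so color / m stays in [0, 1]
        return vec4(color / m, m);
    }

    vec3 decodeRGBM(vec4 rgbm)
    {
        return rgbm.rgb * rgbm.a * RGBM_MAX;
    }

Rounding the multiplier up to the next 8-bit step ensures the scaled RGB values never exceed 1 after quantization.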

RGBD

Searching the web, I first found this solution, which consists in storing the inverse of the scale in the alpha channel. Known as RGBD (D for shared divider), it doesn’t suffer from having to define a maximum value, and plotting the function seems to show acceptable precision across the range. Unfortunately it doesn’t interpolate correctly either.

This article gives a good comparison of RGBM and RGBD, and addresses the question of interpolation. Interestingly, it notes that while neither interpolates correctly, whether that is acceptable or not depends on the distribution of the colors.
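Again for illustration only, here is a minimal RGBD sketch with names of my choosing; since the alpha channel stores the reciprocal of the scale, an 8-bit alpha bounds the representable range to roughly [0, 255] without having to define a maximum up front:

    vec4 encodeRGBD(vec3 color)
    {
        float maxRGB = max(color.r, max(color.g, color.b));
        float scale = max(maxRGB, 1.0); // only scale down, never up
        return vec4(color / scale, 1.0 / scale);
    }

    vec3 decodeRGBD(vec4 rgbd)
    {
        // the smallest reciprocal an 8-bit alpha can store is 1/255,
        // hence the ~255 limit on the representable range
        return rgbd.rgb / rgbd.a;
    }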

RGBE

Then there is RGBE (E for shared exponent): RGB plus an exponent. Here is a shader implementation using an RGBA8 buffer. But then again, because the exponent is stored in the alpha channel, interpolation is going to be an issue.
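For illustration, here is one common formulation of RGBE packing in GLSL (not necessarily the one in the linked shader; the names are mine):

    vec4 encodeRGBE(vec3 color)
    {
        float maxRGB = max(color.r, max(color.g, color.b));
        if (maxRGB < 1e-6)
            return vec4(0.0);
        float e = ceil(log2(maxRGB)); // shared exponent
        return vec4(color / exp2(e), (e + 128.0) / 255.0);
    }

    vec3 decodeRGBE(vec4 rgbe)
    {
        if (rgbe.a == 0.0)
            return vec3(0.0);
        float e = rgbe.a * 255.0 - 128.0;
        return rgbe.rgb * exp2(e);
    }

Interpolating the alpha channel here means interpolating the exponent, so the effective scale varies exponentially between samples, which is why blending and filtering break so badly.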

RGB9_E5

Searching further, I stumbled upon the OpenGL EXT_texture_shared_exponent extension, which defines a GL_RGB9_E5 texture format with three 9-bit mantissas for the color channels, plus a 5-bit exponent shared by the channels. This sounded nice: 9 bits of precision already means twice as many shades, and the exponent gives precision everywhere, as long as the channel values have the same order of magnitude. Because it is a standard format, I assume interpolation would be a non-issue. Unfortunately, as can be read on the OpenGL wiki, while this is a required texture format, it is not a required renderbuffer format. In other words: chances are you won’t be able to render to it.

LogLuv

Since we really want a wide range of light intensities, a different approach is to use a different color space. Several people mentioned LogLuv, which I hear gives good results, at the expense of a high instruction cost for both packing and unpacking. Here is a detailed explanation.
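As an illustration, here is a GLSL transcription of a widely circulated HLSL implementation (the matrices come from that implementation; treat this as a sketch rather than a reference). It stores chromaticity in two channels and a 16-bit log luminance split across the other two:

    // RGB to a scaled CIE XYZ-like space, and its inverse
    const mat3 M = mat3(
        0.2209, 0.3390, 0.4184,
        0.1138, 0.6780, 0.7319,
        0.0102, 0.1130, 0.2969);

    const mat3 InverseM = mat3(
         6.0014, -2.7008, -1.7996,
        -1.3320,  3.1029, -5.7721,
         0.3008, -1.0882,  5.6268);

    vec4 encodeLogLuv(vec3 rgb)
    {
        vec3 Xp_Y_XYZp = max(M * rgb, vec3(1e-6));
        vec4 result;
        result.xy = Xp_Y_XYZp.xy / Xp_Y_XYZp.z;     // chromaticity
        float Le = 2.0 * log2(Xp_Y_XYZp.y) + 127.0; // log luminance
        result.w = fract(Le);                       // fractional part of Le
        result.z = (Le - floor(result.w * 255.0) / 255.0) / 255.0; // integer part, scaled
        return result;
    }

    vec3 decodeLogLuv(vec4 logLuv)
    {
        float Le = logLuv.z * 255.0 + logLuv.w;
        vec3 Xp_Y_XYZp;
        Xp_Y_XYZp.y = exp2((Le - 127.0) / 2.0);
        Xp_Y_XYZp.z = Xp_Y_XYZp.y / logLuv.y;
        Xp_Y_XYZp.x = logLuv.x * Xp_Y_XYZp.z;
        return max(InverseM * Xp_Y_XYZp, vec3(0.0));
    }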

R11G11B10

There is also the R11F_G11F_B10F format (DXGI_FORMAT_R11G11B10_FLOAT in DirectX), where the R and G channels have a 6-bit mantissa and a 5-bit exponent, and B has a 5-bit mantissa and a 5-bit exponent (all three components are unsigned, there is no sign bit). Since floats have higher precision for low values, this seems very well suited to gamma-correct rendering. And since it is a standard format, interpolation should be a non-issue.

Conclusion

I haven’t tested these in practice yet, but from these readings it seems to me the sensible solution would be to use the R11G11B10 float format when available, and otherwise (for example on mobile platforms) to choose between RGBM and RGBD depending on the kind of image being rendered. Unless the format is natively supported, it seems interpolation is always going to be an issue, and the best you can do is mitigate it by picking the solution that fits your use case.

Did I miss something?

Readings on color management

Color management in the production pipeline is a tough topic. A really tough one. And a crucial one too. Unfortunately, not only is it important and difficult, but it also seems to me that, except maybe for people working on AAA games or big-budget films, most have little knowledge of the matter, when they are not completely unaware of the issue.

The issue that image capture devices, screens and printers all have different color characteristics (said simply: what you scan, photograph or film will not look the same depending on the capture device used, and the same image will look different depending on the display or printing device too).

The issue that their capture or display range is usually far from what human vision is capable of, and by “far” you must understand orders of magnitude (said simply, the average human can perceive way more contrast than a camera is able to capture, and distinguish many more colors than a screen is able to display; on top of that, there are colors an average screen is just unable to render at all, like the orange of your fluorescent highlighter; this one is my favorite example actually :) ).

The issue that screens and image formats use a non-linear representation, leading to severe color errors unless it is taken into account when manipulating images (said simply, ignore gamma correction in your rendering and your lighting will be wrong; ignore it when you resize images and they will look wrong too), as the small sketch below illustrates.
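To see the resizing problem concretely, here is a small GLSL sketch (the names and the gamma 2.2 approximation of the sRGB curve are mine) comparing a naive average of two gamma-encoded pixels with a gamma-correct one:

    vec3 averageNaive(vec3 a, vec3 b)
    {
        return 0.5 * (a + b); // averages the encoded values: too dark
    }

    vec3 averageLinear(vec3 a, vec3 b)
    {
        vec3 linA = pow(a, vec3(2.2)); // decode to linear light
        vec3 linB = pow(b, vec3(2.2));
        return pow(0.5 * (linA + linB), vec3(1.0 / 2.2)); // re-encode
    }

For pure black and pure white, averageNaive returns 0.5 while averageLinear returns pow(0.5, 1.0 / 2.2) ≈ 0.73; downscaling a black and white checkerboard should give the latter, not the former.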

I just wish it were simpler and “just worked”. But until then we have to deal with it. So here is a list of readings on this nonetheless very interesting topic.

On color management:

On gamma correction:

  • Gamma error in picture scaling: this article shows how bad things can go when image manipulation softwares don’t take gamma into account, and gives a glimpse of how widespread the problem is.
  • The Value Of Gamma Compression: I like this short article a lot, as it shows in a quick and clear way how bad gamma management can ruin a rendering.
  • Gamma and Lighting Part 1, Part 2, Part 3: this three-part article from the same blog explains how they handle the issue in the production pipeline of X-Plane.
  • GPU Gems 3, Chapter 24 – The Importance of Being Linear: this article explains how to take gamma into account in your rendering pipeline; while an interesting read, I think it doesn’t make the issue obvious enough (as I find the different illustrations to be equally bad looking).
  • Gamma FAQ: this FAQ is quite dated but still helps in understanding the origin of gamma correction and in avoiding confusion between various concepts (there is also a Color FAQ by the same author).

Update: this 4-minute video explains quite convincingly the need for gamma correction.

On tone mapping:

Have a good read!

Update: this lengthy GDC 2010 presentation by John Hable (quoting: “The presentation is basically four 20 minute presentations crammed into 55 minutes.”) covers several of these topics.