A real-time post-processing crash course

Revision 2015 took place last month, on the Easter weekend as usual. I was lucky enough to attend and experience the great competitions held this year; I can’t recommend enough checking out all the good stuff that came out of it.

Like the previous times, I shared some insights in a seminar, as an opportunity to practice public speaking. Since our post-processing has improved quite a bit with our last demo (Ctrl-Alt-Test : G – Level One), the topic was the implementation of a few post-processing effects in a real-time renderer: glow, lens flare, light streaks, motion blur…

Having been fairly busy over the last few months though, with work and the organisation of Tokyo Demo Fest among other things, I unfortunately couldn’t spend as much time on the presentation as I wanted. An hour before the talk I was still working on the slides, but all in all it went better than expected. I also experimented with giving a live demonstration, which is hopefully more engaging than screenshots or even a video capture can be.

Here is the video recording made by the team at Revision (kudos to you guys for the fantastic work this year). I will provide the slides later on, after I properly finish the credits and references part.

Abstract:
Over the decades, photographers, then filmmakers, have learned to take advantage of optical phenomena, and have perfected the recipes of the chemicals used in film, to affect the visual appeal of their images. Transposed to rendering, those lessons can make your image more pleasant to the eye, change its realism, affect its mood, or make it easier to read. In this course we will present various effects that can be implemented in a real-time rendering pipeline, the theory behind them, the implementation details in practice, and how they can fit into your workflow.
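As a small taste of the material, here is a minimal, single-channel CPU sketch of the simplest of those effects, glow: a bright pass, a cheap separable blur, and an additive composite. The function names, buffer layout and constants are purely illustrative; this is not the code from the demo or from the slides.

    #include <stdlib.h>

    /* Minimal glow sketch on a single-channel float image:
     * 1) bright pass: keep only what exceeds a threshold,
     * 2) separable box blur as a cheap stand-in for a Gaussian,
     * 3) additive composite back onto the source image. */

    static float fetch(const float *img, int w, int h, int x, int y)
    {
        if (x < 0) x = 0; if (x >= w) x = w - 1;   /* clamp to the edges */
        if (y < 0) y = 0; if (y >= h) y = h - 1;
        return img[y * w + x];
    }

    void glow(float *img, int w, int h, float threshold, int radius, float intensity)
    {
        float *bright  = malloc((size_t)w * h * sizeof *bright);
        float *blurred = malloc((size_t)w * h * sizeof *blurred);

        /* 1) bright pass */
        for (int i = 0; i < w * h; ++i)
            bright[i] = img[i] > threshold ? img[i] - threshold : 0.0f;

        /* 2) horizontal then vertical box blur */
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                float acc = 0.0f;
                for (int k = -radius; k <= radius; ++k)
                    acc += fetch(bright, w, h, x + k, y);
                blurred[y * w + x] = acc / (2 * radius + 1);
            }
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                float acc = 0.0f;
                for (int k = -radius; k <= radius; ++k)
                    acc += fetch(blurred, w, h, x, y + k);
                bright[y * w + x] = acc / (2 * radius + 1);
            }

        /* 3) additive composite of the blurred bright areas */
        for (int i = 0; i < w * h; ++i)
            img[i] += intensity * bright[i];

        free(bright);
        free(blurred);
    }

In a real-time pipeline each of these steps would of course be a fragment shader pass over downsampled render targets, with better blur weights, rather than CPU loops.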

Volumetric light scattering

Here are a couple of links on how to render the light scattering effect (a.k.a. volumetric shadows):

Update:
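Until I write a proper post on the subject, here is a very rough, single-channel CPU sketch of the common screen-space approach: for every pixel, march toward the light source’s projected position and accumulate samples of an occlusion buffer with a decaying weight. The function name, the parameters and the occlusion buffer convention are illustrative assumptions, not taken from the articles linked above.

    /* Screen-space light scattering ("god rays") sketch, single channel.
     * For each pixel, step toward the light's screen-space position and
     * accumulate samples of light_buf with an exponentially decaying
     * weight, so samples further along the ray contribute less.
     * light_buf is assumed to hold the light source rendered with the
     * scene geometry as black occluders. */
    void light_scattering(const float *light_buf, float *out, int w, int h,
                          float light_x, float light_y,  /* light position in pixels */
                          int num_samples, float density,
                          float decay, float exposure)
    {
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                float dx = (light_x - x) * density / num_samples;
                float dy = (light_y - y) * density / num_samples;
                float sx = (float)x, sy = (float)y;
                float weight = 1.0f;
                float acc = 0.0f;

                for (int i = 0; i < num_samples; ++i) {
                    sx += dx;
                    sy += dy;
                    int ix = (int)sx, iy = (int)sy;
                    if (ix >= 0 && ix < w && iy >= 0 && iy < h)
                        acc += light_buf[iy * w + ix] * weight;
                    weight *= decay;
                }
                out[y * w + x] = exposure * acc / num_samples;
            }
    }

In practice this would run as a fragment shader on a downsampled buffer, and the result would be blended additively over the frame.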

Crysis 3 tech demo

Crytek has published a video showing the rendering technology used in the CryEngine, more specifically in Crysis 3. While I don’t really dig the artistic choices (I find the overall image messy due to the high contrast, and not that appealing, aesthetically speaking), the technical side is impressive. I especially like the use of displacement mapping and tessellation for the vegetation (by the way, see how great that leaf looks; they got the material completely right). The reflections visible at 1’52 make me think they also implemented the cone tracing technique, just like Unreal did. On the downside, all the parts with falling water felt unrealistic to me.

Last but not least, Toad Technology! :)

Draft on depth of field resources

What has mostly been on my mind recently when it comes to rendering is the real-time depth of field effect. I intend to read the state of the art material on the matter and hopefully post a well-formed summary, just like I did for physically based rendering, but until then I thought I would already list a few resources.

That’s all for now. ;-)

Update: after further reading, both Kawase’s and DICE’s techniques indeed rely on the idea of creating a hexagon-shaped bokeh by decomposing it into three skewed boxes, but while Kawase’s approach uses seven passes, DICE’s takes it down to two passes thanks to some clever use of multiple render targets.
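To make the decomposition concrete, here is a rough, single-channel CPU sketch of the underlying idea, leaving aside the per-pixel circle of confusion and the pass-count optimisations: a regular hexagon can be covered by three rhombi (the “skewed boxes”) meeting at its centre, and each rhombus is the result of two successive one-sided line blurs along its edge directions. The function names and the way the passes are organised here are illustrative and correspond neither to Kawase’s nor to DICE’s exact scheme.

    #include <stdlib.h>
    #include <math.h>

    /* One-sided line blur: average `taps` samples taken from the pixel
     * along direction (dx, dy), spread over a length of `radius` pixels. */
    static void line_blur(const float *src, float *dst, int w, int h,
                          float dx, float dy, float radius, int taps)
    {
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                float acc = 0.0f;
                for (int i = 0; i < taps; ++i) {
                    float t = radius * (i + 0.5f) / taps;
                    int sx = x + (int)lroundf(dx * t);
                    int sy = y + (int)lroundf(dy * t);
                    if (sx < 0) sx = 0; if (sx >= w) sx = w - 1;
                    if (sy < 0) sy = 0; if (sy >= h) sy = h - 1;
                    acc += src[sy * w + sx];
                }
                dst[y * w + x] = acc / taps;
            }
    }

    /* Hexagonal blur: three rhombi, each the convolution of two one-sided
     * line blurs along adjacent hexagon vertex directions (0, 120 and 240
     * degrees); averaging the three rhombi gives a hexagonal footprint. */
    void hexagonal_blur(const float *src, float *dst, int w, int h,
                        float radius, int taps)
    {
        size_t n = (size_t)w * h;
        float *tmp     = malloc(n * sizeof *tmp);
        float *rhombus = malloc(n * sizeof *rhombus);
        const float dirs[3][2] = {
            {  1.0f,  0.0f      },   /*   0 degrees */
            { -0.5f,  0.866025f },   /* 120 degrees */
            { -0.5f, -0.866025f },   /* 240 degrees */
        };

        for (size_t i = 0; i < n; ++i)
            dst[i] = 0.0f;

        for (int r = 0; r < 3; ++r) {
            const float *a = dirs[r];
            const float *b = dirs[(r + 1) % 3];
            line_blur(src, tmp, w, h, a[0], a[1], radius, taps);
            line_blur(tmp, rhombus, w, h, b[0], b[1], radius, taps);
            for (size_t i = 0; i < n; ++i)
                dst[i] += rhombus[i] / 3.0f;   /* average the three rhombi */
        }

        free(tmp);
        free(rhombus);
    }

The pass-count savings come from the fact that adjacent rhombi share one of their two line blurs, so intermediate results can be reused; as mentioned above, DICE pushes this further by writing several intermediates at once through multiple render targets.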

Also, I forgot to mention a second article by Matt Pettineo, in which he suggests a combination of techniques to achieve a better result.

Show your difference

An example of actual bokeh in a photo of mine