A real-time post-processing crash course

Revision 2015 took place last month, on the Easter weekend as usual. I was lucky enough to attend and experience the great competitions that took place this year; I can’t recommend enough that you check out all the good stuff that came out of it.

Like the previous times, I shared some insights in a seminar, as an opportunity to practice public speaking. Since our post-processing has improved quite a bit with our latest demo (Ctrl-Alt-Test : G – Level One), the topic was the implementation of a few post-processing effects in a real-time renderer: glow, lens flare, light streak, motion blur…

Having been fairly busy over the last few months though, with work and the organisation of Tokyo Demo Fest among other things, I unfortunately couldn’t spend as much time on the presentation as I wanted. An hour before the talk I was still working on the slides, but all in all it went better than expected. I also experimented with doing a live demonstration, which is hopefully more engaging than screenshots or even a video capture can be.

Here is the video recording made by the team at Revision (kudos to you guys for the fantastic work this year). I will provide the slides later on, once I have properly finished the credits and references part.

Abstract:
Over the decades, photographers, then filmmakers, have learned to take advantage of optical phenomena, and perfected the recipes of the chemicals used in film, to affect the visual appeal of their images. Transposed to rendering, those lessons can make your image more pleasant to the eye, change its realism, affect its mood, or make it easier to read. In this course we will present different effects that can be implemented in a real-time rendering pipeline, the theory behind them, the implementation details in practice, and how they can fit into your workflow.
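
To make that a bit more concrete, here is a minimal offline sketch of one such effect, a glow (bloom) pass. This is not the implementation from the seminar, just an illustration of the usual structure; it assumes a linear HDR image stored as an H×W×3 floating-point numpy array, and the threshold, sigma and intensity values are arbitrary choices for the example.

import numpy as np
from scipy.ndimage import gaussian_filter

def glow(image, threshold=1.0, sigma=8.0, intensity=0.5):
    # 1. Bright pass: keep only what exceeds the threshold.
    bright = np.maximum(image - threshold, 0.0)
    # 2. Spread the bright areas with a wide Gaussian blur (per channel).
    blurred = gaussian_filter(bright, sigma=(sigma, sigma, 0))
    # 3. Composite the halo additively on top of the original image.
    return image + intensity * blurred

In a real-time renderer the same three steps would typically run as fragment shaders over downsampled buffers for performance, but the overall structure stays the same.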

John Carmack on Oculus at GDC 2015

John Carmack, the CTO of Oculus VR, gave a talk at the Game Developers Conference, which just ended this week. He addresses various topics, including the story behind Samsung’s Gear VR and what’s coming next, the democratization of virtual reality, the work on the API, the still unsolved problem of controllers in VR, and the use of real-time ray tracing in VR.

John Carmack’s GDC 2015 talk.

It is a fairly long video (1h30), and as often with him, there are no slides or pictures to look at: you just hear his personal views and insights on the work he is currently taking care of.

Real time stereo ray tracing engineer position in Tokyo

I have retweeted this already, but information tends to get buried pretty quickly on Twitter, so I am putting it here as well. Syoyo, a real-time ray tracing enthusiast, is looking for a talented ray tracing engineer to join his company, Light Transport.

Given their existing technology (interactive to real-time ray tracing, interactive shader editing with JIT compilation) and their current focus on the Oculus DK2, I’ll let you imagine how exciting this position is.

The frame debugger in Unity 5

Aras Pranckevičius wrote on the Unity blog about the new frame debugger feature they added to their editor: Frame Debugger in Unity 5.0. The hack, as he calls it, is very simple: it consists in interrupting the rendering at a given stage and displaying whichever frame buffer was active at that moment. It took only a couple of days of work; most of the effort went into the editor UI.

From the article:

There’s no actual “frame capture” going on; pretty much all we do is stop rendering after some number of draw calls. So if you’re looking at a frame that would have 100 objects rendered and you’re looking at object #10 right now, then we just skip rendering the remaining 90. If at that point in time we happen to be rendering into a render texture, then we display it on the screen.

This means the implementation is simple; getting functionality working was only a few days of work (and then a few weeks iterating on the UI).
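
The principle is simple enough to paraphrase in a few lines of code. The sketch below is not Unity’s implementation, just a toy illustration of the idea with made-up draw call and render target names: issue the frame’s draw calls in order, stop once the selected index is reached, and show whichever render target happens to be bound at that moment.

from dataclasses import dataclass

@dataclass
class DrawCall:
    name: str     # what is being drawn
    target: str   # render target the call draws into

def execute(call):
    # Stand-in for issuing the actual graphics API draw call.
    print(f"draw '{call.name}' into '{call.target}'")

def debug_frame(draw_calls, stop_at):
    # Render only the first `stop_at` draw calls, then report which
    # render target was active at that point, so it can be displayed.
    active_target = "backbuffer"
    for index, call in enumerate(draw_calls):
        if index >= stop_at:
            break                    # simply skip the remaining draw calls
        active_target = call.target  # track the currently bound target
        execute(call)
    return active_target

frame = [
    DrawCall("shadow casters", "shadow map"),
    DrawCall("opaque geometry", "depth texture"),
    DrawCall("lighting", "backbuffer"),
]
print("displaying:", debug_frame(frame, stop_at=2))  # stops before lighting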

Illustration from the article: “Here we are stepping through the draw calls of depth texture creation, in glorious animated GIF form”

SIGGRAPH 2013

More and more material and news are being released about the next edition of SIGGRAPH, so here is a short summary.

Technical papers

The video teaser for the technical papers has been published, and it looks like there will be some really cool stuff to see. As he does every year, Ke-Sen Huang maintains a page with the list of papers.


Real-Time Live!

The Real-Time Live! program looks very nice too, and it is good to see that at least two demoscene-related works will be presented there (the community GLSL tool ShaderToy by Beautypi, and an experiment by Still using a LEAP Motion controller with their production, Square).


Courses

Not much to say: it looks great and I want to see most of them… The Advances in Real-Time Rendering in Games and Physically Based Shading in Theory and Practice courses are a must-see as usual. The Recent Advances in Light-Transport Simulation: Theory & Practice and Ray Tracing is the Future and Ever Will Be courses sound promising too.


Our work to be shown at SIGGRAPH

Lastly, we had some awesome news yesterday, when we were told that our last released demoscene production, F – Felix’s workshop, had been selected to be shown as part of the Real-Time Live! demoscene reel event.

Released last year at Revision, where it ranked 2nd in its category, Felix’s workshop is a 64k intro: a real-time animation fitting entirely (music, meshes, textures…) within a 64kB binary file, meant to run on a consumer-level PC with a vanilla Windows install and up-to-date drivers.

I was also told that Eddie Lee‘s work, Artifacts, was selected. His outstanding demo won at Tokyo Demo Fest earlier this year.

Reading list on skin rendering

Skin rendering is really not my thing. Yet. I already have enough on my plate figuring out the rendering of opaque materials, without dealing with materials exhibiting sub-surface scattering. But I got trapped reading one article, then another… and before I knew it, I had a list I wanted to note down for later reference.

Many links are still missing, as I’m not done checking the major techniques mentioned in the presentations, but perfect is the enemy of good after all.

Series of articles on anti-aliasing

Matt Pettineo is writing an interesting and in-depth series of articles on anti-aliasing:

In the latest article, he provides a set of captures comparing the results he obtained, as well as the source code.

On a side note, I like this short post by Timothy Lottes (well known for FXAA and TXAA) where he compares a typical film image with a typical video game one. His example of temporal aliasing is also worth keeping in mind.