Overview of global illumination in Tomasz’s kajiya renderer

Soon after showcasing rendering results that left industry veterans impressed and sent many of us reading up on ReSTIR, professional madman Tomasz Stachowiak showed a new demonstration of the global illumination capabilities of his pet project.

This is what some people manage to do with just seven months of tinkering…

But more importantly, he took the time to describe the techniques used to get such results. The writing is fairly high level and assumes the reader is familiar with several advanced topics, but it comes with clear illustrations for at least some parts. It also mentions the various ways in which ReSTIR is leveraged to support the techniques used. Finally, it doesn’t try to hide the parts where the techniques fall short, quite the opposite.

The article: Global Illumination overview.

In very brief: the rendering starts with a geometry pass, from which a ReSTIR pass computes the first bounce of light; the rest of the light paths are handled by an irradiance cache based on a sparse voxel grid, which also relies on ReSTIR; a few clever tricks handle various corner cases, and denoising plus temporal anti-aliasing smooth things out.
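For the sake of illustration, here is how such a frame could be structured, as a minimal Rust sketch (Rust being the language kajiya is written in). All the names and stubs below are made up for this summary; the actual renderer runs these stages as GPU passes.

```rust
// Hypothetical outline of the frame described above; not kajiya's real code.
type Radiance = [f32; 3];
struct GBuffer; // depth, normals, albedo... (stubbed out)

fn geometry_pass() -> GBuffer { GBuffer }
// First bounce of light, sampled with ReSTIR from the G-buffer surfaces.
fn restir_first_bounce(_g: &GBuffer) -> Radiance { [0.0; 3] }
// Remaining bounces, looked up in the sparse voxel grid irradiance cache
// (itself also fed with ReSTIR-sampled lighting).
fn sample_irradiance_cache(_g: &GBuffer) -> Radiance { [0.0; 3] }
fn denoise(r: Radiance) -> Radiance { r }
fn temporal_anti_alias(r: Radiance) -> Radiance { r }

fn render_frame() -> Radiance {
    let g = geometry_pass();
    let direct = restir_first_bounce(&g);
    let indirect = sample_irradiance_cache(&g);
    // Combine both contributions, then smooth things out.
    let combined = [
        direct[0] + indirect[0],
        direct[1] + indirect[1],
        direct[2] + indirect[2],
    ];
    temporal_anti_alias(denoise(combined))
}
```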

Reading list on ReSTIR

Recently a short video from dark magic programmer Tomasz Stachowiak made the rounds in the graphics programming community, to the sound of jaws hitting the floor in its wake. It shows his recent progress on his pet renderer project: beautiful real-time global illumination with fast convergence and barely any noise, in a static environment with dynamic lighting.

In a Twitter thread where he discussed some details, one keyword in particular caught my attention: ReSTIR.

ReSTIR stands for “Reservoir-based Spatio-Temporal Importance Resampling”; it is a sampling technique published at SIGGRAPH 2020 and refined since.
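At its core is weighted reservoir sampling: a stream of candidate light samples flows through a tiny, constant-size state, out of which one sample survives with probability proportional to its weight. Here is a minimal sketch of that building block, with names of my own choosing (using the rand crate):

```rust
use rand::Rng;

/// Streaming weighted reservoir: keeps a single sample out of an
/// arbitrarily long candidate stream, in O(1) memory.
struct Reservoir<S> {
    sample: Option<S>, // currently selected sample ("y" in the paper)
    w_sum: f32,        // running sum of the candidate weights
    m: u32,            // number of candidates seen so far ("M")
}

impl<S> Reservoir<S> {
    fn new() -> Self {
        Reservoir { sample: None, w_sum: 0.0, m: 0 }
    }

    /// Feed in one candidate; it replaces the current sample with
    /// probability weight / w_sum.
    fn update(&mut self, candidate: S, weight: f32, rng: &mut impl Rng) {
        self.w_sum += weight;
        self.m += 1;
        if rng.gen::<f32>() * self.w_sum < weight {
            self.sample = Some(candidate);
        }
    }
}
```

The “spatio-temporal” part then comes from merging reservoirs across neighbouring pixels and across frames, which amounts to feeding one reservoir’s selected sample into another; the sketch above leaves that, and the final estimator weight, out.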

The original publication

Spatiotemporal reservoir resampling for real-time ray tracing with dynamic direct lighting
The publication page includes the recording of the SIGGRAPH presentation, with a well-articulated explanation of the technique by lead author Benedikt Bitterli.
(same publication hosted on the NVidia website).

Explanations of ReSTIR

Improvements over the original publication

After the initial publication, NVidia published a refined version producing images with less noise at a lower cost, which they call “RTXDI” (for RTX Direct Illumination).

Other limitations

When discussing some of the limitations of ReSTIR on Twitter, Chris Wyman made the following remarks:

To be clear, right now, ReSTIR is a box of razor blades without handles (or a box of unlabeled knobs). It’s extremely powerful, but you have to know what you’re doing. It is not intuitive, if your existing perspective is traditional Monte Carlo (or real-time) sampling techniques.

People sometimes think SIGGRAPH paper = solved. Nope. We’ve learned a lot since the first paper, and our direct lighting is a lot more stable with that knowledge. We’re still learning how to do it well on full-length paths.

And there’s a bunch of edge cases, even in direct lighting, that we know how to solve but haven’t had time to write them up, polish, and demo.

We haven’t actually tried to solve the extra noise at disocclusions in (what I think of as) a very principled way. Right now a world-space structure is probably the best way. I’m pretty sure it can be done without a (formal) world-space structure, just “more ReSTIR.”

A real-time post-processing crash course

Revision 2015 took place last month, over the Easter weekend as usual. I was lucky enough to attend and experience the great competitions that took place this year; I can’t recommend enough that you check out all the good stuff that came out of it.

Like the previous times, I shared some insights in a seminar, as an opportunity to practice public speaking. Since our post-processing has improved quite a bit with our latest demo (Ctrl-Alt-Test: G – Level One), the topic was the implementation of a few post-processing effects in a real-time renderer: glow, lens flare, light streaks, motion blur…

Having been fairly busy over the last few months though, with work and the organising of Tokyo Demo Fest among other things, I unfortunately couldn’t spend as much time on the presentation as I wanted. An hour before the presentation I was still working on the slides, but all in all it went better than expected. I also experimented with doing a live demonstration, hopefully more engaging than screenshots or even a video capture.

Here is the video recording made by the team at Revision (kudos to you guys for the fantastic work this year). I will provide the slides later on, once I have properly finished the credits and references part.

Over the decades, photographers, then filmmakers, have learned to take advantage of optical phenomena, and perfected the recipes of the chemicals used in film, to affect the visual appeal of their images. Transposed to rendering, those lessons can make your image more pleasant to the eye, change its realism, affect its mood, or make it easier to read. In this course we will present different effects that can be implemented in a real-time rendering pipeline, the theory behind them, the implementation details in practice, and how they could fit in your workflow.
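To give a taste of how simple the core of such an effect can be, here is a minimal CPU sketch of the classic glow recipe: extract the bright parts of the image, blur them, and add the result back on top. This illustrates the general idea under my own naming, not the code from the talk; a real implementation would run on the GPU, typically with separable Gaussian blurs over downsampled buffers.

```rust
type Rgb = [f32; 3];

/// Keep only the energy above the threshold (the "bright pass").
fn bright_pass(c: Rgb, threshold: f32) -> Rgb {
    [
        (c[0] - threshold).max(0.0),
        (c[1] - threshold).max(0.0),
        (c[2] - threshold).max(0.0),
    ]
}

/// Naive horizontal box blur over one row of pixels; a real glow would
/// use a separable Gaussian, run in both directions, on smaller buffers.
fn blur_row(row: &[Rgb], radius: usize) -> Vec<Rgb> {
    (0..row.len())
        .map(|i| {
            let lo = i.saturating_sub(radius);
            let hi = (i + radius).min(row.len() - 1);
            let n = (hi - lo + 1) as f32;
            let mut acc = [0.0f32; 3];
            for p in &row[lo..=hi] {
                for k in 0..3 {
                    acc[k] += p[k];
                }
            }
            [acc[0] / n, acc[1] / n, acc[2] / n]
        })
        .collect()
}

/// Additive composite of the blurred bright areas over the scene.
fn composite(base: Rgb, glow: Rgb, intensity: f32) -> Rgb {
    [
        base[0] + glow[0] * intensity,
        base[1] + glow[1] * intensity,
        base[2] + glow[2] * intensity,
    ]
}
```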

John Carmack on Oculus at GDC 2015

John Carmack, the CTO of Oculus VR, gave a talk at the Game Developers Conference that just ended this week. He addresses various topics, including the story behind Samsung’s Gear VR and what’s coming next, the democratization of virtual reality, the work on the API, the unsolved problem of controllers in VR, and the use of real-time ray tracing in VR.

John Carmack’s GDC 2015 talk.

It is a fairly long video (1h30), and as is often the case with him, there are no pictures to see; you just listen to his personal views and insights on the work he is currently doing.

Real time stereo ray tracing engineer position in Tokyo

I have retweeted this already, but information tends to get buried pretty quickly on Twitter, so I am putting it here too. Syoyo, a real-time ray tracing enthusiast, is looking for a talented ray tracing engineer to join his company, Light Transport.

Given their existing technology (interactive to real-time ray tracing, interactive shader editing with JIT compilation) and their current focus on the Oculus DK2, I will let you imagine how exciting this position is.

The frame debugger in Unity 5

Aras Pranckevičius wrote on the Unity blog about the new frame debugger feature they added to their editor: Frame Debugger in Unity 5.0. The hack, as he calls it, is very simple: it consists in interrupting the rendering at a given stage and displaying whichever frame buffer was active at that moment. Just a couple of days of work; most of the effort went into the editor UI.

From the article:

There’s no actual “frame capture” going on; pretty much all we do is stop rendering after some number of draw calls. So if you’re looking at a frame that would have 100 objects rendered and you’re looking at object #10 right now, then we just skip rendering the remaining 90. If at that point in time we happen to be rendering into a render texture, then we display it on the screen.

This means the implementation is simple; getting functionality working was only a few days of work (and then a few weeks iterating on the UI).
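In other words, the renderer just counts draw calls and bails out once it reaches the limit selected in the debugger UI. A minimal sketch of that idea (hypothetical types and names; not Unity’s actual code):

```rust
struct DrawCall; // mesh, material, render target... (stubbed out)

fn submit(_call: &DrawCall) {
    // Issue the draw call to the GPU.
}

struct FrameDebugger {
    limit: Option<usize>, // None: debugger inactive, render everything
    issued: usize,        // draw calls issued so far this frame
}

impl FrameDebugger {
    fn draw(&mut self, call: &DrawCall) {
        if let Some(limit) = self.limit {
            if self.issued >= limit {
                return; // skip the rest of the frame's draw calls
            }
        }
        submit(call);
        self.issued += 1;
    }
}
```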


Illustration from the article: “Here we are stepping through the draw calls of depth texture creation, in glorious animated GIF form”


Upcoming SIGGRAPH

More and more material and news are being released about the next edition of SIGGRAPH, so here is a short summary.

Technical papers

The video teaser for the technical papers has been published, and it looks like there will be some really cool stuff to see. As every year, Ke-Sen Huang maintains a page with the list of papers.

Real-Time Live!

The Real-Time Live! program looks very nice too, and it is good to see that at least two demoscene-related works will be presented there: the community GLSL tool ShaderToy, by Beautypi, and an experiment by Still using a LEAP Motion controller with their production Square.


Courses

Not much to say: they look great and I want to see most of them… The Advances in Real-Time Rendering in Games and Physically Based Shading in Theory and Practice courses are a must-see as usual. The Recent Advances in Light-Transport Simulation: Theory & Practice and Ray Tracing is the Future and Ever Will Be courses sound promising too.

Our work to be shown at SIGGRAPH

Lastly, we had some awesome news yesterday, when we were told that our latest released demoscene production, F – Felix’s workshop, had been selected to be shown as part of the Real-Time Live! demoscene reel event.

Released last year at Revision, where it ranked 2nd in its category, Felix’s workshop is a 64k intro: a real-time animation fitting entirely (music, meshes, textures…) within a 64kB binary file, meant to run on a consumer-level PC with a vanilla Windows and up-to-date drivers.

I was also told that Eddie Lee’s work, Artifacts, was selected as well; his outstanding demo won at Tokyo Demo Fest earlier this year.