Implementing a Physically Based Shading model without locking yourself in

Over the last few months I have been trying to push my understanding of Physically Based Shading, actively exploring every corner and turning over every stone to uncover any area where I lack knowledge. Although this is still an ongoing process and I have a lot left to do, I thought I could already share some of what I have learned along the way.

Last weekend the Easter demoparty Revision took place, as an online event due to the current pandemic situation. There, I presented a talk on Physically Based Shading, in which I went into electromagnetism, existing models, and a brief overview of a prototype I am working on.

The presentation goes into a lot of detail about the interaction of light with matter from a physics point of view, then builds its way up to the Cook-Torrance specular BRDF model. The diffuse BRDF and Image Based Lighting were skipped due to time constraints. I am considering doing a Part 2 to address those topics, but I haven’t decided anything yet.
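
For reference, the Cook-Torrance specular BRDF the talk builds up to is commonly written as

\[
f_{\mathrm{spec}}(\mathbf{l}, \mathbf{v}) = \frac{D(\mathbf{h})\, F(\mathbf{v}, \mathbf{h})\, G(\mathbf{l}, \mathbf{v}, \mathbf{h})}{4\,(\mathbf{n}\cdot\mathbf{l})\,(\mathbf{n}\cdot\mathbf{v})}
\]

with \(\mathbf{h}\) the half vector, \(D\) the normal distribution function, \(F\) the Fresnel term, and \(G\) the geometry (masking-shadowing) term.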

In the meantime, please leave a comment or contact me if you notice any mistake or inaccuracy.

Abstract

How do you implement a Physically Based Shading model for your demos, yet keep the possibility to try something completely different without having to rewrite everything?
In this talk we will first get an intuitive understanding of what makes matter look the way it looks, with as much detail as we can given the time we have. We will then see how this is modeled by a BRDF (Bidirectional Reflectance Distribution Function) and review some of the available models.
We will also see what makes it challenging for design and for real-time implementation. Finally, we will discuss a possible implementation that allows experimenting with different models, can work in a variety of cases, and remains compatible with size coding constraints.
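
As a quick refresher before the slides: the BRDF mentioned in the abstract is defined as the ratio of the radiance reflected towards the viewing direction \(\omega_o\) to the irradiance received from the light direction \(\omega_i\),

\[
f_r(\omega_i, \omega_o) = \frac{\mathrm{d}L_o(\omega_o)}{\mathrm{d}E_i(\omega_i)} = \frac{\mathrm{d}L_o(\omega_o)}{L_i(\omega_i)\cos\theta_i\,\mathrm{d}\omega_i}
\]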

Slides

Here are the slides, together with the text of the talk and the link to the references:
Implementing a Physically Based Shading model without locking yourself in.

Video

And finally here is the recording of the talk, including a quick demonstration of the prototype:

Interference shader

Here is the shader used during the presentation to illustrate light interaction at the interface between two media:

Acknowledgements

Thanks again to Alan Wolfe for reviewing the text, Alkama for the motivation and questions upfront and help in the video department, Scoup and the Revision crew for organizing the seminars, Ronny and Siana for the help in the sound department, and everyone who provided feedback on my previous article on Physically Based Shading.

Addendum

Following the publication of this article, Nathan Reed made several comments on Twitter:

FWIW – I think the model of refraction by the electromagnetic field causing electrons to oscillate is the better one. This explains not only refraction but reflection as well, and even total internal reflection. Feynman does out the wave calculations: https://feynmanlectures.caltech.edu/II_33.html

It also explains better IMO why a light wave keeps its direction in a material. If an atom absorbs and re-emits the photon there is no reason why it should be going in the same direction as before (conservation of momentum is maintained if the atom recoils). Besides which, the lifetime of an excited atomic state is many orders of magnitude longer than the time needed for a light wave to propagate across the diameter of the atom (even at an IOR-reduced speed).

Moreover, in the comments on the shader above, CG researcher Fabrice Neyret mentioned a presentation of his from 2019 that lists the interactions of light with matter: Colors of the universe.
Quoting his comment, which sums it up:

In short: the notion of photons (and their speed) in matter is a macroscopic deceiving representation, since it’s about interference between incident and reactive fields (reemitted by the dipoles, at least for dielectrics).


Long hiatus

Last week I was lucky enough to attend SIGGRAPH 2018, in Vancouver. My colleagues and I were presenting the work we had done, a VR story with a distinctive comic book look, at a booth. I was also invited to participate in a panel session on the demoscene, where I shared some lessons learned while making the 64k intro H – Immersion. The event brought a certain sense of conclusion to this work, besides filling me with inspiration and motivation to try new things.

It has been a long time since I last posted anything here. For the last two years the majority of my spare time went into making that 64k intro. In fact the last post, “Intersection of a ray and a cone”, was related to it. I was implementing volumetric lighting for the underwater scenes, and wanted to resolve cones of light with ray tracing, before marching inside those cones. LLB and I have talked about the creation process in two making-of articles: “A dive into the making of Immersion”, and “Texturing in a 64kB intro”.
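
Since that post came up, as a side note: the ray-cone test it describes reduces to solving a quadratic. Below is a minimal, self-contained sketch of that idea; it is my own illustration, not the intro’s actual code, and the `vec3` helpers and function names are made up for the example.

```c
#include <math.h>
#include <stdbool.h>

typedef struct { float x, y, z; } vec3;

static float dot3(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static vec3  sub3(vec3 a, vec3 b) { vec3 r = { a.x - b.x, a.y - b.y, a.z - b.z }; return r; }

/* Intersect the ray origin + t*dir (dir normalized) with an infinite cone of
   apex `apex`, normalized axis `axis` and half-angle `halfAngle`.
   On a hit, writes the nearest positive t lying on the forward nappe. */
bool rayConeIntersect(vec3 origin, vec3 dir, vec3 apex, vec3 axis,
                      float halfAngle, float* t)
{
    float cos2 = cosf(halfAngle);
    cos2 *= cos2;

    vec3  co = sub3(origin, apex);
    float dv = dot3(dir, axis);
    float cv = dot3(co, axis);

    /* dot(P - apex, axis)^2 = cos^2(halfAngle) * |P - apex|^2, with
       P = origin + t*dir, expands into a quadratic in t. */
    float a = dv*dv - cos2;
    float b = 2.0f * (dv*cv - dot3(dir, co) * cos2);
    float c = cv*cv - dot3(co, co) * cos2;

    if (fabsf(a) < 1e-8f) return false;  /* ray parallel to the cone surface: skipped here */

    float det = b*b - 4.0f*a*c;
    if (det < 0.0f) return false;

    float sq = sqrtf(det);
    float roots[2] = { (-b - sq) / (2.0f*a), (-b + sq) / (2.0f*a) };
    if (roots[0] > roots[1]) { float tmp = roots[0]; roots[0] = roots[1]; roots[1] = tmp; }

    for (int i = 0; i < 2; ++i) {
        float tc = roots[i];
        if (tc <= 0.0f) continue;
        vec3 p = { origin.x + tc*dir.x, origin.y + tc*dir.y, origin.z + tc*dir.z };
        if (dot3(sub3(p, apex), axis) > 0.0f) {  /* reject hits on the mirror nappe */
            *t = tc;
            return true;
        }
    }
    return false;
}
```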

During that time, a lot of new things have happened in the computer graphics community, and it has been difficult to keep track of everything. The last topic I started experimenting with is point cloud and mesh capture from photos; I might expand on it here in the future. I also want to experiment with DIY motion capture. Anyway, it’s time to resume posting here.

A real-time post-processing crash course

Revision 2015 took place last month, on the Easter weekend as usual. I was lucky enough to attend and experience the great competitions that took place this year; I can’t recommend enough that you check out all the good stuff that came out of it.

Like the previous times, I shared some insights in a seminar, as an opportunity to practice public speaking. Since our post-processing has improved quite a bit with our last demo (Ctrl-Alt-Test : G – Level One), the topic was the implementation of a few post-processing effects in a real-time renderer: glow, lens flare, light streaks, motion blur…
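
To give an idea of the kind of content covered: glow, for instance, usually starts with a bright-pass that keeps only the energy above a threshold, which is then blurred (often at reduced resolution) and composited back over the image. Here is a minimal per-pixel sketch of that first step; the names and the threshold parameter are illustrative, not the code from the talk.

```c
typedef struct { float r, g, b; } color3;

/* First step of a glow pass: keep only the energy above `threshold`.
   The result is then blurred and added back on top of the original image. */
color3 brightPass(color3 c, float threshold)
{
    color3 out = {
        c.r > threshold ? c.r - threshold : 0.0f,
        c.g > threshold ? c.g - threshold : 0.0f,
        c.b > threshold ? c.b - threshold : 0.0f,
    };
    return out;
}
```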

Having been fairly busy over the last few months, with work and the organization of Tokyo Demo Fest among other things, I unfortunately couldn’t spend as much time on the presentation as I wanted. An hour before the presentation I was still working on the slides, but all in all it went better than expected. I also experimented with doing a live demonstration, hopefully more engaging than screenshots or even a video capture can be.

Here is the video recording made by the team at Revision (kudos to you guys for the fantastic work this year). I will provide the slides later on, after I properly finish the credits and references part.

Abstract:
Over the decades, photographers, and later filmmakers, have learned to take advantage of optical phenomena, and have perfected the recipes of the chemicals used in film, to affect the visual appeal of their images. Transposed to rendering, those lessons can make your image more pleasant to the eye, change its realism, affect its mood, or make it easier to read. In this course we will present different effects that can be implemented in a real-time rendering pipeline, the theory behind them, the implementation details in practice, and how they could fit in your workflow.

How to use light to make better demos?

This is the third day at Revision, and my contribution this year is the talk I gave yesterday. Unlike last year, this seminar is not technical at all but focused on the design aspect and, to some extent, how it relates to the technical one. The context is demomaking, but many ideas are still valid in other media.

There were some issues with the recording unfortunately, which means some elements are missing (you will notice some blanks at the beginning). In particular, around the 5 minute mark, an important point was completely cut out. The text was:

Throwing a new technique at whatever you’re doing is not going to make it any better. It’s only going to change what you can achieve. There are two sides to image creation: the technical one and the artistic one. Different techniques allow you to do different things, and the more techniques you master, the better you understand what you can and cannot do with them, and how to do it. Technique becomes a tool that changes how you can express yourself.

Here are the slides with notes (~5MB), or a low quality version (~1MB).

For more demoscene related talks, here is the full list of seminars at Revision 2013.

Introduction to light shading for real-time rendering

I am finally back in Tokyo after two intense weeks in Europe, during which I did things as varied as being a perfect tourist in four capitals (stolen bag experience included), attending the world’s biggest demoparty, getting nominated with the rest of my group for some awards, ranking 2nd in a competition, and getting slashdotted for that. :)

As previously advertised, I presented a talk on light shading at Revision. A video was recorded for the stream and was made available online pretty much immediately, thanks to the work of the Revision team:

Unfortunately, the last minutes are missing. I was basically comparing the Fresnel version with the manually tweaked version, and explaining that while the former might not look perfect yet, it was an out-of-the-box result, whereas the latter required me to introduce a fudge factor I had to tweak. Regarding references, I couldn’t list them all, so I just mentioned the most significant ones (the first part of this talk is strongly inspired by Naty Hoffman’s course introduction) and referred to here for the rest. Lastly, I mentioned an evaluation sheet for anyone who cared to give some feedback.
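
For context, the Fresnel term in question is, in most real-time implementations, approximated with Schlick’s formula. Here is a minimal sketch of it; this is my own illustration, not necessarily the exact code used in the demo.

```c
/* Schlick's approximation of the Fresnel reflectance: f0 is the reflectance
   at normal incidence, cosTheta is dot(N, V) (or dot(H, V) in a microfacet
   model), assumed to be in [0, 1]. */
float fresnelSchlick(float f0, float cosTheta)
{
    float m = 1.0f - cosTheta;
    return f0 + (1.0f - f0) * (m * m) * (m * m) * m;
}
```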

Performance-wise, watching the video makes me feel embarrassed. The flow is far from what I was aiming for, some explanations are not as crystal clear as I wanted them to be, and you can notice I was confused a couple of times by the surrounding noise (hey, did I mention it’s a party?). But on the other hand, various people told me it was a good seminar, so even though there is much room for improvement, it’s not that bad of a start I guess.

Anyway, you can download a quick export of the party version of the slides. When I have some time I will try to get a better-looking export (without text and images cropped out), and fix a couple of slides.