Path tracing is a surprisingly simple technique to render realistic images. That would be my definition if you are unfamiliar with the term. But if you already have experience with various ray tracing techniques, I would rather say that path tracing is a remarkably elegant solution to the rendering equation. You can implement a toy path tracer in a weekend or, if you’ve already done it a few times before, within 25 minutes.

Recently I was reading up on path tracing and some of the techniques that can be used with it, like next event estimation, bidirectional path tracing, Russian roulette, etc. This is a case where ShaderToy can be an invaluable source of examples and information, so I was browsing path tracing shaders there. As the number of open tabs was starting to get impractical, I decided to use the “playlist” feature of ShaderToy to bookmark them all.

You can find the list here: Path tracing, on ShaderToy.

The examples of path tracers listed range from very naive implementations and hacky ones, to ones demonstrating rendering features like advanced BRDFs, volumetric lighting or spectral rendering, to various noise reduction techniques such as next event estimation, bidirectional path tracing, multiple importance sampling, accumulation over frames with temporal reprojection, screen space blue noise, or convolutional neural network based denoising.
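To give an idea of how simple some of these techniques can be, here is a sketch of Russian roulette, which probabilistically terminates light paths without biasing the result. This is a toy scalar version of my own (real implementations typically work on a colour throughput and take, say, its maximum component):

```python
import random

def russian_roulette(throughput, rng=random):
    """Decide whether to terminate a path, and reweight it if it survives.

    `throughput` is the energy the path still carries (a scalar here for
    simplicity).  Tying the survival probability to the throughput kills
    dim paths more often, and dividing the survivors' throughput by that
    probability keeps the Monte Carlo estimator unbiased."""
    p_survive = min(1.0, throughput)
    if rng.random() >= p_survive:
        return False, 0.0                  # path terminated
    return True, throughput / p_survive    # survivor, reweighted
```

On average the returned throughput equals the input one, which is exactly why the trick does not bias the image; it only trades a shorter average path length for some extra variance.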

Some of those shaders are meant to be artworks, but even the technical experimentation ones look nice, because the global illumination inherent to path tracing tends to generate images that are pretty.

Color mixing in most digital painting tools is infamously unsatisfying, often limited to a linear interpolation in RGB space, resulting in unpleasing gradients very different from what one would expect. Ten years ago I mentioned this article that presented the color mixing of the application Paper, which tried to solve this very problem.

This time, the core idea is to model colors as pigments: estimate the pigment concentration based on the color, so in a way, move from RGB space to “pigment space”, and interpolate the pigment concentration, before converting back to RGB space.

The paper uses the Kubelka-Munk model for estimating colors from pigment concentration. The problem however is to find a transformation between the two spaces. A first assumption is made on the available pigments: essentially restricting them to CMYK. Then two problems are addressed: RGB colors that cannot be represented with those pigments, and likewise pigment colors that cannot be represented in RGB.

The paper proposes a remapping that enables a transform and its inverse, thus allowing to move from RGB space to pigment space, interpolate in pigment space, and move back to RGB space.

You could argue this is therefore a physically based diffuse color mixing.
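To make the idea concrete, here is a toy of mine that only illustrates the structure of the approach: each RGB channel is mapped to an absorbance-like quantity (Beer-Lambert style), interpolated there, and mapped back. The paper’s actual transform uses the Kubelka-Munk model and a CMYK-like pigment basis, which is considerably more involved.

```python
import math

def mix_rgb(a, b, t):
    """Naive mix: linear interpolation directly in RGB space."""
    return tuple((1 - t) * x + t * y for x, y in zip(a, b))

def mix_pigment_like(a, b, t, eps=1e-4):
    """Toy 'pigment space' mix: map each channel to an absorbance-like
    quantity (Beer-Lambert: A = -log(reflectance)), interpolate there,
    and map back.  A sketch of the idea only, NOT the paper's model."""
    absorb = lambda c: [-math.log(max(x, eps)) for x in c]
    a_abs, b_abs = absorb(a), absorb(b)
    return tuple(math.exp(-((1 - t) * x + t * y))
                 for x, y in zip(a_abs, b_abs))
```

The nonlinear midpoint is never brighter than the RGB one (per channel it is a geometric rather than arithmetic mean), which is closer to how actual paints darken when mixed.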

Finally, the implementation of the proposed model, Mixbox, is available under a CC BY-NC license:

https://github.com/scrtwpns/mixbox

Two Minute Papers did a video on this paper as well:

But more importantly, he took the time to describe the techniques used to get such results. The writing is fairly high level and assumes the reader is familiar with several advanced topics, but at least for some parts it comes with clear illustrations. It also mentions the various ways in which ReSTIR is leveraged to support the techniques used. Finally, it doesn’t try to hide the parts where the techniques fall short, quite the opposite.

The article: Global Illumination overview.

In brief, the rendering combines a geometry pass, from which a ReSTIR pass computes the first bounce rays, with a sparse voxel grid based irradiance cache (which also relies on ReSTIR) for the rest of the light paths, plus a few clever tricks to handle various corner cases, as well as denoising and temporal anti-aliasing to smooth things out.

In a Twitter thread where he discussed some details, one keyword in particular caught my attention: **ReSTIR**.

ReSTIR stands for “Reservoir-based Spatio-Temporal Importance Resampling”; it is a sampling technique first published at SIGGRAPH 2020 and refined since.

Spatiotemporal reservoir resampling for real-time ray tracing with dynamic direct lighting

The publication page includes the recording of the SIGGRAPH presentation, with a well articulated explanation of the technique by main author Benedikt Bitterli.

(same publication hosted on the NVidia website).

- How to add thousands of lights to your renderer and not die in the process

This is a high-level explanation of the technique, giving the broad strokes with a few diagrams and without touching the mathematical aspects.

- Spatiotemporal Reservoir Resampling (ReSTIR) – Theory and Basic Implementation

This reads like a simplified version of the paper: the equations and the various algorithms are presented, the reasoning is explained, but there is no mathematical derivation. Finally, an example implementation is presented.

- Reframing light transport for real-time (video, slides)

This keynote given at HPG 2020 by Chris Wyman, who is a co-author of ReSTIR, gives another perspective on the technique, through the prism of using statistics to evaluate an unknown distribution.
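At the heart of all these resources is weighted reservoir sampling: picking one candidate out of a stream with probability proportional to its weight, in constant memory. Here is a minimal sketch of mine of that building block; the actual technique additionally tracks the candidate count M, derives an unbiased contribution weight from the accumulated sum, and merges reservoirs across pixels and frames, all of which is omitted here.

```python
import random

class Reservoir:
    """Streaming weighted reservoir sampling, the statistical building
    block behind ReSTIR: keep exactly one sample out of a stream of
    candidates, each retained with probability proportional to its
    weight, in O(1) memory."""

    def __init__(self, rng=None):
        self.rng = rng or random.Random()
        self.sample = None
        self.w_sum = 0.0

    def update(self, candidate, weight):
        """Offer one weighted candidate to the reservoir."""
        self.w_sum += weight
        if self.w_sum > 0.0 and self.rng.random() < weight / self.w_sum:
            self.sample = candidate
```

Merging two reservoirs uses the same `update` logic with the other reservoir’s accumulated weight, which is what makes the spatial and temporal reuse cheap.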

After the initial publication, NVidia published a refined version producing images with less noise at a lower cost, which they call “RTXDI” (for RTX Direct Illumination).

- RTXDI: Details on Achieving Real-Time Performance
- Rearchitecting Spatiotemporal Resampling for Production (video, slides)

Both presentations explain the same thing, but with small differences that sometimes are clearer in one or the other. They explain again the foundations of the technique, then detail where the improvements lie (using fewer, more relevant samples, avoiding wasted work, and using a more cache-friendly approach).

- ReSTIR GI: Path Resampling for Real-Time Path Tracing

While both the original technique and RTXDI are limited to direct illumination, this publication applies ReSTIR to global illumination.

When discussing on Twitter some of the limitations of ReSTIR, Chris Wyman made the following remarks:

To be clear, right now, ReSTIR is a box of razor blades without handles (or a box of unlabeled knobs). It’s extremely powerful, but you have to know what you’re doing. It is not intuitive, if your existing perspective is traditional Monte Carlo (or real-time) sampling techniques.

People sometimes think SIGGRAPH paper = solved. Nope. We’ve learned a lot since the first paper, and our direct lighting is a lot more stable with that knowledge. We’re still learning how to do it well on full-length paths.

And there’s a bunch of edge cases, even in direct lighting, that we know how to solve but haven’t had time to write them up, polish, and demo.

We haven’t actually tried to solve the extra noise at disocclusions in (what I think of as) a very principled way. Right now a world-space structure is probably the best way. I’m pretty sure it can be done without a (formal) world-space structure, just “more ReSTIR.”

The price of their products is apparently in the range of several tens of thousands of dollars (I’ve heard prices like $20k to $50k), which puts it out of reach for most individuals. Not many details about their invention are available either (from the promotional material: LED powered, several hundred watts of electrical power, a solid diffuse material, and a thickness around 1 meter), and I was left wondering what the secret sauce behind their intriguing technology was.

The YouTube channel DIY Perks has been working on daylight projects for a while now, improving at each iteration. Yesterday they published a video explaining how to build a light that seems to give very similar results to CoeLux’s product, from basic materials that are fairly simple to find. Since their solution occupies roughly the same volume, it’s tempting to think it uses the same technique.

It’s extremely satisfying to finally see how this works and, despite the practical aspects, quite tempting to try if only to see how it looks in real life.

Here is probably the most basic intersection: a ray and a plane. Solving it is straightforward, as I hope can be seen below. Like last time, I am using vector notation.

- We define a ray with its origin $O$ and its direction as a unit vector $\hat{D}$.

Any point $X$ on the ray at a signed distance $t$ from the origin of the ray verifies: $\vec{X} = \vec{O} + t\vec{D}$.

When $t$ is positive $X$ is in the direction of the ray, and when $t$ is negative $X$ is in the opposite direction.

- We define a plane with a point $S$ on that plane and the normal unit vector $\hat{N}$, perpendicular to the plane.

The distance between any point $X$ and the plane is $d = \lvert (\vec{X} - \vec{S}) \cdot \vec{N} \rvert$. If this equality is not obvious to you, you can think of it as the distance between $X$ and $S$ along the $\vec{N}$ direction. When $d=0$, it means $X$ is on the plane itself.

- We define $P$ as the intersection of the ray and the plane, which is the point we are interested in finding.

Since $P$ is both on the ray and on the plane, we can write: $$ \left\{ \begin{array}{l} \vec{P}=\vec{O} + t\vec{D} \\ \lvert (\vec{P} - \vec{S}) \cdot \vec{N} \rvert = 0 \end{array} \right. $$ Because the distance $d$ from the plane is $0$, the absolute value is irrelevant here. We can just write: $$ \left\{ \begin{array}{l} \vec{P}=\vec{O} + t\vec{D} \\ (\vec{P} - \vec{S}) \cdot \vec{N} = 0 \end{array} \right. $$ All we have to do is replace $P$ with $\vec{O} + t\vec{D}$ in the second equation, and reorder the terms to get $t$ on one side.

$$ (\vec{O} + t\vec{D} - \vec{S}) \cdot \vec{N} = 0 $$ $$ \vec{O} \cdot \vec{N} + t\vec{D} \cdot \vec{N} - \vec{S} \cdot \vec{N} = 0 $$ $$ t\vec{D} \cdot \vec{N} = \vec{S} \cdot \vec{N} - \vec{O} \cdot \vec{N} $$ $$ t = \frac{(\vec{S} - \vec{O}) \cdot \vec{N}}{ \vec{D} \cdot \vec{N} } $$

A question to ask ourselves is: what about the division by $0$? Looking at the diagram, we can see that $\vec{D} \cdot \vec{N} = 0$ means the ray is parallel to the plane, and there is no solution unless $O$ is already on the plane. Otherwise, the ray intersects the plane for the value of $t$ written above. That’s it, we’re done.

**Note:** There are several, equivalent, ways of representing a plane. If your plane is not defined by a point $S$ and a normal vector $\hat{N}$, but rather with a distance to the origin $s$ and a normal vector $\hat{N}$, you can notice that $s = \vec{S} \cdot \vec{N}$ and simplify the result above, which becomes: $$ t = \frac{s - \vec{O} \cdot \vec{N}}{ \vec{D} \cdot \vec{N} } $$

For the sake of simplicity, in the above we defined the distance to the plane as an absolute value. It is possible however to define it as a signed value: $d = (\vec{X} - \vec{S}) \cdot \vec{N}$. In this case $d>0$ means $X$ is somewhere on the side of the plane pointed by $\vec{N}$, while $d<0$ means $X$ is on the opposite side of the plane.

Distances that can be negative are called signed distances, and they are a foundation of Signed Distance Fields (SDF).
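The result above translates almost directly into code. Here is a minimal sketch of mine, using plain tuples rather than a proper vector type:

```python
def intersect_ray_plane(O, D, S, N, eps=1e-8):
    """Intersection of the ray X = O + t*D with the plane through S of
    normal N, following the derivation above:
    t = ((S - O) . N) / (D . N).
    Returns t (possibly negative: intersection behind the ray origin),
    or None when the ray is parallel to the plane.  D and N are assumed
    to be unit vectors."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    denom = dot(D, N)
    if abs(denom) < eps:
        return None  # parallel: no hit, unless O is already on the plane
    return dot(tuple(s - o for s, o in zip(S, O)), N) / denom
```

For example, a ray starting at $(0,0,5)$ pointing down the $z$ axis hits the plane $z=0$ at $t=5$.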

In their comments, they remember how and when they’ve discovered live coding and got involved, explain how they prepare for a competition, talk about their state of mind during a match, share their esteem for fellow live coders, and reflect on this new kind of e-sport.

You can read them here: **A new e-sport: live coding competitions**.

Last weekend the Easter demoparty Revision took place, as an online event due to the current pandemic situation. There, I presented a talk on Physically Based Shading, in which I went into electromagnetism and existing models, and gave a brief overview of a prototype I am working on.

The presentation goes into a lot of detail about the interaction of light with matter from a physics point of view, then builds its way up to the Cook-Torrance specular BRDF model. The diffuse BRDF and Image Based Lighting were skipped due to time constraints. I am considering doing a Part 2 to address those topics, but I haven’t decided anything yet.

In the meantime, please leave a comment or contact me if you notice any mistake or inaccuracy.

How do you implement Physically Based Shading for your demos, yet keep the possibility to try something completely different without having to rewrite everything?

In this talk we will first get an intuitive understanding of what makes matter look the way it looks, with as much detail as we can given the time we have. We will then see how this is modeled by a BRDF (Bidirectional Reflectance Distribution Function) and review some of the available models.

We will also see what makes it challenging for design and for real-time implementation. Finally we will discuss a possible implementation that allows experimenting with different models, works in a variety of cases, and remains compatible with size coding constraints.

Here are the slides, together with the text of the talk and the link to the references: **Implementing a Physically Based Shading model without locking yourself in**.

And finally here is the recording of the talk, including a quick demonstration of the prototype:

Here is the shader used during the presentation to illustrate light interaction at the interface between two media:

Thanks again to Alan Wolfe for reviewing the text, Alkama for the motivation and questions upfront and help in the video department, Scoup and the Revision crew for organizing the seminars, Ronny and Siana for the help in the sound department, and everyone who provided feedback on my previous article on Physically Based Shading.

Following the publication of this article, Nathan Reed gave several comments on Twitter:

FWIW – I think the model of refraction by the electromagnetic field causing electrons to oscillate is the better one. This explains not only refraction but reflection as well, and even total internal reflection. Feynman does out the wave calculations: https://feynmanlectures.caltech.edu/II_33.html

It also explains better IMO why a light wave keeps its direction in a material. If an atom absorbs and re-emits the photon there is no reason why it should be going in the same direction as before (conservation of momentum is maintained if the atom recoils). Besides which, the lifetime of an excited atomic state is many orders of magnitude longer than the time needed for a light wave to propagate across the diameter of the atom (even at an IOR-reduced speed).

Moreover, in the comments of the shader above, CG researcher Fabrice Neyret mentioned a presentation of his from 2019, which lists interactions of light with matter: *Colors of the universe*.

Quoting his summarized comment:

In short: the notion of photons (and their speed) in matter is a macroscopic deceiving representation, since it’s about interference between incident and reactive fields (reemitted by the dipoles, at least for dielectrics).

The first four videos show how to use boundary conditions to deduce the relationship between the electromagnetic field on both sides of a surface (or interface between two different media).

- Electromagnetic Boundary Conditions Explained
- Normal Electric Field Boundary Conditions
- Tangential Magnetic Field Boundary Conditions
- Normal Magnetic Field Boundary Conditions

The next four videos use the previous results to obtain the Fresnel equations, for S-polarized and P-polarized cases.

- Wave Impedance Explained
- Fresnel Equations at Normal Incidence
- Fresnel Equations at an Angle
- Fresnel Equations for p-Polarized Waves

The rest of the series then dives into other topics like thin film interference.

The series assumes the viewer is already familiar with the Maxwell equations, so it can be helpful to first watch the explanation of Maxwell’s equations by Grant Sanderson of 3Blue1Brown.
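As a small companion to the series: the normal incidence case has a particularly compact closed form, and real-time rendering usually replaces the full Fresnel equations with Schlick’s approximation built on top of it. A quick sketch of both:

```python
def fresnel_normal_incidence(n1, n2):
    """Unpolarized Fresnel reflectance at normal incidence, where the
    S-polarized and P-polarized equations coincide:
    F0 = ((n1 - n2) / (n1 + n2))^2.
    Valid for non-absorbing media (real indices of refraction)."""
    return ((n1 - n2) / (n1 + n2)) ** 2

def schlick(f0, cos_theta):
    """Schlick's approximation of the Fresnel reflectance away from
    normal incidence: F(theta) ~= F0 + (1 - F0) * (1 - cos(theta))^5."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5
```

For an air-glass interface ($n = 1.5$) this gives the familiar $F_0 = 0.04$, rising to $1$ at grazing incidence.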

I am not sure why adoption in the film industry also happened at the same time (instead of much earlier), despite its constraints being different from real-time. Rendering in films made before 2010 was mostly ad hoc, until a wave converted nearly the entire industry to unbiased path tracing.

I gathered a first PBR reading list back in 2011, but since then, the community has collectively made strides of progress. I also have a better understanding of the topic myself. So I think it is time to revisit it with a new, updated (and unfortunately, longer) reading list.

However, covering the entire PBR pipeline would be way too vast, so I am going to focus on physically based shading instead, and ignore topics like physical lighting units, physically based camera or photogrammetry, even though some of the links cover those topics.

**Note:** If you see mistakes, inaccuracies or missing important pieces, please let me know. I expect to update this article accordingly during the next few weeks.

- Physically Based Shading in Theory and Practice (formerly “Practical Physically Based Shading in Film and Game Production”)

2010, (no 2011?), 2012, 2013, 2014, 2015, 2016, 2017.

This recurring *SIGGRAPH* course by the leading actors of the field is a fantastic resource and a must-see for anyone interested in the topic. Naty Hoffman then Stephen Hill have been hosting the course material on their websites for several years. Some of the presentations are also available on Youtube.

- Physically Based Rendering: From Theory To Implementation, Third edition, 2016, Matt Pharr, Wenzel Jakob, and Greg Humphreys

As of 2018, the content of this reference book is entirely available online.

- Implementation Notes: Runtime Environment Map Filtering for Image Based Lighting, 2015, Padraic Hennessy.

Details how to implement the environment map filtering described in Karis and Lagarde publications (see below), then how to optimize it by reducing the number of samples thanks to importance sampling and rejecting samples that don’t contribute.

- Image Based Lighting, 2015, Chetan Jaggi.

Focused on specular reflections, the article presents the implementation of image based lighting (IBL) using the split sum approximation from *Unreal Engine 4* (described below), and how to improve quality for several cases.

- Physically Based Rendering Algorithms: A Comprehensive Study In Unity3D, 2017?, Jordan Stevens.

This tutorial explains what the different parts of the Bidirectional Reflection Distribution Function (BRDF) mean, lists many available bricks, and shows them in isolation. It is directed at Unity, but translates easily to other environments.

- LearnOpenGL’s PBR series (theory, lighting, diffuse irradiance, specular IBL), 2017, Joey de Vries.

An excellent introduction that explains the basics and walks the reader through the implementation of a shader based on the same model as *Unreal Engine 4* (detailed below). There seems to be a confusion between albedo and base colour, but it’s otherwise clear and well structured.

- The MERL BRDF database, 2003, Wojciech Matusik et al.

Measured BRDF from 100 materials. This is a major reference.

- The Refractive index database, 2008 until now, Mikhail Polyanskiy.

A trove of materials, with their index of refraction (in complex form) and reflectance per wavelength, polarized or not, with a reference to the original source material.

- Light probe images, 1998, Paul Debevec.

Paul Debevec’s captured high dynamic range (HDR) environment maps are famous images useful to test IBL. - There are now many more light probes available, like the High-Resolution Lightprobe Image Gallery released by the University of Southern California or the sIBL Archive, even including public domain ones like the 360 HDR photos captured by Greg Zaal. We should be mindful of whether (understand: I haven’t checked myself) the recorded light intensities are reliable.

The following publications all describe the work done by teams who had to do an inventory of the existing options, and choose a model for their particular needs.

- Physically-Based Shading at Disney (slides), 2012, Brent Burley et al.

What came to be known as the *“Disney BRDF”* was a milestone in PBR literature, and a reference many other works are built upon. It compares different existing models to the MERL database (see previous section), notes their strengths and weaknesses, discusses at length the observed behaviour, especially the diffuse response at grazing angles, and proceeds to define their own empirical shading model to mimic that behaviour. The Disney BRDF is designed to be robust and expressive but also simple and intuitive for artists.

In the annex, a brief overview of the history of BRDF is given.

The publication proposes a tool, the*BRDF Explorer*, to visualize and compare analytic BRDF models or measured ones.

(In 2015, a follow up publication extended their model to a full BSDF in order to support refraction and scattering, but this falls out of the scope of this already long list.)

- Real Shading in Unreal Engine 4 (slides), 2013, Brian Karis.

Strongly inspired by the Disney BRDF, it presents a similar shading model. It prefers a simple Lambert diffuse BRDF due to both the cost and the integration with spherical harmonics, and uses other approximations for realtime. The course notes mention that a lot of work was done to compare the various available bricks, but don’t list them; Karis does so in a separate publication, listed next.

For image based lighting, the famous “split sum” approximation of the integral is introduced, allowing part of the integral to be pre-convolved, and the rest to be precomputed in a 2D look-up table (LUT).

When explaining how the workflow adapted to this new model, the course notes stress the importance of having linear parameters for material interpolation.

Warning: I was told a year ago by Yusuke Tokuyoshi that there is an error in one derivation, but my understanding of it is not sufficient to spot it. Apparently the error is only in the publication though, and was fixed in the actual code.

- Specular BRDF Reference, 2013, Brian Karis.

Lists various available bricks for the Cook-Torrance BRDF, using the same naming convention.

- Moving Frostbite to Physically Based Rendering 3.0, 2014, Sébastien Lagarde.

The biggest publication in this list, with over 120 pages of course notes. I haven’t finished reading it yet, but this is an outstanding piece of work that goes deep into the details of many of the aspects involved.

- Physically Based Rendering in Filament, 2018, Romain Guy et al.

This documentation presents the shading model used in *Filament*, the choices that were made (which are similar to Frostbite in many ways), and the alternatives that were available. The quality of this document is outstanding, and it seems to be becoming a reference for PBR implementations.

- MaterialX Physically-Based Shading Nodes, 2019, Niklas Harrysson, Doug Smythe and Jonathan Stone.

This specification is meant as a transfer format for the VFX industry. It describes a wide range of materials, not limited to BRDFs but also including emissive and volumetric materials, and allows choosing between a variety of such functions.

Reading this document can help solidify or confirm the understanding of how all these different functions contribute to the rendering ecosystem. However I would only recommend it to readers who already have a fairly good understanding of the PBR models.

The Disney BRDF is so popular that many implementations can be found in the wild. Here are a few of them.

- Implementations on ShaderToy:
- Real-time implementation with IBL, by Maxwell Planck.
- Real-time implementation, by Romain Guy.
- Path tracer implementation, by Markus Moenig.
- Attempt at a single pass implementation, by Markus Moenig.
- Anisotropic implementation for a single light source, by an anonymous user.

- Straightforward implementation in Unity, by Przemyslaw Zaworski.
- Web based BRDF explorers:
- WebGL BRDF Explorer, by Benoît Mayaux.
- Another BRDF Explorer, by Nick Brancaccio.

It seems that in the literature, diffuse BRDFs are covered a lot less than specular ones. I suppose this is because the problem is harder to solve, while the low frequency nature of the diffuse component makes its quality less noticeable. Therefore, many realtime implementations consider the Lambert model sufficient. The following publications, however, explore the topic.

- Physically-Based Shading at Disney (slides), 2012, Brent Burley et al.

One of the contributions of the *“Disney BRDF”* is its diffuse model. It compares several existing diffuse models with the measured data of the MERL database (see earlier section) but, unsatisfied with their response, proposes its own, empirical one. One of the features of that model is the retroreflection at grazing angles.

I have read multiple times that this model is not energy conserving. Yet Disney uses it for offline rendering, which I assume is path tracing (?), so I am not sure what the impact of that decision is.

- Moving Frostbite to Physically Based Rendering 3.0, 2014, Sébastien Lagarde.

The diffuse BRDF described is a normalized version of the Disney BRDF to make it energy conserving.

- Designing Reflectance Models for New Consoles (slides), 2014, Yoshiharu Gotanda.

Gotanda explains here several weaknesses of the Oren-Nayar model for PBR (its geometry term is different than the one used for the specular term, and it’s not energy conserving), and proceeds to propose a modified version. Since there is no analytic solution, he suggests a fitted approximation.

He also recalls his own improvement over Schlick’s Fresnel approximation, but concludes that both models fail for complex indices of refraction.

- PBR Diffuse Lighting for GGX+Smith Microsurfaces, 2017, Earl Hammon, Jr.

This presentation tries to combine the Oren-Nayar diffuse model (originally based on a Gaussian normal distribution) with the GGX normal distribution. It studies the Smith geometry function (G), proposes a BRDF to use for testing with path tracing, and concludes with an approximation for a diffuse GGX.

On a side note, a few final slides give some identities that are useful for shader optimization.
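Going back to the Disney model mentioned above, its diffuse term is compact enough to sketch in code. This is my scalar transcription (without the baseColor/π factor; the parameter conventions may differ slightly from the course notes):

```python
def disney_diffuse(n_dot_l, n_dot_v, l_dot_h, roughness):
    """Burley/Disney diffuse term (without the baseColor/pi factor):
    a Fresnel-like weight is applied to both the light and view
    directions, with a grazing response FD90 driven by roughness.
    With roughness above 0.5 the value exceeds 1 at grazing angles,
    producing the retroreflection mentioned above; this is also why
    the form is not energy conserving as-is."""
    fd90 = 0.5 + 2.0 * roughness * l_dot_h ** 2
    def fresnel_weight(cos_a):
        return 1.0 + (fd90 - 1.0) * (1.0 - cos_a) ** 5
    return fresnel_weight(n_dot_l) * fresnel_weight(n_dot_v)
```

At normal incidence both weights collapse to 1 and the term reduces to plain Lambert.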

One of the pillars of PBR is to respect the energy conservation law: when designing a BRDF, there shouldn’t be more energy going out than coming in. This is especially important for path tracing to converge. The following links explain how to take that constraint into account.

- Energy Conservation In Games, 2009, Rory Driscoll.

Explains briefly the problem of energy conservation, and details how to obtain the normalization factor for diffuse Lambert. It’s a good example of how to get started.

The comments discuss the case of the Phong and Blinn-Phong specular lobe.

- Phong Normalization Factor derivation, 2009, Fabian Giesen.

Demonstrates the derivation to obtain the normalization factor for the Phong and Blinn-Phong specular lobe.

- The Blinn-Phong Normalization Zoo, 2011, Christian Schüler.

Lists various normalization factors that exist for variants of Phong and Blinn-Phong.

Also proposes a crude approximation for Cook-Torrance.

- How Is The NDF Really Defined?, 2013, Nathan Reed.

Explains conceptually what the Normal Distribution Function (NDF) is, and how this affects the region to integrate over for normalization.

- Adopting a physically based shading model, 2011, Sébastien Lagarde et al.

Starts by recalling a few normalization factors (Lambert, Phong and Blinn-Phong). Includes a quick paragraph on the factor to use to combine diffuse with specular.

- How to properly combine the diffuse and specular terms?, 2016, CG Stack Exchange.

A question I candidly asked on how to combine diffuse and specular so the energy lost to specular is taken into account in the diffuse term.

- Designing Reflectance Models for New Consoles (slides), 2014, Yoshiharu Gotanda.

The third section explains that a Fresnel term should be taken into account for the diffuse part, but also why this is problematic. This Fresnel term should take into account all microfacets, not only the perfect reflection ones that contribute to the specular component.

This is an answer to my Stack Exchange question above.

- PBR Diffuse Lighting for GGX+Smith Microsurfaces, 2017, Earl Hammon, Jr.

Among the other topics it covers, this presentation shows the derivation to normalize a BRDF.

- Physically Based Shading at DreamWorks Animation, 2017, Feng Xie and Jon Lanz.

In the appendix of these course notes, the derivation to normalize their fabric BRDF is shown.
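The Lambert case discussed in several of those links can be verified numerically: integrating $f_r \cos\theta$ over the hemisphere with $f_r = \rho/\pi$ must give back exactly the albedo $\rho$. A quick Monte Carlo check of mine, using uniform hemisphere sampling (pdf $= 1/2\pi$):

```python
import math
import random

def check_lambert_normalization(albedo, n=200000, seed=0):
    """Monte Carlo check that the Lambert BRDF f_r = albedo / pi
    conserves energy: the integral of f_r * cos(theta) over the
    hemisphere must come back to exactly `albedo`.  Directions are
    sampled uniformly over the hemisphere (pdf = 1 / (2*pi)); for
    uniform solid angle, cos(theta) is itself uniform in [0, 1]."""
    rng = random.Random(seed)
    inv_pdf = 2.0 * math.pi
    total = 0.0
    for _ in range(n):
        cos_theta = rng.random()
        total += (albedo / math.pi) * cos_theta * inv_pdf
    return total / n
```

Any value above `albedo` would mean the BRDF creates energy; a furnace test is essentially the image-space version of this same check.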

It is a common trick in video games to represent certain diffuse materials that exhibit a lot of scattering with a custom diffuse term that “wraps” around and brings light into the shadowed part. When PBR became popular, several people looked into how to make their wrapped diffuse PBR compliant.

- Energy-Conserving Wrapped Diffuse (archived version), 2011, Steve McAuley.

Like the title says, it presents an energy-conserving wrapped diffuse.

- Extension to Energy-Conserving Wrapped Diffuse (archived version), 2013, Steve McAuley.

Proposes a new, more generic, energy-conserving model for wrapped diffuse.

- Righting Wrap, part 1 and part 2, 2011, Stephen Hill.

The sibling of Steve McAuley’s article, it shows how to use wrapped diffuse with spherical harmonics, and how to optimize a naive 16-instruction implementation down to a tight 2-instruction one.
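To make the idea concrete, here is one common energy-conserving form of the trick, a sketch in the spirit of McAuley's article rather than necessarily his exact formulation. The normalization can be checked by computing $2\pi\int_{-1}^{1} f(c)\,dc$, which gives $\pi$ (the same as plain Lambert) for any wrap factor $w$:

```python
def wrapped_diffuse(cos_theta, w):
    """Wrapped diffuse lobe with a wrap factor w in [0, 1]: light
    reaches past the terminator, down to cos(theta) = -w.  Compared to
    the naive wrap max(0, (cos_theta + w) / (1 + w)), the extra
    1 / (1 + w) factor renormalizes the lobe so the energy integrated
    over the sphere equals that of plain Lambert, making the wrap
    energy conserving."""
    return max(0.0, cos_theta + w) / ((1.0 + w) ** 2)
```

With `w = 0` this degenerates to the standard clamped cosine.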

A recently tackled problem is the energy loss due to ignoring multiple scattering. In many BRDF models, rays occluded by the microgeometry are simply discarded. This tends to cause a noticeable darkening as roughness increases, visible in many of the charts showing material appearance at various roughnesses. However the trend is changing, and this is why we see more and more references to the “furnace test”, which is a way to highlight energy loss.

- Multiple-Scattering Microfacet BSDFs with the Smith Model, 2016, Eric Heitz et al.

I haven’t read that paper except for the abstract, but the reception it received indicates that it’s an important publication. Recently, Morgan McGuire even said about it: *“It is such a beautifully complete piece of work, a short, careful, and clear book on microfacets of the form that typically only arrives out of a complete Ph.D. thesis.”* If I understand correctly, they extended the Smith model to take multiple scattering into account, and compared their results with a simulation obtained by raytracing a surface at the micro-facet level.

Károly Zsolnai-Fehér of *Two Minute Papers* did a video abstract of their paper.

- A Multi-Faceted Exploration, part 1, part 2, part 3, part 4, 2018-2019, Stephen Hill.

This series of articles explores the feasibility of using in real-time rendering a model used by Sony Pictures Imageworks for offline rendering. The first part explains and illustrates what the problem is. The second part presents the solution from Heitz, and uses it as a ground truth reference, before presenting the Sony Pictures Imageworks solution and comparing the two. It then proposes an improvement of the latter. The third part gives a brief and clear reminder of the idea behind the split integral technique from UE4 and others, and uses it to propose a further improvement by precomputing a 2D LUT (instead of a 3D one). The fourth part details the precomputation step and shows the results in a WebGL demo.

The series is not concluded yet, so I imagine one or more articles are coming.

- Advances in Rendering, Graphics Research and Video Game Production (PDF version, video), 2019, Steve McAuley.

This presentation shows the steps that were involved in implementing multiscattering BRDFs and area lights for diffuse and specular in Far Cry, which uses a complex rendering engine that has to support a variety of combinations of cases. It’s a reminder that such a task can become more involved than expected. It’s also a case for academic papers highlighting their main insight and making code available.

I haven’t explored this topic yet, but I bookmarked some publications that spent time on the topic.

- Lighting of KillZone Shadowfall, 2013, Michal Drobot.

A part of the presentation is dedicated to area lights. It observes that point lights are inadequate for artists, who tend to tweak roughness to compensate. It then briefly explains the technique, which consists of analytically integrating over the area light. Unfortunately the full derivation is not shown.

- Real Shading in Unreal Engine 4 (slides), 2013, Brian Karis.

A part covers area lights. Like Drobot, Karis observes a tendency of artists to use roughness to compensate for the small reflection highlight of point lights. The course notes list their requirements, some solutions that were considered (including Drobot’s) and why they were rejected. They then present a method based on a “representative point”, and how it applies to spherical lights and tube lights.

- Real-Time Polygonal-Light Shading with Linearly Transformed Cosines, 2016, Eric Heitz et al.

The current state of the art. This technique approximates physically based lighting from polygonal lights by transforming a cosine distribution (which is simpler to integrate) so it matches the BRDF properties. A demo with the code, as well as a WebGL demo showing the result, are provided.

Many thanks to Calvin Simpson, Dimitri Diakopoulos, Jeremy Cowles, Jonathan Stone, Julian Fong, Sébastien Lagarde, Stefan Werner, Yining Karl Li and the computer graphics community at large for your contributions and suggestions of material to read.
