PVRTC2 compression quality

Christophe Riccio posted on his Twitter feed some pictures comparing the quality of different texture compression formats, including PowerVR’s native compression format, PVRTC2. In light of his tests, it seems to me the new compression is a lot better than its predecessor (unfortunately the two are not compared directly).

Last year at work, while trying to reduce loading time, memory consumption and application size, we gave PVRTC a try, and in our use case it was a clear no-go. The quality suffered so much that the texture size we’d have needed to keep the artists happy was well beyond the weight of a PNG of equivalent quality. In the end we settled on WebP.

Here it is interesting to see that even at 2 bpp, PVRTC2 seems to retain a lot of detail and texture. Edges tend to be muddy, but this is still very good for the price.
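
To give an idea of what “the price” means in terms of size, here is a quick back-of-the-envelope comparison; the resolution is an arbitrary example, and the only point it illustrates is that PVRTC and PVRTC2 are fixed-rate formats (a PNG of the same image could be larger or smaller depending on its content):

    #include <cstdio>

    int main()
    {
        // Hypothetical 2048x2048 texture; PVRTC and PVRTC2 are fixed-rate,
        // so the compressed size only depends on resolution and bit rate,
        // not on the content.
        const long long pixels = 2048LL * 2048LL;

        const long long rawRGBA8  = pixels * 32 / 8; // 32 bpp uncompressed
        const long long pvrtc4bpp = pixels *  4 / 8; //  4 bpp
        const long long pvrtc2bpp = pixels *  2 / 8; //  2 bpp

        std::printf("raw RGBA8: %lld KB\n", rawRGBA8  / 1024); // 16384 KB
        std::printf("4 bpp:     %lld KB\n", pvrtc4bpp / 1024); //  2048 KB
        std::printf("2 bpp:     %lld KB\n", pvrtc2bpp / 1024); //  1024 KB
        return 0;
    }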

Various links on ray tracing

Here are some links related to ray tracing, and more specifically, path tracing.

Some projects and blogs related to ray tracing:

Some major publications:

  • The rendering equation, SIGGRAPH 1986, James T. Kajiya (the equation itself is written out in modern notation after this list). From the paper:

    We present an integral equation which generalizes a variety of known rendering algorithms.
    […]
    We mention that the idea behind the rendering equation is hardly new.
    […]
    However, the form in which we present this equation is well suited for computer graphics, and we believe that this form has not appeared before.

  • Bi-directional path tracing, Compugraphics 1993, Eric P. Lafortune and Yves D. Willems. From the paper:

    The basic idea is that particles are shot at the same time from a selected light source and from the viewing point, in much the same way. All hit points on respective particle paths are then connected using shadow rays and the appropriate contributions are added to the flux of the pixel in question.

  • Optimally Combining Sampling Techniques for Monte Carlo Rendering, SIGGRAPH 1995, Eric Veach and Leonidas J. Guibas (the balance heuristic it introduces is also written out after this list). From the abstract:

    We present a powerful alternative for constructing robust Monte Carlo estimators, by combining samples from several distributions in a way that is provably good.

  • Metropolis Light Transport, SIGGRAPH 1997, Eric Veach and Leonidas J. Guibas. From the abstract:

    To render an image, we generate a sequence of light transport paths by randomly mutating a single current path (e.g. adding a new vertex to the path).

  • Robust Monte Carlo methods for light transport simulation, 1998, Eric Veach’s PhD thesis (432-page PDF): it presents bidirectional path tracing, and introduces Metropolis Light Transport and Multiple Importance Sampling. From the abstract:

    Our statistical contributions include a new technique called multiple importance sampling, which can greatly increase the robustness of Monte Carlo integration. It uses more than one sampling technique to evaluate an integral, and then combines these samples in a way that is provably close to optimal. This leads to estimators that have low variance for a broad class of integrands. We also describe a new variance reduction technique called efficiency-optimized Russian roulette.

    […]

    The second algorithm we describe is Metropolis light transport, inspired by the Metropolis sampling method from computational physics. Paths are generated by following a random walk through path space, such that the probability density of visiting each path is proportional to the contribution it makes to the ideal image.
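
For reference, the equation Kajiya’s paper is about is most often written today in its hemisphere form, shown below in modern notation (this is the textbook formulation, not a quote from the paper):

\[
L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
\]

In words: the radiance leaving a point x in direction ω_o is the radiance emitted there, plus the incoming radiance from every direction ω_i of the hemisphere, weighted by the BRDF f_r and the cosine term.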
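
The multiple importance sampling mentioned in the last two entries can also be summarized in one formula: with the balance heuristic, a sample x drawn from sampling technique i is weighted by

\[
w_i(x) = \frac{n_i\, p_i(x)}{\sum_k n_k\, p_k(x)}
\]

where n_i is the number of samples taken with technique i and p_i its probability density; averaging the weighted contributions f(x) w_i(x) / p_i(x) over the samples of each technique gives the combined estimator that Veach and Guibas prove to be close to optimal. (Again written here from memory in standard notation, not quoted from the papers.)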

Other:

On a slightly different topic, fxguide had a great series of articles on the state of rendering in the film industry, which I previously mentioned.

Reading list on Z-buffer precision

Nathan Reed recently published a blog article plotting how Z-buffer precision behaves under various configurations. Along the way he references a couple of earlier articles, which in turn reference other resources; I think it’s a good opportunity to list some of them. They each tell a part of the story, and I recommend reading all of them to get the complete picture.
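
To see where the problem comes from, assume a D3D-style convention where the stored depth goes from 0 at the near plane n to 1 at the far plane f; the standard perspective projection then maps a view-space depth z to (a minimal illustration, not taken from any of these articles):

\[
d(z) = \frac{f\,(z - n)}{z\,(f - n)} = \frac{f}{f - n}\left(1 - \frac{n}{z}\right)
\]

The stored value is a function of 1/z, so with a fixed-point depth buffer most of the representable values are spent very close to the near plane and precision degrades quickly with distance; how floating-point storage and a reversed depth range change that distribution is precisely what these articles examine.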

Unreal Engine experimental scene videos

Since the beginning of 2014, there have been a lot of videos demonstrating the realism that can now be achieved with Unreal Engine 4.

Often, these videos showcase a static scene or even concentrate on a single detail: the lighting in an architectural structure, the look of rain hitting the ground, or some wet pebbles on a beach.

Physically based rendering, global illumination and screen-space reflections seem to manage to trick the brain and blur the line between what is real and what isn’t. Even when some artifacts become noticeable, like reflections popping in and out or changing with the camera orientation, we are quick to forget them and find the image very believable.

Here are some of these videos, by Alexander Dracott, Koola, and Benoît Dereau.

Unreal 4 Lighting Study: Forest Day from Alexander Dracott on Vimeo.

Reverse engineering the rendering of a frame in Deus Ex: Human Revolution

Earlier this year, Adrian Courrèges wrote an article detailing his findings while reverse engineering the rendering pipeline in Deus Ex: Human Revolution.

Starting from a given frame, he illustrates the different stages of the rendering: creation of the G-buffer, shadow maps, ambient occlusion, the light pre-pass, how opaque and transparent objects are treated differently, volumetric lights, the bloom effect in LDR, anti-aliasing and color correction, depth of field, and finally the visual feedback for object interaction.
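
As a reminder of what a G-buffer holds, here is a generic deferred-shading layout; this is only an illustrative sketch, not the actual packing used by Deus Ex: Human Revolution, which the article details:

    // Hypothetical per-pixel content of a G-buffer in a deferred renderer.
    // Real engines pack these channels into a few render targets
    // (e.g. RGBA8 or RGBA16F) to save memory and bandwidth.
    struct GBufferTexel
    {
        float albedo[3];    // base color of the surface
        float normal[3];    // surface normal (often stored in two components)
        float depth;        // used to reconstruct the world-space position
        float specular;     // specular intensity / glossiness
    };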

Here are a few screenshots stolen from his article:

Normal map

The light pre-pass

Final image

Update:
Adrian has since posted a new article, this time breaking down the rendering of a frame in Supreme Commander. The comments also include insights from Jon Mavor, the programmer in charge of the rendering at the time.

A real-time post-processing crash course

Revision 2015 took place last month, on the Easter weekend as usual. I was lucky enough to attend and experience the great competitions that took place this year; I can’t recommend enough that you check out all the good stuff that came out of it.

Like the previous times, I shared some insights in a seminar, as an opportunity to practice public speaking. Since our post-processing has improved quite a bit with our last demo (Ctrl-Alt-Test : G – Level One), the topic was the implementation of a few post-processing effects in a real-time renderer: glow, lens flare, light streaks, motion blur…

Having been fairly busy over the last few months though, with work and the organising of Tokyo Demo Fest among other things, I unfortunately couldn’t spend as much time on the presentation as I wanted. An hour before the talk I was still working on the slides, but all in all it went better than expected. I also experimented with doing a live demonstration, which is hopefully more engaging than screenshots or even a video capture can be.

Here is the video recording made by the team at Revision (kudos to you guys for the fantastic work this year). I will provide the slides later on, after I properly finish the credits and references part.

Abstract:
Over the decades, photographers, then filmmakers, have learned to take advantage of optical phenomena, and perfected the recipes of the chemicals used in their films, to affect the visual appeal of their images. Transposed to rendering, those lessons can make your image more pleasant to the eye, change its realism, affect its mood, or make it easier to read. In this course we will present different effects that can be implemented in a real-time rendering pipeline, the theory behind them, their implementation details in practice, and how they could fit in your workflow.
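
To give an idea of the kind of implementation detail covered, here is a minimal sketch of the first step of a typical glow: a bright-pass that keeps only what is brighter than a threshold, before the result gets blurred and added back on top of the image. It is a generic illustration rather than the code shown in the talk; the Rec. 709 luma weights and the threshold are just common choices.

    // Bright-pass used as the first step of a glow effect: keep only the
    // part of the color above a brightness threshold. The output is then
    // blurred (e.g. with a separable Gaussian at several scales) and
    // composited back over the original image.
    struct Color { float r, g, b; };

    Color brightPass(const Color& c, float threshold)
    {
        // Approximate perceived brightness with Rec. 709 luma weights.
        const float luma = 0.2126f * c.r + 0.7152f * c.g + 0.0722f * c.b;
        if (luma <= threshold)
            return Color{ 0.0f, 0.0f, 0.0f };

        // Keep only the contribution above the threshold, scaled so the
        // transition is smooth instead of a hard cut.
        const float scale = (luma - threshold) / luma;
        return Color{ c.r * scale, c.g * scale, c.b * scale };
    }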