Technology showcase by BeautyPi

Back in 2009, Iñigo Quilez left everyone in awe by releasing the milestone 4kB intro Elevated, in cooperation with the group TBC. If you haven’t seen this masterpiece, watch it, and keep in mind it was generated from only 4096 bytes of data (just the text of this article is already more than a third of that).

After that, word was that he had been hired by Pixar, and apart from the occasional work-in-progress screenshot and some live coding experiments, not much was heard from him.

Then a couple of months ago this interview was published, and more recently this flattering CGW article, where we could read that he had been in charge of the vegetation rendering in Pixar’s Brave. Needless to say, many people were looking forward to seeing what he would do next, especially in the real-time domain.

Today the group he’s part of, BeautyPi, which seems to be focusing on interactive animations (they presented their work earlier this year at SIGGRAPH), has published the following video. Being a showcase of their latest experiments, it is not entertaining the way an animation, a clip or a demo is. You could even say it’s boring. But it is visually very impressive, both technically and artistically. Although this is real-time material, the quality is not that far from movie standards. As for the interaction, I suspect they are only scratching the surface and may come up with some very interesting things. What these folks are doing is definitely worth following.

Crysis 3 tech demo

Crytek has published a video showing the rendering technology used in the CryEngine, more specifically in Crysis 3. While I don’t really dig the artistic choices (I find the overall image messy due to the high contrast, and not that appealing aesthetically speaking), the technical side is impressive. I especially like the use of displacement mapping and tessellation for the vegetation (by the way, see how great that leaf looks; they got the material completely right). The reflections visible at 1’52 make me think they also implemented the cone tracing technique, just like Unreal did. On the downside, all the parts with falling water felt unrealistic to me.

Last but not least, Toad Technology! :)

Octree-Based Sparse Voxelization for Real-Time Global Illumination

Last year Cyril Crassin presented a voxel-based approach for interactively computing indirect diffuse and specular lighting, along with a couple of demonstration videos, and he has kept working on the matter since then.

In this talk given in May at the NVIDIA GPU Technology Conference, he briefly explains the technique:

Interestingly enough, as he points out, the technique has already been implemented in Unreal Engine 4.

Watch Dogs

Ubisoft made an impression at E3 by unveiling this video of its upcoming game, Watch Dogs. The mood is definitely reminiscent of the original Deus Ex.

Although I am not a fan of violent games, I like the effort put into making it not only look, but feel real: in particular, the scene of the random guy trying to get his girlfriend to talk to him after getting shot is quite strong and disturbing.

On the rendering side, there is a lot to notice. Many materials completely nail it (look at that leather coat!), and the faces look really good, especially when backlit.

State of the art in real-time realistic skin rendering

Jorge Jimenez posted yesterday the latest results of his research on skin rendering: Separable Subsurface Scattering. He provides a very impressive real-time demo which, as some have pointed out, does run on actual current hardware (it ran, slowly, on my low-end laptop). So even though he provides the following video of it, you should definitely try the actual binary. Oh, and the source code is available too. :-)

Variance Shadow Maps

Shadow mapping is a popular way of getting dynamic shadows, but suffers from aliasing artifacts that cannot be addressed by usual texture filtering. The reason boils down to the fact that the average of depth test results (which is what we want) is not the same as the result of a test on the average of depths (which is what hardware does).
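
To make that concrete: calling t the depth of the point being shaded and d_i the occluder depths stored in the shadow map around the lookup position, and writing [·] for a test that is 1 when true and 0 otherwise, what we want is the left-hand side, while filtering the depths first gives the right-hand side:

    \frac{1}{N}\sum_{i=1}^{N} \left[\, t > d_i \,\right] \;\neq\; \left[\, t > \frac{1}{N}\sum_{i=1}^{N} d_i \,\right]

The left-hand side is a smooth value between 0 and 1; the right-hand side can only ever be 0 or 1, hence the hard, aliased edges.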

The straightforward way to filter anyway is the Percentage Closer Filtering (PCF) technique: perform the depth test for each sample of the kernel, then average the binary results. It usually stands in papers as the expensive upper bound.
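
Here is a minimal sketch of the idea, in plain C on the CPU side for readability (a real implementation lives in a fragment shader; the function and parameter names are mine, not from any actual API):

    /* PCF sketch: average the *results* of the depth test over a
     * (2r+1) x (2r+1) kernel.  shadow_map is a w x h array of
     * occluder depths. */
    float pcf_visibility(const float *shadow_map, int w, int h,
                         int x, int y, int r, float receiver_depth)
    {
        int lit = 0, total = 0;
        for (int j = -r; j <= r; ++j) {
            for (int i = -r; i <= r; ++i) {
                int sx = x + i, sy = y + j;
                if (sx < 0 || sx >= w || sy < 0 || sy >= h)
                    continue;           /* ignore samples outside the map */
                /* one binary depth test per sample... */
                if (receiver_depth <= shadow_map[sy * w + sx])
                    ++lit;
                ++total;
            }
        }
        /* ...and the average of those test results is the visibility */
        return total > 0 ? (float)lit / (float)total : 1.0f;
    }

The cost is the problem: the averaging cannot be moved into a pre-pass, so every shaded fragment pays for the whole kernel.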

Variance Shadow Maps are a simple technique that allows filtering, including a Gaussian blur for example, thus giving soft shadows (the blur does not depend on the distance to the occluder, though). The main drawback of the algorithm is the light bleeding artifact that appears as soon as the depth complexity of the scene gets too high. I also found it fairly expensive in terms of texture memory, since it requires twice as much as regular shadow maps, and twice that again for the blurring.
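
The trick VSM relies on: store two moments per texel, depth and squared depth, filter those freely (that is where the Gaussian blur happens), and at lookup time use Chebyshev’s inequality to turn the filtered moments into an upper bound on visibility. A minimal sketch, again in plain C with names of my own choosing:

    /* VSM sketch: mean and mean_sq are the pre-filtered moments
     * E[d] and E[d^2] fetched from the two-channel shadow map. */
    float vsm_visibility(float mean, float mean_sq, float receiver_depth)
    {
        float variance = mean_sq - mean * mean;
        if (variance < 1e-5f)
            variance = 1e-5f;           /* clamp to dodge precision issues */

        if (receiver_depth <= mean)
            return 1.0f;                /* in front of the average occluder */

        /* Chebyshev: P(d >= t) <= variance / (variance + (t - mean)^2) */
        float delta = receiver_depth - mean;
        return variance / (variance + delta * delta);
    }

Since the two moments filter linearly, all the usual machinery (mipmapping, separable blurs, hardware filtering) applies, which is exactly what plain shadow maps cannot offer.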

One could argue VSM are pretty old stuff by now, but because of the elegance of the trick they rely upon and their ease of implementation, I really like them.