Long hiatus

Last week I was lucky enough to attend SIGGRAPH 2018, in Vancouver. My colleagues and I were presenting at a booth the work we had done: a VR story with a distinctive comic book look. I was also invited to participate in a panel session on the demoscene, where I shared some lessons learned while making the 64k intro H – Immersion. The event brought a certain sense of conclusion to this work, besides filling me with inspiration and motivation to try new things.

It has been a long time since I last posted anything here. For the last two years the majority of my spare time went into making that 64k intro. In fact the last post, “Intersection of a ray and a cone”, was related to it: I was implementing volumetric lighting for the underwater scenes, and wanted to resolve the cones of light with ray tracing before marching inside them. LLB and I have talked about the creation process in two making-of articles: “A dive into the making of Immersion”, and “Texturing in a 64kB intro”.
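As a rough sketch of that idea (hypothetical code written for this post, not the intro’s actual implementation; GLM types are used for brevity): clip the view ray against the cone analytically, then ray march only within the clipped span.

    #include <algorithm>
    #include <cmath>
    #include <glm/glm.hpp>

    struct Cone { glm::vec3 apex; glm::vec3 axis; float cosAngle; };

    // Quadratic intersection of a ray with an infinite double cone; see the
    // "Intersection of a ray and a cone" post for the derivation. Nappe and
    // degenerate-case handling are omitted for brevity; rd is assumed unit length.
    bool intersectCone(const Cone& cone, glm::vec3 ro, glm::vec3 rd,
                       float& t0, float& t1)
    {
        glm::vec3 co = ro - cone.apex;
        float cos2 = cone.cosAngle * cone.cosAngle;
        float dv  = glm::dot(rd, cone.axis);
        float cov = glm::dot(co, cone.axis);
        float a = dv * dv - cos2;
        float b = 2.f * (dv * cov - glm::dot(rd, co) * cos2);
        float c = cov * cov - glm::dot(co, co) * cos2;
        float det = b * b - 4.f * a * c;
        if (det < 0.f)
            return false;
        det = std::sqrt(det);
        t0 = (-b - det) / (2.f * a);
        t1 = (-b + det) / (2.f * a);
        if (t0 > t1)
            std::swap(t0, t1);
        return t1 > 0.f;
    }

    // March between the two intersections; a simple 1/d^2 falloff from a light
    // source at the apex stands in for a real scattering model.
    float volumetricCone(const Cone& cone, glm::vec3 ro, glm::vec3 rd,
                         float density)
    {
        float t0, t1;
        if (!intersectCone(cone, ro, rd, t0, t1))
            return 0.f; // the ray misses the cone entirely
        t0 = std::max(t0, 0.f);

        const int steps = 32;
        float dt = (t1 - t0) / steps;
        float scattered = 0.f;
        for (int i = 0; i < steps; ++i)
        {
            glm::vec3 p = ro + (t0 + (float(i) + 0.5f) * dt) * rd;
            float d2 = glm::dot(p - cone.apex, p - cone.apex);
            scattered += density * dt / std::max(d2, 1e-4f);
        }
        return scattered;
    }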

During that time, a lot of new things have happened in the computer graphics community, and it has been difficult to keep track of everything. The last topic I started experimenting with is point cloud and mesh capture from photos; I might expand on it here in the future. I also want to experiment with DIY motion capture. Anyway, it’s time to resume posting here.

A real-time post-processing crash course

Revision 2015 took place last month, on the Easter weekend as usual. I was lucky enough to attend and experience the great competitions that took place this year; I can’t recommend enough that you check out all the good stuff that came out of it.

Like the previous times, I shared some insights in a seminar, as an opportunity to practice public speaking. Since our post-processing had improved quite a bit with our latest demo (Ctrl-Alt-Test: G – Level One), the topic was the implementation of a few post-processing effects in a real-time renderer: glow, lens flare, light streaks, motion blur…

Having been fairly busy over the last few months though, with work and the organising of Tokyo Demo Fest among other things, I unfortunately couldn’t spend as much time on the presentation as I wanted. An hour before the presentation I was still working on the slides, but all in all it went better than expected. I also experimented with giving a live demonstration, which is hopefully more engaging than screenshots or even a video capture can be.

Here is the video recording made by the team at Revision (kudos to you guys for the fantastic work this year). I will post the slides later on, once I have properly finished the credits and references section.

Abstract:
Over the decades, photographers, and then filmmakers, have learned to take advantage of optical phenomena, and perfected the recipes of the chemicals used in film, to affect the visual appeal of their images. Transposed to rendering, those lessons can make your image more pleasant to the eye, change its realism, affect its mood, or make it easier to read. In this course we will present different effects that can be implemented in a real-time rendering pipeline, the theory behind them, the implementation details in practice, and how they can fit in your workflow.
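To give a concrete taste of one of the effects covered, here is a minimal sketch of the classic glow pipeline; the names and the threshold handling are of my choosing for illustration, not the actual code from the demo.

    #include <glm/glm.hpp>

    // Glow in three steps: 1) extract the bright areas, 2) blur them,
    // 3) add the result back on top of the scene.
    glm::vec3 brightPass(glm::vec3 color, float threshold)
    {
        // Keep only the energy above the threshold, preserving the hue.
        float luma = glm::dot(color, glm::vec3(0.2126f, 0.7152f, 0.0722f));
        return color * (glm::max(luma - threshold, 0.f) / glm::max(luma, 1e-4f));
    }

    // The bright-pass output is then blurred, typically with a separable
    // Gaussian on a downscaled buffer (wide kernels for cheap), and combined:
    //     final = scene + glowStrength * blurredBrightPass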

Insight In An Unseizable World, by Cocoon

Last fall, the French demogroup Cocoon unveiled this beautiful ambient demo: Insight In An Unseizable World. Its technical features, including real-time fluid dynamics and screen space reflections, manage to stay humble and leave the full stage to the superb direction. The special attention given to transitions is outstanding, and I invite you to see for yourselves.

Intrinsic Gravity, by Still

Three years ago, the German demoscene group Still released Beta, a tribute demo experimenting with shaping some of the work of the late painter Victor Vasarely into animated figures. The style, unusual from a demoscene standpoint, extrapolated what his work might have been had it been animated, and it was a success.

Last week Still released another demo with a similar geometric style and a brilliant direction: Intrinsic Gravity. It serves as an invitation to the demoparty NVScene, to take place in San Jose, California, this March.

I recommend these two demos; they are a pleasure to watch.

Invitation to Revision party 2014

Revision is a big demoparty held each year at Easter in Saarbrücken, Germany. It is a custom in the demoscene to release, whenever possible, a production dedicated to officially announcing an upcoming party: an invitation.

Last weekend at the Ultimate Meeting, the invitation to Revision 2014 was presented. The quality of invitations can vary wildly, from rushed and uninspired to works of art (Kings of the playground or You Should are two examples that come to mind); this new invitation sits rather at the higher end of the spectrum. Aiming for an epic feel, and nailing it, it imagines a time when this mostly unheard-of subculture has become a dominant one, and the occasion for a major Super Bowl-like event in a Tron-like setting.

Enjoy it and its dry-witted jokes. :)

Simple light setup for outdoor environments

On his website, Iñigo Quilez (known for a wide range of notable contributions at RGBA, BeautyPi and Pixar; talk about an over-achiever, but I digress already) recently described the light setup he often uses for outdoor environments.

Capture of his technique in action

From the article:

This article describes the lighting rig I use when doing such tiny computer graphics experiments with landscapes. It’s basically made of 3 or 4 directional lights, one shadow, some (fake or screen space) ambient occlusion, and a fog layer. These few elements tend to behave nicely and even look photoreal-ish if balanced properly.

Setting up lights is not an easy task, so this article is a very welcome insight. I especially like the trick of using an opposite directional light to fake global illumination. I also very much agree on using actual fill lights: constant ambient alone is not enough, as you lose any sense of volume in the shadowed parts.
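To illustrate, here is a sketch of such a rig as I understand it; the directions, colors and weights below are placeholders of my own, not the values from his article.

    #include <cmath>
    #include <glm/glm.hpp>

    // Key sun light + sky dome fill + opposite "bounce" light faking global
    // illumination, followed by a fog layer. All values are placeholders.
    glm::vec3 shade(glm::vec3 normal, glm::vec3 albedo,
                    float sunShadow, float occlusion, float dist)
    {
        glm::vec3 sunDir = glm::normalize(glm::vec3(0.8f, 0.4f, 0.2f));
        float sun    = glm::clamp(glm::dot(normal, sunDir), 0.f, 1.f);
        float sky    = glm::clamp(0.5f + 0.5f * normal.y, 0.f, 1.f);    // hemisphere fill
        float bounce = glm::clamp(glm::dot(normal, -sunDir), 0.f, 1.f); // fake GI

        glm::vec3 light = sun    * glm::vec3(1.64f, 1.27f, 0.99f) * sunShadow
                        + sky    * glm::vec3(0.16f, 0.20f, 0.28f) * occlusion
                        + bounce * glm::vec3(0.40f, 0.28f, 0.20f) * occlusion;

        glm::vec3 color = albedo * light;

        // Fog layer: blend towards the fog color with distance.
        float fog = 1.f - std::exp(-0.002f * dist);
        return glm::mix(color, glm::vec3(0.7f, 0.8f, 0.9f), fog);
    }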

I am not too fond of the shadow penumbra trick though, which he had already described previously. I must admit it does give a warm look, but it doesn’t make any physical sense, so I suspect it rather belongs to the tone mapping part of the rendering, just like the square root he used to apply to the diffuse fall-off was really working around the lack of gamma correction.
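To make that last point concrete: a square root is pow(x, 1/2), which is close to the usual display correction pow(x, 1/2.2). A minimal sketch of where that curve belongs:

    #include <glm/glm.hpp>

    // Gamma correction belongs at the very end of the pipeline, applied once
    // to the final color, rather than buried in individual lighting terms.
    glm::vec3 toDisplay(glm::vec3 linearColor)
    {
        return glm::pow(linearColor, glm::vec3(1.f / 2.2f));
    }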

The recommendation to keep the albedo near 0.2 is an interesting one. Indeed, the typical albedo of rock or grass is nowhere near the albedo of snow (a quick look at Wikipedia gives this comparison chart). But when the albedo is stored in a texture, as in a typical rendering pipeline, the question of precision lingers. I wonder how big game studios typically address this.