Last week I was lucky enough to attend SIGGRAPH 2018 in Vancouver. My colleagues and I were presenting the work we had done, a VR story with a distinctive comic book look, at a booth. I was also invited to participate in a panel session on the demoscene, where I shared some lessons learned while making the 64k intro H – Immersion. The event brought a certain sense of conclusion to this work, and filled me with inspiration and motivation to try new things.
It has been a long time since I last posted anything here. For the last two years, the majority of my spare time went into making that 64k intro. In fact the last post, “Intersection of a ray and a cone”, was related to it: I was implementing volumetric lighting for the underwater scenes and wanted to resolve the cones of light with ray tracing before marching inside them. LLB and I have covered the creation process in two making-of articles: “A dive into the making of Immersion” and “Texturing in a 64kB intro”.
During that time, a lot of new things have happened in the computer graphics community, and it has been difficult to keep track of everything. The last topic I started experimenting with is point cloud and mesh capture from photos; I might expand on it here in the future. I also want to experiment with DIY motion capture. Anyway, it’s time to resume posting here.
The Art of Rendering (April 2012)
A description of the different techniques used in high-end rendering and the major engines.
The State of Rendering (July 2013): part 1, part 2
A lengthy overview of the state of the art in high-end rendering, comparing the different tools and rendering solutions available, their approaches and design choices, their strengths and weaknesses, and the consequences in terms of quality, scalability, and render time.
(Brace yourselves for the massive tag list hereafter.)
Rendering from compressed textures, Beers et al., proceedings of SIGGRAPH 1996. “This is one (out of 3) of the first texture compression papers ever! Uses VQ, so probably not something you want today, but a major eye opener!”
Last month at SIGGRAPH, Michał Iwanicki of Naughty Dog presented his talk “Lighting technology in The Last of Us”, in which he focused on the technique they used for ambient shadows. In short: light maps and analytic occlusion with ellipsoid approximations of objects. Clever!
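The talk itself has the details; as a rough illustration, here is a minimal sketch (in Python, for readability, and not their actual implementation) of the classic analytic occlusion term for a single sphere occluder, the simplest case of which the ellipsoid version is a generalization. The function name and the tuples-as-vectors convention are my own.

```python
import math

def sphere_occlusion(p, n, center, radius):
    """Approximate how much a sphere occludes the ambient light
    reaching surface point p with unit normal n.
    Returns a value in [0, 1]; ambient term is then (1 - occlusion)."""
    # Vector from the surface point to the occluder's center.
    to_sphere = [c - q for c, q in zip(center, p)]
    d2 = sum(v * v for v in to_sphere)
    d = math.sqrt(d2)
    if d <= radius:
        # Point is inside the occluder: treat as fully occluded.
        return 1.0
    # Cosine between the normal and the direction to the sphere.
    cos_theta = sum(a * b for a, b in zip(n, to_sphere)) / d
    # Solid-angle-weighted cosine, clamped to [0, 1]: the sphere
    # subtends roughly (radius/d)^2, scaled by how much it faces
    # the hemisphere above the surface.
    return max(0.0, min(1.0, cos_theta * radius * radius / d2))
```

A sphere of radius 1 hovering two units directly above a point occludes about a quarter of its ambient term, and the contribution falls off with the squared distance and vanishes once the sphere drops below the horizon of the surface.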
The Real-Time Live! program looks very nice too, and it is good to see that at least two demoscene-related works will be presented there (the community GLSL tool ShaderToy by Beautypi, and an experiment by Still using a LEAP Motion controller with their production Square).
Lastly, we had some awesome news yesterday, when we were told that our latest demoscene production, F – Felix’s workshop, has been selected to be shown as part of the Real-Time Live! demoscene reel event.
Released last year at Revision, where it ranked 2nd in its category, Felix’s workshop is a 64k intro: a real-time animation fitting entirely (music, meshes, textures…) within a 64kB binary file, meant to run on a consumer-level PC with vanilla Windows and up-to-date drivers.
A couple of months ago I posted here about this SIGGRAPH publication on the amplification of details in a video. Yesterday the New York Times published a story and a video on the topic, with explanations from the authors and some new examples.