This article describes the lighting rig I use when doing such tiny computer graphics experiments with landscapes. It’s basically made of 3 or 4 directional lights, one shadow, some (fake or screen space) ambient occlusion, and a fog layer. These few elements tend to behave nicely and even look photoreal-ish if balanced properly.
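To make the recipe concrete, here is my own reading of such a rig as a Python sketch, not the author’s code: every light direction, color, and fog constant below is made up, and the shadow term is left out for brevity.

```python
# Hypothetical sketch of the lighting rig described above: a key light,
# an opposite "bounce" light faking global illumination, a fill light,
# ambient occlusion, and a fog layer. All constants are invented.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def shade(normal, albedo, occlusion, depth):
    # Key (sun) light; its opposite direction is reused as a dim
    # bounce light to fake global illumination.
    sun_dir, sun_color = (0.5, 0.8, 0.3), (1.0, 0.9, 0.7)
    bounce_dir = tuple(-c for c in sun_dir)
    bounce_color = (0.3, 0.25, 0.2)
    # Fill light (e.g. the sky), so shadowed parts keep some volume.
    fill_dir, fill_color = (0.0, 1.0, 0.0), (0.2, 0.25, 0.35)

    light = [0.0, 0.0, 0.0]
    for d, c in ((sun_dir, sun_color), (bounce_dir, bounce_color),
                 (fill_dir, fill_color)):
        ndotl = max(dot(normal, d), 0.0)
        for i in range(3):
            light[i] += c[i] * ndotl
    # Ambient occlusion darkens all lights alike; fog fades the
    # result toward a constant color with distance.
    fog = min(depth * 0.01, 1.0)
    fog_color = (0.7, 0.75, 0.8)
    return tuple(albedo * l * occlusion * (1.0 - fog) + fog_color[i] * fog
                 for i, l in enumerate(light))
```

Balancing then amounts to tuning those few colors and intensities against each other rather than adding more lights.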
Setting up lights is not an easy task, so this article is a very welcome insight. I especially like the trick of using an opposite directional light to fake global illumination. I also very much agree on using actual fill lights. Constant ambient alone is not enough, as you lose any sense of volume in the shadowed parts.
I am not too fond of the shadow penumbra trick though, which he had already described previously. I must admit it does give a warm look, but it doesn’t make any physical sense. So I suspect it belongs rather in the tone mapping part of the rendering, just like the square root he used to apply to the diffuse fall-off was really working around the lack of gamma correction.
The recommendation to keep albedo near 0.2 is an interesting one. Indeed, your typical rock and grass albedo is nowhere near the albedo of snow (a quick look at Wikipedia gives this comparison chart). But if it is stored in a texture in a typical rendering pipeline, the question of precision lingers. I wonder how big game studios typically address this.
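One standard answer, which may well be what studios do here too, is to store albedo textures with an sRGB encoding: it spends far more of the 8-bit codes on dark values, where albedos like 0.2 live. A quick back-of-the-envelope check:

```python
# Count how many 8-bit codes cover the range of linear albedos up to
# 0.2, first in a linear encoding, then in an sRGB encoding.

def srgb_encode(x):
    # Standard sRGB transfer function (IEC 61966-2-1).
    if x <= 0.0031308:
        return 12.92 * x
    return 1.055 * x ** (1 / 2.4) - 0.055

linear_codes = round(0.2 * 255)               # linear: ~51 codes
srgb_codes = round(srgb_encode(0.2) * 255)    # sRGB: well over twice that
print(linear_codes, srgb_codes)
```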
Rendering from compressed textures, Beers et al., Proceedings of SIGGRAPH 1996. “This is one (out of 3) of the first texture compression papers ever! It uses VQ, so probably not something you want today, but a major eye opener!”
Here comes the spoiler: according to this article, it was created from photos of the subject and the family relatives who most resembled her. The photos were then animated and morphed together. As the article points out, the animation still falls within the uncanny valley, but pause at any time and all you see is a real face.
smallpt is a bare minimum path tracer written in under 100 lines of C++, featuring diffuse and specular reflection as well as refraction. Using the detailed explanation slides by David Cline, I experimented with porting it to GLSL on Shadertoy.
This proved to be an interesting experiment that brought a few lessons.
Path tracing is fun, easy to implement, and good looking.
GLSL support in WebGL is still nowhere near robust: valid code may or may not work depending on the platform, the browser, and whether the OpenGL layer is native or not. The statements “break” and “continue” in particular often seem to break everything.
You can see the shader and tweak it here. By default it uses 6 samples per pixel and 3 bounces, which allows it to run smoothly on average hardware. I found 40 samples per pixel and 5 bounces to give nice results while maintaining an interactive framerate.
Path tracing, 40 samples per pixel, 5 bounces
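For the curious, the overall loop structure of such a port can be sketched outside GLSL. The following Python toy is emphatically not the shader: the “scene” is reduced to a single diffuse surface of constant albedo under a uniform sky, chosen so that the converged value has a closed form (a geometric series), but the samples-times-bounces Monte Carlo structure is the same.

```python
# Toy illustration of the path tracing loop structure: average many
# sample paths per pixel, each followed for a fixed number of bounces.
import random

ALBEDO = 0.2   # surface reflectance
SKY = 1.0      # radiance of the uniform sky

def sample_path(bounces, rng):
    # Follow one light path. At each step the ray either escapes to
    # the sky (probability 0.5) or bounces off the diffuse surface;
    # dividing by the branch probability keeps the estimate unbiased.
    throughput = 1.0
    for _ in range(bounces):
        if rng.random() < 0.5:
            return throughput * SKY / 0.5
        throughput *= ALBEDO / 0.5
    return 0.0  # path cut off at the bounce limit

def render_pixel(samples, bounces, rng):
    # Monte Carlo average over several sample paths, as in the shader.
    return sum(sample_path(bounces, rng) for _ in range(samples)) / samples
```

With 5 bounces the expected value is 1 + 0.2 + 0.2² + 0.2³ + 0.2⁴ = 1.2496, and the estimate converges toward it as the sample count grows, exactly the samples-versus-noise trade-off described above.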
Update: since GLSL Sandbox has a feature that Shadertoy is missing at the moment, reading from the previous frame buffer, I thought it’d be interesting to try it and have the image converge over time. A little hacking later, a minute or so worth of rendering got me this kind of result: given the modest effort, I am really pleased with the outcome.
Path tracing, unknown number of samples per pixel, 7 bounces
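The convergence trick itself boils down to a running average over frames. A hypothetical sketch in Python (in the shader, the average would live in the previous frame buffer, updated as mix(previous, sample, 1.0 / frame)):

```python
# Progressive refinement: each frame produces a fresh noisy estimate
# and is blended into a running average, so the image converges over
# time instead of being recomputed from scratch.
import random

def accumulate(frames, noisy_sample, rng):
    average = 0.0
    for frame in range(1, frames + 1):
        # Incremental mean: equivalent to reading the previous frame
        # and blending the new sample in with weight 1 / frame.
        average += (noisy_sample(rng) - average) / frame
    return average
```

With a noisy estimator of mean 0.5 (say, uniform random samples), the accumulated value settles ever closer to 0.5 as frames pass, which is exactly the minute-long convergence observed above.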