This is going to be a short one: I haven’t done any extensive search, but I still want to make a back-of-the-envelope note with a couple of links.
M. Oren and S. K. Nayar proposed a model more accurate than Lambertian reflectance in their paper “Generalization of Lambert’s Reflectance Model”; it is commonly referred to as the Oren-Nayar model.
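For reference, here is a minimal C sketch of the simplified, “qualitative” version of the model (the variant usually quoted for rendering, which drops the inter-reflection term). The function names and vector convention are mine; all vectors are assumed to be unit length, and sigma is the roughness parameter from the paper.

```c
#include <math.h>

static double dot3(const double a[3], const double b[3]) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

/* Simplified Oren-Nayar diffuse term.
   n: surface normal, l: direction to the light, v: direction to the viewer
   (all unit length), sigma: roughness (std. dev. of facet slopes, radians),
   albedo: diffuse reflectance. */
double oren_nayar(const double n[3], const double l[3], const double v[3],
                  double sigma, double albedo)
{
    const double PI = 3.14159265358979323846;
    double nl = dot3(n, l);                    /* cos(theta_i) */
    double nv = dot3(n, v);                    /* cos(theta_r) */
    if (nl <= 0.0) return 0.0;                 /* light below the horizon */

    double theta_i = acos(fmin(fmax(nl, -1.0), 1.0));
    double theta_r = acos(fmin(fmax(nv, -1.0), 1.0));
    double alpha = fmax(theta_i, theta_r);
    double beta  = fmin(theta_i, theta_r);

    double s2 = sigma * sigma;
    double A = 1.0 - 0.5 * s2 / (s2 + 0.33);
    double B = 0.45 * s2 / (s2 + 0.09);

    /* cos(phi_i - phi_r): angle between l and v once both are
       projected onto the tangent plane. */
    double lp[3] = { l[0] - nl*n[0], l[1] - nl*n[1], l[2] - nl*n[2] };
    double vp[3] = { v[0] - nv*n[0], v[1] - nv*n[1], v[2] - nv*n[2] };
    double len2 = dot3(lp, lp) * dot3(vp, vp);
    double cos_dphi = len2 > 0.0 ? dot3(lp, vp) / sqrt(len2) : 0.0;

    return (albedo / PI) * nl
         * (A + B * fmax(0.0, cos_dphi) * sin(alpha) * tan(beta));
}
```

Note that for sigma = 0 the expression reduces to plain Lambert (albedo / π · cos θ), which makes it easy to sanity check.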
Back in 2009, Iñigo Quilez left everyone in awe by releasing the milestone 4kB intro, Elevated, in cooperation with the group TBC. If you haven’t seen this masterpiece, watch it, and keep in mind this was generated from only 4096 bytes’ worth of data (just the text of this article is already more than a third of that).
After that, the news was that he had been hired by Pixar, and besides some in-progress screenshots from time to time and some live coding experiments, not much was heard from him.
Then a couple of months ago this interview was published, and more recently this glowing CGW article, where we could read that he had been in charge of the vegetation rendering in Pixar’s Brave. Needless to say, many people were looking forward to seeing what he would do next, especially in the real-time domain.
Today the group he’s part of, BeautyPi, which seems to be focusing on interactive animations (they presented their work earlier this year at SIGGRAPH), published the following video. Being a showcase of their latest experiments, it is not entertaining the way an animation, a clip or a demo is. You could even say it’s boring. But it is visually very impressive, both technically and artistically. Although this is real-time material, the quality is not that far from movie standards. As for the interaction, I suspect they are only scratching the surface and may come up with some very interesting things. What these folks are doing is definitely worth following.
Color management in the production pipeline is a tough topic. A really tough one. And a crucial one too. Unfortunately, not only is this an important and difficult topic, but it also seems to me that, except maybe for people working on AAA games or big-budget films, most have little knowledge of the matter, if they are aware of the issue at all.
The issue that image capturing devices, screens and printers all have different color characteristics (said simply: what you scan, photograph or film will not look the same depending on the capturing device used, and the same image will also look different depending on the display or printing device).
The issue that their capturing or display range is usually far from what human vision is capable of, and by “far” you must understand orders of magnitude (said simply: the average human can perceive far more contrast than a camera is able to capture, and distinguish many more colors than a screen is able to display; on top of that there are colors an average screen is simply unable to render, like the orange of your fluorescent highlighter; that one is actually my favorite example :) ).
The issue that screens and image formats use a non-linear representation, leading to severe color errors unless it is taken into account when manipulating images (said simply: ignore gamma correction in your rendering and your lighting will be wrong; ignore it when you resize images and they will look wrong too).
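To make that last point concrete, here is a small C sketch (the helper names are mine; the constants are the standard sRGB encoding) showing why averaging two pixels directly on their encoded values gives the wrong result:

```c
#include <math.h>
#include <stdio.h>

/* Standard sRGB decoding/encoding, for components in [0, 1]. */
static double srgb_to_linear(double c) {
    return c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
}
static double linear_to_srgb(double c) {
    return c <= 0.0031308 ? 12.92 * c : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
}

int main(void) {
    double black = 0.0, white = 1.0;

    /* Naive 50% blend of the encoded values: comes out too dark. */
    double naive = (black + white) / 2.0;

    /* Decode to linear light, blend, re-encode: perceptually correct. */
    double correct =
        linear_to_srgb((srgb_to_linear(black) + srgb_to_linear(white)) / 2.0);

    printf("naive: %.3f  linear: %.3f\n", naive, correct); /* 0.500 vs ~0.735 */
    return 0;
}
```

A 50% black and white checkerboard seen from afar should match the second value, not the first; this is exactly the error demonstrated in the picture scaling article linked below.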
I just wish it were simpler and “just worked”. But until then we have to deal with it. So here is a list of readings on this nonetheless very interesting topic.
On color management:
Digital Color Part 1: the first part of what is meant to be a series of articles (at the time I am writing this, there is no part 2 yet) on color management. This introduction is a really great read, and I definitely recommend it if you care about color.
Cinematic Color course notes, SIGGRAPH 2012: 55 pages is quite a long read, but it is very interesting as it explains how color is handled in a film production pipeline, what the problems are, and where they come from. Even if you don’t care about film making, it is still full of insights on vision, capture and rendering.
Gamma error in picture scaling: this article shows how badly things can go when image manipulation software doesn’t take gamma into account, and gives a glimpse of how widespread the problem is.
The Value Of Gamma Compression: I like this short article a lot, as it shows in a quick and clear way how bad gamma management can ruin a rendering.
GPU Gems 3, Chapter 24 – The Importance of Being Linear: this article explains how to take gamma into account in your rendering pipeline (see the sketch after this list); while an interesting read, I think it doesn’t make the issue obvious enough (I find the different illustrations to be equally bad looking).
Gamma FAQ: this FAQ is quite dated but still helps in understanding the origin of gamma correction and avoids confusion between various concepts (there is also a Color FAQ by the same author).
Update: this 4-minute video explains quite convincingly the need for gamma correction.
A Closer Look At Tone Mapping: another follow-up, in which the author makes various tone mapping experiments (you will also find more links there).
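As a companion to the GPU Gems chapter above, here is a minimal sketch, reusing the sRGB helpers from earlier, of what “working in linear” means for shading; the texel and lighting values are made up for illustration:

```c
#include <math.h>
#include <stdio.h>

/* Same standard sRGB conversions as in the earlier sketch. */
static double srgb_to_linear(double c) {
    return c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4);
}
static double linear_to_srgb(double c) {
    return c <= 0.0031308 ? 12.92 * c : 1.055 * pow(c, 1.0 / 2.4) - 0.055;
}

int main(void) {
    double tex = 0.5;       /* a texel as stored in an sRGB texture */
    double n_dot_l = 0.5;   /* diffuse lighting factor for this sample */

    /* Wrong: the lighting math is applied to the encoded value. */
    double wrong = tex * n_dot_l;

    /* Right: decode to linear light, shade, re-encode for display. */
    double right = linear_to_srgb(srgb_to_linear(tex) * n_dot_l);

    printf("wrong: %.3f  right: %.3f\n", wrong, right); /* 0.250 vs ~0.361 */
    return 0;
}
```

In a real pipeline the decode and encode steps are typically handled by sRGB texture formats and framebuffers, so the shader itself only ever sees linear values.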
Have a good read!
Update: this lengthy GDC 2010 presentation by John Hable (quoting: “The presentation is basically four 20 minute presentations crammed into 55 minutes.”) covers several of these topics.
I wanted to put here a couple of links to SIGGRAPH 2012 material, but as the Real-Time Rendering blog points out, somebody has already made a comprehensive list of what can be found. So wait no more, go there and pick up your commute reading for the coming weeks: SIGGRAPH 2012 Links.
Crytek has published a video showing the rendering technology used in CryEngine, more specifically in Crysis 3. While I don’t really dig the artistic choices (I find the overall image messy due to the high contrast, and not that appealing, aesthetically speaking), the technical side is impressive. I especially like the use of displacement mapping and tessellation for the vegetation (by the way, see how great that leaf looks; they got the material completely right). The reflections visible at 1’52 make me think they also implemented the cone tracing technique, just like Unreal did. On the downside, all the parts with falling water felt unrealistic to me.