On his blog, Florian Boesch introduces a clever technique to render wireframe polygons using fragment shading, along with a live demo. The full explanation is presented in this paper. For convex polygons, just add as a vertex attribute (or compute in the geometry shader?) the distance from each vertex to the opposite edges, and use the minimum of the interpolated distances in the fragment shader. Simple, easy to implement, and a nice anti-aliased result. (The paper also presents a second technique for non-convex polygons.)
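Here is a minimal sketch of that idea, written in plain Python rather than GLSL so it can run standalone; the function names and the smoothing constant are mine, not taken from the paper:

```python
import math

def edge_distances(a, b, c):
    """Per-vertex attribute: distance from each vertex to the opposite edge."""
    def point_line_distance(p, q, r):
        # Distance from point p to the line through q and r.
        qr = (r[0] - q[0], r[1] - q[1])
        qp = (p[0] - q[0], p[1] - q[1])
        cross = qr[0] * qp[1] - qr[1] * qp[0]
        return abs(cross) / math.hypot(qr[0], qr[1])
    return (point_line_distance(a, b, c),
            point_line_distance(b, c, a),
            point_line_distance(c, a, b))

def fragment_intensity(barycentric, heights, line_width=1.0):
    """What the fragment shader would do: each component of the interpolated
    attribute is the fragment's distance to one edge; take the minimum and
    turn it into an anti-aliased line factor."""
    d = min(w * h for w, h in zip(barycentric, heights))
    return math.exp(-2.0 * (d / line_width) ** 2)  # 1.0 on an edge, fades away from it

a, b, c = (0.0, 0.0), (10.0, 0.0), (0.0, 10.0)
heights = edge_distances(a, b, c)
print(fragment_intensity((1/3, 1/3, 1/3), heights))  # triangle centre: ~0
print(fragment_intensity((0.5, 0.5, 0.0), heights))  # middle of an edge: 1.0
```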
Category Archives: Rendering
Wonder moss
In this very short video, Iñigo Quilez of Pixar explains in layman’s terms how they used mathematics to create the moss in the CG film Brave.
Maximizing depth buffer range and precision
Half of the available range is packed into the tiny distance from the near plane to twice the near plane distance.
This quote from this article on the Outerra blog is more or less its punchline. The author explains the precision issues with typical depth buffer use, and explores ways to get better results. Since Outerra is a planet-scale engine, it is no wonder depth precision is critical there.
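To see where that number comes from, here is a quick back-of-the-envelope check, not taken from the article: it assumes a standard perspective projection with the depth value remapped to [0, 1], and the near/far values are arbitrary, chosen only for illustration.

```python
def window_depth(z, near, far):
    """Depth buffer value in [0, 1] for an eye-space distance z,
    using a standard perspective projection."""
    return far / (far - near) * (1.0 - near / z)

near, far = 0.1, 10000.0
for z in (near, 2 * near, 10 * near, far / 2, far):
    print(f"z = {z:10.2f}  ->  depth = {window_depth(z, near, far):.6f}")
```

The output shows the depth value already reaching about 0.5 at z = 2 * near, and sitting above 0.9999 over the entire second half of the view range, which is exactly the quote's point.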
I discovered Outerra and its blog fairly recently, through a couple of mentions in Flipcode’s Daily Flip, and was impressed by its rendering. The amount of work that went into it must be insane. See for instance these captures demonstrating the space-to-ground transition or the grass rendering:
Triangle order optimization
This article by Adrian Stone discusses the impacts of triangle ordering in meshes and compares some algorithms.
Refractive index database
A gold mine if you’re doing some physically based rendering: http://refractiveindex.info/
Reading list on skin rendering
Skin rendering is really not my thing. Yet. I already have enough on my plate figuring out the rendering of opaque materials without tackling ones exhibiting sub-surface scattering. But I got trapped reading one article, and then another… and before I knew it, I had a list I wanted to note down for later reference.
- The NVidia Human Head demo, featuring Doug Jones, referred to by many as the gold standard of real-time skin rendering (article in Chapter 14 of GPU Gems 3).
- In 2009, Jorge Jimenez et al. published a paper presenting a Screen Space Sub-Surface Scattering technique (blog post).
- He later published another paper in 2010 on skin translucency, and finally released his separable sub-surface scattering demo last year.
- At the Advances in Real-Time Rendering course of SIGGRAPH 2010, John Hable presented the techniques used to render characters in Uncharted 2 (PowerPoint slides, PDF version, presentation on SlideShare).
- The next year, at the same SIGGRAPH course, Eric Penner presented a technique called pre-integrated skin rendering (PowerPoint slides, presentation on SlideShare).
- Skin rendering is also a recurring topic on Angelo Pesce‘s blog: here and here, as well as here and here.
Many links are still missing, as I’m not done checking the major techniques mentioned in these presentations, but perfect is the enemy of good after all.
Series of articles on anti-aliasing
Matt Pettineo is writing an interesting and in-depth series of articles on anti-aliasing:
- Signal Processing Primer
- Applying Sampling Theory to Real-Time Graphics
- A Quick Overview of MSAA
- Experimenting with Reconstruction Filters for MSAA Resolve
In this last article, he provides a list of captures comparing the results he obtained, as well as the source code.
On a side note, I like this short post by Timothy Lottes (well known for FXAA and TXAA) where he compares a typical film image with a typical video game one. His example of temporal aliasing is also worth keeping in mind.