Last year I discovered these tutorials by Jasper Flick on how to make and use noise in Unity, along with a couple of terrain and particle examples. They present the difference between value noise and gradient noise, explain how Perlin noise and simplex noise work, and show, among other things, how to use curl noise to control the flow of particles.
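To make the distinction concrete, here is a minimal 1D sketch (my own, not from the tutorials): value noise interpolates random values placed at integer lattice points, while gradient noise interpolates the influence of random slopes, and is therefore zero at the lattice points themselves. The hash function is an arbitrary choice for illustration.

```python
import math

def hash01(i):
    """Deterministic pseudo-random float in [0, 1) for integer lattice point i."""
    i = (i * 374761393 + 668265263) & 0xFFFFFFFF
    i = ((i ^ (i >> 13)) * 1274126177) & 0xFFFFFFFF
    return (i ^ (i >> 16)) / 2**32

def smooth(t):
    # Classic cubic fade, 3t^2 - 2t^3 (more on fade curves below).
    return t * t * (3.0 - 2.0 * t)

def value_noise(x):
    # Random *values* at integer lattice points, interpolated in between.
    i = math.floor(x)
    t = x - i
    a, b = hash01(i), hash01(i + 1)
    return a + (b - a) * smooth(t)

def gradient_noise(x):
    # Random *slopes* at lattice points; the noise itself is zero there,
    # which removes the blocky look that value noise tends to have.
    i = math.floor(x)
    t = x - i
    g0 = hash01(i) * 2.0 - 1.0      # slope at the left lattice point
    g1 = hash01(i + 1) * 2.0 - 1.0  # slope at the right lattice point
    v0 = g0 * t                     # contribution of the left slope
    v1 = g1 * (t - 1.0)             # contribution of the right slope
    return v0 + (v1 - v0) * smooth(t)
```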
The order in which the information is presented is well thought out, although the intention might not be clear at first. Don’t let the beginner’s tutorial tone (“You’ll learn to: create and fill a texture;”, etc.) turn you away: the series does a great job of detailing the concepts and algorithms in a simple manner without cutting corners, unlike so many articles on the topic (when they’re not blatantly wrong and go ahead calling a blurred noise “Perlin noise”). I thought I already had a pretty good grasp of gradient noise, but reading it gave me an even better understanding.
While we’re at it, other resources on the topic include Ken Perlin’s GDC 1999 talk and his two-page paper Improving Noise, which explains why a fifth-order polynomial should be used for interpolation (a function I’ve sometimes seen called “smootherStep”).
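For reference, here are the two fade curves side by side. The point of the fifth-order polynomial is that its second derivative also vanishes at t = 0 and t = 1, making the interpolated noise C2-continuous across lattice cell boundaries; the discontinuity matters as soon as you use the noise derivatives, e.g. for bump mapping.

```python
def smoothstep(t):
    # Perlin's original fade: 3t^2 - 2t^3. The first derivative is zero
    # at t = 0 and t = 1, but the second derivative (6 - 12t) is not.
    return t * t * (3.0 - 2.0 * t)

def smootherstep(t):
    # The fifth-order polynomial from "Improving Noise":
    # 6t^5 - 15t^4 + 10t^3. Both first and second derivatives vanish
    # at t = 0 and t = 1.
    return t * t * t * (t * (t * 6.0 - 15.0) + 10.0)
```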
John Carmack, the CTO of Oculus VR, gave a talk at the Game Developers Conference that just ended this week. Various topics are addressed, including the story behind Samsung’s Gear VR and what’s coming next, the democratization of virtual reality, the work on the API, the unsolved problem of controllers in VR, and the use of real-time ray tracing in VR.
Aras Pranckevičius wrote on the Unity blog about the new frame debugger feature they added to their editor: Frame Debugger in Unity 5.0. The hack, as he calls it, is very simple: it just consists of interrupting the rendering at a given stage and displaying whichever frame buffer was active at that moment. It only took a couple of days of work; most of the effort went into the editor UI.
From the article:
There’s no actual “frame capture” going on; pretty much all we do is stop rendering after some number of draw calls. So if you’re looking at a frame that would have 100 objects rendered and you’re looking at object #10 right now, then we just skip rendering the remaining 90. If at that point in time we happen to be rendering into a render texture, then we display it on the screen.
This means the implementation is simple; getting functionality working was only a few days of work (and then a few weeks iterating on the UI).
Illustration from the article: “Here we are stepping through the draw calls of depth texture creation, in glorious animated GIF form”
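The idea is simple enough to sketch in a few lines. Here is a toy, hypothetical illustration of the “stop after N draw calls” approach quoted above; none of these names come from Unity’s code:

```python
class ToyRenderer:
    """Stub that just records which draw calls were executed."""
    def __init__(self):
        self.executed = []
        self.active_target = "backbuffer"

    def execute(self, call):
        self.executed.append(call)
        self.active_target = call.get("target", self.active_target)

    def present(self, target):
        print(f"displaying '{target}' after {len(self.executed)} draw calls")

def render_frame(renderer, draw_calls, stop_after=None):
    # No frame capture: just stop rendering early and show whatever
    # render target happens to be bound at that point.
    for index, call in enumerate(draw_calls):
        if stop_after is not None and index >= stop_after:
            break  # skip the remaining draw calls
        renderer.execute(call)
    renderer.present(renderer.active_target)

calls = [{"mesh": f"object {i}", "target": "depth" if i < 3 else "backbuffer"}
         for i in range(100)]
render_frame(ToyRenderer(), calls, stop_after=10)  # inspect draw call #10
```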
The following video showcases Livity, an extension allowing live coding in Unity. It is definitely reminiscent of the ideas presented by Bret Victor in his talk, with the step between coding and viewing fading away as iteration time gets close to zero.