Developing for the Oculus VR

In January, Oculus shared a PDF of recommendations for a good VR experience, and has kept it updated since: the Oculus VR Best Practices Guide.

More recently, Tom Forsyth gave a talk at GDC 2014 with some guidelines on what to do, what not to do, and what they haven’t figured out yet about making VR experiences. The talk is available in the GDC Vault: Developing VR Experiences with the Oculus Rift.

Update:

Michael Abrash of Valve gave a talk about the near future of VR: What VR could, should and almost certainly will be within two years. Much of it deals with the notion of “presence”, the sensation of actually being in the virtual world, and what makes or breaks it.

Guest post: Incremental reshaping

A couple of dear friends and I (too) often have email discussions on topics ranging from video games to politics, but mostly software development and design in general. A while back, one of those discussions started after sharing an AltDevBlogADay article on optimization. As we were comparing our experiences with optimization, or the lack thereof, one of them, Rubix as we call him, made some insightful points by explaining an approach he calls “incremental reshaping”.

When I found myself coming back to his emails several times, I suggested he turn them into an article. With his permission, here it is.


Incremental reshaping

The most common case at my work is to stumble upon slow spaghetti & copy-pasta code, full of lists of hashtables of lists, all within a big class with 45 methods hitting them directly. So I start off by pulling the data out into new classes, to encapsulate it. Then I clean up the code that uses them, by factoring all similar operations into methods of these new classes, and by trying to come up with good names for them (I know this part is done when the original data structure ends up being private in the class).

This way the operations eventually make sense, and I can at last understand what the code is trying to do on a higher level. Finally I can simplify the calculations and rework the (now hidden and loosely-coupled) data structures to best match the most critical uses. All of this has to be done incrementally. Do one change at a time, and then pause to look at the new code to search for the next simple refactoring. Otherwise, your changes will not be optimal.
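To make the first step concrete, here is a minimal sketch of what “pulling the data out into a class” can look like. The names and the toy scoring domain are invented for illustration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Before: callers everywhere reached into a shared
// Map<String, List<Integer>> directly, e.g.
//   scoresByPlayer.get(player).add(points);

// After: the structure is pulled into a class and ends up private,
// with the recurring operations factored into named methods.
class ScoreBoard {
    private final Map<String, List<Integer>> scoresByPlayer = new HashMap<>();

    // One recurring operation, given a name.
    void record(String player, int points) {
        scoresByPlayer.computeIfAbsent(player, p -> new ArrayList<>()).add(points);
    }

    // Another recurring operation; callers no longer see the map at all.
    int total(String player) {
        return scoresByPlayer.getOrDefault(player, List.of())
                             .stream().mapToInt(Integer::intValue).sum();
    }
}
```

Once the map is private, the internal representation can later be reworked, say, to a flat array keyed by player id, without touching any call site.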

Once everything is clean, if it’s still too slow, *then* optimize, even if it means adding complexity by removing good abstractions. Code that is complex in one particular place, because that place is time-critical, will be both more maintainable and faster than code that has gradually, with no one watching the whole, evolved into something complex everywhere and uniformly slow for many reasons.

Encapsulation rarely affects performance in my case. I work in environments where the cost of a function call is negligible, and functions are the most common abstraction tool.

Now, an example.

Yesterday, I saw a four-parameter function I didn’t like. I first created a tiny class to contain just that function (all call sites became “new Foo().method(a1, a2, a3, a4)”), then I moved two of its arguments, which were often the same, into the class (call sites became “new Foo(a1, a2).method(a3, a4)”). In some places in the calling code, I was then able to cache the “Foo” instance in a local variable and reuse it, because “a1” and “a2” were the same for several calls to “method”.

From there, I found that the calling code was more or less always doing the same kind of things before calling the method, so I moved that stuff to a second method of Foo (it turned out to work so well that the first method became private). Then I noticed a loop that was calling Foo’s method repeatedly, so I wrote a new method that took a list as a parameter. (I also ended up finding an appropriate name for Foo!)
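The end state of those steps might look something like this. This is a hypothetical reconstruction; the names and the toy “rendering” behaviour are invented, since the original code isn’t shown:

```java
import java.util.ArrayList;
import java.util.List;

// "Foo", once it earned a real name.
class Renderer {
    private final String context; // a1: often the same across calls
    private final String style;   // a2: often the same across calls

    Renderer(String context, String style) {
        this.context = context;
        this.style = style;
    }

    // The preparation callers always did (here: trimming the input)
    // moved in, and the original method became private.
    String draw(String text) {
        return drawRaw(text.trim());
    }

    // A bulk variant that absorbed a calling loop.
    List<String> drawAll(List<String> texts) {
        List<String> out = new ArrayList<>();
        for (String t : texts) out.add(draw(t));
        return out;
    }

    // What remains of the original four-argument function.
    private String drawRaw(String text) {
        return context + "/" + style + ": " + text;
    }
}
```

Each intermediate state (the empty wrapper class, the two-argument constructor, the public preparation method, the list variant) compiled and behaved correctly on its own, which is the whole point.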

Now I ‘could’ also have done it the hard way: enter the zone, stare at the code for a while, think a lot, decide what I wanted it to eventually look like, then put my headphones on and implement it directly, skipping all the sub-steps, which are unnecessary when you can fit the whole thing in your head.

But incremental reshaping has several advantages:

  1. No need to know in advance what to do from beginning to end; ideas come as you go.
  2. You can change your mind more easily: you had one idea, but you may find a better one later.
  3. You can take advantage of the compiler & IDE at each step, to always know what state your code is in and make sure nothing was forgotten, thus avoiding bugs and going faster. At any given time the compiler reports the same kind of message, so it’s easier to read.
  4. It keeps your code compiling and correct between every change. Also, for every little change, if you know your tools well you should be able to convince yourself that it breaks nothing.
  5. It greatly diminishes the cost (and the pain) of being interrupted in the middle of your changes.

Matrix and Quaternion FAQ

Despite its caveats, it is a classic on the web and still a useful resource; here is the most recent version I found of the Matrix and Quaternion FAQ.

Warning: the document I’m linking here has been orphaned for many years and might still contain errors. Moreover, the links are broken in this version hosted on the Java 3D site (they are not in the even older Princeton version).

Massive blast in the console world, two injured

I am at the airport waiting for my plane, and since there is free WiFi (and surprisingly fast on top of that), I naturally check my Twitter feed. Well, obviously, something just happened at E3, and the conclusion seems pretty clear. Here is an excerpt:

Understanding quaternions

Quaternions are a very useful tool in 3D, but also an unintuitive one that is difficult to get a natural feel for. The talk Jim Van Verth, of EssentialMath, gave earlier at GDC 2013 explains some facts about quaternions and how they work by looking back at their discovery: Understand Quaternions.

Update: on a side note, here is a trick for faster quaternion–vector multiplication.
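One widely used form of such a trick replaces the two full quaternion products of q·v·q⁻¹ with two cross products: for a *unit* quaternion q = (w, x, y, z), compute t = 2·cross(q.xyz, v), then v′ = v + w·t + cross(q.xyz, t). A minimal sketch (class and method names invented):

```java
// Rotates 3D vectors by a unit quaternion using the two-cross-product
// shortcut instead of computing q * v * conjugate(q) directly.
class Quat {
    final double w, x, y, z;

    Quat(double w, double x, double y, double z) {
        this.w = w; this.x = x; this.y = y; this.z = z;
    }

    // Assumes |q| == 1; v is {vx, vy, vz}.
    double[] rotate(double[] v) {
        // t = 2 * cross(q.xyz, v)
        double tx = 2 * (y * v[2] - z * v[1]);
        double ty = 2 * (z * v[0] - x * v[2]);
        double tz = 2 * (x * v[1] - y * v[0]);
        // v' = v + w*t + cross(q.xyz, t)
        return new double[] {
            v[0] + w * tx + (y * tz - z * ty),
            v[1] + w * ty + (z * tx - x * tz),
            v[2] + w * tz + (x * ty - y * tx),
        };
    }
}
```

For example, the quaternion for a 90° rotation about the z axis, (√0.5, 0, 0, √0.5), takes (1, 0, 0) to (0, 1, 0).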

A game engine programmer walks into a bar…

A game engine programmer walks into a bar, orders a beer or two and starts chatting, especially with that green-eyed hot girl. After some small talk she asks him what he does for a living. “Oh I work in a video game company you know…” “Oh really, that sounds cool! And what do you do there?” And there it comes. He can try changing the topic, being mysterious, accidentally spilling his glass, or he can try to answer that question without sounding soporific.

About a year ago a colleague asked me how you would explain to someone who is not in the video game industry – not in software, not even in anything related to technology for that matter, to a normal person you know – what your job consists of when you work on a game engine. “Well it’s… blah blah…” Nah, too long, it’s already boring. The explanation should be brief, easy to get and, ideally, sort of cool. After a couple of tries we agreed on a description we thought worked.

Working on a game engine is like building a stadium.

Once you have a stadium, you can have all sorts of games played inside: football, basketball, athletics… All you need are rules and some equipment, and then the players can get in. Likewise, once you have the engine, all you need are the game logic and the assets. You could even host a gig. But you might not be able to hold ice hockey or swimming competitions if your stadium is not meant for it, just like a game engine allows certain kinds of games and not others.

I found this metaphor to come in handy when, you know, talking to normal people about what you do. Now for the rest of the conversation with the hot girl (or hot guy, no sexism here), that’s up to you. ;-)

Recap of a Flickr evening