Now that Revision has passed, we feel tempted to grab the axe and happily chop into the parts of our code base we wanted to change but couldn't, since other things had priority. One tempting part is the linear algebra code: the vector, quaternion and matrix data structures. Let's start with vectors. Not that it's strictly necessary, but the transformations are the most time-consuming part after the rendering itself, and the problem itself is somewhat interesting.
After a little googling, I basically found three approaches to this problem:
Here and there, people seem to regard SSE instructions as a silver bullet and propose various code examples, snippets or full implementations. The idea is to use dedicated processor instructions to apply operations on four components at a time instead of one after another.
Quite the opposite, Fabian Giesen argued some years ago that it was not such a good idea. A quick look at the recently publicly released Farbrausch codebase shows they indeed used purely conventional C++ code for it.
Finally, this rather dated article (with regard to hardware evolution) by Tomas Arce takes a completely orthogonal approach: using C++ expression templates to evaluate a full expression component by component, thus avoiding wasting time moving and copying data around.
I am curious to implement and compare them on today's hardware.
Update: this is 2016, and the topic was brought back up recently when someone wrote the article How to write a math library in 2016.
The point of the article is that the old advice to not bother with SSE and just stick with floats doesn't apply anymore, and it goes on to show results and sample code. This sparked a few discussions on Twitter, with strongly voiced opinions, to put it mildly.
@nothings No, that’s bullshit. Let the compiler do it, and if it can’t, don’t worry. At all. Cost >> benefit.
— Tom Forsyth (@tom_forsyth) March 16, 2016
It seemed the consensus was still against the use of SSE for the following reasons:
- Implementation is tedious.
- For 3-dimensional vectors, which are the most common case, 25% of the register is wasted.
- For 4-dimensional vectors, like homogeneous coordinates or RGBA colors, it doesn't work so well either, since the fourth component is often treated differently from the other three.
- Even if the implementation detail is hidden behind a nice interface, the alignment requirements will leak and impose constraints on the rest of the code.
- Compilers like clang are smart enough to generate SSE code from ordinary float operations.
@kenpex @Zavie "against" list: It's SSE only. NEON, and other SIMDs might have deeper pipeline, it makes no sense with wider SIMD like AVX.
— Branimir Karadžić (@bkaradzic) April 9, 2016
About the last item on your list: in the “C++ Template Metaprogramming” book (http://www.boostpro.com/mplbook/), there is an entire chapter devoted to the technique of using template expressions to make vector computations lazy, which was made popular by the Blitz++ library (IIRC). Quite an interesting read!
Thank you for mentioning it!
Fabian Giesen changed his mind (and I think he was right to do so); see:
“pretty old opinion piece, seriously outdated by now (compilers got a lot better at intrinsics). I mainly keep this here in case someone linked to it.”
The “problem” with SSE is that you have to design your code with it in mind from the start.
It’s very difficult to just patch old code and hope for a big performance gain.
Thanks a lot for pointing it out; I changed the wording.