My VR development had stalled for about two months because I dug myself in too deep trying to support all possible VR SDKs and software platforms while not repeating myself. When the 0.6 OVR SDK came out with the compositor, the change in API and general style was too great for the structure of the old RiftSkeleton, and it basically broke. The prospect of expending all the effort required to change the code to accommodate it was just too daunting, so I chickened out on it and worked on something else.

The only way forward was to start a new one from scratch.

Why did the code get so complex? What was the old path, and why was it a dead end? I was trying to separate out all dependencies so they were easily interchangeable. I wanted the basic app to be as agnostic as possible: to work with multiple VR SDKs (OVR, OSVR, none), multiple windowing frameworks (GLFW, SDL2, SFML), and multiple OSs (Windows, Linux, MacOS, FreeBSD, Solaris). To that end, the AppSkeleton class tree was supposed to pull all 3D-world navigation and scene management into the base class, leaving subclasses like OVRAppSkeleton to worry about the API-specific business like ovrSwapTextureSet or OSVR_ClientContext. Then I could just add a new subclass to handle a new VR API. OpenVRAppSkeleton : public AppSkeleton, no problem! I even considered adding an SConstruct file on top of CMakeLists.txt to diversify build systems. The only bedrock-level requirements were C++11-or-so and OpenGL 3-ish or better.

This abstraction really started to break with SDK 0.5, when the runtime took over buffer swapping. Suddenly there was a functional interaction between windowing framework code and VR SDK code, which meant there had to be a branch somewhere in the GLFW code (and the SDL2 code). Each branch is an artifact of some VR API or runtime difference of opinion, some different way of doing things.

This got ugly really fast. On top of that, I had a whole movable dashboard system, implemented with Hydra support, that was almost completely obviated by the ovrLayerType_Quad layer type. RiftRay has all the “floating quad in world space” stuff implemented and working in what I called the “PaneScene” abstraction, which was drawn to a larger render target (FBO) separately from the more expensive raymarched Shadertoy scenes. Brad Davis does the same in his book, and it allows text in floating windows to avoid the blurry graininess of downscaled rendering.

OVR SDK 0.6 gives us the floating window panes in space for free, complete with pose and size in world space - it’d be crazy not to use them. And I think OVR SDK 0.8 is supposed to be API-compatible with the upcoming 1.0 (if not binary-compatible - where did I hear this again?), so now was the time to get this working. The main source file is now only 600 lines long, with GLFW and OVR SDK code mixed together, and it’s so much nicer to be able to refer to global eye poses without passing pointers through encapsulation barriers.

At least the IScene interface is still intact, so all the time I never got around to spending on designing graphical scenes for VR wasn’t wasted. :)