Why I built "Focal Point VR" -- and other related thoughts


If you know me personally, you may or may not know that, for better or for worse, I have some very strong and deeply rooted opinions on exactly how I think VR ought to be.

Some of this is based on my dance and theater background (I like to think that my 20 years of illusion-style dance experience gives me a head start on virtual object manipulation) -- but most of my opinions are totally unsubstantiated.  The more I dig into my opinions, the more I realize that they are instincts at best, and at worst, unproven beliefs and the feelings that come along with them.  I guess being a dancer helps me feel like a prima donna now and again too : )

But I think this unsureness is okay.  The more I dig into VR, the more I see that nobody really knows how it’ll shake out.  How could we?  The field is so young, and almost all the ideas are still unproven.  Unsubstantiated opinions do have a place at the table -- not as total solutions, but as sparks of inspiration to build new things...

The Impulse to Build

The good part about having an admittedly high-and-mighty point of view is that it produces a lot of useful creative tension.  If it’s blatantly obvious to me how it should work, it should be easy to prove it, right?  Always easier said than done.

The purpose of this project, then, was to build something materially useful -- to ignite something new in others so that they’ll be inspired to build cool things.  If it resonates, maybe that’s proof that these ideas do have traction.

The question I wanted to answer was: what sort of framework is necessary to create VR experiences that incorporate joyful human movement?

The Build

So I built Focal Point.  The work mainly involved bringing the interaction patterns from Spatial Computing into the HTC Vive on the Unity platform.

All in all, I think it’s a pretty solid first crack at the problem.  Here’s the promo clip:

The work is emotionally charged and feels exuberant in a way that I feel separates it from other VR content.  While it is obviously nowhere near as polished, the core mechanics of object manipulation and movement feel really great.  The gestures feel physically expressive and never awkward.  For a more in-depth view of the mechanics of movement, check out the Focal Point VR Demo Instructions video:

Apart from the deliverable itself, I’m also pleased to see my appreciation for the problem set grow.  This stuff is hard, but my instincts do feel as right as I hoped they would.  Implementing an idea always reveals new things.  Often those revelations show that the idea doesn’t have traction, but sometimes -- as in this case -- they show that you should dig in deeper.

Opportunities for Improvement

Calling this project successful doesn’t mean it’s perfect by any means.  The two areas that I think can be improved are the code and the communication of the idea.

The Code

This is my first serious C# project, so I’m very likely writing things in a non-idiomatic way, resulting in code that’s harder for people to read -- and possibly more end-user headaches (literally) due to slower frame rates.

More on the architecture side of things, I don’t quite understand the proper way to author code that is both extensible and easy for beginners to understand.  As such, there’s a lot of repetition where I feel an experienced C# developer would be able to consolidate some of this stuff.  (If this is you, please contact me!)

Communication of the Idea

My main frustration at this point, though, isn’t the code.  It’s that I’m having a difficult time getting at precisely what it is I want to express -- articulating something that I feel in my body.  The best way I’ve found to describe it so far is that human bodies seem to work very nicely with 3D Cartesian points.  3D points are mechanically reliable, emotionally charged (think tip of a knife, stamen of a flower), and, perhaps most importantly, totally kinesthetically / proprioceptively grok-able.  I believe this concept to be central to the future of VR IXD.
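To make that a bit more concrete, here’s a minimal, hypothetical Unity C# sketch (not Focal Point’s actual code -- the names `PointGrabber`, `focalPoint`, and `Grab`/`Release` are my own for illustration) of treating a single tracked 3D point as the interaction primitive: an object is grabbed by caching its pose relative to the point, then follows that point rigidly.

```csharp
using UnityEngine;

// Hypothetical sketch: one tracked 3D point (e.g. a controller tip)
// drives all object manipulation.
public class PointGrabber : MonoBehaviour
{
    public Transform focalPoint;    // the tracked point, e.g. a controller tip
    private Transform grabbed;      // the currently held object, if any
    private Vector3 localOffset;    // grabbed object's position in the point's frame
    private Quaternion localRotation;

    public void Grab(Transform target)
    {
        grabbed = target;
        // Cache the object's pose relative to the focal point at grab time.
        localOffset = focalPoint.InverseTransformPoint(target.position);
        localRotation = Quaternion.Inverse(focalPoint.rotation) * target.rotation;
    }

    public void Release()
    {
        grabbed = null;
    }

    void Update()
    {
        if (grabbed == null) return;
        // The object follows the point rigidly, preserving the grab-time offset,
        // so the mapping from body movement to object movement stays predictable.
        grabbed.position = focalPoint.TransformPoint(localOffset);
        grabbed.rotation = focalPoint.rotation * localRotation;
    }
}
```

The point of the sketch is only that a single 3D point is enough to anchor a mechanically reliable, proprioceptively legible manipulation scheme.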

This project is perhaps an attempt at expressing this, but for now a lot of the details have been left for others to fill in, and, as I said before, I’m pretty intent on trying to fill them in with my potentially over-opinionated perspective… hopefully for the best.


Focal Point will now serve as a base camp for whacking away at the bigger question: what, precisely, are the rules that govern joyful kinesthetic interactivity?

If you’re interested in jumping in on the fun, download and run the demos and send me your feedback.  This is open source, so if you have ideas and would like to add them as well, reach out or just send me a pull request on the GitHub repo.