Apart from the end deliverable itself, I’m also pleased to see my appreciation for the problem space grow. This stuff is hard, but my instincts feel as right as I hoped they would. Implementing an idea always reveals new things. Often those new things reveal that the idea doesn’t have traction, but sometimes (as in this case) they reveal that you should dig in deeper.
Opportunities for Improvement
Saying that this project is successful doesn’t mean it’s perfect by any means. The two areas I think can be improved are the code and the communication of the idea.
The Code
This is my first serious C# project, so I’m very likely writing things in a non-idiomatic C# way, resulting in code that’s harder for people to read and possibly in more end-user headaches (literally) due to slower frame rates.
More on the architecture side of things, I don’t quite understand the proper way to author code that is both extensible and easy for beginners to understand. As a result, there’s a lot of repetition that I suspect an experienced C# developer would be able to standardize. (If this is you, please contact me!)
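To make that concrete, here’s a minimal sketch of the kind of cleanup I imagine, assuming a Unity-style C# setup. The names here (FocusableBase, UpdateFocus, FocusRadius) are hypothetical, not this project’s actual API; the point is just that the repeated distance-and-state bookkeeping could live in one base class while each interaction overrides only what makes it unique:

```csharp
using UnityEngine;

// Hypothetical sketch: logic that currently gets copy-pasted across
// interaction scripts could be hoisted into one abstract base class.
public abstract class FocusableBase : MonoBehaviour {
    public float FocusRadius = 0.1f; // meters; illustrative default

    // Each concrete interaction only fills in its unique behavior...
    protected abstract void OnFocusGained();
    protected abstract void OnFocusLost();

    bool _hasFocus;

    // ...while the repeated proximity test and state flip live here, once.
    public void UpdateFocus(Vector3 cursorPosition) {
        bool inRange =
            Vector3.Distance(cursorPosition, transform.position) <= FocusRadius;
        if (inRange && !_hasFocus) { _hasFocus = true; OnFocusGained(); }
        else if (!inRange && _hasFocus) { _hasFocus = false; OnFocusLost(); }
    }
}
```

Whether a base class, an interface, or composition is the right tool here is exactly the kind of judgment call I’d love an experienced C# developer’s opinion on.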
Communication of the Idea
My main frustration at this point, though, isn’t the code. It’s that I’m having a difficult time getting at precisely what it is I want to express; I’m trying to articulate something that I feel in my body. The best way I’ve found to describe it so far is that human bodies seem to work very nicely with 3D Cartesian points. 3D points are mechanically reliable, emotionally charged (think of the tip of a knife, or the stamen of a flower), and, perhaps most importantly, totally kinesthetically / proprioceptively grok-able. I believe this concept is central to the future of VR IxD.
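To ground that a little: assuming a Unity-style C# setup, the claim reduces to the idea that a single Vector3 can carry an entire interaction. This is an illustrative sketch with hypothetical names (KnifeTip, TouchThreshold), not code from this project:

```csharp
using UnityEngine;

// Illustrative sketch of the "3D points are grok-able" idea: the whole
// interaction collapses to one Cartesian point (a knife tip, a fingertip)
// tested against another (a flower's stamen). A body already knows where
// its points are; the code only has to ask how far apart two of them sit.
public class KnifeTip : MonoBehaviour {
    public Transform TipPoint;           // the emotionally charged point itself
    public Transform TargetPoint;        // e.g., the stamen of a flower
    public float TouchThreshold = 0.01f; // 1 cm; illustrative

    void Update() {
        float d = Vector3.Distance(TipPoint.position, TargetPoint.position);
        if (d <= TouchThreshold) {
            Debug.Log("Touched!"); // placeholder for real haptic/visual feedback
        }
    }
}
```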
This project is perhaps an attempt at expressing this, but for now a lot of the details have been left for others to fill in, and, as I said before, I’m pretty intent on trying to fill them in with my potentially over-opinionated perspective… hopefully for the best.
Summary
Focal Point will now serve as a base camp for helping me whack away at the bigger question: what, precisely, are the rules that govern joyful kinesthetic interactivity?
If you’re interested in jumping in on the fun, download and run the demos and send me your feedback. This is open source, so if you have ideas and would like to add them, reach out or just send me a pull request on the GitHub repo.