Expressing the hands through Vive controllers

Here's an idea I sketched out a couple of weeks ago for an interview.  Thought it was cool so I wanted to share.  Just a way to express one's hands in VR with the Vive controllers.

The first image is the main control scheme.  The second image shows the wide range of expressivity it naturally affords.

Why is this significant?  Well, hand shapes aren't super functional except perhaps in communication, and the interview was for a VR communication product.  As a dancer, though, I would love to try out such an implementation.

Oh, yeah, and maybe like half of that interview was in VR, where I began to fully appreciate how amazing VR is going to be for dance instruction.  I was able to illustrate and sketch out particular configurations, which was really nice.  Totally bonkers and really cool.

Learning Unity, one sketch at a time

There's some anecdote about art where a pottery teacher let the students choose to be graded either by how many pots they created or by how good their best pot was.  The students who chose to be graded on quantity ended up making better pots than the ones who focused on making just a few good ones.

Not sure if that's a true story or just an urban legend, but there's definitely something to be said about the value of just doing.  As a craftsman, it's not about building the Taj Mahal in the first go, it's about learning flow and learning process.  An important part of process is rounding off a project to completion -- getting a sense of when you've completed what you've set out to do.

New Media is weird in that your tool set evolves at the pace at which tech evolves, so you have to round off in new ways over and over.

Sketching definitely helps with that.  You get to test new ideas and learn new approaches.  I dunno, it's like the difference between learning a language from a book and just getting drunk with the locals and just doing your best to keep up.

Anyhow, I just finished 30 days of Unity sketches.  You can download them off of Github here.  Here are a few screens of some of my favorite sketches:

fractal.gif
tile floor.gif

And here's a gif from the last batch of 30 sketches, using Processing.  That Github repo is here.

Why I built "Focal Point VR" -- and other related thoughts

Context

If you know me personally, you may or may not know that, for better or for worse, I have some very strong and deeply rooted opinions on exactly how I think VR ought to be.

Some of this is based on my dance and theater background (I like to think that my 20 years of illusion-style dance experience gives me a head start on virtual object manipulation) -- but most of my opinions are totally unsubstantiated.  The more I dig into my opinions, the more I realize that they are instincts at best, and at worst, unproven beliefs and the feelings that come along with them.  I guess being a dancer helps me feel like a prima donna now and again too : )

But I think this unsureness is okay.  The more I dig into VR, the more I see that nobody really knows how it’ll shake out.  How could we?  The field is so young, and almost all the ideas are still unproven.  Unsubstantiated opinions do have a place at the table -- not as total solutions, but as sparks of inspiration to build new things...

The Impulse to Build

The good part is that having an admittedly high-and-mighty point of view produces a lot of useful creative tension.  If it’s blatantly obvious to me how it should work, it should be easy to prove it, right?  Always easier said than done.

The purpose of this project, then, was to build something materially useful for others -- to ignite something new in others so that they will be inspired to build cool things.  If this resonates with others, maybe that’s proof that these ideas do have traction.

The question I wanted to answer was: what sort of framework is necessary to create VR experiences that incorporate joyful human movement?

The Build

So I built Focal Point.  The work mainly involved bringing the interaction patterns from Spatial Computing into the HTC Vive on the Unity platform.

All in all, I think it’s a pretty solid first crack at the problem.  Here’s the promo clip:

The work is emotionally charged and feels exuberant in a way that I feel separates it from other VR content.  While it is obviously not nearly as polished as commercial VR content, the core mechanics of object manipulation and movement feel really great.  The gestures feel physically expressive and never awkward.  For a more in-depth view of the mechanics of movement, check the Focal Point VR Demo Instructions video:

Apart from the deliverable itself, I’m also pleased to see my appreciation for the problem set grow.  This stuff is hard, but my instincts do feel as right as I hoped they would.  Implementing an idea always reveals new things.  Often those new things reveal that the idea doesn't have traction, but sometimes (as in this case), they reveal that you should dig in deeper.

Opportunities for Improvement

Saying that this project is successful doesn’t mean it’s perfect by any means.  The two areas that I think can be improved are the code and the communication of the idea.

The Code

This is my first serious C# project, so I’m very likely coding things in a non-C# way, resulting in code that’s harder for people to read and possibly more end-user headaches (literally) due to slower frame rates.

More on the architecture side of things, I don’t quite understand the proper way to author code that is both extensible and easy for beginners to understand.  As such, there’s a lot of repetition where I feel an experienced C# developer would be able to standardize some of this stuff.  (If this is you, please contact me!)

Communication of the Idea

My main frustration at this point, though, isn't the code.  It's that I’m having a difficult time articulating something that I feel in my body.  The best way I’ve found to describe it so far is that human bodies seem to work very nicely with 3d cartesian points.  3d points are mechanically reliable, emotionally charged (think tip of a knife, stamen of a flower), and, perhaps most importantly, totally kinesthetically / proprioceptively grok-able.  I believe this concept is central to the future of VR IXD.

This project is perhaps an attempt at expressing this, but for now many of the details have been left for others to fill in, and, as I said before, I’m pretty intent on trying to fill them in with my potentially over-opinionated perspective… hopefully for the best.

Summary

Focal Point will now serve as a base camp for whacking away at the bigger question: what, precisely, are the rules that govern joyful kinesthetic interactivity?

If you’re interested in jumping in on the fun, download and run the demos and send me your feedback.  This is open source, so if you have ideas and would like to add them as well, reach out or just send me a pull request on the Github repo.

 

My day job these last two years...

This post has nothing to do with specific creative insights and everything to do with my journey as a creative individual.

About 2.5 years ago, the art grind was really taking its toll on me.  I had an unhealthy relationship with it.  I would binge on an awesome project cuz it was cool, and then go back to supporting my art habit with random day jobs that were horrifyingly boring.

The main problem, as I saw it, was that any attempt to introduce the money component to my art felt wrong.  I was creating art only to resolve my own curiosity, and the commercial aspect made me feel like I was commodifying my self-image.

Don't get me wrong, there are plenty of artists out there who I respect who can own that in a responsible way.  Properly grooming your brand and defining your value prop as an identity, I feel, is the only responsible way to approach the field.  It was slowly dawning on me, then, that I didn't want to be a capital-A Artist, because that work of re-branding my outward image in that manner every time I pursued a new opportunity didn't suit me.  I didn't want to negotiate the value of my Albert-branded services.  I took the whole practice very personally, unnecessarily, and it was... well, unhealthy.

So, to try to address this situation, I decided to make two rules for myself:

  1. No art work unless its impulse is driven towards solving other people's problems.  The idea here was that if I was helping others from the get-go, I would have a justified reason to position myself as a brand.
  2. My day job cannot be mindless drone work... it must at the very least teach me something about how I fit into the world.

I soon learned that:

  1. I was the type of artist who only did things because of my curiosity.  No judgement here, but turns out I never had an impulse to create specifically for the sake of others.
  2. Working in an awesome and functional business is a phenomenal way to learn how to efficiently convert labor into helping others.

I took a job at Movable Ink, which is, hands down, the most interesting player in arguably the most boring space on the web: Email.

The tech is blazingly cool.  For you web nerds out there, in a nutshell, Movable Ink creates image assets, and the ".png" doesn't just sit on a server.  It is a rasterized endpoint of a web app that can change every time it's requested.  Once you grok this, there's an endless rabbit hole of ways this can be implemented to make cool email.
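To make that concrete, here's a toy sketch of the idea in Python -- emphatically not Movable Ink's actual stack, and all the names are made up.  It hand-assembles a tiny 1x1 PNG whose pixel color can differ on every request, which is the whole trick: the image bytes are rendered at fetch time, not read off disk.

```python
import struct
import zlib

def _chunk(tag: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: big-endian length, tag, data, CRC-32."""
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data) & 0xFFFFFFFF))

def render_pixel(r: int, g: int, b: int) -> bytes:
    """Return a complete 1x1 truecolor PNG with the given color."""
    signature = b"\x89PNG\r\n\x1a\n"
    # width=1, height=1, bit depth=8, color type=2 (RGB), then defaults
    ihdr = _chunk(b"IHDR", struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0))
    # one scanline: filter byte 0, then a single RGB pixel
    idat = _chunk(b"IDAT", zlib.compress(bytes([0, r, g, b])))
    return signature + ihdr + idat + _chunk(b"IEND", b"")

def handle_request(query: dict) -> bytes:
    """Stand-in for the image endpoint: the PNG is computed per request,
    so the same URL can yield a different image every time it's opened."""
    r, g, b = query.get("color", (0, 0, 0))
    return render_pixel(r, g, b)
```

In a real deployment this would sit behind an HTTP route serving `image/png`; swap the color logic for anything request-dependent (time of day, location, inventory) and every email open renders fresh.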

Being around this cool tech and the talented people who were able to architect and support it was an amazing opportunity for me to sharpen my development practices.

But about as equally important to my growth was the fact that my position put me in close proximity to the business side of things.  I got to see and eventually assimilate into the practices / attitudes / culture needed for persistent value creation.  I got to see the top-to-bottom funnel in the most functional, streamlined way I could have asked for, and I got to participate in all aspects of the servicing and implementation side of that.

I got a hands-on glimpse of how my work was affecting other people's lives in measurable ways.  Yes, it was just in the boring corporate world of email, but I got to see, proof-positive, real ways that my actions were impacting the lives of others, and I got the hands-on experience to run at a fast pace towards value-centric goals.


And in the blink of an eye, 2.5 years passed, and the company grew from 30 to 100 employees.  I found that all I had to do was show up and hire around me, and eventually I found myself managing a team of three.  If I just kept doing my job, I could continue climbing the corporate ladder by virtue of us needing to hire more people.

But that's not why I joined in the first place.  That's someone else's dream job, not mine.  I joined because I wanted a day job that could teach me business fundamentals.  And in this regard, I like to think of Movable Ink as my on-site 2.5 year MBA program.

In light of this, Wednesday was my last day.


And so now, I'm reacquainting myself with my creative side in a fresh new light, and it feels incredible.  I never would have thought that a day job could be so inspiring to my creativity, but it was.  Now I feel realigned.  I'm asking the right questions, getting to the right people faster, cutting out lots of operational fat.

I'm also flirting a bit with how to manage self-image in a way that would have made me very uncomfortable in the past.  I feel like in the past I used my self-image as a way to validate self-worth, but from this vantage it feels more like a tool one can wield to help simply get stuff done.

So, yeah... what an amazing day job that was.  Now, I'm off to build stuff and set myself up to find the next, hopefully equally amazing one.

Preview of "Focal Point" -- a design framework for Vive

Here's some of the results of my work so far...

 

Obviously, a lot of my process here is based on stuff that I built back w/ Spatial Computing.  So far, the process has been all about boiling down what was present in Spatial Computing and distilling it into its fundamentals.  Fortunately, the fundamentals are really simple underneath it all.  My task now is to rearrange those fundamentals so that they're easy for new VR developers to grok ASAP.

The good news is, I have lots and lots of experience with education around sophisticated mental models of movement in 3d space, so I'm not too worried about not being able to boil, distill, and redistribute in a meaningful manner.

Anyhow, jump on the mailing list if you want me to let you know when the Unity asset is live.


Sorta tangentially, I just wanted to add a note of thanks to some of the cool folks I've had a chance to work with at MRL:

MRL VR + Spatial Computing, Day 2

 

The path to component-ization is becoming clearer.  I can see this becoming a Unity asset to kickstart Vive interactivity.  Just snap some components onto a few objects and you'll be able to plug into a simple interaction pattern.

Some observations:

I'm liking this "focal point" concept more and more as time goes on.  It's a powerful and simple idea that's easy to work with both as a developer and user.  Conceptually rock-solid (so far), and the more I lean into it, the more I discover obvious yet still innovative solutions.

The ability to glide an object around against the normal of your focal point was totally accidental in the above code.  This effect will behave differently on non-cube geometry.  Even so, though they're not coded out yet, the principles behind how this interaction is supposed to work are pretty clear.
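For the curious, the math behind that glide is just vector projection: subtract out the component of the hand's motion that runs along the surface normal, and what's left is the slide across the surface.  A hypothetical sketch, with plain tuples standing in for Unity's Vector3:

```python
from typing import Tuple

Vec3 = Tuple[float, float, float]

def glide(motion: Vec3, normal: Vec3) -> Vec3:
    """Project `motion` onto the surface plane by removing its component
    along the (unit-length) surface normal: v - (v . n) * n."""
    d = sum(m * n for m, n in zip(motion, normal))
    return tuple(m - d * n for m, n in zip(motion, normal))
```

For example, a hand motion of (1, 2, 0) against an upward-facing normal (0, 1, 0) glides as (1, 0, 0) -- the push into the surface is absorbed, and only the tangential slide remains.  On non-cube geometry the normal changes as you move, which is exactly why the effect behaves differently there.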

Conceptually, whipping the world around the user (instead of pretending to move the body around a world), is much less taxing on the proprioceptors.  No perceptual dissonance, which is nice.

Environment navigation with this method feels way more natural than the "transportation" pattern of navigation.  After all, in the real world, we always move through spaces by translation, not teleportation.  This method is hardly disorienting.
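The mechanic itself is tiny: while a grab is held, translate the whole scene by the hand's frame-to-frame motion, so the grabbed focal point stays pinned under the hand.  A minimal sketch with illustrative names (tuples for vectors):

```python
from typing import Tuple

Vec3 = Tuple[float, float, float]

def drag_world(world_offset: Vec3, hand_prev: Vec3, hand_now: Vec3) -> Vec3:
    """While gripping, move the world root with the hand each frame, so
    pulling your hand toward your chest pulls the environment toward you."""
    return tuple(w + (n - p)
                 for w, n, p in zip(world_offset, hand_now, hand_prev))
```

Called once per frame while the grip is held, this accumulates the hand's motion into the world's offset -- the user perceives the world whipping around them rather than their body teleporting through it.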

I'm running into issues with rotation against two focal points.  I'm just doing a Unity Quaternion.LookRotation, which produces some bad results.  The problem is that two points leave an unwanted rotational degree of freedom.  I plan on working around this by planting multiple focal points per controller.

Thanks again to MRL, and also to Dave Tennent for swinging by (and helping w/ a much-needed refactor).

I'm pretty new to Unity with regards to source control.  Next time I'm in the lab I'll probably make a proper Github repo.  For now, though, here's the main portion of the code.