Sketch of a VR Rhythm Game I'm working on...

Back in August I started working on a VR Rhythm Game.  Then life got complicated, so I abandoned it for a while.  Then I did some more work on it and life got complicated again w/ a cross-country move...

Anyhow -- I wanted to share the work and some discoveries.  Once this demo is actually complete, I'll announce it on my mailing list.  I hope to get that out as soon as I finish unpacking in my new place in Seattle.  Here's some footage of an earlier version of the demo:

So, as I said, this is an early version.  It uses music that I have absolutely no rights to, but I fell for this track so hard in high school that I sorta had to use it : )

One of the challenges with designing this was how to signal to the user exactly where and when they need to catch the juggling pin.  Turns out, some of our depth cues in VR are totally borked.  While I was playtesting this, some people were totally unable to make any sense of virtual clubs flying at their face, while others were immediately able to grok the experience.

My suspicion is that different people rely more heavily on different depth cues.  Some rely more heavily on binocular vision, while others rely more on comparing objects to their contexts.  I built a handful of little things to try to accommodate as many depth cues as possible, many of which aren't present in the video above.  Unfortunately, some depth cues cannot be triggered by our current batch of tech, so I feel totally fine completely ignoring them for now...

In any case, here are a few things that I found helpful in creating an experience that sets the player up for the best catching experience possible:

Use Objects Designed for Catching

My very first pass was baseball-sized balls.  Baseballs, I feel, are actually great throwing objects, or even great objects to swing at, but not really great catching objects.  They're hard to see, and in practice, they require special equipment to catch.  Playing with baseball-sized balls wasn't fun because they were difficult to see and equally difficult to catch (even when the collider was unnaturally large).

I then went to football- / basketball-sized, which felt way better, but felt like they required too much of my body -- more than I felt was actually present inside the virtual environment.  I feel objects at this scale almost require one's center of mass to be involved, like, just one step away from what's required of a medicine ball.

Of course, when you catch a football or basketball, you're usually doing it with two hands, and I think I realized I wanted a thing to catch with one hand, which led me to juggling pins.

What's really great about juggling pins is that not only do they afford one-handed catching, but they are also designed to communicate the physics of a throw, both to the performer and the audience.  This ability to broadcast data through physics felt like a beautiful fit for what I was going for.

Model upon Evocative Experiences

Another plus of juggling pins is that there was a clear practice that I could draw from, and that this practice was all about the joy of catching things.  It took me a bit to realize that I should model throw trajectories and spins upon what you might experience when, IRL, someone passes you a real juggling pin.
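To make the "model on real throws" idea concrete, here's a minimal sketch of how a thrown pin's arc could be computed so it arrives at the player's hand at a chosen catch time.  This is purely illustrative -- the function names and the simple ballistic model are my assumptions, not the game's actual code.

```python
# Sketch: solve for a launch velocity that lands a pin at the player's
# hand at a chosen catch time, under simple ballistic motion (y-up).
# All names here are hypothetical illustrations, not the game's code.

GRAVITY = (0.0, -9.81, 0.0)  # m/s^2

def launch_velocity(start, target, flight_time, gravity=GRAVITY):
    """Initial velocity so that start + v*t + 0.5*g*t^2 == target."""
    t = flight_time
    return tuple(
        (target[i] - start[i] - 0.5 * gravity[i] * t * t) / t
        for i in range(3)
    )

def position_at(start, velocity, t, gravity=GRAVITY):
    """Position along the arc at time t (useful for previewing the throw)."""
    return tuple(
        start[i] + velocity[i] * t + 0.5 * gravity[i] * t * t
        for i in range(3)
    )
```

For example, a pin thrown from a virtual performer 4 m away, timed to land in the player's hand 1.2 s later, gets a gentle upward lob -- which is exactly the readable, IRL-style arc that makes juggling passes feel catchable.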

Provide Visual / Physical Context

In the very beginning (and in the POC video above), I focused exclusively on the catching mechanic and ignored the rest of the game world... figuring that would be a thing to add later when working on theme or story or something.

I soon realized after testing that the lack of visual context was making the pins harder to catch -- a little harder for most people, but virtually impossible for others.  One user reported that the flat blue skybox made everywhere they looked feel like there was a wall right in front of their face.

So I spent a bit more time working on the environment, peppering the user's periphery with other non-distracting geometry.  I think it did two things: (1) proprioceptively anchored them into a room and asserted their physical presence, making it more meaningful that juggling pins were being sent their way and (2) provided objects in the distance to help them contextualize the incoming trajectories of the juggling pins, giving them more information to help them catch the pins.

Emit Light from Hands to Augment Presence

To be completely honest, I stumbled upon this trick and only have guesses as to why it works so damned well, so I'll just blab a bit on the topic.

So, I discovered that when comparing the Vive Controllers with these 3d objects, something just felt disconnected.  Sure, conventional world lights bounce off the controller and digital 3d objects in the same way, but somehow it's still really easy for my brain to consider the controller and the digital 3d objects as belonging to separate worlds.

IOW, the controller is human-driven, like a computer mouse, while the digital object somehow belongs to a computer.  I think my brain just compartmentalizes them separately, and it just doesn't accommodate as graceful a transition between the two states as I would like.

HOWEVER!

If you place a point light on the user's hand... uhm, it feels totally magical.  Like, suddenly, these digital objects are painfully compelling.  These objects begin to feel more like extensions of the body and less like pure, flat data.

I suspect that hand-driven dynamic lighting triggers something in our perceptual systems that helps us model 3d spaces in relation to our bodies.  The crucial part of this is the implication of the body, I believe -- because things only feel physically available once the user sees them in relation to their body, not in relation to a 3d model.

... anyhow, this is something I'll be turning over in my head a while...


So, that's about it for now.  As I said above, I'm planning on sharing my work as a Demo once it is actually ready, and will send updates on this blog and on my mailing list.

Tis all.  Payce!

 

 

Two Books I ALWAYS Suggest to VR Creatives

VR is a 3d medium.  That doesn't mean that it's harder or better or worse than any 2d medium, it's just different.

Unfortunately, practically all of the VR / UX community comes from 2d backgrounds.  Whether that be video games, web / app dev, cinema, cg, etc., these fields are concerned with engaging users across 2d media.

I've noticed myself frequently suggesting two books to help people flip their brains to start thinking three-dimensionally.  I figured I'd share them on my blog as well.

 

The Body has a Mind of its Own

This book is so perfect for VR / AR work.  It lays the groundwork of how people perceive their bodies, and what meaning they can extract from that.  It talks about how we incorporate tools as a part of our body image, as well as explains how some of these systems can be fooled and why these tricks work.

The reason this is always my first suggestion is because the inclusion of the body is precisely what distinguishes VR / AR from any other digital media.  VR / AR fuses the body with digital 3d spaces, and what's so exciting about the concept of immersion is the idea that the body can be fooled to experience virtual 3d things in substantive ways, that we can draw physical meaning from these experiences, and that we can extend our impulses and intentions through our bodies into the digital realm.

So, yeah, anyway, please read it.  It's totally totally amazing.

 

101 Things I Learned in Architecture School

One of the painful things about the third dimension is that scale is fluid.  IOW, in 2d, your canvas is confined by a frame.  This isn't an afterthought -- the frame is central to 2d compositional theory.  In 3d compositional theory, however, you want to design spaces that provide many scales through which to experience them.

Architecture is one of those fields that is really complicated and difficult precisely for this reason.  Architects are required to design structures that account for so many weird things: bodies, governmental agencies, cars, gazes, celestial bodies, electricity, rainfall.  Architects enjoy wrestling with competing frames and massaging them all into a cohesive, singular design.

101 Things I Learned in Architecture School is a condensed overview of a field of study that takes decades to master.  The "101 Things" feel like they're the foundational truths upon which all other modern architectural theory rests.  Full disclosure, I know very little actual architectural theory.  I only say these things because these nuggets of wisdom have been hugely helpful for me over the years in considering the organization of 3d spaces and critical 3d thinking.

 

One more Suggestion: pick up a 3d practice

My non-book advice is to practice a 3d craft... something that involves no screens at all (dance, pottery, interior design, etc).  Self-aware practice is way more enlightening than whatever it is a book can teach you.  I'll prolly write more about why I think this is so important in another post at some time, but for now, get those two books and just start practicing something new.

Go (back) West, Young Man...

I secured a 6-mo contract with Oculus Research as a UX designer!

So I'll be moving to the Seattle area in less than a month for the gig.  Well, actually, not exclusively for the gig.  A move out West has been in the cards for a while for my family, but this was the last piece of the puzzle to get us all out there.  Gonna have to hustle as soon as I land to meet all the crazy interesting VR talent out there so that I feel safe when this contract expires.

Apart from the absurd logistics associated with moving a family cross-country, I'm dealing with a lot of mixed emotions.  I feel so fortunate to have met and worked alongside so many amazing talented people, and in many ways so foolish to step away from these relationships.  In terms of personal growth -- I don't know how much of the last 15 years of my life can be directly associated to NYC itself, but I know I'll sure as hell miss NYC.  Not sure what I'll miss -- I bet the most poignant things will be the things I didn't even realize I had til they're gone...

On the flip side, of course, is an amazing opportunity at an amazing company in an industry that fits my creative passions.  I was interviewed by a handful of really talented people at Oculus, all of whom I'm very excited to work with.  Also, I know that the city is swarming with VR talent, so I'm positive I'll have the fortune of working with other talented VR folks as well.

So, uh, yeah, that's it for now.  I'll of course be under NDA on the stuff I work on while I'm there, but still hope to find ways to stay engaged w/ the community and to contribute, either via this blog or otherwise.

I also have intentions to further develop a community dedicated to the crossover between VR / dance / theater... so stay tuned, especially if you're in the Seattle area or if you and I have worked in some sort of conventional 3d capacity...

Looking Glass Factory

Just dropping a quick update.  These last few months I've been contracting with a company making commercially available volumetric displays: Looking Glass Factory.  The tech is super cool and so much fun to work with, and the people are awesome and come from a variety of crazy backgrounds and always have interesting perspectives.

The latest thing I made for them I'm super proud of:

More info about this build / construction at this article here:

https://blog.lookingglassfactory.com/revealing-the-virtual-without-a-vr-headset-c5528de4469d#.s1hjftj91

If you're in the NYC area, they do bi-weekly open hours, where anybody can just stop by and chill out -- look at Volume, code on it, or just drink the beer and chat.

https://www.meetup.com/volume/

And lastly if you want to you can jump on the public Slack group:

http://slack.lookingglassfactory.com/

 

#WeAreDanceFace: Intentionally "Stupid" VR

A few weeks back, Matt and I debuted a VR rhythm performance game at Come Out and Play.  Here's an Instagram loop that was posted by a participant (click to play the loop):

Here's a shot of a typical audience:

 

Here's the (admittedly prompted) cheering that performers receive at the end of the routine:

 

 

Backstory: A Mission to Make Mobile VR Social

Mobile VR is a socially awkward technology.

If you're ever in a group of people and one person pulls out a Google Cardboard, brace for an awkward social situation.  Based on my experience, what'll happen is:

  • This novel artifact is collectively marveled at
  • It goes on one person's face, teleporting them to an alternate universe
  • They don't know how self-aware they should be: should they endeavor to be totally immersed?  Should they report back?
  • Onlookers feel awkwardly voyeuristic -- should they vicariously live through this person's experience?  Should they shut up and wait their turn?
  • People crack jokes to try to resolve the tension, but it only makes the person in VR more self-aware

The whole exercise is self-defeating.  Immersion seems impossible to achieve when you're tethered to a collective objective eye.

Is it possible to create a mobile VR experience that's more socially compatible?

 

A Sidestep: Heads Up!

In trying to crack this nut, my brain jumped to Heads Up! -- a simple (non VR) mobile game where you "guess the word on the card that's on your head from your friends' clues before the timer runs out!"  Here's a clip of the gameplay:

 

What's so useful about Heads Up! is that it relies heavily on information blindness.  Participants have to contend with the fact that certain people are privy to certain information.  This knowledge gap is bridged via performance, creating a cohesive and shared social experience.

This type of overt performance seemed like the right move for the problem of making mobile VR more social.  My theory was, if you gave specific, performative roles to the VR user and the people watching, the awkwardness would wash away.


Performing VR

But can VR be a performance?  I mean, when you're in VR you can't make eye contact with the people watching you, never mind the fact that you're supposedly in an entirely alternate reality.

Well, while performing almost always involves eye contact, the primary function of a performance is for one person to communicate an experience to another.  In the case of mobile VR, the only thing really worth performing / sharing is how a user engages with the tech.  Mobile VR only has rotational tracking, meaning that a user's agency is confined to face orientation.  (I may write a blog post later as to why I prefer the term "face" over "head" when it comes to VR tracking...)

So, given that we're working with performance and body shape, my mind jumped to some sort of face-controlled VR rhythm game.  Everybody would hear a song, and the performer would execute a sequence of face orientations to the beat, with the audience watching.  Hopefully, this would set clear enough roles to overcome social VR awkwardness.
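The core scoring check for a game like this can be sketched in a few lines: a "hit" is when the player's face is pointed close enough to the cued direction, close enough to the beat.  The names and thresholds below are hypothetical assumptions of mine, not what Matt actually built.

```python
# Sketch: scoring a "face orientation" hit in a rhythm game like the
# one described above. Names and thresholds are hypothetical.
import math

def angle_between(a, b):
    """Angle in degrees between two 3d direction vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    cos = max(-1.0, min(1.0, dot / (na * nb)))
    return math.degrees(math.acos(cos))

def is_hit(face_forward, target_dir, now, beat_time,
           max_angle=20.0, window=0.25):
    """True if the player faces the cue within the timing window."""
    on_beat = abs(now - beat_time) <= window
    on_target = angle_between(face_forward, target_dir) <= max_angle
    return on_beat and on_target
```

Since rotational tracking is all mobile VR gives you, `face_forward` is literally the headset's forward vector -- the player's entire agency funnels through that one direction.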

And lastly, perhaps to make it even more social, what if multiple VR headsets could network together?  This could create a choreographed dance among multiple participants.

 

Prototyping / Building

So one day I casually mention this sketch of an idea to Matt over lunch.  After spitballing some more thoughts on it, the conversation moves on and I quickly forget about it.  A few weeks later we're having lunch again and Matt whips out a VR headset and tells me he's built a prototype.

I put it on and started the demo.  I tried to forget that I was in a crowded diner with a hunk of plastic on my face, loudly blaring "Poison" by Bell Biv DeVoe.  As I rocked my head around, it felt great.  We decided at this point that this thing had legs and we should continue.  It was just so weird and fun.  It was just enough directed activity to keep the user busy, but porous enough to feel fundamentally social.

Shortly after this, my life became crazy busy with other things, but Matt continued to crush the development, doing pretty much the entire build.  The hardest part, it turned out, was getting multiple devices to sync at exactly the same time stamp, but eventually we found a workable solution.
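For the curious: one classic way to attack this kind of sync problem is the NTP-style handshake, where each device estimates its clock offset from a shared server and then schedules the song start in server time.  This is just one plausible approach sketched under my own assumptions -- not necessarily what Matt actually shipped.

```python
# Sketch: NTP-style clock offset estimation between a client headset
# and a sync server. One plausible approach, not the game's actual code.

def clock_offset(t_send, t_server_recv, t_server_send, t_recv):
    """Estimate how far the server clock is ahead of the client clock.

    t_send / t_recv are client timestamps; the two server timestamps
    are taken when the request arrives and when the reply leaves.
    """
    return ((t_server_recv - t_send) + (t_server_send - t_recv)) / 2.0

def round_trip_delay(t_send, t_server_recv, t_server_send, t_recv):
    """Network round-trip time, excluding server processing time."""
    return (t_recv - t_send) - (t_server_send - t_server_recv)
```

Each headset can then convert a "start the song at server time T" message into local time as `T - offset`, so every device drops the beat on the same instant even if their clocks disagree.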

We aimed to debut the game at Come Out and Play, and started play testing at the Game Center where he works.  Here's an early play test:

 

After seeing it run with networked performers, we felt even better.  It validated that the game not only felt good from the performer's perspective, but was also very entertaining to watch.

 

Celebrating VR Stupidity

One unexpected thing we discovered while testing was that in addition to feeling great, it also felt totally stupid.  Perhaps it felt great because it felt stupid.  It felt stupid in a good way, like Old Spice commercials.

So when seeking play testers, we'd sell the game as an "intentionally stupid VR experience."  This wasn't to hedge criticism.  It was because we wanted to share a discovery we made about VR that was indeed totally stupid and awesome at the same time.

I mean, if you look at the mechanics of the game, it's a perfect crockpot of stupid.  First off, it's VR.  I mean, let's face it: VR on its own terms looks stupid.  You put your face in a brick of plastic and enter a suspended state of stupor.  You're so wrapped up in your own magical experience that you lose touch with reality.  Back in meatspace, you're completely unable to respond to things that are obvious to everybody else in the room.  VR = Textbook Stupid.

Take that, and add dancing in public (while unaware of how public you are), and you have the level of stupid that #WeAreDanceFace can provide.


A Note about Labels and Counterculture...

Celebrating the notion of being labeled "stupid" isn't something that's unique to this project or even VR.  I'd argue that all cultural movements have varying degrees of appearing "stupid" to those outside the culture.

For example, if you look at hippies -- here's a culture that, from the conventional perspective of their era, valued sexual deviance and drug abuse over owning up to personal responsibility.  They were a group of unwashed kids who lost touch with reality.

The inability of conventional folks to understand the new culture's value system is celebrated by these countercultures.  If countercultural actors feel confident in their value system, it only makes sense that they'd want to play up the boogey-man appearances, as if to say: "Screw your labels, we all agree that you simply don't and won't get me, and honestly, that's not my problem anymore."


Designing for Counterculture

I bring this up because #WeAreDanceFace takes the form of a countercultural statement.  Instead of treating the we-look-stupid issue as a VR thing we'll someday outgrow, this project directly addresses it by declaring: "This grotesque face-appendage of plastic is totally awesome.  So is my dancing and so is this ridiculous 90s song that we're piping in from an alternate reality.  Eat me."

To sum it up, our countercultural statement soon became the design thesis for the project: Celebrate VR Stupidity.  

The name "#WeAreDanceFace" encapsulates this assertive stupidness.  It's bombastic and self-involved.  It's also social, declarative, and performative (something a band yells at a performance), all of which gets further digitally amplified with a gratingly annoying hashtag.

The hashtag in the name prompted us to get a Twitter handle, which we used during the event to publish animated gifs of the performances.  I tried my best to accompany each gif with equally bombastic and stupid text (which, by the way, was an exhausting exercise for someone as typically chill as me).

This thesis of celebrating VR stupidity also provided direction on the UX at the event.  While training the on-deck performers, the line that consistently got the most laughs was "Remember: You probably look cooler than you feel".  This instruction partially points to a game mechanic (your choreography may feel boring but only because you can't see the group as a whole), but it also reinforced that we're here to not give a damn.

 

This is not a Tech Demo

Having a cultural statement like this elevated the project above "tech demo" status and into something else, which proved to be hugely useful.

This project would have been much less successful if everybody viewed it as a demo.  Demos succeed and fail based on how useful the underlying tech appears to be, and utility is defined by what value it adds and how reliable and convenient it is.

By these metrics, at this stage, #WeAreDanceFace is a pretty, uhm... not great technology... lol.  At runtime, the software required the core developer to babysit it.  The VR visual interface is so confusing that it requires that you sit through a lecture (by me) and an in-VR training session with Matt.  It took a long time to reset between demos and the line to enter the experience was dauntingly long.

But despite these challenges, we were able to pull together a convincingly good experience because...

 


This is primarily a live performance

Early on we discovered a different way to frame the project: as a live performance.  This framing magically made everything better.

First off, it sets clear expectations for performers and audience.  Focus gets directed to how much fun people are having, not how performant / useful the tech is.  They don't mind waiting in lines or waiting for tech to resolve.  From this framing, watching tech people do stuff almost feels like peeking under the hood of a magic trick.  You wait not because the tech is bad, but because the experience is worth it.

In addition to clarifying expectations for the audience, it helped direct my and Matt's behavior during the run.  It was a lot of fun to assume the role of crowd control manager -- hamming it up on all things performance-related.  I (annoyingly) started referring to the performers as "the talent", and coached them on how to get the best reactions from the crowd.  Between games Matt and I would assume the role of stage hands / roadies, handling equipment and ushering people around.

Lastly, by seeing this as a "live performance", everybody was better equipped to process the experience.  Performers were 110% behind the work they were tasked to do and took their roles seriously enough to instigate a fun time.  Audiences embraced the ephemeral nature of the performance, cheering, laughing, snapping photos, and generally just enjoying themselves.

 


In Summary...

The event was a success.  People who already knew about VR encountered a fresh take, and the unacquainted got introduced to a form of VR that extolled its awesomeness while poking fun at its shortcomings.

To be clear: mobile VR is still socially aberrant and still looks stupid.  And running an event based on mobile VR technology sucks because you're juggling an immature tech with crowd management -- a logistical nightmare.

But despite all this, people keep flocking to VR experiences anyways.  Why?  Because VR is just that awesome.

And this tension between stupid and awesome undergirds how VR fits into the bigger picture.  Designing along this tension is definitely a challenge, but it's phenomenally fun and rewarding to do so.