30 days to make 30 dance clips

Back in July I was feeling anxious to get back to practicing my creativity, so I decided to re-up on my dancing game.  See, lately my dancing has been feeling sorta paralyzed because I started paying too much attention to the production quality on my YouTube account.  I often find that obsessing over the mediation of dance is counter-productive to the craft I set out to do in the first place.

So I decided, screw it -- I just need to get sparked again.  The plan: dance to a track for 30 days straight in front of a camera.  In the beginning I didn't write anything down, but starting on day 5 I began leaving notes in the descriptions of the YouTube videos.

I learned a lot along the way: how to film, what features of dance I enjoy, what types of music resonate with me these days.  I can see big differences in skill between day 1 and day 30.

Most importantly, I definitely feel like I have a renewed confidence in my craft.  Fortunately, shortly after I finished recording these, I was able to put these skills to good use.  Folly Turtle, a sponsored dancer from LEDGloves.com visited Seattle and invited me out to meet some ridonkulous good glovers.  Don't know if I would have gone out if not for the practice.

Also, more recently, I was at a street fair and saw an open call for all-styles dancing -- to which I said, "Meh, fuck it, why not?"  Like, my old ass was for SURE outclassed by these dancers, but it was a blast to share and overcome a lot of my public performance anxiety and lay out 30 seconds of liquid in front of dozens of amazing dancers and maybe a few hundred in the crowd.

So yeah, here's a playlist of my dancing.  Remember to check out the notes in the descriptions of the videos on the YouTube page.  And below that is a snapshot of me getting down in front of a crowd taken by my sister-in-law.  A video of this exists somewhere (I think), so I'll update this post with that if I can locate it.  Still had some public performance jitters, but regardless, had a blast up there.

[Photo: me getting down in front of the crowd at the street fair]

In Tree Dimensions -- and the inverse relationship between cameras and projectors

Back in June I went into the woods with a projector and a bunch of equipment to build an art piece.  It was part of the Electric Sky Art Camp, a yearly art event in Skykomish, Washington -- a tiny town in the middle of the Cascade Mountains (pop. ~200).

The piece was called "In Tree Dimensions", and it worked by leveraging how cameras and projectors work in tandem with each other.  The main idea is that there's a tree surrounded by phantom lights.  The brightness and location of these lights are controlled by a MIDI controller.

Here's some footage of the project.  My good camera broke in transit, so unfortunately the best documentation I have is this (heavily corrected) footage from my cell phone.

Dials 1-3 move lights around the left side, bottom, and right side of the tree.  Dials 4-5 rotate stationary lights.  Dial 6 makes it look like a car is passing by.  Dials 7-8 emulate Christmas lights strung on individual branches.

A bit about cameras and projectors

Before I get into how this particular project works, I first want to cover an interesting note about cameras and projectors.  Cameras and projectors do opposite things.  Cameras eat 3d spaces and leave behind film.  Projectors eat film and push it back out onto any surface it encounters in a 3d environment (of course, we usually try to project onto flat surfaces).

What's significant about the opposite nature of these devices is that when they are perfectly matched with one another, you get fantastically weird results.  I've experimented with this in past projects:

The effects on this cake were made with a projector (not my work, found on giphy)

Related to all of this is a thing called projection mapping.  This is where people project compelling illusions onto the surface of 3d objects.  You've probably seen examples -- usually it's projecting onto buildings.

Almost all projection mapping uses techniques that rely on the relationship between cameras and projectors -- though in these cases, the cameras are virtual cameras in virtual 3d environments.  Use the camera to take footage of a virtual environment, then project that footage out onto a physical environment that is geometrically identical to the virtual one.  With some clever programming, this is a fast way to produce some really stunning effects.
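To make that camera / projector relationship concrete, here's a small sketch in Python.  The pinhole intrinsics here are made-up illustrative values, and the function names are my own -- the point is just to show why a projector that perfectly matches a camera sends each recorded pixel back out along the exact ray it came in on:

```python
import numpy as np

# Shared intrinsics (focal length, principal point) -- hypothetical values.
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])

def camera_project(point_3d):
    """Camera: eat a 3d point, leave behind a pixel."""
    uvw = K @ point_3d
    return uvw[:2] / uvw[2]          # perspective divide

def projector_ray(pixel):
    """Projector: eat a pixel, push a ray back out into 3d."""
    uv1 = np.array([pixel[0], pixel[1], 1.0])
    return np.linalg.inv(K) @ uv1    # direction of the outgoing ray

point = np.array([0.5, -0.2, 3.0])   # some point on the "tree"
pixel = camera_project(point)
ray = projector_ray(pixel)

# The projector's ray, scaled out to the point's depth, lands exactly
# back on the original 3d point -- the two devices cancel each other out.
reconstructed = ray * point[2]
print(np.allclose(reconstructed, point))  # True
```

This is why mismatched optics look "off": if the projector's `K` differs from the camera's, the rays no longer land where the camera saw them.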

Theory behind how "In Tree Dimensions" works

Information flows from tree, to camera, to footage, to projector, back out to tree.

So, I went into this art event with a different plan.  I was under a tight deadline and couldn't afford to spend time constructing a digital 3d model of a tree I found in the woods, so I went with a hackier, more analogue approach.  Instead of using a virtual camera, I used a real-world camera.

So first, I recorded a tree under various lighting conditions.  To create these conditions, I simply pointed a work light at the tree and moved it around.  Then, I piped the footage back out onto the tree with a projector.  I programmed a MIDI controller so that it would be able to manipulate the footage, giving visitors the ability to replay and scrub through the past on a physical 3d object.
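The scrubbing mechanic itself is simple to sketch.  Here's a hedged illustration in Python of the core mapping: a MIDI dial emits 7-bit control-change values (0-127), which get mapped onto a frame index in the recorded footage.  The function name and frame count are hypothetical -- the actual build wired this into video playback software.

```python
TOTAL_FRAMES = 900  # e.g. 30 seconds of recorded footage at 30 fps

def cc_to_frame(cc_value, total_frames=TOTAL_FRAMES):
    """Map a 7-bit MIDI control-change value (0-127) to a frame index,
    so turning a dial scrubs through the recorded lighting pass."""
    cc_value = max(0, min(127, cc_value))   # clamp to MIDI's 7-bit range
    return round(cc_value / 127 * (total_frames - 1))

# Turning the knob from min to max sweeps the whole clip:
print(cc_to_frame(0))    # 0 -> first frame
print(cc_to_frame(127))  # 899 -> last frame
```

Each dial can be bound to its own clip (left lights, car pass, Christmas lights, etc.), which is how one controller drives all the effects described above.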

The devil is in the details...

Of course, pure theory only takes you so far, and if you want the project to look good, you have to do lots of clean up work in the process.

I couldn't just project the raw footage back out because of differences in optics between the camera and the projector.  And to make sure the colors popped and looked good, I had to do some image enhancement.  This video quickly demonstrates what was done to the footage to make it ready to project.
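As a rough illustration of the kind of enhancement involved, here's a contrast stretch plus gamma lift in Python.  The real corrections were done by hand in video software, so treat this as an illustrative stand-in, not the actual pipeline:

```python
import numpy as np

def enhance(frame, gamma=0.8):
    """frame: float array in [0, 1].  Stretch contrast to full range,
    then apply gamma (< 1 brightens midtones so colors pop when projected)."""
    lo, hi = frame.min(), frame.max()
    stretched = (frame - lo) / max(hi - lo, 1e-6)  # full-range contrast
    return stretched ** gamma

dim = np.array([[0.2, 0.3], [0.4, 0.5]])  # a murky, low-contrast "frame"
bright = enhance(dim)
print(bright.min(), bright.max())  # 0.0 1.0 -- full dynamic range
```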


Another example of where pure theory failed to help was when working conditions were just awful.  Like, working until 2am in the rain, cowering under a tarp that protected me and my gear from water damage, and improvising a camera setup because my good camera had broken in transit.  "Pure theory" doesn't really help you when the nearest RadioShack is a 2-hour drive away.


But fortunately for me, I was surrounded by an amazing community of new media artists from the greater Seattle area.  As you can see from this photo, I'm just elated and having an amazing time.  Everyone was super chill, super positive, and always willing to help.  On top of all of this, they were crazy talented, and the quality of their work kept me on my game.  Having just moved out here from NYC, I was especially lucky to stumble upon such a cool crowd.

So a big thanks goes out to all of them for keeping my spirits up in these kinda stupid working conditions.  Also, thanks to the Electric Sky Retreat for hosting it and for giving me the opportunity to explore some of my work.  I definitely plan on building another project next year for this event.
 

Sketch of a VR Rhythm Game I'm working on...

Back in August I started working on a VR Rhythm Game.  Then life got complicated, so I abandoned it for a while.  Then I did some more work on it and life got complicated again w/ a cross-country move...

Anyhow -- I wanted to share the work and some discoveries.  Once this demo is actually complete, I'll announce it on my mailing list.  I hope to get that out as soon as I finish unpacking in my new place in Seattle.  Here's some footage of an earlier version of the demo:

So, as I said, this is an early version.  It uses music that I absolutely have no rights to, but I fell for this track so hard in high school that I sorta had to use it : )

One of the challenges with designing this was figuring out how to signal to the user exactly where and when they need to catch the juggling pin.  Turns out, some of our depth cues in VR are totally borked.  While I was playtesting this, some people were totally unable to make any sense of virtual clubs flying at their face, while others were immediately able to grok the experience.

My suspicion is that different people rely more heavily on different depth cues.  Some rely more heavily on binocular vision, while others rely more on comparing objects to their contexts.  I built a handful of little things to try to accommodate for as many depth cues as possible, many of which aren't present in the video above.  Unfortunately, some depth cues cannot be triggered by our current batch of tech, so I feel totally fine completely ignoring them for now...

In any case, here are a few things that I found helpful in creating an experience that sets the player up for the best catching experience possible:

Use Objects Designed for Catching

My very first pass used baseball-sized balls.  Baseballs, I feel, are actually great throwing objects, or even great objects to swing at, but not really great catching objects.  They're hard to see, and in practice they require special equipment to catch.  Playing with baseball-sized balls wasn't fun because they were difficult to see and equally difficult to catch (even when the collider was unnaturally large).

I then went to football- / basketball-sized balls, which felt way better but seemed to require too much of my body -- more than I felt was actually present inside the virtual environment.  Objects at this scale almost require one's center of mass to be involved; they're just one step away from what's required of a medicine ball.

Of course, when you catch a football or basketball, you're usually doing it with two hands, and I think I realized I wanted a thing to catch with one hand, which led me to juggling pins.

What's really great about juggling pins is that not only do they afford one-handed catching, but they are also designed to communicate the physics of a throw, both to the performer and the audience.  This ability to broadcast data through physics felt like a beautiful fit for what I was going for.

Model upon Evocative Experiences

Another plus of juggling pins is that there was a clear practice I could draw from -- one that's all about the joy of catching things.  It took me a bit to realize that I should model throw trajectories and spins on what you might experience when, IRL, someone passes you a real juggling pin.

Provide Visual / Physical Context

In the very beginning (and in the POC video above), I focused exclusively on the catching mechanic and ignored the rest of the game world... figuring that would be a thing to add later when working on theme or story or something.

I soon realized after testing that the lack of visual context was making it harder to catch -- a little bit harder for most people, but virtually impossible for others.  One user reported that the flat blue skybox made it feel like everywhere they looked there was a wall right in front of their face.

So I spent a bit more time working on the environment, peppering the user's periphery with other non-distracting geometry.  I think it did two things: (1) proprioceptively anchored them into a room and asserted their physical presence, making it more meaningful that juggling pins were being sent their way and (2) provided objects in the distance to help them contextualize the incoming trajectories of the juggling pins, giving them more information to help them catch the pins.

Emit Light from Hands to Augment Presence

To be completely honest, I stumbled upon this trick and only have guesses as to why it works so damned well, so I'll just blab a bit on the topic.

So, I discovered that when comparing the Vive Controllers with these 3d objects, something just felt disconnected.  Sure, conventional world lights bounce off the controller and digital 3d objects in the same way, but somehow it's still really easy for my brain to consider the controller and the digital 3d objects as belonging to separate worlds.

IOW, the controller is human-driven, like a computer mouse, while the digital object somehow belongs to a computer.  I think my brain just compartmentalizes them separately and doesn't accommodate as graceful a transition between the two states as I would like.

HOWEVER!

If you place a point light on the user's hand... uhm, it feels totally magical.  Like, suddenly, these digital objects are painfully compelling.  These objects begin to feel more like extensions of the body and less like pure, flat data.

I suspect that hand-driven dynamic lighting triggers something in our perceptual systems that helps us model 3d spaces in relation to our bodies.  The crucial part, I believe, is the implication of the body -- things only feel physically available once the user sees them in relation to their body, not in relation to a 3d model.

... anyhow, this is something I'll be turning over in my head a while...


So, that's about it for now.  As I said above, I'm planning on sharing my work as a Demo once it is actually ready, and will send updates on this blog and on my mailing list.

Tis all.  Payce!

 

 

Two Books I ALWAYS Suggest to VR Creatives

VR is a 3d medium.  That doesn't mean that it's harder or better or worse than any 2d medium, it's just different.

Unfortunately, practically all of the VR / UX community comes from 2d backgrounds.  Whether that's video games, web / app dev, cinema, CG, etc., these fields are concerned with engaging users across 2d media.

I've noticed myself frequently suggesting two books to help people flip their brains to start thinking three-dimensionally.  I figured I'd share them on my blog as well.

 

The Body has a Mind of its Own

This book is so perfect for VR / AR work.  It lays the groundwork of how people perceive their bodies, and what meaning they can extract from that.  It talks about how we incorporate tools as a part of our body image, as well as explains how some of these systems can be fooled and why these tricks work.

The reason this is always my first suggestion is that the inclusion of the body is precisely what distinguishes VR / AR from any other digital medium.  VR / AR fuses the body with digital 3d spaces, and what's so exciting about the concept of immersion is the idea that the body can be fooled into experiencing virtual 3d things in substantive ways, that we can draw physical meaning from these experiences, and that we can extend our impulses and intentions through our bodies into the digital realm.

So, yeah, anyway, please read it.  It's totally totally amazing.

 

101 Things I Learned in Architecture School

One of the painful things about the third dimension is that scale is fluid.  IOW, in 2d, your canvas is confined by a frame.  This isn't an afterthought in 2d compositional theory -- the frame is central to 2d compositional theory.  In 3d compositional theory, however, you want to design spaces that provide many scales through which to experience a space.

Architecture is one of those fields that is really complicated and difficult precisely for this reason.  Architects are required to design spaces that account for so many weird things: bodies, governmental agencies, cars, gazes, celestial bodies, electricity, rainfall.  Architects enjoy wrestling with competing frames and massaging them all into a cohesive, singular design.

101 Things I Learned in Architecture School is a condensed overview of a field of study that takes decades to master.  The "101 Things" feel like they're the foundational truths upon which all other modern architectural theory rests.  Full disclosure, I know very little actual architectural theory.  I only say these things because these nuggets of wisdom have been hugely helpful for me over the years in considering the organization of 3d spaces and critical 3d thinking.

 

One more Suggestion: pick up a 3d practice

My non-book advice is practice a 3d craft... something that involves no screens at all (dance, pottery, interior design, etc).  Self-aware practice is way more enlightening than whatever it is a book can teach you.  I'll prolly write more about why I think this is so important in another post at some time, but for now, get those two books and just start practicing something new.

Go (back) West, Young Man...

I secured a 6-mo contract with Oculus Research as a UX designer!

So I'll be moving to the Seattle area in less than a month for the gig.  Well, actually, not exclusively for the gig.  A move out West has been in the cards for a while for my family, but this was the last piece of the puzzle to get us all out there.  Gonna have to hustle as soon as I land to meet all the crazy interesting VR talent out there so that I feel safe when this contract expires.

Apart from the absurd logistics associated with moving a family cross-country, I'm dealing with a lot of mixed emotions.  I feel so fortunate to have met and worked alongside so many amazingly talented people, and in many ways so foolish to step away from these relationships.  In terms of personal growth, I don't know how much of the last 15 years of my life can be directly attributed to NYC itself, but I know I'll sure as hell miss it.  Not sure what I'll miss most -- I bet the most poignant things will be the ones I didn't even realize I had till they were gone...

On the flip side, of course, is an amazing opportunity at an amazing company in an industry that fits my creative passions.  I was interviewed by a handful of really talented people at Oculus, all of whom I'm very excited to work with.  Also, I know the city is swarming with VR talent, so I'm positive I'll have the fortune of working with other talented VR folks as well.

So, uh, yeah, that's it for now.  I'll of course be under NDA on the stuff I work on while I'm there, but still hope to find ways to stay engaged w/ the community and to contribute, either via this blog or otherwise.

I also have intentions to further develop a community dedicated to the crossover between VR / dance / theater... so stay tuned, especially if you're in the Seattle area or if you and I have worked in some sort of conventional 3d capacity...

Looking Glass Factory

Just dropping a quick update.  These last few months I've been contracting with a company making commercially available volumetric displays: Looking Glass Factory.  The tech is super cool and so much fun to work with, and the people are awesome and come from a variety of crazy backgrounds and always have interesting perspectives.

The latest thing I made for them I'm super proud of:

More info about this build in this article:

https://blog.lookingglassfactory.com/revealing-the-virtual-without-a-vr-headset-c5528de4469d#.s1hjftj91

If you're in the NYC area, they do bi-weekly open hours, where anybody can just stop by and chill out -- look at Volume, code on it, or just drink the beer and chat.

https://www.meetup.com/volume/

And lastly if you want to you can jump on the public Slack group:

http://slack.lookingglassfactory.com/