MRL VR + Spatial Computing, Day 2

The path to component-ization is becoming clearer.  I can see this becoming a Unity asset to kickstart Vive interactivity: snap a few components onto your objects and you'll be able to plug into a simple interaction pattern.

Some observations:

I'm liking this "focal point" concept more and more as time goes on.  It's a powerful and simple idea that's easy to work with both as a developer and user.  Conceptually rock-solid (so far), and the more I lean into it, the more I discover obvious yet still innovative solutions.

The ability to glide an object around against the normal of your focal point was totally accidental in the above code.  This effect will behave differently on non-cube geometry.  Again, though, even where I haven't coded them out yet, the principles behind this interaction make it pretty clear how it's supposed to work.

Conceptually, whipping the world around the user (instead of pretending to move the body around a world) is much less taxing on the proprioceptors.  No perceptual dissonance, which is nice.

Environment navigation with this method feels way more natural than the "teleportation" pattern of navigation.  After all, in the real world, we always transition through spaces by translation, not teleportation.  This method is hardly disorienting.

Running into issues with rotation against two focal points.  I'm just doing a Unity Quaternion.LookRotation, which produces some bad results.  The problem is that two points only pin down a direction, which leaves an unwanted rotational degree of freedom: the object is still free to roll around the axis between the points.  I plan on building around this by planting multiple focal points per controller.
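To illustrate the ambiguity, here's a tiny standalone sketch (written in Processing-style Java for convenience, not actual Unity code): both "up" hints below produce orientations that validly look from one focal point toward the other, so a LookRotation-style function has to pick the roll arbitrarily.

// Illustrative only: two focal points under-constrain rotation.
// Any "up" hint yields a valid orientation; roll about the axis
// between the points is a free choice.
void setup() {
  PVector pA = new PVector(0, 0, 0);
  PVector pB = new PVector(1, 0, 0);
  PVector forward = PVector.sub(pB, pA);
  forward.normalize();

  // Two different "up" hints...
  PVector up1 = new PVector(0, 1, 0);
  PVector up2 = new PVector(0, 0, 1);

  // ...give two different right vectors, hence two different
  // orientations that both "look at" pB equally well.
  println(up1.cross(forward));  // [ 0.0, 0.0, -1.0 ]
  println(up2.cross(forward));  // [ 0.0, 1.0, 0.0 ]
}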

Thanks again to MRL, and also to Dave Tennent for swinging by (and helping w/ a much-needed refactor).

I'm pretty new to Unity w/ regards to source control. Next time I'm in the lab I'll probably set up a proper GitHub repo. For now, though, here's the main portion of the code.

MRL VR + Spatial Computing, getting started...

I'm doing some work with NYU's Media Research Lab. I've just started in earnest, and here's some of my progress: a quick sketch of an implementation of a Spatial Computing pattern in Unity.

And here's a quick update from the lab on my latest progress...

It feels really, really great to finally get back into a creative-code mindset. It's also a great feeling to bring my knowledge about my body to the table... Anyhow, more updates to come.

MVP User Stories for Physically Present Virtual Media

For Spatial Computing I had a lot of thoughts about how interactivity should work in a 3d context, and I'm building it out for the Vive. Here's a working document of some user stories I'm writing to guide this. I'm not positive about what's actually MVP and what's not. I've been interested in arranging all this stuff into a perceptual hierarchy of needs ("don't make me sick" being more fundamental than "let me touch things")... maybe that reorganization of these ideas is next.

VR Hype and the California Gold Rush... Anything Substantive There? (and a little gushing about Medium.com)

Uhm, Medium.com is totally amazing. Slick, smart, and oh so purdy... Here's my profile on it. Anyhow, here's a quick bit I wrote over there:

VR and the California Gold Rush: Balancing Hype with Substance

I had wanted to write this a while ago... and I feel a little late in publishing it, since VR is pretty much already here, but the hype definitely still exists.

In any case, read up and comment over there if you want. Moving forward, I'm going to have to figure out where my written online content belongs... here or there...

Lumarca for Processing -- 1.0.0 published!

I just released "Lumarca for Processing" -- an easy way to make stuff in the Lumarca with Processing. It uses two really cool and elegant techniques that I wanted to share: distance functions for modeling geometry, and extending Processing's "Renderer" object to make the code super easy to work with.

On Modeling Geometry...

One of the questions you need to ask if you want to build a renderer for a volumetric display is, "how do I want to model geometry?"   In other words, how do I want to store the idea of a 3d sphere inside a computer?

The obvious place to start is by looking at how everybody else does it.

Conventional 3d modelers start by placing a bunch of dots on the surface of the sphere.  They then connect the dots to make edges, and the edges to make triangles.  Here's an example of what that looks like:

Modeling a sphere in conventional 3d

The point of this exercise is to obtain the triangles.  Why?  Because faces are all you see when you look at something through a 2d screen.  Your brain constructs a sense of volume from what it sees on the faces -- how light hits them and how textures deform around them. In other words, to create the illusion of 3d on a 2d display, make a bunch of triangles (2d shapes) and arrange them in a way that makes them appear 3d.

Unfortunately, this methodology doesn't translate well to 3d volumetric displays.

Using 2d display practices to build stuff for Lumarca feels like using paint brushes to shape clay.  In 3d volumetric displays, 2d shapes are boundaries.  This is similar to how lines are boundaries on 2d displays.  A square on a 2d display is bounded by 4 lines.  A cube in a 3d volumetric display is bounded by 6 squares.  For the Lumarca, I don't want face data.  I need volume data.

How we did it in the past

So for my first pass, I decided to ignore 2d rendering techniques altogether and build a renderer from scratch. I used a bunch of high-school trig to solve for both a sphere and a cube. It was a weird and manual solution, but it worked. I liked how spheres weren't just polygon approximations, but mathematically true spheres.  With all that trig, though, rendering took quite a bit of time when you wanted multiple shapes on the screen at once.

The second pass at modeling geometry lived inside a library built by my colleague Matt Parker.  The library was a huge improvement on the "software" that existed before... which was more a collection of functions than software.  This pass used OpenGL, and we ran into all the problems I had run away from in the first place: how do you know where the "inside" of an object is when all you have is triangle data?  There were lots of clever workarounds, but there were always a few edge-case bugs that we would just have to code around instead of fix.

After a few amazing events and installations, Matt and I slowly lost interest over the years and stopped making progress on the software. Eventually Processing released version 2.0, and the Lumarca software became outdated.

Distance Functions

Sometime this last year I saw a video on ray marching that just knocked my socks off. I won't go into a full explanation of how ray marching works in this post (maybe later), but if you'd like to know more about it, I'd definitely encourage you to watch the demo.

Ray marching introduced me to the idea of a distance function -- an algorithm that tells you whether a point is inside or outside an object, and by how much. So, say you had a sphere centered at (0, 0, 0) with a radius of 1 unit.  Using a distance function, you'd find:

- (2, 0, 0) would return 1, meaning that this point is 1 unit outside the surface of the sphere
- (0, 0, 1) would return 0, meaning that this point is exactly on the surface of the sphere
- (0, .5, .4) would return -.36, meaning that this point is .36 units inside the sphere

The code of those specific algorithms for the curious.
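Here's a minimal sketch of that sphere distance function in Processing (my own illustration of the idea, not code lifted from the library):

PVector center = new PVector(0, 0, 0);

// Signed distance to a sphere at "center": negative means inside,
// zero means exactly on the surface, positive means outside.
float sphereDistance(PVector p, float radius) {
  return PVector.dist(p, center) - radius;
}

void setup() {
  // Reproducing the examples above for a unit sphere at the origin:
  println(sphereDistance(new PVector(2, 0, 0), 1));     //  1.0
  println(sphereDistance(new PVector(0, 0, 1), 1));     //  0.0
  println(sphereDistance(new PVector(0, 0.5, 0.4), 1)); //  about -0.36
}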

Distance functions are totally amazing. They're compact, crazy fast, and can run parallelized on the graphics card instead of the CPU. They also have a mathematical purity that polygon meshes don't share.  And unlike geometry defined with triangles, distance functions tell you important things like proximity and inclusion.

Distance Functions in this Library

This last point is super-important for the Lumarca.  One small piece of code tells you whether something is inside or outside a shape.  Here's how this is implemented in the Lumarca for Processing library.

When the library is run, it generates a "map" image that looks like this:

Lumarca Texture Map

A map image is a concise way to define the physical geometry of a Lumarca structure. When a calibrated projector projects this image onto the structure of strings, the color of each pixel describes where it lands: the RGB values encode the pixel's XYZ location. In other words, a pixel with an RGB value of (255, 0, 0), when projected, will hit a string at (x max, y min, z min).  Now all I need to do is feed this XYZ location into a distance function, which tells me whether or not that point is inside a geometry and by how much.
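Sketched out, the idea looks something like this (my own simplification, not the library's actual code -- the bounds and helper names are placeholders, and sphereDistance() is the function from the earlier sketch):

// Decode a map pixel's RGB into an XYZ position within the structure.
PVector decodePixel(color c, PVector boundsMin, PVector boundsMax) {
  // Each channel (0..255) maps linearly onto one axis.
  return new PVector(
    map(red(c),   0, 255, boundsMin.x, boundsMax.x),
    map(green(c), 0, 255, boundsMin.y, boundsMax.y),
    map(blue(c),  0, 255, boundsMin.z, boundsMax.z));
}

// For each pixel of the map: light it if its XYZ lands inside the shape.
void drawSphereOnMap(PImage mapImage, float radius) {
  PVector boundsMin = new PVector(-1, -1, -1);
  PVector boundsMax = new PVector( 1,  1,  1);
  mapImage.loadPixels();
  for (int i = 0; i < mapImage.pixels.length; i++) {
    PVector p = decodePixel(mapImage.pixels[i], boundsMin, boundsMax);
    boolean inside = sphereDistance(p, radius) <= 0;
    mapImage.pixels[i] = inside ? color(255) : color(0);
  }
  mapImage.updatePixels();
}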

What's nice about this approach is that you compute all the expensive geometry only once -- when the map image is generated. Everything after that is simply reading from this texture and evaluating cheap distance functions, which makes it run way faster.

More significant than the speed, though, was that distance functions helped me break through a problem that had been holding me and the Lumarca project back for years. I didn't need to do crazy trig or rely on a batch of triangles and intersection calculations. I had found something designed to give answers in a volumetric manner, so I stitched distance functions into the core of the library.

So how nicely does this all play with Processing?

Now that I had a plan for creating geometry, I just needed to wrap it all up in a Processing library that was easy to use.

To give some context: in past iterations, writing the code to draw a sphere was quite painful.  If you wanted to build a sphere with the initial software, you needed to copy and paste around 100 lines of code.  If you wanted to build one with the 0.1.1 library, it was orders of magnitude simpler, but still quite complicated:

shape = new ObjFile(this, center, new PVector(1, 0, 0), "sphere.obj", 1.5);
lumarca.drawShape(new PVector(1, 1, 0), shape);

I wanted to cut this down and make it easy.  How easy?  I wanted it to be as easy as the rest of Processing.  I wanted to create a sphere by simply calling "sphere(10)".

I dug around to see how realistic it would be to overwrite or replace elementary Processing functions like sphere() and box(). What I found was that while they could be replaced, doing so would mean replacing the entire renderer, and potentially doing some really, really ugly things. I'd also have to do it in one of my least favorite languages: Java. Cue eyeroll. But I really wanted this, so I decided to investigate a bit and survey just how painful it would get.

I was dead wrong

While I'm still not a fan of Java : ) I can absolutely see the appeal where I did not before.

While I was technically correct that I'd have to replace the entire renderer, I hadn't realized that making new renderers is actually simple.  Processing has, by design, swappable renderers and straightforward ways to build your own.  The heavy-handed OOP nature of Java helped me swim through the process and gave me all the guardrails I needed.

The library includes Lumarca.RENDERER.  You enable it simply by passing it into the Processing size() function -- something like size(1024, 768, Lumarca.RENDERER).  You can easily flip back to size(1024, 768, P3D) if you want to see your work in a conventional 3d context, say when you're away from the display.
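Here's what that looks like in a minimal sketch (the import line and the contents of draw() are my own guesses at typical usage, not copied from the library's docs):

import lumarca.*;  // assumed package name for the library

void setup() {
  // Swap Lumarca.RENDERER for P3D to preview in conventional 3d.
  size(1024, 768, Lumarca.RENDERER);
}

void draw() {
  background(0);
  translate(width/2, height/2, 0);
  rotateY(frameCount * 0.01);
  sphere(10);  // the same call P3D understands
}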

What's really cool about this neat little trick is that sphere(10) means the same thing in the Lumarca.RENDERER as it does in the P3D renderer, just with a different graphical result.  That means you can hopefully take any conventional Processing sketch and display it on the Lumarca with just a few configuration differences.

Lumarca spotted in Europe

About five months ago I was contacted by a group of artists out in Tallinn, the capital of Estonia. They were interested in building a version of the Lumarca for an event called Heliodesign.

They did just that. They've since gone on to build bigger and better designs, and did something that I had never tried before -- projecting on string outdoors. Here's a video from another festival, Staro Riga:

https://vimeo.com/112679446

Super cool! For more information about the organization that did this, Valgusklubi, check out their Facebook page.