June 23, 2011

The UC Davis KeckCAVES and earthquake studies

Posted by Austin Elliott

I am privileged to conduct much of my research using an immersive 3D data visualization facility housed at UC Davis. I link there to a clip of the Holodeck from Star Trek because that’s pretty much what this facility is, and I’ve got videos of it below. The future is here; see for yourself.

The KeckCAVES (Center for Active Visualization in the Earth Sciences, or “the Cave” as we call it) is a four-sided room (8' x 10' x 8') in the geology department that serves as an immersive virtual reality space into which we can load all kinds of multi-dimensional data. The data are rendered as a 3D scene projected in stereo onto each of the three walls and the floor. A user steps into the space wearing a head tracker and carrying a tracked wand (rather like a sophisticated Wii controller) and can then interact with and manipulate the data as though it were a real object in the room with them. (A rough sketch of the head-tracking bookkeeping follows the photo below.)

The 8' high x 10' wide x 8' deep "Cave" room in the Geology department at UC Davis. Each wall displays an image projected from behind, and the floor image is projected from above. Stereo 3D images are projected in space with reference to a user wearing head-tracking 3D goggles.
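For the curious, here is a flavor of the bookkeeping behind that head tracking: the renderer needs to know where each of the user’s eyes is at every frame, and it derives those positions from the tracked head pose. The Python sketch below is only an illustration of the idea, not the actual KeckCAVES software; the function names and the 6.5 cm eye separation are my own assumptions.

    import numpy as np

    # Interpupillary distance; roughly 6.5 cm is a common default (an assumption here).
    EYE_SEPARATION = 0.065  # meters

    def eye_positions(head_position, head_rotation):
        """Estimate left/right eye positions from a tracked head pose.

        head_position: (3,) array, tracker position in room coordinates (meters).
        head_rotation: (3, 3) rotation matrix for the tracker's orientation;
                       its first column is taken as the user's "right" direction.
        """
        right = head_rotation[:, 0]               # unit vector out the user's right ear
        offset = 0.5 * EYE_SEPARATION * right
        return head_position - offset, head_position + offset  # (left, right)

    # Each wall then gets its own off-axis perspective frustum computed from the
    # eye position and that wall's corner coordinates, once per eye per frame.
    left_eye, right_eye = eye_positions(np.array([1.2, 1.5, 1.6]), np.eye(3))

From there it is standard stereo rendering; the sense of immersion comes from updating those viewing frusta every time the tracked head moves.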

For the parts of my research that involve analyzing topography, displaying the data as a 3D model I can manipulate in space by hand is one of the most enlightening ways of looking at it.

Using the Cave. Me (left) and fellow UCD grad student Peter Gold “discuss data results” (read: pose for a promo shot) while immersed in a topographic survey we conducted in Baja.

Conventional software handles 3D data sets by projecting or slicing them into two dimensions. Practically all of the media we use to communicate are patently 2D, from computer monitors to mobile screens to good old-fashioned newspapers and journals, so we’re accustomed to distilling our 3D world into distributable 2D formats. Computer analysis can handle data in innumerable dimensions, but when it comes to actually seeing the results, scientists (and engineers, and everyone else) are pretty limited. With the burgeoning world of 3D media technology we can go beyond convenient “flat” representations and use immersive visualization to pick out the patterns our brains are geared to find in three dimensions, allowing perhaps more nuanced or informed treatment of the data than computer analysis alone can offer.

That’s sort of the selling-this-to-funding-agencies way of saying that simply visualizing data in novel or intuitive ways can be exceptionally informative. Imagine reading a list of driving directions along winding roads versus looking at a map with the route drawn out. The visualization adds informative nuance (“ah, there’s a big oak tree there!” or “oh, I have to turn because the road ends at a lake!”).
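To make the “flat” point concrete, here is roughly what the conventional treatment looks like in code: take a cloud of (x, y, z) points and squash it into a 2D map by gridding the elevations. This is only a toy sketch with synthetic numbers (not my survey data), using NumPy and matplotlib.

    import numpy as np
    import matplotlib.pyplot as plt

    # Synthetic stand-in for a topographic point cloud: x, y, z in meters.
    rng = np.random.default_rng(0)
    x, y = rng.uniform(0, 100, 5000), rng.uniform(0, 100, 5000)
    z = 5 * np.sin(x / 10) + 0.05 * y + rng.normal(0, 0.2, 5000)

    # The conventional "flat" treatment: bin the points onto a regular grid and
    # keep one elevation value per cell, collapsing a whole dimension into color.
    sums, _, _ = np.histogram2d(x, y, bins=100, weights=z)
    counts, _, _ = np.histogram2d(x, y, bins=100)
    dem = np.divide(sums, counts, out=np.full_like(sums, np.nan), where=counts > 0)

    plt.imshow(dem.T, origin="lower", extent=(0, 100, 0, 100), cmap="terrain")
    plt.colorbar(label="mean elevation (m)")
    plt.title("3D point cloud squashed into a 2D map")
    plt.show()

The Cave starts from the same kind of point cloud; the difference is that no dimension gets collapsed before the scene reaches your eyes.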

So computer scientists in UC Davis’s Institute for Data Analysis and Visualization (IDAV) work cooperatively with geoscientists to develop 3D visualization software geared toward understanding flow in the mantle, topography, geothermal temperature volumes, and serial sections (akin to CAT scans) of rocks, among other applications.

The primary developer of much of the KeckCAVES software, Oliver Kreylos, is continually branching out, developing new applications as well as new ways of showing them off. Most recently he hacked the video and depth streams out of a Microsoft Kinect and used them to “film” himself in 3D. The endeavor proved wildly popular, so he has continued posting his results to YouTube, including the novel use of two Kinects to fill in shadows and really round out the 3D scene he can capture.
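The core of the Kinect trick is simple enough to sketch: the depth camera reports a distance for every pixel, and the standard pinhole camera model turns each pixel into a point in 3D space, which the color stream can then paint. The snippet below is a back-of-the-envelope illustration only; the camera intrinsics are ballpark values for a Kinect depth camera, not anyone’s actual calibration, and this is not Oliver’s code.

    import numpy as np

    # Approximate Kinect depth-camera intrinsics (focal lengths and principal
    # point, in pixels); a real setup calibrates these for each device.
    FX = FY = 580.0
    CX, CY = 320.0, 240.0

    def depth_to_points(depth):
        """Turn a 480x640 depth image (in meters) into an (N, 3) cloud of 3D points."""
        v, u = np.indices(depth.shape)        # pixel row/column coordinates
        valid = depth > 0                     # zero means "no depth reading"
        z = depth[valid]
        x = (u[valid] - CX) * z / FX          # pinhole back-projection
        y = (v[valid] - CY) * z / FY
        return np.column_stack([x, y, z])

    # A flat wall two meters away, standing in for a real depth frame:
    points = depth_to_points(np.full((480, 640), 2.0))

Two calibrated Kinects simply produce two such clouds registered into a common coordinate frame, which is how the second camera fills in the shadows the first one can’t see.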

This brings me to the main purpose of my post (other than a lot of promo for the KeckCAVES and the folks behind it): we can now use the dual-Kinect video setup to record users interacting with their data in a 3D environment, and we can “film” (i.e., render into a visual recording) that interaction from any perspective within the virtual space. So for visitors to UC Davis’s public expo, Picnic Day, I recorded an explanatory piece about the data I’ve collected for research on last year’s M7.2 Baja earthquake. The explanations will be a little slow for the geologists in the crowd, so just bear with me and marvel at 1) how neat the topographic data are, and 2) the fact that I’m sitting there interacting with a dataset that appears as real in space to me as it looks in this video. The fuzzy halo around my margins is an artifact of the data streams from the two Kinects as they try to omit my real-world background while I move.

[youtube=http://www.youtube.com/watch?v=_e2DsZ40sqg]

Here’s the setup we used to film this; it’s merely in front of a well-outfitted 3D TV, not in the Cave itself:

3D TV, tracking system, and two calibrated Kinects. All but the user in the chair are erased to record the 3D virtual reality session in the video above.
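That erasing is also where the fuzzy halo mentioned above comes from. One simple way to isolate a person in a depth stream (a sketch of the general idea only; I’m assuming a plain depth-threshold approach here, which may not be what the actual software does) is to keep only the pixels that are noticeably closer to the camera than the empty room. Depth readings are noisy right at object edges, so the mask flickers around a moving person’s silhouette, and that flicker is the halo.

    import numpy as np

    def remove_background(depth, background, margin=0.05):
        """Keep only pixels noticeably closer to the camera than the empty-room depth.

        depth:      current depth frame (meters)
        background: depth frame captured with nobody in the scene (meters)
        margin:     how much closer (meters) a pixel must be to count as foreground
        """
        foreground = (depth > 0) & (depth < background - margin)
        return np.where(foreground, depth, 0.0)   # zero means "erased"

    # Toy usage: an empty room 3 m deep, with a "person" patch 1.5 m from the camera.
    background = np.full((480, 640), 3.0)
    frame = background.copy()
    frame[200:300, 250:350] = 1.5
    cleaned = remove_background(frame, background)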

The astute observer will notice that a couple of times during the video I surprised myself by hitting the TV screen, which is invisible in the virtual space; the 3D immersion is that convincing!

There are CAVEs set up in universities, research institutes, and medical labs all over the world. They have a huge presence in medicine and biological research, but the possible applications are nearly endless. The ease and intuitiveness of 3D immersion in your own data are something you have to witness to truly comprehend. If you ever have the opportunity, visit one!