Soil micro-environments with augmented reality

For the final I experimented with projection and augmented reality (Unity & Vuforia) to tell the story of how plants remediate their environments. To keep things manageable for a week-long project, I narrowed the focus to phytoremediation with sunflowers: sunflowers accumulate lead from the soil, but like all bioremediators, they then become toxic themselves. I also wanted to show the complexity of the micro-environments in soil. My grand aspiration was to create one of these experiences for each of the ways to remediate soil.

I put everything together with some text and animations. Many of the soil images I uploaded to Vuforia as tests got much better tracking ratings when I brightened them. I then made a big composite image of these trackable parts and used screenshots of the composite as my image targets.


Gabe helped me find 3D models of bacteria and bugs, but they had so many vertices and faces that they caused lag on the phone. I tried using Blender to “decimate” the objects, but this made me want to throw my computer out the window. Instead, I used Blender to identify the objects with the fewest faces and used those. Gabe later suggested correcting this with shaders for mobile in Unity; a sketch of what that might look like follows.
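In hindsight, the mesh-weight audit could also have happened inside Unity itself. Here’s a minimal sketch of both ideas (the script name is mine, and swapping in Unity’s built-in Mobile/Diffuse shader is Gabe’s suggestion rather than what I actually did): it logs each mesh’s triangle count so the heaviest models stand out, and can optionally apply the lightweight mobile shader.

```csharp
using UnityEngine;

// Hypothetical helper: logs every mesh's triangle count so the heaviest
// imported models can be spotted, and optionally swaps materials to
// Unity's built-in Mobile/Diffuse shader for better phone performance.
public class MeshComplexityAudit : MonoBehaviour
{
    // Set true to replace each material's shader with Mobile/Diffuse.
    public bool useMobileShader = false;

    void Start()
    {
        foreach (MeshFilter mf in FindObjectsOfType<MeshFilter>())
        {
            if (mf.sharedMesh == null) continue;

            // triangles[] holds vertex indices, three per triangle.
            int triangleCount = mf.sharedMesh.triangles.Length / 3;
            Debug.Log(mf.gameObject.name + ": " + triangleCount + " triangles");

            Renderer r = mf.GetComponent<Renderer>();
            if (useMobileShader && r != null)
            {
                // Mobile/Diffuse is a built-in Unity shader aimed at phones.
                r.material.shader = Shader.Find("Mobile/Diffuse");
            }
        }
    }
}
```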

It might have been useful to add more information with audio, since it seems better to avoid lots of text that you’d need to read. I would have liked to incorporate sound but couldn’t find a simple way to attach it to an event, like a found target or a rendered object. I’m also not sure per-target sound would have added to the experience, since multiple targets can be visible at the same time and their audio would overlap. It’s easier to attach sound to the camera, so maybe I could have added some ambient noise instead.
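Looking back, the Vuforia SDK for Unity does expose a trackable-state callback that could have driven this. A minimal sketch, assuming the ITrackableEventHandler interface from the Vuforia version we were using and an AudioSource on the same image target (the script name is my own):

```csharp
using UnityEngine;
using Vuforia;

// Sketch: play an AudioSource whenever Vuforia reports this image
// target as found, and stop it when tracking is lost.
public class PlaySoundOnTargetFound : MonoBehaviour, ITrackableEventHandler
{
    private TrackableBehaviour trackable;
    private AudioSource audioSource;

    void Start()
    {
        trackable = GetComponent<TrackableBehaviour>();
        audioSource = GetComponent<AudioSource>();
        if (trackable != null)
            trackable.RegisterTrackableEventHandler(this);
    }

    public void OnTrackableStateChanged(
        TrackableBehaviour.Status previousStatus,
        TrackableBehaviour.Status newStatus)
    {
        bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                     newStatus == TrackableBehaviour.Status.TRACKED ||
                     newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        if (found && !audioSource.isPlaying)
            audioSource.Play();   // target found: start the clip
        else if (!found)
            audioSource.Stop();   // target lost: stop the clip
    }
}
```

Attached to each ImageTarget, this would tie a clip to that target’s visibility, though with several targets in view at once the clips would still mix, which is exactly the overlap I was worried about.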

My original project proposal:

Ready Player One

Ready Player One has been appropriately critiqued for being a superficial page-turner, propelled forward by pages upon pages of cultural references and tired tropes. These aspects made the book exhausting to read, but there were nonetheless some compelling ideas: the parallels between this VR-scape and the present-day internet, particularly around its corporatization, and the details about how pervasive and integrated this future VR world is.

Aspects of the immersive VR world where the book largely takes place reminded me of Sarah Rothberg’s description of a future work environment–that we may one day just put on headsets that function as our desktops. Wade Watts and his peers go to school in this VR world, but they also maintain robust personal lives, learn, play, and participate in its parallel economy. The fact that every avatar is connected to an actual, physical body is something to be exploited by those with power, who can kill. One troubling aspect in this respect was how little processing the characters do when people close to them die. Wade does, however, take somewhat thoughtful measures to protect his physical body and his real identity. The digital ephemera of a dead person’s avatar is never addressed–a curious omission considering this is already a problem today. The only person who gets this kind of forethought is Halliday.

Anonymity in this world is important, which was interesting to reflect on, since anonymity has largely been lost on the modern, social internet. The commercialization of the current internet seemed all the more apparent when taken to the extreme, as in the book, where people go to great lengths and spend a lot of money to curate their VR lives. Halliday and the OASIS represent a techno-utopian vision of a future VR-scape that has already failed in its lower-fidelity precursor (the sad/boring/fiefdom of the current internet).

3D Avatars & Unity

This week we scanned ourselves using Structure Sensors and Skanect to create 3D avatars that we could animate in Mixamo. It was difficult to get a good scan: you have to keep the sensor level, maintain a wifi connection with the computer running Skanect, move the sensor in the right direction at the right speed, and stay very still. Post-processing in Skanect lets you color your scan, edit out the ground, and rotate the figure for importing into Mixamo.

While I maintained “claw hands” during my scan, I must have moved a little, and my hands got messed up anyway. So when I rigged my figure, I used the fewest number of joints, which gave me a mitten-hand effect.

As a next step we imported our Mixamo-animated Fuse characters & our own avatars into Unity, and experimented with creating scenes:

NYC sewage system: toilet projection

I recently went on a Newtown Creek audio tour, a project by ITP professor Marina Zurkow & alums Rebecca Lieberman & Nick Hubbard, where I learned many things I didn’t know about the sewage processing facility there. I was already fascinated by how cities process sewage and where there are opportunities to intervene to create a more sustainable system. Among other things, projection mapping offers an opportunity to put video in unexpected locations, so I thought it would be interesting to put information about the NYC sewage system at what is, for many people, the most obvious place they interact with it: the bathroom.

I did a bit of research and found some information and a number of videos on the topic. I decided to use “How NYC Works – Wastewater treatment,” an easily downloadable video from Vimeo. I also incorporated sounds of urination and flushing, and the sound of rain (rain in New York is known to cause combined sewer overflows).

As an initial experiment, it was very useful to see what worked well and what didn’t. Below are some stills of my favorite parts.

Hansel & Gretel with Twine

I teamed up with Angela Wang to re-imagine the fairy tale “Hansel and Gretel.” You can play here.

Our first step was to deconstruct the story into its main elements, symbols, and themes. We wanted to maintain important aspects but play with others: character portrayal, setting, plot. After a lot of ideation that included ideas inspired by the Hansel & Gretel show at the Park Avenue Armory, physical installations, and 360 interaction on the web, we settled on using Twine.

We imagined parallel storylines and different endings–this was the most fun and time-consuming part! It was fun to riff off of each other and modernize the creepy aspects of the story. Letting the different themes and storylines manifest highlighted the oddness of sharing this story with children for so many years.

Some of Gabe’s feedback was to delay the reveal of the story and to incorporate more of Twine’s elements. It was definitely challenging to incorporate all of the gameplay aspects that Twine makes available. Aside from fixing things like awkward language and typos, I think it would also be good to weave the second set of questions into the story more cohesively.

Since presenting in class, we’ve had other people play the game, and the response has been pretty positive. People find it funny and disturbing, which was our intention. There’s also a certain amount of surprise when people go back in the story to try a different path, only to be led to an even more evil ending.

Ricoh Theta: in-class experiment

In class we experimented with 360 photo and video using the Ricoh Theta camera and software. I ran into issues transferring the footage onto my new MacBook using Image Capture, and ended up needing to mount the camera as a drive instead.

I took video from different parts of the journey to Bobst Library and from different areas of the stacks, but I haven’t gotten a chance to edit the different parts of the footage together. Unlike last year, Vimeo now supports 360 video, so I took just one of the scenes, shot from on top of a glass case, and uploaded that: