Coursework, notes, and progress while attending NYU's Interactive Telecommunications Program (ITP)

Packet Sniffing

Some general stuff I learned about data packets sent over the internet:

Packets have a header, a payload (or body), and sometimes a trailer (which, as I understand it, just marks the end of the packet). I found a nice definition explaining that the header usually includes 20 bytes of metadata about the payload, including things like the protocol governing the payload's format. HTTP headers can have many, many potential fields, and request and response packets include different fields. When you use HTTPS, both the HTTP headers and the payload are encrypted.
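To make the header/payload split concrete, here is a minimal sketch of my own (not part of the original notes) that builds a packet with Scapy and prints its layers; the destination address and the GET request are placeholders:

```python
# Hedged sketch: build a packet with Scapy and look at its headers vs. payload.
# Requires: pip install scapy. The address and request below are placeholders.
from scapy.all import IP, TCP, Raw

pkt = (
    IP(dst="198.51.100.7")                                   # IP header: ~20 bytes of routing metadata
    / TCP(dport=80, sport=54321)                             # TCP header: ports, sequence numbers, flags
    / Raw(b"GET / HTTP/1.1\r\nHost: example.com\r\n\r\n")    # payload: the actual HTTP request
)

pkt.show()            # prints every header field, layer by layer
print(len(pkt))       # total packet size in bytes; the IP header alone is typically 20
```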

Using Herbivore

Herbivore is very user-friendly software developed by Jen & Surya that shows HTTP traffic on the network you're connected to. It does not show devices that are sleeping, and it only shows HTTP traffic. Using Herbivore, I found that HTTP requests are sent when you close a tab in your browser, or when you click a tab that's been dormant. I also found my computer was sending data to sites like http://www.trueactivist.com (a fake-news-looking site?) and pbs.twimg.com (link to w3 snoop).

Some interesting packet header fields that were returned (a quick way to poke at these yourself is sketched after the list):

  • P3P: Apparently, a field for a P3P policy to be set, but P3P was never fully implemented in most browsers. Now, websites set this field in order to trick browsers into allowing third-party cookies.
  • upgrade-insecure-requests: Tells the server that hosts mixed content that the client would like to use HTTPS.
  • access-control-allow-origin: One of many “access control” settings that indicate a site allows cross-origin-resource-sharing. I remember this being a sticking point when we built APIs in another class.
  • e-tag: ID for a version of a resource.
  • cache-control: specifies directives “that must be obeyed by all caching mechanisms along the request-response chain.” {wiki} Values I saw included: public, no-cache, max-age=.
  • connection: control options for the current connection. I found values of: close, keep-alive.
  • vary: “Tells downstream proxies how to match future request headers to decide whether the cached response can be used rather than requesting a fresh one from the origin server.” {wiki}
  • x-xss-protection: cross-site scripting filter.
  • surrogate-key: Some header that helps Fastly purge certain URLs.
  • edge-cache-tag: Some header that helps Akamai customers purge cached content.
  • CF-RAY: Helps trace requests through CloudFlare‘s network.
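One quick way to reproduce this without a sniffer is to request a page and print the response headers. This is a minimal sketch of my own using only the standard library; the URL is an arbitrary example:

```python
# Hedged sketch: fetch a page and print its HTTP response header fields.
# Standard library only; the URL is just an example.
import urllib.request

with urllib.request.urlopen("https://www.example.com") as resp:
    for name, value in resp.getheaders():    # header fields (metadata about the response)
        print(f"{name}: {value}")             # e.g. Cache-Control, ETag, Vary, ...
    body = resp.read()                        # the payload itself
    print(f"\npayload: {len(body)} bytes")
```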

I guess HTTP header naming conventions changed in 2012 (RFC 6648), and new headers should no longer begin with X-. Nonetheless, I found several:

  • x-HW
  • x-cache
  • x-type
  • x-content-type-options
  • X-Host

I did some tests, too. I found there was a lot of traffic between my computer and different google services when I signed into gmail:

And I replicated the WordPress username & password problem we noticed in class.

I found that forcing HTTPS by typing it into the address bar circumvented this problem.

Using WireShark

Looking at HTTP traffic using Herbivore was really interesting and fun. But I was left with questions about the other protocols my computer was using in order for me to use the internet. I wondered what other kinds of traffic were observable, and what kind of metadata would be available on the protocols I've been taught are secure. Can you see SSH? VPN? Email? I knew from an accidental experiment using Herbivore that you could not see web traffic when using a VPN. But would you be able to see something using Wireshark?

Wireshark has many supported protocols, including MAGIC

To figure out Wireshark, I just used my computer while navigating to the NYU Libraries site and captured all traffic. This is what I think I learned (a rough sketch for tallying the protocols in a capture follows the list).

The Internet Protocol Suite wiki page helped me understand the Wireshark output and its references to frames https://en.wikipedia.org/wiki/Internet_protocol_suite

  1. My computer is talking to my router (using DNS) and my router is responding (using DNS). I think my router is figuring out where to send the request I made.
  2. It looks like my router is also doing some other multicast stuff. I don't know a lot about multicast, but when I looked up the protocols my router was using (ICMPv6, IGMPv2, and SSDP) they all seemed to be ways to discover devices, or “establish a multicast group membership.”
  3. Usually it was my router using these protocols with these weird IP addresses, but sometimes it was actually my computer. In these cases “M-SEARCH” is indicated instead of “NOTIFY.” I don’t know what this means.
  4.  My computer is also talking to the website I’m trying to reach via HTTP.
  5. There is also a bunch of TCP traffic between my computer and the site at NYU I was connecting to.    
  6. NTP is used for clock synchronization (application layer). You can see it uses UDP port 123. This was cool to find.
  7. You could also see all the Transport Layer Security handshaking. I’m guessing it’s okay that you can see this session ticket. It also tells you if your session is reusing “previously negotiated keys,” or is resuming a session. 
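Here is a rough sketch, assuming Scapy, of the kind of tally I was doing by eye (this wasn't part of the assignment, and capture.pcap is a placeholder for a file exported from Wireshark):

```python
# Hedged sketch: tally packets in an exported Wireshark capture by their top-most layer.
# Requires: pip install scapy. "capture.pcap" is a placeholder filename.
from collections import Counter
from scapy.all import load_layer, rdpcap

load_layer("tls")                         # so TLS records are dissected instead of showing as Raw

packets = rdpcap("capture.pcap")
counts = Counter(pkt.lastlayer().name for pkt in packets)

for layer, n in counts.most_common():
    print(f"{layer:20s} {n}")             # e.g. DNS, NTP, TLS, TCP, Raw, ...
```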

To my questions:

  • Can you see SSH? Yes, and the traffic looks the same as SFTP (which makes sense, since SFTP runs over SSH).

  • VPN? Yes: while connected, traffic appears as an “Encapsulated Security Payload” (ESP)

  • Email? I'm not sure–I used Gmail, which probably uses some protocol besides SMTP to send email from the browser. SMTP is a protocol Wireshark supports, and it did not appear, but there was traffic generated that could have been my email being sent and Gmail updating the page.

More questions

I'm really curious to interface Wireshark with an SDR to see other kinds of signals! I began down this path using this RTL-SDR tutorial, but ended up stuck on two fronts. I was using the VMware installation they suggested, but the Ubuntu machine would not detect the SDRs connected to it. And on a machine where the SDRs were detectable, it wasn't clear I could actually pick up GSM traffic; I couldn't even find my own cell signals. This is annoying and requires more investigating, but is luckily out of the scope of this week's assignment.

Traceroute: visualizing web detours

For my traceroute project I ran traceroute to sites I commonly visit, as well as sites I thought would be interesting to route, from the places I usually connect to the internet. I downloaded iNetTools to run traceroute from my phone. I wrote a short Python script to get the geolocation for each of the hops from ipinfo and output the traceroute, company, and geographic information to a CSV file.
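This isn't the exact script, but a minimal sketch of the approach, assuming the system traceroute command and ipinfo's free JSON endpoint; the destination and output filename are placeholders:

```python
# Hedged sketch of the traceroute-to-CSV approach (not the original script).
# Shells out to the system traceroute, looks up each hop on ipinfo.io,
# and writes hop number, IP, organization, and lat/lon to a CSV file.
import csv
import json
import re
import subprocess
import urllib.request

def trace(host):
    out = subprocess.run(["traceroute", "-n", host],
                         capture_output=True, text=True).stdout
    # grab the first IPv4 address on each hop line (ignores multiple probes per hop)
    return re.findall(r"^\s*\d+\s+(\d+\.\d+\.\d+\.\d+)", out, re.MULTILINE)

def geolocate(ip):
    with urllib.request.urlopen(f"https://ipinfo.io/{ip}/json") as resp:
        return json.load(resp)

if __name__ == "__main__":
    host = "www.example.com"                      # placeholder destination
    with open("route.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["hop", "ip", "org", "loc"])
        for i, ip in enumerate(trace(host), start=1):
            info = geolocate(ip)                  # private hops come back without org/loc
            writer.writerow([i, ip, info.get("org", ""), info.get("loc", "")])
```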

I made a website that shows the starting points of my searches: my apartment, Aaron's apartment, NYU (work & school), and my commute. When you click on one of the starting points, you get options of where to navigate, but instead of ending up at the site, you end up at some weird intermediary–the company, or one of the hops along the way.

Some things I noticed

  • From my cellphone, packets bounced around the Sprint network in New York, then went to Summit, New Jersey, before being routed to their endpoints.
  • From my apartment, traffic travelled to Bethpage, NY and then Wingdale, NY (Cablevision).
  • From Aaron's apartment, packets travelled through various Time Warner locations (Englewood, CO, Austin, TX, Los Angeles & Beverly Hills, CA) before being routed to their endpoints.
  • From NYU, packets bounced around the NYU network before being routed through TATA or sometimes, Level3.
  • From Aaron’s Verizon hotspot, packets travelled through Cellco and Telia.
  • Encrypted google hopped to Mountain View, and then would sometimes hop to Seattle before hopping back.
  • The CIA and NSA sites sometimes took strange routes when I navigated from my apartment–out to Germany. I compared their paths to that of healthcare.gov, a more innocuous government website, to see the difference, and the endpoint was consistent (Akamai in Massachusetts). NSA and CIA took domestic routes to a Time Warner or Akamai endpoint in Massachusetts.
  • I discovered Internet2 and NYSERNet, which you sometimes pass through leaving NYU. They are both non-profit ISPs.
  • I wasn’t sure if the geographic locations I was getting from ipinfo were right–but when I cross-referenced with the service providers associated with the IP, they usually had a location within a 5 mile radius of the listed geo-coordinates.

Soil micro-environments with augmented reality

For the final I experimented with projection and augmented reality (Unity & Vuforia) to tell the story of how plants remediate their environments. I narrowed this down to phytoremediation with sunflowers, specifically, to make this manageable for a week-long project. Sunflowers accumulate lead from the soil, but like all bioremediators, they then become toxic themselves. I also wanted to show the complexity of the micro-environments in soil. My grand aspiration was to create one of these experiences for each of the ways to remediate soil. 

I put everything together with some text and animations. Many of the soil images I put up on Vuforia as tests got much better tracking ratings when I made them brighter.  I made a big composite image of these trackable parts, and then used screenshots of the composite image as my image targets.
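As a rough illustration of the brightening step (the filenames and the 1.4 factor here are my own placeholders, assuming Pillow rather than whatever editor I actually used):

```python
# Hedged sketch: brighten a candidate image target before uploading it to Vuforia.
# Requires: pip install Pillow. Filenames and the enhancement factor are placeholders.
from PIL import Image, ImageEnhance

img = Image.open("soil_texture.jpg")
brighter = ImageEnhance.Brightness(img).enhance(1.4)   # factors > 1.0 brighten the image
brighter.save("soil_texture_bright.jpg")
```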

 

Gabe helped me find 3D models of bacteria and bugs, but I found they had so many vertices and faces that they created lag on the phone. I tried using Blender to “decimate” the objects, but this made me want to throw my computer out the window. Instead, I used Blender to identify the objects with the fewest faces and used those. Gabe later suggested correcting this with mobile shaders in Unity.
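For reference, the decimation I was fighting with can also be scripted from Blender's Python console. This is a minimal sketch, not what I actually ran; it assumes a mesh object is selected, and the 0.2 ratio is arbitrary:

```python
# Hedged sketch: decimate the active mesh in Blender via its Python API (bpy).
# Assumes it runs inside Blender with a mesh object active; the ratio is arbitrary.
import bpy

obj = bpy.context.active_object
print("faces before:", len(obj.data.polygons))

mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.2                                   # keep roughly 20% of the faces
bpy.ops.object.modifier_apply(modifier=mod.name)

print("faces after:", len(obj.data.polygons))
```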

It might have been useful to add some more information with audio, since it seems better to avoid lots of text that you’d need to read. I would have liked to incorporate sound but couldn’t find a simple way to attach this to an event, like a found target or a rendered object. I’m not sure this would have added to the experience, since you can see multiple targets at the same time. It’s easier to attach sound to a camera, and maybe I could have added some ambient noise. 

My original project proposal:

Ready Player One

Ready Player One has been appropriately critiqued for being a superficial page-turner, propelled forward by pages upon pages of cultural references and tired tropes. These aspects of the book made it exhausting to read, but there were nonetheless some compelling ideas: the parallels between this VR-scape and the present-day internet, particularly around its corporatization, and the details around the pervasiveness and integration of this future VR world.

Aspects of the immersive VR world where the book largely takes place reminded me of Sarah Rothberg's description of a future work environment: that we may one day just put on headsets that function as our desktops. Wade Watts and his peers go to school in this VR world, but they also maintain robust personal lives, learn, play, and participate in this parallel economy. The fact that everyone is still connected to an actual, physical body is something to be exploited by those with power, who can kill in the real world. One troubling aspect in this respect was how little processing the characters do when those close to them die. Wade does, though, take somewhat thoughtful measures to protect his physical body and his real identity. The digital ephemera of a dead person's avatar is never addressed, which is curiously lacking considering this is already a problem today. The only person who gets this kind of forethought is Halliday.

Anonymity in this world is important, which was interesting to reflect upon since this has largely been lost on the modern, social internet. The commercial progress of the current internet seemed all the more apparent when taken to the extreme as it was in the book, where people go to great lengths and spend a lot of money to curate their VR lives. Halliday and OASIS represent some techno-utopian vision of a future VR-scape that has already failed in its lower-fidelity precursor (sad/boring/fiefdom/current internet).

3D Avatars & Unity

This week we scanned ourselves using structure sensors and Skanect, to create 3D avatars that we can animate in Mixamo. It was difficult to get a good scan: things to consider were keeping the sensor level, maintaining a wifi connection with the computer running Skanect, moving the sensor in the right direction at the right speed, and making sure to stay very still. Post-processing in Skanect allows you to color your scan, edit out the ground, and rotate the figure for importing into Mixamo.

While I maintained “claw hands” during my scan, I must have moved a little bit and my hands got messed up anyway. So when I rigged my figure, I used the smallest number of joints, giving me a mitten-hand effect.

As a next step we imported our Mixamo-animated Fuse characters & our own avatars into Unity, and experimented with creating scenes:

NYC sewage system: toilet projection

I recently went on a Newtown Creek audio tour, a project by ITP professor Marina Zurkow & alums Rebecca Lieberman & Nick Hubbard, where I learned many things I didn't know about the sewage processing facility there. I was already fascinated by how cities process sewage and where there are opportunities to intervene to create a more sustainable system. Among other things, projection mapping offers an opportunity to put video in unexpected locations, so I thought it would be interesting to put information about the NYC sewage system at what is, for many people, the most obvious place they interact with it: the bathroom.

I did a bit of research and found some information and a number of videos on the topic. I decided to use “How NYC Works – Wastewater treatment,” an easily downloadable video from Vimeo. I also incorporated sounds of urination and flushing, and the sounds of rain (rain in New York is known to cause combined sewer overflows).

As an initial experiment, it was very useful to see what worked well and what didn't. Below are some stills of my favorite parts.

Hansel & Gretel with Twine

I teamed up with Angela Wang to re-imagine the fairy tale “Hansel and Gretel.” You can play here.

Our first step was to deconstruct the story to its main elements, symbols, and themes. We wanted to maintain important aspects but play with other elements: character portrayal, setting, plot. After a lot of ideation that included ideas inspired by the Hansel & Gretel show at the Park Avenue Armory, physical installations, and 360 interaction in the web, we settled on using Twine.

We imagined parallel story lines and different endings–this was the most fun and time-consuming part! It was fun to riff off of each other and modernize the creepy aspects of the story. Allowing the different themes and storylines to manifest highlighted the oddness of sharing this story with children for so many years.

Some of Gabe’s feedback was to delay the reveal of the story, and to incorporate more of the Twine elements. It was definitely challenging to incorporate all of the game-play aspects that Twine makes available. Aside from fixing things like some awkward language and typos, I think it would also be good to incorporate the 2nd set of questions in the story in a more cohesive way.

Since presenting in class, we’ve had other people play the game, and I think the response has been pretty positive. People find it funny and disturbing, which was our intention. There’s also a certain amount of surprise when people try to go back in the story and find a different path, only to find that they are led to an even more evil ending.

Ricoh Theta: in-class experiment

In class we experimented with 360 photo and video using the Ricoh Theta camera and software. I ran into issues transferring the footage onto my new macbook using Image Capture, and ended up needing to load the camera as a drive.

I took video from different parts of the journey to Bobst library and different areas of the stacks, but I haven't gotten a chance to edit the different parts of the footage together. Unlike last year, Vimeo now supports 360 video, so I took just one of the scenes, from on top of a glass case, and uploaded that:

Already programmed: responses

Connected, but alone?

Sherry Turkle argues in her TED talk, “Connected, but alone?” that too much texting is bad for us: she anticipates the unpopularity of the talk because this isn’t something people or companies want to hear. People are more comfortable with machines, which enable interaction without real emotional risk. But, they also can’t really empathize with us.

The argument that texting leaves out a lot of cues, and is a bad medium of conversation makes sense (even though I’d like to point out that it allows for other things, like video, photos, group chat, and sharing of virtual content that gives a different richness to these exchanges). I struggle with the assertion that we are more alone than we used to be. Maybe this is true, but I find it hard to imagine that the work required to maintain a basic standard of living, and the isolation of people that didn’t fit into a previously even-more-rigid social structure, allowed for greater social connection and less loneliness in years past (in Western societies). She gives anecdotes of cases where people she’s interviewed specifically avoided in-person interaction, but I wonder how common this really is, and what the alternative would have been before the internet and before texting. Maybe these individuals would have been even worse off.

 

We are all cyborgs now

Amber Case’s talk, “We are all cyborgs now,” is funny because she makes the absurdity of our new rituals around technology so apparent. She notes the difference between other human tools and computers is that computers act as an extension of our mental selves, rather than our physical selves. One point (which serves as the basis for one of Yueping’s projects!) is this idea that we find ourselves rummaging through this external brain, unable to find our documents or an old google search.

I really don't need much convincing that we're already cyborgs, which is her main thesis. I just loved how observant this talk is, and I wonder whether others see these machines as extensions of our bodies in the same way.

 

Program or be programmed

Rushkoff’s talk, “Program or be programmed,” is a discussion of his book by the same name, where he makes the case for media literacy. That he feels he needs to persuade people of its importance is kind of ironic since this is the antidote for otherwise being blindly coerced by our history, social context, and capitalist/corporate environment.

I ended up watching his talk, “Open source democracy” (coerced by the youtube playlist) as well, because his historical accounts that bring us into our present socio-economic context and the proposals he makes for addressing the related issues are equally fascinating. That we should learn to code because it changes the way we think, and has real social/political/world consequences recalls Papert’s Mindstorms! And his call for local economies, local solutions, and local engagement recalls Bill McKibben’s Deep Economy.

Skeptically, I think that even with greater awareness of how we are coerced, manipulated, “programmed,” we don't have the power to fully reclaim our agency. Cognizance of these mechanisms only helps to the extent that we have power over our reality, and even if we are to argue in favor of free will, this would only give us control over a tiny proportion of our lives. Is it lucky, then, if your programming allows you to see through the program?

Responses: Sharing on the internet

Breitbart-led right-wing media ecosystem altered broader media agenda

This analysis sought to examine the media ecosystems during the 2016 election cycle that led to Trump’s ascendance using “hyperlinking patterns, social media sharing patterns on Facebook and Twitter, and topic and language patterns in the content of the 1.25 million stories.” The study points to social media “as a backbone to transmit a hyper-partisan perspective to the world,” with Breitbart at the center. The authors present evidence of the role of social media platforms and the right-wing online media, something that I think many were attuned to at the time–after Trump’s election I cringed and navigated to Breitbart for the first time.

While the authors note (and this is clearly observable by living online) that disinformation is common, they found that political and media polarization online was asymmetric–the right-oriented people got more of their news from newer, more polarized sites. I didn’t realize that right-wing media was younger in general. In the mid-2000s, when I was horrified that the adults elected Bush (twice!) I had imagined the right as Fox-news consumers. Now, even Fox is vilified by this new right wing media. From the comparison of patterns on Facebook and Twitter the authors suggest, “human choices and political campaigning, not one company’s algorithm, were responsible for the patterns we observe,” although both are dependent on click-driven revenue. Regardless, the result is scary and unprecedented:

…the insulation of the partisan right-wing media from traditional journalistic media sources, and the vehemence of its attacks on journalism in common cause with a similarly outspoken president, is new and distinctive.

Rebuilding a basis on which Americans can form a shared belief about what is going on is a precondition of democracy, and the most important task confronting the press going forward.

 

Rebuilding the Web We Lost

It was so interesting to read Anil Dash’s perspective on the nature and promise of the old social web: from self-hosting your online identity, the importance of flickr, search, and non-monetized links, to norms around data use and tracking, and crowdfunding. My tween experience was mainly around AOL and AIM.  He wrote a companion piece outlining what could be done to bring the most valuable elements of what was lost back. The main things that stood out were diversity and funding: it seems like a combination of an insular industry and its ad-based funding streams resulted in these negative trends.

The biggest reason the social web drifted from many of the core values of that early era was the insularity and arrogance of many of us who created the tools of the time… Another way of looking at the exclusionary tendencies of typical Silicon Valley startups is by considering the extraordinary privilege of most tech tycoons as a weakness to be exploited.

Dash is hopeful about ways we can address these problems, suggesting that the insularity of Silicon Valley provides an opportunity to create something better that would compete with their flawed strategies. But to me this seems like such a massive, ingrained problem that it will take many years to address, if it is ever addressed. There are definitely people promoting diversity in tech, but I seldom hear this value espoused at high levels of the industry, where the power to change culture is manifest. The idea that we can promote blue-collar coding is actually somewhat alarming–as if the value of the work differs depending on who does it. It makes sense that industry partners are invested in teaching computer science for this exact reason.

And then there’s funding: is anyone working on funding websites a different way? Is this even possible for the conglomerates that have already taken over? Or do we have to wait for them to be unseated?

…the fundamental reason these sites refused to accommodate so many user demands is because of economics. Those sites make their revenues on models dictated by the terms of funding from the firms that backed them.

 

Habits of Leaking: Of Sluts and Network Cards

Wendy Chun's piece addresses many aspects, traditions, and values of Western culture, born out of its colonial, racist, and patriarchal history, that give proper context to abuse on the web (the focus is mainly abuse toward white women). One of the more salient arguments is about the right to space and the right to citizenship on the web, and how this is connected to the right to exist in public, to loiter.

We need an online public in which women are not victims but loiterers, actively engaging in its public sphere without a discourse of predators, pornographers, and slut-shamers waiting there to ruin them…we need to fight for the right to be vulnerable–to be in public–and not be attacked.

… mass loitering […] creates mixtures and possibilities that erode boundaries and establishes spaces that do not leak because boundaries are not compromised.

I was reminded of Jillian York's piece on harassment and censorship, because she argues that women expressing themselves, existing, “shout[ing] louder over the din,” is one helpful way to address this abuse. A harmful and effective aspect of online abuse, before it begins to threaten physical bodies, is the way shame is utilized. There's no a priori reason to feel shame as a victim outside of a culture that hoists shame upon women for existing in public. This does seem like something we could change to strip power from the people who weaponize it against women online, but if it's possible, it will take a long time.

I’ll always remember how network cards work thanks to Chun’s description of how they operate “promiscuously.”