
Monthly Archives: April 2013

In 2010, Mimi Cabell and Jason Huff sent the entirety of the “violent, masochistic, and gratuitous” novel American Psycho by Bret Easton Ellis through Gmail, one page at a time. Google tried to target ads to the text: for example, in a scene where a dog and a man are brutally murdered with a knife, Google displayed ads for knives and knife sharpeners. The artists used these ads to footnote the original text.

The artists noted Google’s persistent use of “standard ads” – ads which seemed to have no relation to the content. The most frequently displayed ad was for Crest Whitestrips coupons. The artists called this a “misreading,” claiming this disconnect “echoes the hollowness at the center of advertising and consumer culture.”
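As a toy illustration of how keyword-based contextual matching with a default fallback could produce both the knife ads and the unrelated coupons, here is a minimal Python sketch. The ad inventory and matching rule are invented for illustration; this is not Google’s actual system.

```python
# Hypothetical sketch of contextual ad matching with a "standard ad" fallback.
# Ad inventory and keywords are invented; not Google's actual matching logic.

AD_INVENTORY = {
    "knife": "Quality Kitchen Knives - Free Shipping",
    "sharpener": "Professional Knife Sharpeners",
    "restaurant": "NYC Restaurant Reservations",
}
STANDARD_AD = "Crest Whitestrips Coupons"  # default shown when nothing matches

def target_ad(page_text: str) -> str:
    """Return the first ad whose keyword appears in the page, else the default."""
    words = page_text.lower().split()
    for keyword, ad in AD_INVENTORY.items():
        if keyword in words:
            return ad
    return STANDARD_AD

print(target_ad("he reached for the knife"))     # Quality Kitchen Knives - Free Shipping
print(target_ad("an uneventful page of prose"))  # Crest Whitestrips Coupons
```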

The perception of targeted content is slowly moving from the noise of traditional advertising (television, billboards) to the relevance of a close friend’s suggestion. The extreme divide between the data body of Ellis’ text and the Crest Whitestrips coupons helps to expose a powerful and manipulative form of “nudging.”

What happens when we can no longer hide in a crowd? I do not mean that the crowd no longer exists, and I do not mean that we can no longer hide in the crowd because we have become too visually unique. I mean that we can no longer hide in a crowd because the crowd no longer provides safety: quite the opposite, it poses a threat.

I believe that so much of this “wonderful creepiness” of new media stems from its being a mass media of individualization. We are massively individuating ourselves, and doing so en masse by individuating through the same systems (YouTube, Facebook, MySpace, Twitter, etc.). The crowd is no longer a place of anonymity, but a place of identity. To attempt to hide in this crowd sans identity is to have the crowd turn on you, marking you as an outsider—framing you as a potential terror suspect. I would even go as far as to argue that this new fear of the crowd, of the crowd turning on us, is articulated by the recent rise in popularity of zombie movies (that, and our fear of becoming “capitalist zombies”). The crowd is no longer safe when the only way to become a part of the crowd, and therefore protected by it, is to expose oneself. This is the paradox of trying to hide in the crowd of our new media culture.

These crowds are threatening not only because we must expose ourselves to become a part of them, but also because we can never know what the crowd contains. Although new media sites would like to have us think threats come from outside the system (think of the movie Inception, in which the protagonists face the constant threat of “crowds” turning on them once they are recognized as foreign to the dreamer’s subconscious), this is not the case. For all the crowd’s emphasis on individualization, this individualization is shallow and easily manipulated. The most average, conventional-looking member of the crowd could easily be in disguise. What is even more horrifying about the crowd is that, once a part of it, we have no distance from it. We can only see its members for a very short period before coming into contact with them. We do not have access to a bird’s eye view of this crowd, and cannot know its trajectory or understand its actions in a larger context than simply what we can see in front of us. We do not see people approaching from afar, and we do not know where we are being led in the long term.

Underlying all of this is the creepiness of the blurred boundary between private and public. Think of all the creepy online viral videos that people absolutely love. What is creepy about these videos is not that they exist, but that we have very public, and very easy, access to them. These videos are creepy because they are the private musings of the mind made public. Perhaps we ourselves have similar musings, but we never share them with anyone but ourselves. What is creepy in all this is that suddenly people can see these disturbing musings; what was invisible has been made visible, and we can comment on it. And what creeps us out is that people, just like us, like them, although societal values tell us we shouldn’t.

How can you even have a crowd when there ceases to be a divide between the public and the private? Perhaps you can still have a crowd, but it no longer operates anything like the crowd as it was first conceived.

Foreign Affairs recently published an article titled “The Rise of Big Data: How It’s Changing the Way We Think About the World,” which coincided perfectly with the lectures and readings of this week. But before looking at it in more detail, I also feel that the course as a whole was great preparation for interrogating this article and its claims, and for seeing whether the political ramifications involved are necessary consequences of the theory we so intimately discussed. As a political science concentrator, I consider Foreign Affairs a really important source of information, and the articles it publishes usually address quite important topics in international politics.

The readings for this week seemed to me to take a mixed approach toward data in general. This article does as well, but it is much more optimistic, saving the negative potential for the end of the essay. It was this notion of raw data actually being cooked that I found interesting and particularly applicable. As with all acts of cooking, what matters most is who is doing the cooking. Without a moment’s hesitation, most of us can answer that question: Google. Facebook. Amazon. Apple. Governments. So the question becomes (and I think this is what the FA article doesn’t address): how does the average person become a “cooker” of sorts? “Big data is poised to reshape the way we live, work, and think,” the article claims, but it doesn’t talk about how we can influence its course.

The central claim of the article, and where I’m not sure whether our class readings complement or contradict it, is that big data is meant solely to inform, not to explain. In essence, big data doesn’t care about the why, just the what. For example, Google was able to track the spread of the flu across the U.S. based on searches alone, much more efficiently than the CDC could. But the data never ask why; they just report what. Does this lead to a limitation of big data? Do we want big data asking the why questions, or do we find that it is more efficient without them? This is a point that I think all of the class readings failed to address. Big data raises big questions, and I’m not convinced that our answers are sufficient to contain it. But then again, do we want to be able to?
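To make the “what without the why” concrete, here is a minimal sketch in Python. The weekly counts are invented, and Google’s actual flu-tracking model regressed millions of candidate queries against CDC surveillance data, but the shape of the inference is the same: a correlation that informs without explaining.

```python
# Toy illustration: correlating weekly counts of a flu-related search term
# with reported flu cases. All numbers are invented for illustration.

search_counts = [120, 150, 310, 560, 540, 300, 180]  # weekly "flu symptoms" searches
cdc_cases     = [40,  55,  130, 260, 250, 140, 70]   # weekly reported cases

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A high correlation tells us *what* is happening (searches track cases),
# but nothing about *why* people search or how the flu actually spreads.
print(f"correlation: {pearson(search_counts, cdc_cases):.3f}")
```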

One idea that resonated with me, and that I want to at least briefly explore, is immateriality and the privileging of the material world and material culture. With immateriality come uncertainty and mystery, which is why there is great unease in talking about technology and the transfer of information in the modern era (the 21st century). Fear oftentimes accompanies technology and thus fosters a privileging of material goods such as books. I remember talking with a relative who was completely skeptical about using tablets to read because you lose the sensation of touching the book and experiencing the texture of the pages. In a way, a book more accurately embodies its own history because the book will change over time (since it degrades), whereas online texts will forever remain the same because they have no physical presence. We also privilege physical storage over immaterial storage, which parallels the conflict between e-books and physical books.

I also think it’s interesting to connect that fear and rejection of immateriality with the idea of privacy. The two are inextricably linked. The reason the immaterial nature of modern information is so scary is that people are afraid of the invasion of their privacy. More specifically, people are afraid that their information will be disseminated throughout the internet. There is a fear of exposure. We want to know who we are interacting with, but we simultaneously do not want them to know about us. That is one of the major paradoxes surrounding the internet community.

I was really pleased with the last two lectures of class especially since everything tied together very nicely and we saw how all of the different concepts and authors related to each other. The breakdown of the verbs used throughout the semester was helpful in framing my understanding of the course concepts and organizing my thoughts.

I recently read an interesting collection of articles called Collecting the WWWorld: The Artist as Archivist in the Internet Age, which grapples with the place of contemporary art in the context of digital technology, especially with respect to the large amounts of visual material that the Internet provides. The collection shares its name with both a gallery show and a Tumblr site run by the same “curator”/”collector,” Domenico Quaranta. Interestingly, Quaranta’s Collecting the WWWorld project is self-referential and demonstrative of the fact that the Internet, like the project, is an ever-growing data repository. With respect to data collection and archiving, Quaranta makes an enlightening observation:

“More and more often, in fact, our personal memories are entrusted to camera lenses built into the countless gadgets that surround us: gadgets which we rely on to remember for us. We lavish on the Web our emotions and our most ephemeral thoughts. Virtually every attempt to remember anything at all comes down to a Google search.”

Though the idea of externalizing our memory is not new (it can be dated back at least to oral traditions, in which stories were passed down from generation to generation), what is interesting is our increasing dependence on technology to “remember for us.” A clear indication of this is our constant need to summarize and document moments of our lives, whether through text messages to friends, Twitter updates, or Facebook posts. It has come to a point where, as Quaranta puts it, “we lavish on the Web our emotions and our most ephemeral thoughts.” This brings to mind the I Love Alaska example, where user 711391 is shown as having an eerily intimate relationship with her AOL search engine.

What makes our relationship with the Internet, and our use of it as a platform for externalizing our memory, even more worrying is the increasing regulation of our data by algorithms. Facebook’s EdgeRank algorithm, which Taina Bucher discusses in her article, is one instance of this. The increasing regulation of our data by corporations such as Facebook also illustrates Raley’s point about “cooked data,” whereby the online actions that produce our data are pre-determined and influenced by the algorithmic actions that collect that data to begin with. In light of this, it is interesting to revisit Quaranta’s point that “gadgets…remember for us.” This fact doesn’t just have implications for our future actions but is also reflective of how we remember our past. What are the implications of having gadgets remember for us? Does this change the way history is, or will be, recorded? Also (going back to a previous discussion on nostalgia), if we are supposedly in the age of nostalgia for our past analog selves, how will our increasing dependence on technology for memory affect our nostalgic perspectives?
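For readers who haven’t seen it spelled out: the EdgeRank formula Facebook publicly described around 2010 multiplied, for each “edge” (a like, a comment) attached to a story, the viewer’s affinity with the creator, a weight for the edge type, and a time decay, then summed the products. The sketch below follows that published outline only; the weights and decay curve are invented, and the production system was proprietary and has since been replaced by far more complex ranking.

```python
import time

# Sketch of an EdgeRank-style score per Facebook's public description
# circa 2010: sum over edges of affinity * edge weight * time decay.
# The weights and decay curve here are invented for illustration.

EDGE_WEIGHTS = {"comment": 4.0, "like": 1.0, "click": 0.5}  # hypothetical values

def edge_rank(edges, now=None):
    """edges: list of (edge_type, affinity, created_at) tuples."""
    now = now or time.time()
    score = 0.0
    for edge_type, affinity, created_at in edges:
        age_hours = (now - created_at) / 3600
        decay = 1.0 / (1.0 + age_hours)  # newer edges count for more
        score += affinity * EDGE_WEIGHTS[edge_type] * decay
    return score

now = time.time()
story = [("comment", 0.9, now - 3600), ("like", 0.4, now - 86400)]
print(f"story score: {edge_rank(story, now):.3f}")
```

Even in this toy form, the point Bucher makes is visible: whether a story surfaces at all depends on quantified relationships and decaying timestamps, not on what the story means to anyone.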

http://www.lulu.com/items/volume_71/11140000/11140236/1/print/Collect_the_WWWorld_LINK_Editions_2011.pdf

“This shared sense of starting with data often leads to an unnoticed assumption that data are transparent, that information is self-evident, the fundamental stuff of truth itself. If we’re not careful, in other words, our zeal for more and more data can become a faith in their neutrality and autonomy, their objectivity” (Gitelman and Jackson 2-3).

As articulated in this essay and in this passage, the notion of “raw data” obscures the fact that, as “Lev Manovich explains, they have to be ‘generated’.” Data is produced, not found. In addition to considering the effects of that erasure, how might we compare it to the parallel erasure that occurs with “screen essentialism”? Doesn’t the supposed “rawness” of data assume an essential quality as well? In both cases the result or product—data or screen visualizations/imagery—is consumed with the supposition that the components of its production and composition did not do the labor or serve the function that they in fact did. In both, then, it is the interface that is deleted. There seems to be a sense of purity, as if meaning could exist without interface or mediation. When we consider ourselves to be living in a technological era, why the desire to suppose technological purity, essentialism? Why the desire to erase the interface?

It is interesting to consider how data accumulated from online preferences and habits not only reflect behavior but influence behavior and identity. In what Raley describes as the “electronic panopticon,” behavior is not only tracked but determined through data. Netflix tells me that, based on my past viewing preferences, I enjoy “dark witty comedies with a strong female lead,” and so I watch more “dark witty comedies with a strong female lead.” Not only that, but perhaps I begin to see myself as a dark, witty, comedic, strong female. I search Google for bars in my area where perhaps I can find other such women, and Google suggests the Dark Lady bar. I admire the women I meet there and search for clothing that is similar to theirs. The ads on every page I visit feature dark and edgy clothing that a strong female lead would wear. I order them secondhand through Amazon, and Amazon sends me emails (featuring clothing, books, household products) every so often to help me maintain my witty, dark look. I have become an accumulation of data; I am composed not of human cells but of data cells that have become big enough to eclipse and become me. I have, in turn, become it. I think this concept is the biggest thing I have taken from the course, this intimate sameness of digital and human interaction, and it has made me rethink my relationship to the digital objects that I use and that use me.
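Here is a toy sketch of the feedback loop just described, assuming a made-up catalog and simple tag-matching (a real recommender like Netflix’s is vastly more elaborate): each recommendation is chosen by the taste profile, and each viewing reinforces exactly the tags that chose it, so the profile narrows toward itself.

```python
# Toy recommendation feedback loop. Catalog, tags, and scoring are invented
# for illustration; this is not Netflix's actual recommender.

CATALOG = {
    "Dark Witty Comedy A": {"dark", "witty"},
    "Dark Witty Comedy B": {"dark", "witty"},
    "Nature Documentary":  {"nature", "calm"},
}

def score(profile, tags):
    """How well an item's tags match the current taste profile."""
    return sum(profile.get(tag, 0) for tag in tags)

profile = {"dark": 1, "witty": 1}  # seeded by a single past viewing choice
watched = set()

for _ in range(2):
    # recommend the highest-scoring unwatched title...
    pick = max((t for t in CATALOG if t not in watched),
               key=lambda t: score(profile, CATALOG[t]))
    watched.add(pick)
    # ...and watching it reinforces exactly the tags that selected it
    for tag in CATALOG[pick]:
        profile[tag] = profile.get(tag, 0) + 1
    print(pick, profile)

# The documentary never surfaces: the data the loop produces is the data
# that steers it.
```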

I found the application of Big Data to Netflix’s House of Cards interesting. Both Big Data/data mining and gamification are powerful tools for predicting and driving behavior, and I believe both methods have their limitations. In the case of Netflix, the company used Big Data to see how interests align and created a television show based on the findings. The Salon article talks about how creativity and imagination might be diminished if networks only provide what we already want. An ABC executive has argued that finding out people’s habits and what they do can lead to new and innovative products and programming more than finding out what people want. She argues that focusing on what people do allows you to think about new ways in which people can do those things. This knowledge serves as the foundation for innovation, rather than the end goal communicated by the people. In my previous blog post, I mentioned a limitation of gamification: it does not allow the individual to acquire certain values or, to use Morozov’s example, to recycle for the sake of recycling. Some other questions I thought about this week include how education is changing, or might change, to prepare us to interpret all of this data. It seems as though in an era of Big Data, more and more people will need to become quicker at turning trends into concepts like Netflix’s House of Cards (vaguely like in The Matrix, looking at the code and seeing people, buildings, etc.). Also, has Big Data always sought to identify you as both individual and plural? If not, when did it steer away from solely identifying the individual (as opposed to concerning how your interests align with other people’s)?

CISPA is back! And it’s making a big angry splash.

The Cyber Intelligence Sharing and Protection Act (CISPA) is a vaguely defined bill that proposes to allow companies and the government to share customers’ personal information with third parties in the name of “cyber security.”  Fair-game material may well include health records, credit information, and download habits, all of which could be shared if a corporation deemed itself to be acting righteously in order to protect its networks. These distributing entities would not be held liable for violating the privacy of their customers.  The broad wording of the bill allows considerable leeway in terms of what data is shareable, who is allowed to distribute the data, who receives the data, what sort of situation would deem such sharing necessary for a company’s network security, and whose data might be targeted in such a situation.  CISPA has received support from tech companies Google, Yahoo and Microsoft.  Viacom, Time Warner, Verizon Wireless and others have lobbied significantly for the bill’s passage.

Unsurprisingly, a lot of people are up in arms about this puppy.  It most definitely gives me the heebie-jeebies.  But my question is, does CISPA really allow companies to do that much more than they are allowed to do already?  This article suggests that the data-sharing implied by CISPA is already a reality (is it hyped up too much?). Regardless, our actions, tastes, attitudes, locations, love interests and fetishes are all already being broken down, transmitted and reassembled elsewhere continuously.  As Lisa Gitelman and Virginia Jackson state in the intro to Raw Data is an Oxymoron:

Try to spend a day “off the grid” and you’d better leave your credit and debit cards, transit pass, school or work ID, passport, and cell phone at home — basically anything with a barcode, magnetic strip, RFID, or GPS receiver.

In an age where our information, and thus our identities (your data IS you), are being constantly shared, what information can we consider truly sacred, precious, ours?  Are health records and credit information the last frontier?  What is privacy and can we protect it?

 

links about CISPA (nothing you couldn’t find by googling the acronym):

CISPA in Limbo Thanks to Senate Apathy

Exposing U.S. Government Pirates

DIB Cyber Pilot, Shady Government “Spying?”

In our discussions of Big Data, it is easy to forget that the presentation of raw data is the most important aspect of interpretation and, ultimately, decision-making. As Lisa Gitelman explains, the data are always “cooked,” which is to say they are organized, sorted, analyzed, and presented in order to reveal larger patterns and trends. Data on its own does not carry any particular meaning or significance (as any scientist would tell you). One of the difficulties with data analysis, however, is that not everyone is equipped to understand how data should be interpreted. We also cannot accept any one data interpretation as objective. Gitelman takes issue with the idea of objective data presentation: “Objectivity is situated and historically specific; it comes from somewhere and is the result of ongoing changes to the conditions of inquiry, conditions that are at once material, social and ethical.” (4)

Now that many segments of Big Data, and the software to analyze and manipulate them, are publicly accessible, it is similarly easy for anyone to publish data analyses without making their full methodology available. One avenue for this frictionless publishing is infographics. Infographics are appealing because (when made well) they are easily shared, understood, and interpreted. Infographics make it easier than ever to disseminate data analysis without delving further into data sources. The data analysis involved in infographic creation is inherently veiled. An infographic is not nearly as effective if it includes all of the data without some “cooking.” As a result, it is visually and psychologically easy to take an infographic at face value. The danger in doing this is clear: we simply accept what’s in front of us. A beautiful infographic does not let us see the “guts” of the analysis, and thus it is less transparent and has the potential to spread false information without the possibility of honest verification. What type of accountability should we expect from an infographic? What would it mean for us to hold an infographic accountable? (If the infographic below is to be taken at face value, should we expect more than 2-3 cited sources per graphic?) My biggest concern is that we do not hold our infographics to the same standards as other data sharing and information on the Internet. Infographics are cooked just like any other representation of data, and we should hold them to similar standards of factual rigor and acknowledge their inherent subjectivity.