
Monthly Archives: April 2013

In her essay “In Defense of the Poor Image,” Hito Steyerl documents the rise of the “poor image”: the mass-circulated, reedited, low-resolution copy of an original image, warped by unreliable internet connections and loaded with digital debris. For Steyerl, the poor image represents the liberation of the original from the shackles of the filmic medium into the “digital uncertainty” of circulation, forsaking image quality for streaming speed. In this respect, the “poor image” is the antithesis of modern cinema’s quest for medium refinement and ever-higher resolution: it breaks free from the constraints of the physical medium and enters a system of exchange, degrading itself further with every reedit and reupload.


While reading Steyerl’s essay, I couldn’t help feeling oddly nostalgic, remembering a childhood spent viewing poor images, downloading misspelled files, and using intimidatingly complex video clients overrun with pop-ups to play television clips so grainy that, as Steyerl writes, “one even doubts whether it could be called an image at all.” Indeed, I realized that Steyerl’s essay resonated with me so strongly because I myself am a member of the “poor image” generation, the generation of LimeWire and Norton Antivirus, a time when internet connections were fast enough to download content but the proper vehicles for that content had yet to be realized, resulting in a bustling economy of “poor images” on file-sharing sites. Looking back, I am often amazed at the low resolution of video that I deemed “acceptable” for viewing; having no other access to these videos and images, and having grown accustomed to the aesthetics of the “poor image” – the misspelled file names, the edited-in subtitles, the constant pop-ups, and the atrocious image quality – these negative qualities became just another natural part of the image to me.

I hope you all are (enjoying?) reading period. If any of you are interested and need a break from studying, there’s an event at RISD about Video Games as Art this Thursday, May 2nd, at 7pm in College Building room 412. It might be interesting, especially since we covered the idea of Gamic Spaces in class.

Good luck with finals!

Commenting on the introduction to Raw Data, somebody noted during section that data has become “singular”. More precisely, it has become a mass noun, a category of uncountable noun usually applied to substances like water, liquid more generally, smoke certainly, but not many solids. (Note the mandatory ‘s’ on that last one.) Interestingly, discrete things like the dollar or the datum can, in sufficiently large aggregate, also become mass nouns: cash and the new sense of data, respectively. I don’t think it’s too much of a leap to say that a sort of commodification is going on when this happens; “I have many dollar bills” is a strange statement, implying individual attention to each one rather than consideration of the cash in aggregate, as a resource. (Similarly, attention is a mass noun, despite the fact, increasingly ignored in the digital world, that it’s far from infinitely divisible.)


But like water, data has begun to be perceived as a fluid or continuous substance; as the Introduction notes, data are “corpuscular, like sand” but also “aggregative.” Sand is of course a mass noun; you can have a bucket of sand, a lot of sand, but not a sand or three sands. The individual grains of sand are not usually considered, any more than molecules of water are. This has odd implications for the social construction of hard drive space in an era of overabundant free space.


In the dark days of scarce storage space, the prevailing metaphor for a hard disk (as seen by the user — we’re being screen essentialists, here) was an office. Data (datums) were individual documents; larger organizational structures were files, folders, briefcases and so on. The implication was that free space was a scarce resource, like desk space (note the stigma of a cluttered Desktop even on modern machines), and that data were to be considered individually. Increasingly, this paradigm is being modified; data is more like sand or cash, a continuous resource that occupies and permeates the free space of a storage medium. Meanwhile, free space has also changed, in subtler ways. I asked a few friends today whether free space on a hard drive was more like free space in a closet, or free space in a wallet. The more technically-minded of them were much more inclined to say “wallet”. Crucially, the more piracy-minded ones were especially likely to take that view.


In such abundance, free space cries out to be occupied. If data is like water, free space is a sponge. But this permeability creates a demand that legal channels cannot fill. To be sure, high-definition movies and high-quality music fill space quickly, but a terabyte hard drive, full, represents thousands (or hundreds of thousands) of dollars of legal media files. Piracy is often associated with an attitude of obsessive data gluttony. A common joke is that “I heard a song I liked, so I downloaded their discography.” I don’t think piracy creates this attitude, though; I think it’s a response to it. Lawrence Liang and others largely focus on piracy as a response to necessity, such as that incurred by poverty or regional unavailability of cultural content. While this is a strong influence, it can’t explain the discography-paradigm of data overkill. By soaking up data, free space licenses and even encourages indiscriminate piracy: Why not download everything available? Free space exists to be filled, after all. In this sense, the permeability of free space implies a permittivity of free space. Together, they propagate a cultural wave of totally-accessible media, of Everything That Ever Was, Available Forever at the speed of light.

“It just means you are the sum total of your data. No man escapes that.”


I think this Don DeLillo quote captures our relationship with digital media, especially insofar as our data are used to develop new products or test possible responses. What I found most interesting about House of Cards was not that a formula could be derived from viewing habits, but that the resulting show would be successful. I find that Big Data is often data collected through supposedly objective means with the intention of gleaning something subjective. This presents a problem for those collecting the data, but not so much for those producing it. Few of the web pages I visit are ones I check regularly and like; most I visit out of curiosity and without enjoyment. Because the collected data treats this numerically, never reaching beyond the screen into my personal taste, I feel a sense of evasion. Of course, this is one small personal victory in a sea of providing my own data. I sense a duality between the me that is the sum total of my data – which certainly exists – and the me that feels distant from the numerical and objective ways I’ve been categorized.

In lecture on Monday, we focused on the elements of big data that are “creepy,” “weird,” and produce anxiety. We discussed these factors in terms of what data is (surveillance, capture) and what it does (prescription). For me, directed advertising, although it’s creepy, seems like the least threatening use of data, since it is transparent in its functioning and effects. Suddenly your search terms appear in advertisements and Netflix updates its recommendations (Cerebral Visually Striking Art House Documentaries?). It is creepy, but it’s not a mystery where the data comes from or where it “goes,” at least in the short term. What becomes threatening, for me, is that all the data being produced is, in the ideal functioning of the system, stored, analyzed, aggregated, etc.
“Big data” is one of the driving economic forces of the internet and related technologies. In recent years, big data has consistently led the market in digital technology. Not only is big data produced by and directly applied to processes of online consumption; the ability to store, analyze, and aggregate exponentially growing quantities of data also drives both the hardware and software sectors of the digital economy.
I know I keep bringing my discussions back to labor, but the production of data is an unrecognized manifestation of the free labor driving the digital economy. Every use of the internet is a market transaction, not only through the use of the technologies and infrastructures that enable such connectivity, but through the production of data itself. That data “creates jobs” in hardware and software development and manufacturing, for workers in corporate and government offices who analyze it, and for marketers producing ads for increasingly particular niche audiences. One’s internet use is the productive (labor) act that gives rise to all the others.

“Indeed, data are so aggregative that English usage increasingly makes many into one. The word data has become what is called a mass noun, so it can take a singular verb. Sentences that include the phrase “data is…” are now roughly four times as common (on the web, at least, and according to Google) as those including “data are…” despite countless grammarians out there who will insist that data is plural.”

This passage in the introduction of “Raw Data” Is an Oxymoron resonated with me, as I remember quite clearly my mother telling me that the word data is plural and must always be treated as such (she is undoubtedly one of the grammarians Gitelman is referring to).  Our growing tendency to think of data as an aggregation, as a single entity rather than a collection of unique points, is representative of the loss of individuality that has come with the rise of Big Data. To massive information age entities such as Facebook or Google or Amazon, it doesn’t really matter what any individual user wants; the opinions and desires of the masses, of the collective body of users, are far more important to these companies than the problems or wishes of a single person. The voice of the individual is drowned out by the roar of the crowd. Users is saying it wants Netflix to include the original Star Wars on its instant streaming service.
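Gitelman’s claim about “data is” versus “data are” is, at bottom, a simple phrase-frequency comparison. A minimal sketch of that kind of measurement, run here over a toy stand-in corpus rather than the web-scale text Google indexes (the corpus string and helper name are my own illustration, not anything from the reading):

```python
import re

# Toy stand-in corpus; a real check would run over a large web or book corpus.
corpus = (
    "The data is stored remotely. Some insist the data are plural. "
    "Once aggregated, the data is treated as a mass noun."
)

def phrase_count(text, phrase):
    """Count case-insensitive whole-word occurrences of a phrase."""
    return len(re.findall(r"\b" + re.escape(phrase) + r"\b", text, re.IGNORECASE))

singular = phrase_count(corpus, "data is")   # mass-noun usage
plural = phrase_count(corpus, "data are")    # count-noun usage
print(singular, plural)  # 2 1 in this toy corpus
```

The interesting quantity is the ratio of the two counts; Gitelman reports it at roughly four to one in Google’s index, and the same comparison over any sizable corpus would show how far mass-noun usage has spread.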

As the amount of “raw data” available on the Internet continues to grow rapidly, fantasies have been conjured about the productive power of “mining” untapped data, of combing through endless reams of numbers to discover an unfound pattern or some hidden conclusion. However, while this fantasy of a “data revolution” is usually discussed in the context of a Big Brother-style megacorporation feasting on “big data,” the dream of the hidden potential of data analytics has bled into many aspects of contemporary society, reaching an almost fetish-like level of obsession in arenas like professional sports. Sparked by the development of “sabermetrics” in baseball (as detailed in the film Moneyball), professional sports franchises have in the past five years dived headfirst into the world of “advanced stats,” totally reorienting their decision-making practices by pledging allegiance to the objective power of numbers. With access to almost limitless varieties of statistics – released even to the casual fan on websites like – the Internet subculture surrounding sports like baseball and, particularly, basketball has become fundamentally governed by a logic of efficiency and data manipulation. Indeed, in basketball, this recent “statistical revolution” – led by forward-thinking teams such as the Houston Rockets – has reached a fever pitch; alongside traditional events like the All-Star Game and the Slam Dunk Contest, among the most covered events in the NBA today is the MIT Sloan Sports Analytics Conference.

However, this fixation on the productive power of complex and abstract statistics has fundamentally altered not only the way teams make decisions but also the way the viewer experiences and reacts to the spectacle of the game itself. Captivated by the same fantasy of ultimate efficiency promised by advanced data analytics as the coaches and managers of their teams, the passionate basketball fan is now more invested in a favorite player’s “Win Shares per 48 Minutes” and “Player Efficiency Rating” than in their signature dunk or intangible leadership qualities. In this respect, the dominance of the “statistical revolution” has fundamentally altered the very spectacle of basketball; dramatic comeback narratives and clutch playoff performances have, in some sense, been replaced by the cold, hard truth of numbers. Following this, advanced stats have dramatically changed the way fans and the media evaluate certain players – uncovering hidden statistical gems and shaming players who fail to produce the right numbers. This can be seen in the media’s reception of a player’s ability to draw fouls and “get to the line”; while shooting free throws is almost universally regarded as the most boring aspect of basketball, the ultra-high efficiency of the free-throw shot has reconstructed the free throw as the pinnacle of the modern NBA offense – and has made stars and fan favorites out of players who can consistently draw fouls, James Harden being a notable example.

In this respect, for the passionate fan watching the NBA, now more excited by a hyper-efficient foul than an inefficient, hoisted midrange jump shot, the NBA has become data enacted, data as spectacle. While the passionate NBA fan used to tune in a few times a week to see their favorite players put on a show, in the age of “big data” the fan now watches data points interact, match ups play out, and efficiency overcome inefficiency.

One thing that really resonated with me throughout the course is the idea of the misrecognition of digital media as something less permanent and less real. What is the real distinction between the screen and reality? I found compelling Kirschenbaum’s insistence on taking into account both what we see and the material substrate beyond or behind it. Materiality isn’t just what is visible; things invisible to the human eye can also be material, of a physical nature. I think it would be interesting to connect this idea to McQuire’s observations on the materiality/immateriality of architecture and media cities. I also thought Wark’s point about how the screen obscures what’s behind it was a cool thought: thinking of the digital as “friction free,” acting like light itself, a pure form “free from material residue.” Another part of the course I found particularly interesting is the relationship/tension between architecture and freedom that Jenkins comments on in his essay.

In this week’s readings, we revisited the concept of “Big Data” and reflected on its overwhelming relevance today. As Gitelman and Jackson state, in order to go completely off the grid, we would have to “leave our credit and debit cards, transit pass, school or work ID, passport, and cell phone at home — basically anything with a barcode, magnetic strip, RFID, or GPS receiver.” Such objects, although they have virtually become necessities in our everyday lives, are all items that can and will put us on the radar of technology and provide others with certain data regarding our interests, our whereabouts, our private and personal lives. Such items contribute to the massive and otherworldly pool of information that is “Big Data”: facts and figures compiled in “not just terabytes but petabytes…where peta- is a prefix which denotes the unfathomable quantity of a quadrillion, or a thousand trillion.” Such a massive entity is almost unthinkable and, when utilized with the purpose of interpreting the data, it can become very powerful, because it unequivocally reflects the preferences and direction of the global population. Every individual’s data is compiled and analyzed to give insight into exactly which direction our world is headed. Furthermore, the accuracy with which such “Big Data” can be interpreted is light-years beyond the manner in which data was collected in the era preceding such advanced technology. Rather than trying to represent the general population by choosing certain sample sizes, and rather than struggling to remain objective while conducting a research study, data today is handed to statisticians and data collectors on a silver platter. So, in an era where research and technology are changing at lightning speed, is “Big Data” here to stay? Is it moral, ethical, and reliable? Most of all, is it truly objective?

I saw it all go down. Information poured in: texts, Instagram photos, YouTube videos, Vines. Flipping between Google Maps, my fervently refreshed Twitter feed, and the Boston police scanner, I felt as if I had some role in the events of last Thursday night. My shadowed room housed all the sights and sounds of another world. There’s a certain wonderful creepiness about occupying a kind of hyperreality in which you can hear an address called out over the (public?) police scanner and zoom in on the house on Google Maps before the officer can verify that he heard it correctly. It’s more intimate than watching OJ careen down the freeway. In this moment, in which there hardly exists an intermediary, what are the media? Up all night, thousands of people wanted the raw information from Twitter and the scanner. How have mass media been replaced by grassroots, crowdsourced media? With an undeniable desire for immediacy, why wait for the AP to craft their tweet when there are dozens of #Watertown residents posting geo-tagged tweets like, “holy shit! explosion in the street!”? In a country in which even the largest media conglomerates err regularly, why not turn away from traditional sources? Holed up in their homes, people wanted to come together and communicate, valuing amplification over verification. But where’s the division of private and public in this moment? When everyone is watching, documenting, tagging and tweeting, how is this citizen surveillance any different from governmental surveillance or a surveillance state? What’s the difference between a crowd of iPhones surveying a street and a nest of surveillance cameras on every corner?