
Monthly Archives: April 2015

How does a group that has existed “under the radar” for its entire history go about changing its image in the public eye in a way that not only gathers widespread support but also sparks meaningful political change? By queering the political landscape, according to Cristina Beltran’s article. This idea of queering is somewhat alien to me; I’ve heard the word thrown around, but I didn’t know what it meant in the context of political action. Thinking about it, though, new media provides perfect avenues for inverting, shifting, or otherwise rearranging the very ways people look at political issues, especially ones that deal with immigration reform. When DREAM activists decided it was time for their voices to be heard, they came out of the shadows on the Internet, exposing themselves to a network that can track and access their information. This decision to wave their flag in the face of a machine that doesn’t allow them to exist makes the DREAMers not just visible but impossible to ignore. Paired with a resonant message, they created a truly innovative and powerful grassroots movement.

I’m interested in the various modes of essentialism that have been discussed throughout several readings in this course. Kirschenbaum, of course, provides us with an in-depth analysis of screen essentialism—the process of effacing every element of a digital technology beyond its screen. Lisa Nakamura discusses the essentialization of Navajo women of color and the biopolitical stakes thereof, tracing a deliberately disseminated imaginary of women of color as “natural” fits for the sorts of affective labor or “women’s work” required by the tech industry. I’m curious, too, about other forms of essentialism that occur on Terranova’s ‘outernet’—I’m thinking about outsourced labor, for example, as a mechanism for the collapsing and effacement of individual difference and alterity.

Professor Chun’s lecture on big data reminded me of a site called DoNotTouch.org. When you visit the site, you become a participant in a worldwide game of mouse-moving. Your cursor is recorded as it moves through various obstacles, maps, and pictures. Then, about an hour later, once the information has been processed, your cursor becomes one of the horde, navigating through this online space.

What’s really interesting, though, are the patterns of behavior that form amongst the little flitting arrows. Mainstream patterns emerge, but so do counter-movements and some jumbled circles. When observed individually, the cursors retain some semblance of uniqueness; as a whole, though, they are more than predictable. I feel as if it is this principle, applied to much more complex behaviors, that drives big data as we know it today.
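To make that principle concrete, here is a minimal sketch in Python with entirely invented numbers (not DoNotTouch.org’s actual code): a thousand simulated cursors each trace the same path with their own quirks and noise, and while any single trajectory wanders visibly, the crowd’s average hugs the path almost exactly.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical setup: 1,000 cursors all trying to trace the same
# S-curved path, each with a personal bias and moment-to-moment jitter.
t = np.linspace(0, 1, 100)                     # normalized time steps
target_y = 0.5 + 0.3 * np.sin(2 * np.pi * t)   # the shared "obstacle course"

n_cursors = 1000
bias = rng.normal(0, 0.05, size=(n_cursors, 1))      # each user's quirk
jitter = rng.normal(0, 0.03, size=(n_cursors, 100))  # per-step noise
paths_y = target_y + bias + jitter

# Any single cursor deviates noticeably from the target path...
single_error = np.abs(paths_y[0] - target_y).mean()
# ...but the average of the whole horde is nearly deterministic.
crowd_error = np.abs(paths_y.mean(axis=0) - target_y).mean()

print(f"one cursor's mean deviation: {single_error:.4f}")
print(f"the crowd's mean deviation:  {crowd_error:.4f}")
```

The individual never quite loses its quirks (its personal bias never averages out), but the aggregate becomes predictable; that trade-off, scaled up to far more complex behaviors, is the statistical bet underlying big data.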

For the first time since starting Brown and the MCM program, I’m feeling compelled to ask the question: why are we studying this? It’s not because of some disillusionment with semiotics or critical theory that I’m questioning what has been so compelling for me the past two years, but rather because MCM230 was the first course in which I felt I had to negotiate “real life” and theory. I’m not saying that theory isn’t “real life,” but I do think it’s easy to get caught up in the critical analysis of something and forget that it is actually a real thing, like Facebook or Myst. Digital Media seemed practical in a way other courses maybe haven’t been. The everyday-ness/ubiquity of the texts and objects we studied felt satisfying because we were calling into question actions and interactions that we completely take for granted in our everyday, digital lifestyles. So have I answered the “why” of MCM? Nope, not at all. But I think the appearance of that question in my consciousness is a useful first step. Because ultimately, why would I study something that has no use? In other aspects of my life I’m feeling very preoccupied with the productiveness of the things I do, so figuring out exactly how MCM is productive seems like an important goal right now, especially as I’m about to complete the first half of my time here at Brown. Two years left to find the answer to that question…


Cramer writes: “The term ‘post-digital’ can be used to describe either a contemporary disenchantment with digital information systems and media gadgets, or a period in which our fascination with these systems and gadgets has become historical, just like the dot-com age ultimately became historical in the 2013 novels of Thomas Pynchon and Dave Eggers.”

I thought this was a really interesting comparison—both between Bleeding Edge and The Circle (which, I’d argue, address hugely different digital moments, a difference we might delineate as the one between web 1.0 and web 2.0) and between a “fascination with these systems and gadgets” and the dot-com age. The so-called dot-com age, I think, had a clear endpoint: a literal collapse, a historical and economically quantifiable bursting of the bubble.

I’d argue that the becoming-historical of a widespread fascination with media gadgets described by Cramer is much fuzzier and more difficult to pin down. While I do think it’s true that much post-internet art has, as Cramer writes, begun to take the internet in stride, treating it not as a crazy, foreign, fascinating Other but as a naturalized thing, I also feel like we continue to ogle the internet in various ways all the time: the number of wide-eyed tech think-pieces published online every day is, I think, not something to dismiss.

Maybe what’s changed, then, is that the tools we now use to address and critically examine the digital are themselves digital—that our contemporary brand of fascination is no longer that of looking in from without, but is, instead, self-reflexive: we mostly talk about the internet on the internet. Perhaps what’s “historical” or obsolete about Pynchon and Eggers’ tech-focused novels isn’t the two recent moments in which they’re set, but rather the analog medium in which they present those digital moments.

I’m still trying to puzzle out Nishant Shah’s notions of exposure and exposé as described in Exposed Net Porn, and whether either of those terms is inherently linked to hegemonic power, as he seems to suggest. Exposure concerns the circulation and proliferation of images without one’s knowledge or against one’s will; exposé is that circulation with a judgment cast, a narrative added with the goal of blaming, shaming, and taming a body. Can exposure be understood as a purely technical phenomenon, as when, for instance, Facebook algorithms choose which images to reproduce on one’s feed? In contrast, exposé could be the addition of the human element of judgment, of deliberately detrimental commentary. Shah specifies that the victims of exposé are already willingly in the public domain and possess agency to produce and disseminate images. Exposé does not force victims into the public domain; rather, it robs them of their agency to produce and disseminate their own images, rewriting the narrative of that dissemination to criticize victims’ participation in the public sphere at all.

My question, then, is this: can you call expository actions against powerful organizations, like the revelations made by the work of Edward Snowden and Anonymous, exposé? Is exposé inherently linked to some hierarchy of punisher and punished, or can it be reversed – can the citizen shame and tame sly corporations and weird religious organizations? If so, the regulations that arise from the moment of exposé, as wrongdoing is identified and incriminated, could do some good in the name of governmental transparency and free speech. That isn’t to say that exposé’s dark side, as seen in the experiences of revenge porn victims and many more, shouldn’t be paid close attention; but perhaps a *conscious uncoupling* of exposé from its slanderous connotations could be a productive way of understanding the act of calling power structures out on their failures.

Mayer-Schönberger and Cukier’s examination of the emergence and power of big data poses, as they argue, questions and problems for the dominance of theory in understanding the world. While I was initially skeptical that big data could provide much insight, they provide compelling examples of how strong correlations found using expansive and interlinked datasets can reveal new phenomena and disprove causal relationships. However, as a student at a liberal arts university who is not studying computer science or statistics and is immersed in theories proposed by academics, I am interested in how theory, and the problematic notion that “the body doesn’t lie,” is central to the way that big data is currently used.

It appears that many of the examples of big data use are relatively immediate: the onset of the flu, a pregnancy, what someone will buy next. While these questions are interesting for the multinational corporation that seeks to maximize profits every second, they are less useful for those interested in longer periods of time. Such longer periods have been tackled in recent years by digital historians taking the longue durée approach, who use massive text archives to examine long-term trends in ideas, thought, and movements by following terms through time. Such analyses are intensely dependent on theory, not because the relationships are necessarily produced by theory, but rather because the search is informed by theory and the results are typically only interesting or useful to the historian with a theoretical interpretation.

I am curious how future analytical tools, if developed for history or other ways of understanding longer durations of time, could use this kind of approach to understand the massive amount of data produced through the internet as a historical phenomenon. For example, how might what people search during a flu epidemic change over a period of 20 years (say, if a study were done in 2025), and what would that say about changing or static popular conceptions of disease? In what instances do people stop buying from a company due to labor abuses (which tend to be constant throughout a company’s history, whether it be Walmart or Amazon or Apple), and what does that say about which forms of resistance to labor exploitation may be most fruitful?
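As a toy illustration of the kind of longue durée query I have in mind, here is a short Python sketch. The miniature archive and the term “flu” are entirely invented stand-ins for a massive digitized corpus; the sketch only produces the raw curve of a term’s relative frequency over time, which is exactly where the historian’s theoretical interpretation would have to begin.

```python
from collections import Counter

# Hypothetical archive: (year, document text) pairs standing in for a
# massive digitized corpus such as newspapers or search logs.
archive = [
    (2005, "flu remedies and home treatment for influenza"),
    (2005, "market report"),
    (2015, "flu vaccine availability and flu shot locations"),
    (2015, "flu symptoms versus cold symptoms"),
]

def term_frequency_by_year(docs, term):
    """Relative frequency of `term` per year: occurrences / total words."""
    hits, totals = Counter(), Counter()
    for year, text in docs:
        words = text.lower().split()
        hits[year] += words.count(term)
        totals[year] += len(words)
    return {year: hits[year] / totals[year] for year in sorted(totals)}

# The numbers alone say nothing about conceptions of disease; it is the
# theoretical frame that turns a rising curve into a historical argument.
print(term_frequency_by_year(archive, "flu"))
```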

Both Terranova and Dyer-Witheford reveal how objects supposedly outside the purview of capitalism–whether the high-tech gift economy or World of Warcraft–are still firmly situated within capitalism and racialized forms of exploitation. In World of Warcraft, the exploitation of Chinese workers for the benefit of American players is accompanied by racism: “Chinese” gold farmers ruining “our” (Americans’) game sounds very similar to complaints about China “stealing” American jobs or producing shoddy parts. Terranova, from a different perspective, examines how free labor is essential to the late capitalist cultural economy rather than a fundamental challenge to capitalism. For both, play and labor, the capitalist and the freely given, are not rigidly separated but fully intertwined. In Dyer-Witheford’s examination of gold farming, the striving towards a perfect game outside the market economy is antithetical to the intensely capitalist nature of World of Warcraft – gold, consumption, and material goods are essential to the game’s structure. Seen in this light, it appears nearly impossible to separate the game market from the real one, even if Blizzard were able to block bots. Terranova concludes by saying that her analysis is not intended as a strategy for social action, but rather a recognition of how capitalism “mutates,” rather than simply acting and being responded to. In what ways can different forms of play and labor–perhaps not World of Warcraft but games that make fun of capitalist entities, perhaps labor that builds community rather than potentially useful products–mutate far enough to be considered resistance?

It is interesting to imagine what a “post-digital” world might look like. Much of the science-fiction genre might give us an idea of what we imagine or expect to be post-digital: a continuation of the digital form, amplified beyond simply the computer. However, many of those imagined futuristic worlds utilize the same kinds of interfaces that we are accustomed to: screens and keyboards, desks and panels with buttons and levers. If we ever reached a post-digital age, would the interfaces that we use remain constant? It is interesting to question whether the development of technology will be constrained by our preconceptions of what the future should or might look like.