Category Archives: Beth’s section

Wikipedia’s open source model—its defining quality—is fundamentally built on the idea that there are no grounds for power, and that the sheer number of people who become active as a result should be enough to offset those who post misinformation. In order to understand this idea, we need to understand both Rancière’s concept of the democratic paradox and Mayer-Schönberger and Cukier’s concept of big data (these explanations will be fairly similar to those I used in my paper on video games, big data, and the democratic paradox). As a contributor to Wikipedia, I was able to experience firsthand how accurate this assumption is and how well the model works.

Starting from the notion that democracy is defined by a lack of grounds for authority, Rancière argues that democracy as a form of government is essentially indistinguishable from democracy as a form of social and political life. Political activity thus becomes a struggle against categorization, putting these boundaries into play and disrupting them. This implies that policy and institution are both legitimized and delegitimized by the people. That is to say two things: while the people’s approval determines whether or not policies are enacted, the people as a whole both give rise to these policies and render them powerless, because those the policies govern have equal grounds to rule as those who wrote them. Thus, the lack of grounds for authority would seem to necessitate an excess of political activity: a constant challenging and restructuring of these policies and institutions against categorization and oppression. This is one of the assumptions behind Wikipedia’s edit policy: each article will be constantly challenged and edited, a perpetual work in progress, getting infinitely closer to the objective truth. This only works, however, if there are enough edits to maintain this perpetual improvement.

Thus, the second component of the assumption behind Wikipedia’s model is that there are enough edits to offset poor-quality ones. This is a fundamental component of Mayer-Schönberger and Cukier’s idea of big data: we need not be concerned with the “why” so long as we can observe and apply the “what”, and further, we need not be concerned about low-quality data so long as we have enough data that its effect is negligible.

In my experience as a contributor, Wikipedia was relatively (although far from perfectly) successful by both of these measures. I initially submitted an article for review, which a senior editor declined to accept as an independent article because of the relatively journalistic style in which I had written it. This creates a hierarchy which Rancière would argue technically defies democracy for two reasons: first, it creates a qualification for power, which goes against democracy’s most essential value. Second, according to him, any structure that is created in a democratic society can (and should) be immediately demolished. This is paradoxical in itself, however, because it is precisely this structure which allows the encyclopedia to be maintained more efficiently.

In order to make a successful contribution, I added my content to an article I believed it was relevant to. Although there have been no subsequent edits in the several days since my own, a look at the article’s edit history and talk page reveals an extensive process of editing and reshaping that has created quite a thorough and robust product. Although the edits are not constant, in my view they are frequent and extensive enough to maintain a high standard of neutrality and accuracy.

Overall, Wikipedia’s policies and structures certainly cannot be classified as inherently or perfectly democratic, nor is there enough data to prove unequivocally that the articles are without serious flaws. However, the site works reasonably well according to what the encyclopedia was designed to do, and it presents a strong case study for observing how democracy is compromised in many areas of the internet in order to maintain a sense of reputability.


Link to the article I edited

Link to my edit

Viktor Mayer-Schönberger and Kenneth Cukier claim in Big Data: A Revolution That Will Transform How We Live, Work, and Think that the use of big data is so pervasive, so effective, and so radical that it is fundamentally changing the empirical process across practically every discipline, shifting its emphasis from the “why” to the “what”. That is, human beings need not be concerned with the causal mechanisms that underlie correlations so long as we can observe and apply them; further, these correlations should give rise to our scientific theories and experiments rather than the other way around. This kind of shift would completely change how we think about the challenges we regularly come across in our lives, not to mention the hundreds of industries that have an ever-stronger grip on everything we do. In the essay Narrative Architecture and Big Data, Jacob Stern claims that this shift will translate to video games, incentivizing an emphasis on sales as opposed to depth and complexity. In contrast, according to the principles of Rancière’s democratic paradox, big data does not pose a threat to innovation and creativity within video games as an artistic medium, because emphasis on sales and emphasis on game quality are not necessarily mutually exclusive. This idea is fundamental to the game industry and player culture, rendering the distinctions Stern makes between games developed for the masses and games created for their own sake largely invalid.

In order to understand the democratic paradox as it relates to video games, we must first understand its implications as a sociopolitical construct. Rancière presents the idea first as a truism that is relatively intuitive: democratic government is threatened by the very excess of social and political activity it needs in order to function properly. Social and political activity must somehow be regulated in order to keep the government and society stable, which makes most democracies as we know them today actually aristocracies with the approval of the masses. This conception of the paradox pits democracy as a form of government against democracy as a form of social and political life; however, as Rancière argues, it is from this very notion—this separation between the government and the people—that a much truer, much more profound paradox arises. If democracy is defined by the absence of grounds or qualifications for power, then everyone should have the opportunity to exercise this power equally; to move between universals and particulars, between citizen and man, to challenge the order society follows. In this type of society, he claims, the function of political activity to challenge this order and the function of control and policing to maintain it are inherently intertwined. Democratic government is not threatened by social and political life but indistinguishable from it, making democracy the institution of politics as such. Political activity thus becomes a struggle against categorization, putting these boundaries into play and disrupting them.

Stern claims that big data will incentivize “pop” games based on sales and other types of consumer data (such as non-substantive additions to already lucrative franchises) over more innovative “independent” games designed purely out of creative spirit (e.g., indie games). Stern implies that video games as an art form are threatened by games as a product; however, games as a product and games as an art form are not so neatly separated.

Perhaps the clearest way the video game market exhibits the democratic paradox is in the sense that, just as policy and institution are both legitimized and delegitimized by the people who make them up, the video game market is both legitimized and delegitimized by those who make it up. That is to say two things: players’ responses and sales already have an enormous impact on which types of games are created and which are not, but more importantly, players both give rise to the market’s existence and render it powerless to quantify a game’s artistic value because they are just as capable of judging as those who created it. (Policies and institutions function in the same way; while the people’s approval determines whether or not policies are enacted, the people as a whole both give rise to these policies and render them powerless because those the policies govern have equal grounds to rule as those who wrote them.) This is exemplified by games which are not quite as immensely popular as those in the top tier, but which have massive cult followings that validate their artistic genius and encourage innovation and experimentation. For example, Supergiant Games received massive praise for their cult hit Bastion, known for its incredible aesthetic beauty and high quality of gameplay despite a relatively traditional post-apocalyptic story. However, instead of simply following that with a game of the same type, the studio produced an equally beautiful game called Transistor, with deeper characters and a more complex narrative that defies video game and scientific conventions while challenging the player to experiment and develop unique combat styles. While these games may never enjoy the sales of games such as Titanfall or Call of Duty, this does not discourage the developers in any way from continuing to create incredibly well-made and thought-provoking games.

The line between creators and consumers of video games is also becoming less and less defined as many companies such as Bossa Studios are making player feedback an integral part of the game creation process, while subcultures surrounding particular games based in YouTube playthroughs, commentaries, and other related fan content can become an integral part of the player’s experience. For example, we discussed gold farming in World of Warcraft and how the game’s economy became increasingly intertwined with that of the real world, which drastically impacted how certain players participated in the game and what they got out of it. This is not limited to MMOs, however; it takes place across genres, content, and fan bases. Super Smash Bros, Nintendo’s immensely popular fighting franchise showcasing the company’s most iconic characters, has a very active professional competitive scene and wildly dedicated fan communities surrounding it. In the case of sandbox games such as Minecraft or god games such as Civilization, among many others, mods and DLC (downloadable content) created by players form huge proportions of the games’ content (Minecraft has a number of unique online servers containing player-created worlds that are dedicated to particular styles of play as well).

Finally, just as political action is the disruption and putting into play of lines that divide categories of people, innovation in the game world often involves the disruption and putting into play of lines that divide categories of games (as well as other forms of media). Henry Jenkins frames these boundaries in terms of franchises such as Star Wars or Pokémon that traverse media as parts of what he calls a “larger narrative system.” In Game Design as Narrative Architecture, he describes these systems as unique domains “which [depend] less on each individual work being self-sufficient than on each work contributing to a larger narrative economy.” Aspects of the narrative structure are experienced in different ways through different media to create a broader, more complex, and more fleshed-out universe and story. “In such a system, what games do best will almost certainly center around their ability to give concrete shape to our memories and imaginings of the storyworld, creating an environment we can wander through and interact with,” again according to Jenkins. These boundaries can also be disrupted within games themselves; for example, The Stanley Parable combines the rigid path structures of a choose-your-own-adventure book, the panoptic gaze of an omniscient narrator, and the illusion of freedom of an RPG to create an incredibly witty and insightful commentary on traditional video game tropes, as well as a remarkably self-aware commentary on choice and authority.

All of these modes of interaction prove that video games as an art form and video games as a product are not two distinct realms. By extension, it cannot be assumed that big data will cheapen the artistic depth of video games and reduce them to empty pieces of entertainment, because the two are fundamentally intertwined. The future of these games will see innovation and experimentation just as much as responses to consumer feedback—often in the same places—and we must embrace this interplay to allow their full creative potential to be realized. In Jenkins’ words, “there is not one future of games. The goal should be to foster diversification of genres, aesthetics, and audiences, to open gamers to the broadest possible range of experiences.”

The tension between the digital and the analog, as explored in Cramer’s “What is Post-Digital?”, is comparable to the tension between the power structures enacted by big data and autonomous computing and Rouvroy’s concept of virtuality. Cramer explains that popular proponents of the post-digital era often use looser definitions of “digital” and “analog”, but the more stringent definitions best exemplify the sentiments of the post-digital: by digital is meant discrete and quantifiable, and by analog is meant continuous and undivided. The appeal of analog media, then, is in its continuity, which lends it its air of tangibility and permanence; it is not easily abstracted and thus seems more tangible, and it is not easily reconstructed and thus seems more permanent than digital media. Because the digital is so easily expressible, transformable, and replicable, it appears to lack the mystery, the unfathomable glue that seems to underpin reality.
The same aspects underlie Rouvroy’s concept of virtuality. Rouvroy argues that a regime of autonomous computation (whose mode of perception is big data capture) threatens what makes us fundamentally human: the notion of a human being as an entity existing through time, and that of the virtual—the essential, unquantifiable, spontaneous aspect of a human being that exists outside of herself, informs her identity, and serves as a utopic destination for herself.
To the permanence of the human self: the decisions that big data has shaped so far, and the decisions enacted by a regime informed by big data, involve this very assumption—that past events can exemplify future events, and that a person’s past experiences inform her future actions. To virtuality: this construct seems in line with arguments for human freedom and against a deterministic world—humans can behave unpredictably, are therefore free agents, and retain their humanity. Of course, this seems like a game of limited information; if there were a machine or computer that could sample and correctly analyze enough data about a human’s behavior, why would that human’s actions be privileged to remain unpredictable?
Predictive models are never infallible, however, and they account for a level of unpredictability. It then seems that, although it is possible to quantify human behavior, such tools serve only as an approximation, and an admittedly imperfect one. And so the question remains whether anything is truly analog, or spontaneous, or unpredictable—or rather, whether there has simply yet to be a tool capable of fully capturing, of digitizing, the real, the physical, the human. Perhaps humanity’s opposition to regimes founded upon predictive models comes not from rejecting or fearing them, but from the acceptance that complexity need not be fully irreducible to allow for (at least the semblance of) the virtual ideal self, that spontaneity informs and validates predictive systems, and that predictability does not imply determinism, but rather bolsters the rationale of a power structure that, for the sake of democracy, can be self-justifiably opposed.

It is interesting to imagine what a “post-digital” world might look like. Much of the science-fiction genre might give us an idea of what we imagine or expect the post-digital to be: a continuation of the digital form, but amplified beyond simply the computer. However, many of those imagined, futuristic worlds utilize the same kinds of familiar interfaces we are accustomed to: screens and keyboards, desks and panels with buttons and levers. If we ever reached a post-digital age, would the interfaces that we use remain constant? It is interesting to question whether the development of technology will be constrained by our preconceptions of what the future should or might look like.

What will the continuation of new media forms and the advent of the “post-digital” age mean for society, economics, law, and entertainment? Cramer describes the post-digital age as an extension of the digital, the journey into what lies beyond the current structures. What is the future? Perhaps we’ll find other reasons to read, ceasing to be bound by traditional meanings and embracing free thought. We will understand the awesome, creepy power of the internet and become less visible, fearing the possibility of a distant person piecing together who we are. Big data is already informing our conceptions of analysis, precision, and causality. “Data was no longer regarded as static or stale, whose usefulness was finished once the purpose for which it was collected was achieved” (Mayer-Schönberger and Cukier). Will the value we place on privacy be overshadowed by capture and surveillance, or will we simply be at ease, having entrusted these powers to persons who will not abuse them? Perhaps we will rediscover what it means to be public. New inventions and their interfaces will inform how we interact with the world. We will forget the things that are hidden behind them, falling prey to screen essentialism. We will discover new worlds—new games. Perhaps in the post-digital age, activism will cease to be activism, and government and democracy will also lose their meaning. Will we be better for it?


The notion of the post-digital is an intriguing one, and it illustrates the artistic and social anxieties of my generation as it tries to define itself in relation to the constant flow of technology that surrounds it. Florian Cramer discusses the dual meaning of the term “post-digital”, and I believe both meanings are relevant to a discussion of the current artistic and social relationships within the “digital age”, within new media. In one sense, post-digital may describe disillusionment with technology, perhaps because of its mass reproducibility (the internet memes Cramer describes, for example) or its deterioration of privacy. One may have arrived at this conclusion independently, or perhaps, post-Snowden, joined many others in a newfound disappointment with these new technologies. This definition corresponds to the man in the park with the typewriter. The solution is to disconnect and go back to the old, safer, better way. Interestingly enough, I find this phenomenon is rarely present in those who actually had to use typewriters. Both of my parents grew up with typewriters and would never trade in their new digital toys for the “frustrating and unforgiving” clacking of a typewriter, which they don’t seem to find nearly as romantic a sound as I do. I have to agree with Cramer in thinking that this desperate return is a bit naïve and ultimately will not do anyone any good. Of course, artists are the exception to this rule and generally have reasons other than nostalgia for returning to older materials or methods. I was so excited this summer to try out a digital drawing tablet, but within about ten minutes I was so frustrated with the minute delay between my stylus and the marks it made that I ended up drawing with pencil, scanning the drawing in, and only coloring it on the tablet. Digital technology is not known for giving its users physical control.

The second definition of post-digital is slightly different. Cramer uses post-feminism as an example: post-feminism does not imply that feminism is over and something has come after it, but simply that this new movement is both a continuation of feminism and a different movement in its own right. So post-digital could be looked at as an extension of the digital, a move towards restructuring our interactions and relationships with the digital without abandoning it.

What I take away from this course is a mix of these two definitions. At once, I am disenchanted by new media, but at the same time in studying its history I cannot help but consider its future: a post-digital that is aware and “enlightened” to the mistakes and concerns of its past, without abandoning technology. I suppose the moral of the story is that even though new media is problematic and violent, hateful, oppressive and scary, it is also creative, indefinable, communicative, a tool for activists, an instrument of change, and infinitely the most interesting phenomenon being tackled in critical media theory.

I want to use this week’s blog post as a space to discuss what this class has taught me. Never having taken an MCM class before, I began this class thinking that the subjects we would learn would be far from applicable to the real world. While many of the readings and themes we discussed in lecture are framed in highly theoretical ways, the main ideas we gathered inform our understanding of how digital media affects us in our daily lives. From the “promiscuity” of my computer through its shared networks to the possibility of exposure, this class taught me how to recognize and be critical of the technologies that surround me. While it is easy to trust technology because of the benefits it provides us, it is also easy to hate technology because of accounts, such as those of Edward Snowden, that demonstrate how the technologies we love can be used against us. While it can be scary to realize the hold that technologies have on our lives (before this class I didn’t think it ran so deep), I have also learned not to fear technology, since knowing how something works reduces the fear of it.

“The fact that some things are forgotten and others reminded is what gives human History a kind of normativity: ordinary lives are not inscribed in History. Exemplar existences and deeds are, and this filtering of the ‘real’ through human memory and historical inscription is how humans transmit normative evaluations from one generation to the other. Individual and collective human memory are of course not objective, but that lack of objectivity has proved absolutely necessary for the functioning of individuals, and for the organization of societies. What all this suggests is that an intensive replacement of human observation, evaluation and prediction by autonomic processes might well deprive us, in part at least, of our abilities to make normative judgements, and, more fundamentally even, to set new norms” (Rouvroy, 16)


I chose this passage from the Rouvroy reading and lecture because I find it compelling but not entirely convincing. On the one hand, I think her point that pre-emptive dispositives pose certain dangers in their effective employment—insofar as norms would be enforced in a way that bypasses contestation, rendering impotent our capacity to dissent, resist, or construct “counter-conducts” to governmental, corporate, or mass-cultural rule—is a really important one. On the other hand, I’m having trouble conceiving of a world in which individual and collective human memory is replaced entirely by automatic processes. Moreover, I don’t necessarily vibe with the idea that the normative functions of the remembering/forgetting dichotomy are so entirely indispensable to humanity. Yes, “norms” in an abstract sense have their utility, but I hesitate to applaud any discussion of the merits of silencing voices of the past—those condemned to obsolescence for not embodying the normative values of their epoch or the epochs that followed—without at least passing reference to the incredible dangers involved in the writing of history. Mostly, though, I just think the point is overstated. I can’t conceive of a world in which historicism and memory are substantially replaced by autonomic processes. No matter how much human observation, evaluation, and prediction are subsumed by autonomic process, there will remain a human consciousness there to observe the resulting conditions of this shift. And so, if the shift leads to a negative outcome, that negativity—though not constituting an awareness of the methods of observation, evaluation, and prediction that were formerly conscious—will constitute an imperative to rediscover the conscious means of human observation, evaluation, and prediction.

In “What is Post-Digital?”, Florian Cramer describes a phenomenon in which the technologies we use every day, and that essentially define our society, will one day be seen as obsolete and will no longer be seen as “technology.” In the case of the typewriter, people were once excited by its newness and innovation, but in this day and age it serves no practical purpose, used only by “hipsters,” as Cramer claims, or displayed in a museum. Its identity has completely changed. When I was a kid, I remember playing internet games with my friends over dial-up. Images would take minutes to load, instead of appearing instantly as they do today, and my connection would drop whenever my mom answered the phone. Yet I was always excited to go online every day after school. Kids today don’t have to deal with these issues, and they would view the internet that we grew up with as a totally foreign object. It is fascinating to think that society’s view of dial-up will continue to change, and maybe someday future hipsters will use dial-up or emulate it for its artistic value or to make a social statement. Regardless, in the future, dial-up and the modern-day internet will take on new interpretations and will no longer be valued for their practicality or technological innovations. It is both thought-provoking and scary to think that the technologies we use every day are dying and will be replaced.

In the Cramer piece there is a quote from Martha Jurksaitis: “[I]n this so-called ‘digital age’ people are more and more drawn to things that they can materially connect with….” This got me thinking about vinyl’s recent resurgence in the music marketplace. As overall music sales, especially digital sales, have declined in recent years, vinyl sales were up 49% in 2014. Vinyl’s market share is still very minor—around 2% of total music sales—but it, along with total sales, is slowly growing back from its low point in the late 1990s. I think it’s safe to toss out the argument of superior quality that Cramer points to on p. 704. We’re at a point now when full-quality digital media is just as good as analog: vinyl doesn’t necessarily sound any better than a .wav file, digital photos can be just as high quality as film, and so on. This wasn’t true ten years ago, but it is now. I also don’t think the question of why creators opt for analog is one that really needs to be asked—plenty of creators, myself included, like to be in the world with their work. They like using their hands and having real-world control. It’s why I buy analog synths, and why a photographer opts for a film camera. But the question of why consumers care—why vinyl is making a comeback—is an interesting one to me. I think chalking it up to a collector’s mindset, or a purist mindset, falls short of explaining the 9.2 million records sold in 2014. So yeah, I don’t know. I guess I’m just circling back on what the reading already said. But it’s a question I kick around in my head quite often, one that I still haven’t been able to solve.