
Category Archives: Fran’s section

Rouvroy’s piece covers a lot of ground. I think it is one of the more intriguing articles we have read this semester. I am interested in the intersection of “post-modern government rationality” and new autonomic computing infrastructures that “‘translate’ or ‘transcript’ the physical space and its inhabitants into constantly evolving sets of data points.” What are the stakes of this convergence?

I think two aspects of post-modern government rationality are crucial here: (1) The system, embodied by autonomic technology, becomes transparent—not in the sense that a veil is lifted, but in the sense that the system itself becomes imperceptible: its mechanisms are invisible, entirely undetectable. “Transparency” thus attains a new, negative connotation in post-modern rationality, as opposed to its positive connotation within modern rationality (government functioning as visible, open). (2) Post-modern government is concerned with prediction and avoidance rather than causal identification and remediation.

These post-modernist traits, armed with the ubiquitous sensors of autonomic computing, pose many threats and contradictions. For one, the notion that such predictive computing technologies are innately “objective” is dubious. To borrow an idea from the Mayer-Schönberger text, “our choices affect our results.” The way these technologies are designed and used necessarily involves human reasoning, and human bias. Thus, autonomic computing’s “objective” detection of “objective” threats and danger seems inherently impossible. Rather, such detection simply reflects—and inevitably serves—pre-existing notions and human theories. In our predictive pursuit of “threats,” we are simply pursuing our choices.
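To make that point concrete, here is a deliberately minimal sketch in Python (the feature names and weights are entirely hypothetical, invented for illustration rather than drawn from Rouvroy or any real system) of how an “objective” threat score simply echoes its designers’ choices:

```python
# Hypothetical illustration: a "threat score" is only as objective as the
# human choices baked into it (which features are counted, how they are weighted).
def threat_score(person, weights):
    """Weighted sum over whatever features the designers chose to measure."""
    return sum(weights[feature] * value for feature, value in person.items())

# The same individual, scored under two different sets of "objective" criteria.
person = {"late_night_travel": 1, "encrypted_messaging": 1, "prior_contact": 0}
weights_a = {"late_night_travel": 0.1, "encrypted_messaging": 0.8, "prior_contact": 0.1}
weights_b = {"late_night_travel": 0.6, "encrypted_messaging": 0.1, "prior_contact": 0.3}

print(threat_score(person, weights_a))  # 0.9 -> flagged under scheme A
print(threat_score(person, weights_b))  # 0.7 -> perhaps not flagged under scheme B
```

The detection changes not because the person changes, but because the designers’ prior choices do.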

Moreover, I am interested in how this convergence poses a threat to human subjectivity, and subsequently, “to the future”, or the progression of time. Although these technologies purport to operate in “real time”, doesn’t the fact that they are intended to predict and pre-empt place them out of so-called “real-time” and in a space that has yet to unfold? More insidiously, the objective of these technologies of pre-emption is often to prevent or to destroy this space. In this way, a system that constantly pre-empts does not move forward because it eliminates the forward. Simultaneously, this elimination of the “forward”, of possibility, also seems detrimental for humans’ ability to think forward, “of themselves beyond themselves”. The result is a crystallization, a freezing or stalling out of time and thought.

Big data is a phrase that is misused frequently in popular media. (A lot of terms are misused by popular media, but I digress.) According to Wikipedia (totally legitimate, as we learned in lab last week): “Big data is a broad term for data sets so large or complex that traditional data processing applications are inadequate.” So what counts as big data? The information collected by PRISM? Certainly. (Hi, NSA!) The data used to develop targeted, behavioral advertising? Yes. But what about sensor data from the Large Hadron Collider (LHC)? That too! (Fun fact: The LHC records 25 petabytes per year, which accounts for less than 0.001% of its streamed sensor data.) What do these applications say about the morality of big data?
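For a rough sense of scale, here is a back-of-envelope sketch in Python, using only the two figures quoted above (and treating “less than 0.001%” as an upper bound on what is kept), of what they imply about the raw sensor stream:

```python
# Back-of-envelope check on the LHC figures quoted above:
# 25 PB recorded per year, said to be less than 0.001% of the streamed sensor data.
recorded_pb_per_year = 25          # petabytes actually written to storage
retained_fraction = 0.001 / 100    # "less than 0.001%" -> at most 1e-5

# Implied lower bound on the raw streamed volume.
streamed_pb_per_year = recorded_pb_per_year / retained_fraction
seconds_per_year = 365 * 24 * 3600

print(f"Implied raw stream: at least {streamed_pb_per_year:,.0f} PB per year")
print(f"That is roughly {streamed_pb_per_year / seconds_per_year * 1000:,.1f} TB per second "
      "filtered away before anything is ever stored.")
```

In other words, even the “big data” that gets kept is a thin residue of a far larger stream discarded up front, and deciding what to discard is already a choice about what counts.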

The discussion of the morality of any big data system reminds me of one of my favorite quotes by Noam Chomsky: “As far as technology itself and education is concerned, technology is basically neutral. It’s like a hammer. The hammer doesn’t care whether you use it to build a house or whether a torturer uses it to crush somebody’s skull; the hammer can do either.” I believe this applies to big data: it is not inherently unjust, but its applications define its morality. Big data allows Google to notify me that I might be interested in My Little Pony plushies, but it also allows physicists to demonstrate the existence of the Higgs boson.

I have thought about this a lot, sometimes just randomly, wondering where we can go from here in terms of technology. It seems like nothing is ever completely “new” anymore; everything is redone or remade to appeal to a certain group of people, or designed to be more convenient. The “SmartWatch,” for example, didn’t blow anybody away – I think we all knew it was coming. In Cramer’s post-digital article he says, “In this so-called ‘digital age’ people are more and more drawn to things that they can materially connect with. The aesthetic possibilities offered by cine film are not simply visual, they can also be felt…it has a physical tangibility…and the images therefore have a material basis in a way that the digital image can never have.” This is very interesting. Every film buff is obsessed with 35mm, and it also relates to the recent comeback of Polaroids – not just literally, but within the design of the app Instagram, whose whole idea is built on the question: how can we make art through a white square frame?

Another quote I found engaging from that article was: “In this sense, ‘post-digital’ is more or less synonymous with the contemporary-art term ‘Post Internet’ as coined by the artist Marisa Olson and defined by the critic Gene McHugh in 2010 on his blog of the same name: ‘Any hope for the Internet to make things easier, to reduce the anxiety of my existence, was simply over – it failed – and it was just another thing to deal with. It became the place where business was conducted, and bills were paid. It became the place where people tracked you down.’” This is relatable because we all live in a world where technology is so invasive in tracking us that we like to escape into film noir, Polaroids, typewriters, etc.

I think Cramer’s point about using both new media and old media according to what a task requires is pretty interesting. Whenever something new comes along, people often assume it is inherently better than the older alternatives, when in reality the old and the new both have their positives and negatives. The bicycle is especially interesting to me: it is more energy-efficient than a car and better at avoiding traffic, yet it is often overlooked simply because it is older. It takes time for people to see past the novelty of a new product and realize that it might not actually be the best, or at least not always the best.

The texts by Rouvroy and by Mayer-Schönberger and Cukier bring up one of the most interesting and thoroughly debated issues in the broad realm of human knowledge: artificial intelligence—along with Big Data, its little brother, so to speak. The term itself betrays the standpoint that most people (or should I say, “natural humans”) share on the topic: computers can never match up to humans. Even when they outmatch natural humans in every calculable way, conventional wisdom says that AI will always lack the “je ne sais quoi” that makes natural humans human. Indeed, the use of the word “artificial” betrays the fear that a creation of ours could surpass us in everything we hold as our defining qualities.

The fear is that AI, as well as Big Data (BD), will end up dehumanizing human beings. Instead of being unique and special, as any kindergarten teacher would remind us, BD and AI make us all into a pile of numbers and variables. The transition from individual to (the dreaded) dividual will be complete. As Mayer-Schönberger and Cukier argue, this future is not to be feared. We should instead consider the myriad benefits of BD: lower costs, higher efficiency…more profits for corporations. Splendid! In this future, we are no longer consumers to be wooed by advertising and better offers—a more primitive form of the manipulation enabled by BD—but datasets to be relentlessly plied by algorithms, not to convince us we want something, but to make it easier to acquire what we already want. It is this distinction that is so disturbing. Mayer-Schönberger and Cukier allude to Chapter Eight, in which they address the dark side of BD, but in Chapters One and Four they seem smitten with the golden horizon enabled by BD. They assume that the correct lens through which to view the world is the capitalist lens (capital-normativity?), but when viewed from an intellectual perspective, the displacement of the “why” in favor of all those other interrogatives is highly disturbing.

Time to get personal: In a future where BD will predict exactly what films people want to see, and AI will be able to churn out those films for almost nothing, what place in the world will aspiring filmmakers such as me hold? In a world where BD and AI render “why” obsolete, what place will academics hold? What good will philosophers be if their purpose in life is made irrelevant? Even though Mayer-Schönberger and Cukier acknowledge that the “why” and the scientific method will never completely die, it is hard to imagine that the arts and humanities will not become even less valued than they already are.

Nevertheless, I see a silver lining to this cloud (pun intended). Perhaps instead of a process of—or a brief period of—dehumanization, we will begin a process of “re-humanization”. With our future society awash with man-made creations acting just like us, we will hopefully come to rethink what human means. I realize that this sounds just like the ending to the most utopic of science-fiction stories, but it is a likely outcome. In fact, the ironically academic notion of re-humanization has precedent in the contemporary socio-political moment. When it comes to queer people, people of color, and other marginalized groups, we as a species struggle, theorize, argue, and physically fight, but ultimately, we embrace. We will do the same with Big Data and AI. Why? It’s only human.

Rouvroy’s argument fits perhaps too neatly into the trope of computers and machines as emotionless systems and humans as curious, questioning, and thereby disruptive individuals. As Heidi M. Ravven describes in The Self Beyond Itself, the idea of human bodies as unique individuals with distinct and free wills is inaccurate (she approaches this from neuroscientific, psychological, and historical perspectives) and carefully culturally constructed (from Christianity through Descartes to neoliberalism). I therefore think that the Arendtian perspective brought up by Rouvroy is extremely useful for detailing the ways in which a society dominated by data-driven decision making could both fail to provide a stable structure and allow for the emergence of totalitarian or similarly oppressive governments. Arendt recognizes the role of the newborn as central to the functioning of a healthy society of people and as a threat to totalitarian regimes in her work The Origins of Totalitarianism. The newborn for her embodies the potential for outside or outsider knowledge and perspective to enter the public space without precedent or anticipation. With the newborn comes the potential for normative and normalizing ideologies to be ruptured. Ravven similarly recognizes the outsider/dissident/whistleblower/newborn as a (perhaps the) central figure in a society.

The argument for the outsider is complicated by efforts to delineate “good” and “bad” dissidence. The Tsarnaev brothers certainly functioned as outsiders, but it would be very difficult to justify their behavior (although admittedly that is the whole point, because I am within ideology). Their particular example of a “terrorist” threat is exactly what data-driven policing is advertised as preventing. Snowden is an excellent example of what might appear to be a “good” dissident, in that his actions were nonviolent and informative. Unfortunately, his actions would have been prevented by data-driven policing if it had been possible. That “if it had been” is important, however, and questioning to what extent technology actually can be complete and live up to its hype almost inevitably leads to the conclusion that it cannot.

One perspective that I would also like to bring up is Mayer-Schönberger and Cukier’s vaguely technophilic understanding of the ways in which big data has been helpful with regard to disease prevention. I think that from within the protective confines of critical theory one can easily become detached from the very real ways in which humans suffer from natural causes or catastrophes. There seem to be – and there may be further – ways through which big data can aid humanitarian efforts. Since the ideology of big data devalues humans, however, I think a delicate balancing is required between the need to address the suffering of others and the need to restrict the use of big data. The use of biodata for purposes other than humanitarian or health aid could perhaps constitute an abuse of that data.

I imagine that someone else will also link this video, but just in case: a music video that is definitely worth critiquing (putting big quotation marks around something in no way gives you a free pass), but one that nonetheless mobilizes the discourses of big data in a pretty funny way.

“In Book VIII of the Republic, Plato describes the state of the democratic city as a state in which, instead of ruling, rulers have to obey, in which fathers obey their sons, and the elder imitate the younger, in which women and slaves are as ‘free’ as men and masters, and in which even the asses in the streets ‘hold on their way with the utmost freedom and dignity, bumping into everyone who meets them and do not step aside.’” (Ranciere, pp. 49-50)

I personally find this state of “chaos” described by Plato to be a fairly accurate representation of democracy as it is often conceived of in America. Although it isn’t always accurate in practice, the idea of American democracy as allowing anyone to do whatever they like is very common. From people yelling “’Merica” while doing something stupid to declaring their right to free speech in situations where it isn’t necessarily applicable, many people see democracy as a free ticket to say or do anything.

The idea that “rulers have to obey” the people is also incredibly important in the common conception of America. This idea, articulated in “by the people, for the people,” typically makes a person think that the government will be fully responsive to the needs of the general population. Although this isn’t always the case with the American government, it is important to see that it is perceived as true.

In Wednesday’s lecture, Wendy raised the point that the Rodney King case is a case for surveillance, because technology is thought to reveal the truth. Connecting this to a more recent instance of police brutality: without concrete video evidence, Walter Scott’s murder would likely have gone unnoticed, or at least would not have resulted in any justice being served against the police officer. That is why police body cameras seem to be the solution to this problem. Connecting this idea to Mike’s lecture about police violence, I’m still grappling with the ideas of privacy and digital media, and how they intersect with other identities like race and gender. And within a democracy, how do they relate to citizenship?

Is there a way for people of multiple identities to have digital privacy? Or are some identities forced to relinquish their privacy for their safety? Or vice versa, like with the example of revenge porn.

Ranciere’s police = a form of othering; democracy thus becomes a mechanism for coping with otherness. Importantly, citizenship works through a logic of exclusivism. The inclusion of specific bodies in the narrow arena of citizenship often falls along race and gender formations. As Ranciere writes, “It is the logic of the police to carry out a continuous privatisation of the universal” – privatisation, too, works through an exclusionary ethic, an implicit form that “restricts the sphere of citizenship to a definite set of institutions/problems/agents/procedures”.

As the private becomes public, the distinction between the two terms becomes increasingly blurred. In a biopolitical sense, democracy is an instrument of control that compartmentalises and fragments people, thereby justifying violent, institutional acts of sexism and racism. The DREAM Act is an apt demonstration of this, and it also points to the notion that technology is said to be ‘dissolving’ the political.

I have seen other posts that question the efficacy and meaningfulness of online social activism, casting doubt over the purported puissance of DREAMers’ work and the possibility of actualising political change. I would offer the position that perhaps, according to Ranciere, the processes of standardised political action such as voting are simply the workings of the police. The exclusion of DREAMers from the categories of citizenship makes public intrusions not just powerful, but necessary. The play on “coming out” is a performative act with high stakes; coming out as homosexual becomes symbolically paralleled with coming out as a non-citizen.

I thus found the description of DREAMers as ‘gothic subjects’ somewhat perplexing. There seems to be a misplaced reinforcement of their otherness in feeling the need to describe the nature of their experience as ‘gothic’ and thus somehow attached to weird notions of fear, attraction and excess… idk

I really enjoyed the video for the #nosomosdelitos campaign. For one, it challenged notions of “slacktivism,” the idea that users may use social media as a cop-out from actually getting involved and engaged with a social movement; that with a simple “like” or “retweet” one has done enough for the cause.

Skeptics have argued against the effectiveness of social media in creating change, arguing that social media campaigns lack the tools to mobilize and integrate themselves into the political institutions that ultimately formalize progressive policy (in the U.S., this means lobbying Congress or standing before the Supreme Court). In essence, social media movements would need to bureaucratize and vertically integrate in order to face political institutions that are already bureaucratized and vertically integrated. This is inherently difficult, as new media campaigns are often based upon networks.

In requiring one to take a picture to go alongside their petition signature, I think the #nosomosdelitos campaign showed that the aforementioned ideas are not necessarily true. By inserting these faces and bodies into places they were not invited to, it forced people to care. And that is where the true power of social movements lies: in creating allies who may not necessarily identify with the cause, but can sympathize. Seeing a picture, and not just a name on a piece of paper, has the potential to evoke within us all that is human, and this stands in stark contrast to the bureaucracies that govern policy today. In my opinion (and I may be naive), this can add a sense of multidimensionality and fervor to social movements that can’t be ignored.