Rouvroy’s piece covers a lot of ground. I think it is one of the more intriguing articles we have read this semester. I am interested in the intersection of “post-modern government rationality” and new autonomic computing infrastructures that “‘translate’ or ‘transcript’ the physical space and its inhabitants into constantly evolving sets of data points.” What are the stakes of this convergence?

I think two aspects of post-modern government rationality are crucial here: (1) The system, embodied by autonomic technology, becomes transparent—not in the sense that a veil is lifted, but in the sense that the system itself recedes from view. Its mechanisms are invisible, entirely imperceptible. Thus, “transparency” takes on a new, negative connotation in post-modern rationality, as opposed to its positive connotation within modern rationality (government functioning as visible and open). (2) Post-modern government is concerned with prediction and avoidance rather than causal identification and remediation.

These post-modernist traits, armed with the ubiquitous sensors of autonomic computing, pose many threats and contradictions. For one, the notion that such predictive computing technologies are innately “objective” is dubious. To borrow an idea from the Mayer-Schonberger text, “our choices affect our results.” The way these technologies are designed and used necessarily involves human reasoning, and therefore human bias. Thus, autonomic computing’s “objective” detection of “objective” threats and danger seems inherently impossible. Rather, such detection simply reflects—and inevitably serves—pre-existing notions and human theories. In our predictive pursuit of “threats,” we are simply pursuing our choices.

Moreover, I am interested in how this convergence poses a threat to human subjectivity and, subsequently, “to the future,” or the progression of time. Although these technologies purport to operate in “real time,” doesn’t the fact that they are intended to predict and pre-empt place them outside so-called “real time” and in a space that has yet to unfold? More insidiously, the objective of these technologies of pre-emption is often to prevent or destroy this space. In this way, a system that constantly pre-empts does not move forward, because it eliminates the forward. Simultaneously, this elimination of the “forward,” of possibility, also seems detrimental to humans’ ability to think forward, “of themselves beyond themselves.” The result is a crystallization, a freezing or stalling-out of time and thought.