
Monthly Archives: September 2015

  1. I was also prompted, after reading Terranova, to think about how digital media (and other systems that channel information) restrict us, and how this relates to the issues of communication presented throughout the readings. Terranova is very aware of the platforms that constrict human interaction (“challenge of informational milieus and topologies,” page 37). A strange but exciting claim I found in Weaver’s essay was the notion that these constraints actually create a larger channel capacity. For example, the constraints of the English language force us to use certain words (such as “the,” “and,” “but”), with the result that roughly 50% of what we say is determined by the structure of the language rather than by our free choice in conveying information. This is frightening because it could add to the miscommunication of signals (packaged information) and restrain us to a platform of communication with built-in limitations. However, I think it’s important for people to question the platform they are using and think of alternatives.

  2. Another issue I was thinking about involved how many authors assume human consciousness is all the same. I think human consciousnesses are similar, but we all deal with different ideas, themes, thoughts, and symbols because we go through different experiences (someone might be having a bad day while someone else is having a good day; the half-full/half-empty debate). Weaver’s comparison of humans to transmitters and receivers, packaging and unpacking messages from signals, was very interesting, but I just couldn’t believe it. Terranova, on the other hand, takes various issues into account, but ends up giving us a definition of information that I don’t think works either. I feel that information functioning on the internet may mimic how information works in our minds but is vastly different from how it actually works, just as the Turing machine can mimic humans with binary answers (and elements of randomness).

In Weaver’s essay, the term ‘redundancy’ with regard to information is defined as “the fraction of the message that is unnecessary (and hence repetitive or redundant) in the sense that if it were missing the message would still be essentially complete, or at least could be completed” (p. 13). He further explains this term using the example of the English language, which has an extremely high redundancy: essentially, almost half of the letters or words we choose in writing and speaking are really controlled by the statistical structure of the language, even though we’re usually unaware of it.
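
To make that figure a little more concrete, here is a minimal sketch (my own illustration, not from the readings) that estimates the first-order, single-letter redundancy of a sample text. Weaver’s roughly 50% figure for English also reflects longer-range statistical structure (spelling and word-order constraints), so a letter-frequency estimate like this one will understate it.

```python
# Minimal illustration (not from Weaver): first-order redundancy of a text sample,
# computed as 1 minus the ratio of the observed letter entropy to the maximum
# entropy of a 26-letter alphabet. Longer-range structure is ignored here.
import math
from collections import Counter

def first_order_redundancy(text: str) -> float:
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    # Entropy of the observed letter distribution, in bits per letter.
    h = -sum((n / total) * math.log2(n / total) for n in counts.values())
    h_max = math.log2(26)  # every letter equally likely: maximal "freedom of choice"
    return 1 - h / h_max

sample = "the quick brown fox jumps over the lazy dog and then does it once more"
print(f"first-order redundancy ≈ {first_order_redundancy(sample):.2f}")
```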


Half a century, a changed world, and a number of linguistic theories after Weaver’s essay, Tiziana Terranova writes in Network Culture: “Whether it is about the Nike swoosh or war propaganda, what matters is the endurance of the information to be communicated, its power to survive as information all possible corruption by noise” (p. 16). Maybe because it had the words ‘information’ and ‘power’ in one sentence, but more so because it seemed to relate to the term ‘redundancy,’ this sentence struck a chord with me.

It seems that the accumulation of redundancy, absorbing, so to speak, the corruption by noise, increases the endurance of information. But at the same time, redundancy decreases the freedom of choice in constructing messages, and thus the entropy of the informational milieu. I wonder if today redundancy can be attributed not to a single source but rather to networks of sources, enabled by big data overlapping and projection; whether we can talk about the redundancy of networks, or even networked redundancy, as a part of the “networked culture” of today.
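
A side note on how Weaver himself ties these two terms together (this is a reconstruction from his definitions of relative entropy and redundancy, not Terranova’s formulation): redundancy is defined as one minus the relative entropy of the source,

R = 1 − H / H_max,

where H is the source’s actual entropy (its freedom of choice in selecting messages) and H_max is the entropy it would have if every choice were equally free. The trade-off above is thus built into the definition: whatever endurance an accumulation of redundancy buys against noise is paid for directly in entropy, whether the source is a single speaker or, as suggested here, an overlapping network of sources.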


This week I was drawn to the relationship between culture, power and communication within “informational space”.

To begin with, in the Turing text I was struck by the role of the interpreter in the imitation game and the importance of truth-value in communication networks. Turing’s question was: how do we know when we are in the presence of a machine thinking? Following on discussions from last week, this struck me as powerful in the sense that we are projecting human qualities onto what thought could be, limiting the potential of thought coming from a radical ‘outside.’ Not only did the machine have to pass, or tell a convincing lie that it was human, but it would also have to claim a gender. This anthropocentric view of intelligence seems very damaging, not only in its reductive ‘for-us’ projections into other realms of life, but also because it must play a huge role in segmenting what counts as human/inhuman/nonhuman.

Following from the Turing text into the Terranova, I find there to be many instances of informational dressage in the ways in which information shapes bodies, materials, and subjectivities. After mapping the processes of informational power, she calls for resistance by, on the one hand, resisting the limitation of the virtuality of the social and, on the other, engaging collectively with informational flows and their potential to reinvent life. She highlights vacuoles of non-communication (not passing), performances of improbable futures, and demonstrations of the virtuality of possible worlds as examples of resistance to informational control (25-28).

My question then refers to Terranova’s understanding of information as a milieu in which meaning is made possible and where production occurs. Here, classical formations of spatial scale such as macro/micro fall apart. “Scale becomes informational when it presents an excess of sensory data…a radical indeterminacy in our knowledge and a nonlinear temporality involving a multiplicity of mutating variables and different intersecting levels of observation and interaction. It is not so much a three-dimensional, perspectival space where subjects carry out actions and relate to each other, but a field of displacements, mutations and movements that do not support the actions of a subject, but decompose it, recompose it and carry it along” (37).

There seems to be a paradox between informational space and real social space. How do we reconcile the territory, demarcated by hard borders, clashes with the police over land rights, and the panoptic gaze, with the informational space Terranova describes, with its fragmented, oligoptic mode of control? Is there a positive feedback between state territory and informational space where the production of dividual subjectivities, already outside of classical state modes of control, will in turn trigger cascades and undermine dominant modes of land use?

Terranova’s intervention in the tension between a meaning-centric approach to communication (with which the liberal ethics of journalism is associated) and the operational emphasis of information theory (with which communications managers are more concerned) prompts me to consider how “contact and tactility” illuminate possible convergences of sign and signal.

Terranova invokes Gilbert Simondon’s work to demonstrate that information is not so much the content of communication as the indication of the “direction of a dynamic transformation” (19). In this sense, contact cannot be reduced to an informational command, as the informational dimension of communication demonstrates an “unfolding process of material constitution” (Terranova 20). Terranova notes that information involves a level of “distracted perception,” and therefore transforms bodily habits by “plugging” the body into a field of action. She writes that “the informational dimension of communication is not just about the successful delivery of a coded signal but also about contact and tactility, about architecture and design implying a dynamic modulation of material and social energies” (19).

This section of Terranova’s chapter helps me think about how meaning assessment and technical signal delivery converge in the emergence of social media. It seems that, on platforms like Twitter, the journalistic ethics of “truth” evaluation and the public relations goal of channel clarity (eliminating noise) blur into one another. Hashtagging, social media curation, and storified formats are at once acts of efficient signal transmission and perceived acts of truth-finding. (Journalists use Twitter both as the process of reporting and as the delivery of news.) Moreover, transmission is not between a single sender and receiver, but a multi-directional flow of searches, exchanges, and changing or developing “truths” about an event.

In this sense, information is less transmitted content than it is a kind of process. What is particularly interesting to me is how “contact” is literalized (materialized) in these processes. These forms of news media transform bodily habits in their processes by “plugging” bodies into fields of action—mobile technologies kept near the body or attached to the hand, quick tactile contact with screens, physical gathering of bodies at sites based on tweets, etc.

Thus, I wonder, perhaps in connection to Myles’s response, whether the notion of contact and tactility can bring us back to last week’s conversation about language and the body. How/does language emerge or materialize out of information? How can architecture and design help us theorize language within/against information theory?

The first two readings of the week are both instances of a new technology encouraging the redefinition of familiar concepts — information, communication, and redundancy in Weaver’s paper, and humans, machines, and thinking in Turing’s. In much the same way that we have understood Big Data to problematize our accepted definitions of anonymity and consent, the digital computer puts pressure on the categories of “communication” and “thinking,” and Weaver and Turing seem deeply concerned with redefining these terms in a way that easily accommodates computers. Communication becomes an act that any agent capable of affecting another agent can perform, and thinking becomes an act a universal machine can do — humans and their behaviors become a special case.

I’m fascinated by the way that Weaver and Turing seem eager to generalize their new (or modified) concepts, and also by their motivations and attitudes toward others who might disagree — Turing’s tone seems more than a little mocking toward those Luddites who would dismiss the thought of a thinking machine entirely because it couldn’t, say, enjoy strawberries.

I’m also left wondering what we might lose when redefining these terms (here, I mean communication specifically): it seems consistent that one who sees the “fundamental problem of communication” as “that of reproducing at one point either exactly or approximately a message selected at another point” might not be too concerned with, say, what the receiver of such a signal does in response (Shannon 32).

To bring this back to what we’ve been discussing in class already, I think the Enlightenment ideal of rationality slips easily into Shannon & Weaver’s formulation of communication: a coherent message, sent in such a way as to counteract noise, etc., that reaches another actor ought to have a certain (obvious, or at least beyond our area of interest) effect.
Weaver’s overview of information theory divides communication into three tiers (technical, semantic, and effectiveness) that separate these concerns, and although he suggests that they aren’t as simply divisible as one might imagine, he still seems convinced that one can work out the theory of “level A” to design communication systems that are indifferent to the content of messages. Is there a danger in imposing such a division and assuming that one can engage and work on one “level” of the problem of communication without touching the others? (Perhaps similar to the danger of an engineer assuming their craft or invention is purely neutral?)

While defending his theory from another proposed criticism, Turing clarifies that programmed machines are prone to ‘errors of functioning’ and ‘errors of conclusion.’ Of the former, he writes:

‘Errors of functioning are due to some mechanical or electrical fault which causes the machine to behave otherwise than it was designed to do. In philosophical discussions one likes to ignore the possibility of such errors; one is therefore discussing ‘abstract machines’. These abstract machines are mathematical fictions rather than physical objects. By definition they are incapable of errors of functioning’ (449).

Of the errors of conclusion, he describes how there is no way for an observer to describe an error of conclusion as a mistake, as the qualification of the ‘mistake’ depends on who or whatever is placing value on the meaning of the machine’s output. This is, essentially, arbitrary.

Turing then uses this distinction between philosophical abstract machines and the errors produced by physical ones to make an interesting claim: ‘in this sort of sense a machine undoubtedly can be its own subject matter. It may be used to help in making up its own programmes, or to predict the effect of alterations in its own structure’ (449). In the case of the fictive, speculative temporal mode of the abstract machine, the engineer’s concept of the machine does work on itself before being committed to a physical form. When those machines that take a physical form commit ‘errors,’ they, in a sense, work on their own future iterations by communicating subject matter and output to their human counterparts, who in turn interpret and place value on the error before altering the future iteration of the machine. I think it’s important to point out that Turing seems to be suggesting, in both of these cases, that machines exist temporally in a way where a machine’s own matters of thought and subject matter do work on the future designs and prospective construction of its form. In this sense, the operator doesn’t so much work on a malfunctioning machine (as a machine cannot make mistakes); instead, the machine communicates the blueprints for a future rendition of itself. Conceived as an abstract dream of an engineer, then taking a physical form, it is the machine’s ‘thought’ that develops the rapport with human use and its own value of efficiency.

It is interesting to think the fictive and speculative nature of the machine alongside Shannon/Weaver, who note that the principles of noise and uncertainty in communication are also a formative part of the system’s structure. How do machines exist in time and how is a machine’s temporality different from a human’s? If we look at it a certain way, is it possible that machines conceive and recompose their own form in a temporality simultaneous to the human present? Are the speculative dreams of philosophers, science fiction authors, and engineers like Turing the origin of a given machine’s thought, and how are each of the authors we looked at this week both imagining and mapping out a future we are familiar with today?

Just as a final note: Claude Shannon famously constructed one of these useless machines, pictured below, and kept it on his desk while he was a professor at MIT. Reportedly, on a visit to Shannon, Arthur C. Clarke saw the device and became fascinated with it. Another fictive/speculative encounter in the history of thinking machines?

[Animated gif: the useless machine]

Roughly halfway through his paper, Alan Turing describes a type of question that “the machines must fail on.” Those questions, questions about identity, about what kind of machine the machine really is, seem to be an impossible arena. If the machine is asked to predict the answer of another machine similar to itself, it’s at a loss:

When the machine described bears a certain comparatively simple relation to the machine which is under interrogation, it can be shown that the answer is either wrong or not forthcoming. This is the mathematical result: it is argued that it proves a disability of machines to which the human intellect is not subject. (445)

Is Turing saying here that machines – in the form that he envisions – are incapable of self-reflection? That machines are inherently solipsistic, and incapable of empathizing with another machine if the other is too similar to itself? This, to me, is a super interesting stage for a discussion of “big data” ethics, which Turing himself presciently (or clairvoyantly) describes, namely a “method for drawing conclusions by scientific induction… by the time the experiment is done these assumptions [made by the experimenter / programmer] have been forgotten” (451). That description of a “surprising” result echoes or foreshadows the discussion of informed consent in the Barocas and Nissenbaum essay of last week. If a machine is incapable of self-contemplation, of understanding others as they are rather than as they serve its own inputs, is there a way to think about machine learning to accommodate or “correct for” that inherent lack of social awareness?
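
As a side note, the ‘mathematical result’ Turing gestures at here can be sketched very informally in code. The toy below is my own illustration of the diagonal-style move, not Turing’s construction, and all of the names are hypothetical: a ‘predictor’ machine is asked about a machine that is defined, quite simply, in terms of the predictor itself.

```python
# Toy illustration (hypothetical names, not Turing's construction) of why an answer
# about a machine "comparatively simply related" to the interrogated machine is
# "either wrong or not forthcoming."

def predictor(machine) -> bool:
    """Claims to predict whether `machine` will return True."""
    return True  # any fixed guessing strategy will do for the demonstration

def contrarian() -> bool:
    """A machine defined to do the opposite of whatever the predictor says about it."""
    return not predictor(contrarian)

# The prediction is necessarily wrong, since contrarian() inverts it by construction.
print(predictor(contrarian), contrarian())  # -> True False
```

If predictor() instead tried to answer by faithfully simulating contrarian(), it would never return at all, which is the ‘not forthcoming’ half of Turing’s phrasing.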

Building off of Sophie’s post, I found Turing’s question regarding the thinking and learning ability of machines worth pursuing. As Turing articulated, in order even to attempt to program a computer to think like a human, “the best strategy for the machine may possibly be something other than imitation of the behaviour of a man” (Turing 435). With humans, Turing continues, enough unpredictable events, contexts and experiences occur throughout a lifetime to prevent entirely predictable (rational?) decision-making processes from occurring. Thus, “it is not possible to produce a set of rules purporting to describe what a man should do in every conceivable set of circumstances” (Turing 452), making it difficult (impossible?) to program a machine to adjust to any and all extenuating circumstances. Even with “more information” in computers, does that increased knowledge lead to “better decision-making,” or even decision-making similar to humans’? I would argue that increased information does not inevitably lead to an ability to think, considering humans’ ability to apply information to various contexts. Is that application skill possible through programming additional information (i.e., does the presence of information lead to critical thinking?), or is it only possible through the experiences gained as a person?

Perhaps Weaver’s findings on the communication system could offer insight as to why this subjective experience as a person separates “human thinking” from “machine thinking.” Consider, in particular, Weaver’s assertion that “information is a measure of one’s freedom of choice when one selects a message… The concept of information applies not to the individual messages (as the concept of meaning would), but rather to the situation as a whole, the unit of information indicating that in this situation one has an amount of freedom of choice, in selecting a message, which it is convenient to regard as a standard or unit amount” (Weaver 9). The “freedom of choice… in constructing messages” (Weaver 13) is critical — and that freedom seems to come not only from a range of information, but from the various contexts in which that information can be applied (thereby producing further information — does that process make sense?). In Weaver’s terms, the noise that occurs between the information source and destination is so unpredictable for humans that the message can be altered and processed in a way that may not be true for machines.
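
To anchor that “freedom of choice” in Weaver’s actual measure, here is a small illustration of my own (the distributions are invented for the example, not taken from the text): information is quantified as the entropy H = −Σ pᵢ log₂ pᵢ of the distribution of possible messages, which is largest when every message is equally likely, i.e., when choice is most free, and shrinks as the situation forces some messages to be far more probable than others.

```python
# Small illustration (invented distributions): Weaver's "freedom of choice" as entropy.
import math

def entropy(probs) -> float:
    """Shannon entropy, in bits, of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

free   = [0.25, 0.25, 0.25, 0.25]   # four equally likely messages: maximal freedom of choice
forced = [0.97, 0.01, 0.01, 0.01]   # one message all but dictated: little freedom left

print(entropy(free))    # 2.0 bits
print(entropy(forced))  # ≈ 0.24 bits
```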

In terms of takeaways from these two pieces, Weaver and Turing both problematize the notion that knowledge is power. Simply knowing a piece of information does not inevitably lead to critical thinking and “powerful” thought processes that were previously associated with knowledge. Perhaps context, experience, and/or something else entirely causes complex thought processes — and how can those notions be applied to machines, if at all?

I was particularly interested in the 1950 article “Computing Machinery and Intelligence” by A.M. Turing, and especially intrigued by Turing’s conception of thought, machine learning, and man’s relationship with the machine. He poses the question: “May not machines carry out something which ought to be described as thinking but which is very different from what a man does?” (435). However, as he unpacks his experiment, his question transforms from “Can machines think?” to “Are there imaginable digital computers which would do well in the imitation game?”

The way in which this question shifts is quite curious, as he slowly makes the delineation between human and machine through a set of arguments. In particular, I thought Turing’s eighth argument, the Argument from Informality of Behaviour, was quite fascinating. He argues, “If each man had a definite set of rules of conduct by which he regulated his life he would be no better than a machine. But there are no such rules, so men cannot be machines” (452).

Turning to the article “Some Recent Contributions to the Mathematical Theory of Communication”: Warren Weaver defines communication as “the procedures by means of which one mechanism (say automatic equipment to track an airplane and to compute its probable future positions) affects another mechanism (say a guided missile chasing this airplane)” (3). It is striking how this definition applies to both humans and machines.

I would enjoy further discussion of thought and communication and how they differ in humans and machines. What about these differences is important, and how do they influence our discussion and understanding of information?

(In response to Victor’s point that “there seems to be some ambiguity as to how ‘information’ as a term is being used”): I read Terranova’s discussion of her first proposition as a criticism of the isomorphism that has arisen between the two senses of “information.” Terranova uses “information” interchangeably, but I think a distinction is clear in corollary 1b between information in the ‘micro,’ information-theoretic channel-and-entropy sense, and information in the ‘macro,’ “cultural politics of information,” communication-managerial branding sense. Corollary 1b makes the claim that “the cultural politics of information” (communication management) has experienced “a return to the minimum conditions of communication” (information theory). The critique arises from the application of the information-theoretic form to communication management and the larger cultural understanding of what communication is about: “the unsatisfactory assumption that communication is simply about two people now knowing what only one knew before” (18). The criticism is that the isomorphism between macro and micro information does not hold.

A salient example of the presumption of such an isomorphism is visible in the current controversy surrounding safe spaces. Opponents argue that these practices are only about blocking out unwanted information, thus limiting participants’ knowledge; the controversy is thus cast in terms of individuals choosing to block (deem ‘noise’) information/signals. For instance, in an NYT op-ed, Judith Shulevitz describes safe spaces as intended to “Shield [students] from unfamiliar ideas.” Terranova describes on page 14 how such a practice (or, broadly, any form of communication) could be characterized as brainwashing, if one operates within the signal/noise framework.

However, Terranova concludes that macro information doesn’t work like micro information, and rather, that macro communication is much more complicated, involving “an unfolding process of material constitution” (19). Thus, a proponent of safe spaces might argue that macro information is responsible for “material organization … that moulds and remoulds the social field” as a result of “distracted perception”. The categories that are transmitted through language are responsible for the unconscious use of certain understandings and concepts that practically affect how a group is capable of discussing and relating to the topic at hand.

Terranova further discusses this effect after the second proposition, where she argues that the codes we use foreclose a certain ground of possibilities, fundamentally limiting the range of options that are conceivable. Terranova writes “the probability of a system’s being in a certain state is not a property of its being” (26). I understand this as making the claim that there is a fundamental misunderstanding involved in the use of informational codes wherein categories (which are deployed in response to molecular/micro phenomena in statistical distribution) are thought to be ontological properties of these distributions.
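
A toy gloss on the macrostate/microstate language (the analogy is mine, borrowed from statistical mechanics, not Terranova’s example): a macrostate such as “two heads in four coin flips” is a category laid over many exact configurations, and its probability is a fact about how those configurations are counted, not a property of any single underlying state.

```python
# Toy gloss (mine, not Terranova's): a macrostate is a category over many microstates,
# and its probability belongs to the counting, not to any one configuration.
from itertools import product

microstates = list(product("HT", repeat=4))                    # every exact flip sequence
two_heads   = [m for m in microstates if m.count("H") == 2]    # the macrostate "two heads"

print(len(two_heads), "of", len(microstates))  # 6 of 16 microstates fall under this category
```

In those terms, the claim reads as a warning against treating the category as if it named something in the configurations themselves.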

I don’t think I understand why this is a property of information cultures, rather than a critique of language. Is it that this is a process that is vastly accelerated in information cultures? Perhaps a hint arises with the claim that “Space becomes informational […] when it presents an excess of sensory data”. Although this is specifically referring to how we interact with spaces, perhaps it is the case that a similar shift occurs socially when the quantity of information generated, available, and absorbed surpasses some excess point?

Another question: Is ‘dividuation’ then an attempt to surpass the “identity” macrostate, which can never access the microstate, instead producing another larger, more complex set of possible macrostates? How does this relate to the concept of network identity? How does language fit in, and how does language presuppose/constitute the macro differently than information does, differently than (big) data does?