
Gitelman and Jackson’s pieces about the idea of “raw data” struck a significant chord with me this week. I have a variety of thoughts about the study of “big data”: it seems to reveal as much about our society’s priorities and beliefs as it does about the specific topics it pertains to.

On one level, data, especially en masse, can reveal much about specific topics. As Gitelman and Jackson specify, of course, this data is always “cooked.” Big data removes any possibility of drawing conclusions without data manipulation. Even small amounts of data must be manipulated and analyzed (“cooked”) in order to be of any use, but never is this more relevant than with massive amounts of data, when the data must be handled by experts, and when writing code and managing computing power to extract, store, and analyze it is often as herculean a task as the research process itself (identifying an issue, forming a hypothesis, drawing conclusions, etc.).

On a perhaps more interesting level, however, big data reveals much about our society and its processes and values. Many theorists over time (from Ian Hacking to Helen Longino) have explained the ways in which science, considered in much of Western society to be the epitome of objectivity, is closely tied to and affected by social circumstances. In relation to data, specifically, Gitelman and Jackson take this idea even further. They write, “Objectivity is situated and historically specific; it comes from somewhere and is the result of ongoing changes to the conditions of inquiry, conditions that are at once material, social, and ethical” (4). In this context, analyses of science as a socially-affected process are part of a larger pattern of socio-historical “objectivity.” Gitelman and Jackson reach this conclusion through their analysis of data and the ways in which data cannot be “raw” because it is always collected, handled, and reflected upon within a specific cultural and historical context.

In an analytically important move, Gitelman and Jackson take this conclusion even further. By asserting that all knowledge is culturally and historically situated, they are able to expand the very idea of data to include traditionally “subjective” stores of information – literature, art, and other work produced in the humanities. This may at first appear to be somewhat circular logic: by examining the ways in which data is worked on and utilized, the authors expand the very definition of data itself. This is, in my opinion, a very powerful move, because it brings within reach of social analysis ideas and practices considered objective – the sciences, for instance – as well as those in the middle of the subjective/objective continuum, such as photorealistic art and photography.

Other miscellaneous thoughts: The authors reference the idea that “numbers are objective” – this reminded me of Bertrand Russell’s work in the philosophy of mathematics in Principia Mathematica, and the way even the most basic principles of mathematics cannot be objectively confirmed. I loved the way these authors use big data, a “hot topic” these days, to tie into the theory of science and science and technology studies. It seems to me a strong theoretical move, one that analyzes and utilizes the hype around the topic rather than falling into it.