The texts by Rouvroy and by Mayer-Schönberger and Cukier bring up one of the most interesting and thoroughly debated issues in the broad realm of human knowledge: artificial intelligence (along with Big Data, its little brother, so to speak). The term itself betrays the standpoint that most people (or should I say, “natural humans”) share on the topic: computers can never match up to humans. Even when they outmatch natural humans in every calculable way, conventional wisdom says that AI will always lack the “je ne sais quoi” that makes natural humans human. Indeed, the use of the word “artificial” reveals the fear that a creation of ours could surpass us in everything we hold as our defining qualities.

The fear is that AI, as well as Big Data (BD), will end up dehumanizing human beings. Instead of letting us remain unique and special, as any kindergarten teacher would remind us we are, BD and AI reduce us all to a pile of numbers and variables. The transition from individual to (the dreaded) dividual will be complete. As Mayer-Schönberger and Cukier argue, this future is not to be feared. We should instead consider the myriad benefits of BD: lower costs, higher efficiency… more profits for corporations. Splendid! In this future, we are no longer consumers to be wooed by advertising and better offers (a more primitive form of the manipulation enabled by BD) but datasets to be relentlessly plied by algorithms, not to convince us that we want something, but to make it easier to acquire what we already want. It is this distinction that is so disturbing. Mayer-Schönberger and Cukier allude to Chapter Eight, in which they address the dark side of BD, but in Chapters One and Four they seem smitten with the golden horizon it enables. They assume that the correct lens through which to view the world is the capitalist lens (capital-normativity?), but viewed from an intellectual perspective, the displacement of the “why” in favor of all those other interrogatives is deeply unsettling.

Time to get personal: in a future where BD predicts exactly what films people want to see, and AI can churn out those films for almost nothing, what place in the world will aspiring filmmakers such as myself hold? In a world where BD and AI render the “why” obsolete, what place will academics hold? What good will philosophers be if their purpose in life is made irrelevant? Even though Mayer-Schönberger and Cukier acknowledge that the “why” and the scientific method will never completely die, it is hard to imagine that the arts and humanities will not become even less valued than they already are.

Nevertheless, I see a silver lining to this cloud (pun intended). Perhaps instead of a process of, or a brief period of, dehumanization, we will begin a process of “re-humanization”. With our future society awash in man-made creations acting just like us, we will hopefully come to rethink what “human” means. I realize that this sounds just like the ending of the most utopian of science-fiction stories, but it is a likely outcome. In fact, the ironically academic notion of re-humanization has precedent in the contemporary socio-political moment. When it comes to queer people, people of color, and other marginalized groups, we as a species struggle, theorize, argue, and physically fight, but ultimately, we embrace. We will do the same with Big Data and AI. Why? It’s only human.