Skip navigation

Google is, increasingly, a navigation company as much as a search company. Android phones are known for their tight integration with Google Maps data, allowing users to mumble “navigate airport” and be presented with an immersive, pseudo-3D map with turn-by-turn guidance.

This shift in their focus is hardly a surprise, however; as Manovich argues, technological spaces are becoming more and more navigable. Google’s navigation app is of course haptic rather than optic: all extraneous objects are rendered as a flat, solid field on top of which roads and navigational data are overlaid in the simplest possible form. The route to be taken becomes the only interesting object in this navigational (and navigable) space, and the identification between that space and reality is tenuous at best: they have the same street layout, but where are the buildings, people, and plants?


More recently, Google has revealed their new augmented-reality game Ingress. The interface is remarkably similar to the navigation app, because it is derived from it; but this time, there is even less optical context. In navigation, there is at least a distinction of color between major and minor roads, between green park polygons and white residential polygons, and so on. Ingress removes all such color cues, leaving only a bright grey-blue street network on a night-black void. The goal of the game is to interact with and control “portals,” nexuses of “exotic matter” that exist near sculptures, museums, and other major public locations*.

The focus on portals increases the hapticity of the experience in a number of ways. First, the portals are not marked in reality: they can only be seen (as animated 3D fountains of energy) from the phone app. Additionally, the extensive spaces (such as University Hall) that constitute some portals are reduced to single points, determined by latitude and longitude down to the limits of the floating-point precision of the game. Finally, and crucially, the portals make the underlying reality subservient, turning it into a transport layer for players. Several times I have walked past the SciLi portal and “hacked” it from my phone without so much as glancing at the physical object that the portal allegedly corresponds to. In this way, reality itself becomes a haptic space through which users travel only in order to affect objects occupying the virtual space in their phones. This can have strange effects on one’s mindset — such as the time I nearly walked to Kennedy Plaza at 2 in the morning, desperate to recapture the Soldiers and Sailors monument before the enemy could reinforce it.
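As a purely illustrative aside: the reduction described above, a whole building collapsed into one coordinate pair that is interacted with only by proximity, can be sketched in a few lines of Python. Everything here (the Portal class, the coordinates, the 40-metre “hack” radius) is a hypothetical reconstruction for clarity, not Ingress’s actual data model or code.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Portal:
    # Hypothetical sketch: an entire building is reduced to a name
    # and a single latitude/longitude pair.
    name: str
    lat: float
    lng: float

def distance_m(lat1, lng1, lat2, lng2):
    """Great-circle (haversine) distance in metres between two points."""
    lat1, lng1, lat2, lng2 = map(radians, (lat1, lng1, lat2, lng2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lng2 - lng1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

# Hypothetical coordinates for the University Hall portal.
university_hall = Portal("University Hall", 41.8262, -71.4032)

# A player "in range" is simply a player whose phone reports a position
# within some radius of that point; the building itself never enters into it.
player_lat, player_lng = 41.8264, -71.4030
in_range = distance_m(player_lat, player_lng, university_hall.lat, university_hall.lng) < 40
print(in_range)
```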


*On a Baudrillardian note, the backstory of Ingress claims that these portals have always existed; the exotic matter influences people in strange ways, drawing them to the portals and inspiring them to erect monuments in the form of, say, the new Circle Dance sculpture on campus. (The Brown Bear statue, University Hall, and the SciLi are also all portals). This inverts the reality of the app, which is that Google has gathered significant locations from various sources, such as the National Historic Landmarks database, and inserted virtual “portals” at the appropriate GPS coordinates. It is an extremely literal rendition of Baudrillard’s modification of Borges’ story of the country-scale map; Google is, if in fictional form, attempting to supplant the haptic universe of “interesting locations” with a digital universe of identical portal-locations under their control. In the process, they have appointed themselves the final arbiters of interestingness: users can submit proposed portals to Ingress, together with a photograph and description, but there is no guarantee that a new portal will be “discovered” there.