Making Sense of Autonomous Technologies, 40 Years Later

— Colin Garvey —

With the help of many others, I organized a panel at 4S 2017 entitled ‘Making Sense of Autonomous Technologies, 40 Years Later’, intended to mark the 40th anniversary of the publication of Langdon Winner’s seminal work, Autonomous Technology: Technics-out-of-Control as a Theme in Political Thought (1977). As I proposed in the call, the 40-year mark provided an ‘opportunity to reflect both on an increasingly automated Anthropocene as well as the field of STS itself at the opening of 21C.’ Unfortunately (or perhaps appropriately), the ‘Great Man’ at the center of our panel was unable to attend, and sent his regards instead.

The first session explored the ‘politics of artifacts’. Madeleine Elish opened by reminding us of the ‘epistemological duct tape’ necessary to package autonomous technologies like chatbots and digital assistants into the seemingly seamless (or, as she put it, ‘seam-full’) magical products in which they are sold. Amanda Jurno’s talk, ‘Do Artifacts have Cosmopolitics?’, interrogated the assumptions underlying Facebook’s nudity policy to suggest that the tech behemoth’s indifference to criticisms from outside the US betrays its status as a de facto defender of Western colonial values. Tiago Soares excavated ‘technologies of dissent’ from Brazilian social movements of the 1960s and ’70s, such as the Concretists and Neo-Concretists, groups that promoted radically alternative sociotechnical imaginaries that merit further exploration today. Peter Asaro closed the session with an update on his ongoing work lobbying the UN for a ban on Autonomous Weapons Systems through the Campaign to Stop Killer Robots, pointing to the continued relevance of Winner’s work to discussions of ‘meaningful human control’ in decision making on the battlefield.

Whether by felicity or the influence of the current zeitgeist, I cannot say, but the second session focused primarily on autonomous vehicles. Keita Sugihara and William McMillan each brought a practitioner’s careful eye to oft-missed social considerations in the design of autonomous systems, while Erik Stayton’s precise analysis of the complex sociotechnical infrastructure undergirding the successful operation of driverless cars left little doubt that these machines are better described as ‘networked’ than ‘autonomous’. My impression, however, was that undergraduates Charles Boyd and Chase Collins stole the show with their talk reflecting on their experiences in an integrated STEM program under the direction of Emily York. Their thoughtful account of the program’s innovative pedagogy, laden with auto-ethnographic insight, gave me hope for the future of engineering, and would have made past 4S President Gary Downey weep with joy.

The third session reacquainted us with the marvel that is human language. Raúl Tabarés probed the forms of co-production that take place as we rapidly integrate ‘conversational interfaces’ such as chatbots and virtual agents into our lives, emphasizing their capacity to reinforce the hegemony of global languages like English at the expense of the vastly greater number of tongues on the margins. Sara Bell’s account of technoscientists’ attempts, and failures, to capture human speech in its full emotional complexity illustrated that significant limits to the power of ratiocination remain in the digital age. As she wisely noted, machines not only fail to understand humor; they can’t even fake a laugh. I gave the final talk, ‘On the Democratization of Artificial Intelligence’, in which I pleaded for more deliberation among more people in the research and development of AI as a means to better govern emerging risks.

We closed the panel with excerpts from an interview I had conducted with Dr. Winner when it became clear he would be unable to attend. In it, he looks back on the origins of the book Autonomous Technology, as well as his own origins as a scholar, before looking forward to offer some advice for STSers of the future. I encourage you to check it out. < >