Operational Listening and Computational Eugenics
185 Pelham Street
Operational Listening: Mark Andrejevic
This presentation takes as its starting point the work of Harun Farocki and Trevor Paglen on the rise of the 'operational image' to consider the related rise of what might be described as automatic or 'operationalised' listening. This type of listening is becoming increasingly familiar thanks to the deployment of smart speakers and a growing array of networked audio sensors (from gunshot sensors to workplace monitors, smartphones, and audio surveillance on public transport).
The talk describes the stakes of operationalism as the displacement of symbolic interpretation by action, and draws on psychoanalytic theory to consider the implications for subjectivity. Its goal is to ask what is lost in the shift from comprehension and interpretation to operation. What does it mean to say that Alexa gathers information about your words but doesn't know what you mean (beyond purely operational commands)? The talk makes some speculative claims about the emergence of a world in which symbolic efficiency is replaced by operational efficiency. This is, perhaps needless to say, a fundamentally undemocratic process that is already becoming all-too-familiar in these post-truth, post-deliberative, post-political times.
Computational Eugenics: Jake Goldenfein
Over the past decade, researchers have been investigating new technologies for categorising people based on physical attributes alone. Unlike profiling with behavioural data created by interacting with informational environments, these technologies record and measure data from the physical world (i.e. signal) and use it to make a decision about the ‘world state’ – in this case a judgement about a person.
Automated personality analysis and automated personality recognition, for instance, are growing sub-disciplines of computer vision, computer listening, and machine learning. This family of techniques has been used to generate personality profiles and assessments of sexuality, political position, and even criminality from facial morphologies and speech expressions. These profiling systems do not attempt to comprehend the content of speech or to understand actions or sentiments, but rather to read personal typologies and build classifiers that can determine personal characteristics.
While the knowledge claims of these profiling techniques are often tentative, they increasingly deploy a variant of ‘big data epistemology’ that suggests there is more information in a human face or in spoken sound than is accessible or comprehensible to humans. This paper explores the bases of those claims and the systems of measurement that are deployed in computer vision and listening. It asks if there is something new in these claims beyond ‘big data epistemology’, and attempts to understand what it means to combine computational empiricism, statistical analyses, and probabilistic representations to produce knowledge about people.
Dr Jake Goldenfein, Swinburne University of Technology
Jake Goldenfein completed his PhD at the University of Melbourne and joined Swinburne Law School as a lecturer in 2016. His research addresses the intersection of law and technology, focusing on: surveillance, privacy and identity; distributed ledgers and blockchain platforms; intellectual property; automation and legal theory; and law in augmented reality and other cyberphysical systems. He is an admitted lawyer in Australia, and previously practised as a solicitor in the areas of privacy and administrative law. Jake is a board member of the Australian Privacy Foundation and the experimental arts organisation Liquid Architecture. Dr Goldenfein's recent publications have explored blockchain applications in smart cities and intellectual property, the potential for automation of privacy law, the history of law enforcement intelligence databases, the relationship of privacy to police photography, and computer surveillance in remote Indigenous communities.
Professor Mark Andrejevic, Monash University
Mark Andrejevic (Professor, School of Media, Film, and Journalism, Monash University) contributes expertise in the social and cultural implications of data mining and online monitoring. He writes about monitoring and data mining from a sociocultural perspective, and is the author of three monographs and more than 60 academic articles and book chapters. He is the author of *Reality TV: The Work of Being Watched* (2004), which applies critical theory to the example of reality TV to explore the changing character and portrayal of surveillance in the digital era; *iSpy: Surveillance and Power in the Interactive Era* (2007), which examines the deployment of interactive media for monitoring and surveillance in the realms of popular culture, marketing, politics, and war; and *Infoglut: How Too Much Information Is Changing the Way We Think and Know* (2013), which explores the social, cultural, and theoretical implications of data mining and predictive analytics. Andrejevic has conducted both quantitative and qualitative research and is experienced in focus group and interview methodologies. His work on the personal information project, for example, generated a book, 11 articles and book chapters, and a report on Australian attitudes toward online privacy that was launched by the Federal Privacy Commissioner.