Comment by Olga Galanova
On June 22, Andrei Korbut presented a lecture on “Robotic Technoperformances: Interactional Indeterminacies in Public Appearances of Anthropomorphic Robots.” The title places the topic at the intersection of several STS-relevant research fields: AI, gender, the body, homocentrism, marketing, televisual communication, performance, audience engagement, and techno-interactional malfunctions and their repair. The lecture weaves these strands together in a way that makes Andrei’s contribution exciting, thoughtful, and discussion-provoking.
The presentation centers on anthropomorphic robots as the primary subjects of public “technoperformances” (a term coined by Mark Coeckelbergh) at electronics shows, specialized conferences, on TV programs, and elsewhere. Traditionally, the analysis of such robots falls within computer science and engineering, where considerable effort goes into enhancing their capabilities and naturalness. Andrei’s research, by contrast, examines the mutual construction of complex social situations involving the interactional negotiation of failures and their subsequent repair. These situations are intriguing and challenging for several reasons: although televisual formats are typically well organized and staged, they often contain disruptions and uncertainties. Audiences are accustomed to seeing flawless communication, yet moderators must grapple with confusion in full view of the viewers. Andrei’s main questions are how such situations become possible, how they unfold, and how participants address and resolve them.
As an introductory example, Andrei presents an event in the UK Parliament, specifically before the Communications and Digital Committee of the House of Lords, on October 11, 2022. The Committee had invited an art dealer who brought along Ai-Da (named after Ada Lovelace), a robot developed by his team. Ai-Da was introduced as a robot artist and used to illustrate the challenges contemporary technology poses to the art world. The audience was allowed to engage with Ai-Da in natural language. However, while a Committee member was putting a question to Ai-Da, the robot suffered a technical malfunction that caused its eye cameras to move erratically. To contain the resulting confusion, the director placed sunglasses on the robot’s head. The irony is that, despite the complex technical preparations and arrangements such events require, the failure disrupted the human interaction and had to be managed spontaneously, and elegantly, on the spot.
To analyze his data, Andrei employs the ethnomethodological approach of multimodal conversation analysis. This approach allows him to describe not only what is said but also what is conveyed through action during the performance. The analysis focuses on three categories of discontinuities: speech-exchange discontinuities, topic discontinuities, and format discontinuities.
The first category, speech-exchange discontinuities, involves disruptions of progressivity and of the flow of interaction, as in delayed responses and uncontrolled overlaps. While the robot struggled, the moderator tried various methods to make his speech more intelligible to the machine, including changes in prosody, the removal of hesitation markers like “uh,” and adjustments in speech volume. These actions helped overcome the emerging indeterminacy about how the interaction, meant to develop turn by turn, would progress. Topic discontinuities, the second category, played a crucial role in the analysis; they involve misalignments between adjacent utterances, topical jumps, and unexpected facial expressions that shift the conversation to new topics. Format discontinuities, finally, comprise uncontrolled changes in the genre of communication, in both speech and facial expression. These examples underscore the fragility of technical and media infrastructures. Importantly, such infrastructures are produced, adapted, corrected, and integrated into the meaning structures of the situation through ongoing interaction.
The interaction itself serves as a testing ground for the machine’s integration into the human world. Paradoxically, human likeness can be counterproductive: rather than facilitating mutual understanding, it generates false expectations and confronts participants with inappropriate physical signals, producing irritation. What unfolds is consequently not a conversation between two humans but a situation in which a human interacts with a female doll. In this data, human likeness becomes a problem and a challenge, despite the significant resources robot producers invest in it. This is troubling, since producers appear to prioritize their visions and dreams of the future over the natural interaction they have yet to achieve.