Invisible Interfaces
I am sound. I take place in time, but I have a special relationship to time unlike the other fields that register in human sensation: I exist only when I am going out of existence. I am not simply perishable but essentially evanescent, and I am sensed as evanescent.

All sensation takes place in time, but no other sensory field resists a holding action in quite the same way I do. Vision can register immobility; indeed, it favours immobility. To examine something closely by vision, we often reduce it to a series of still shots. There is no equivalent of a still shot for sound.

Writing and speech, as modes of language, have been practised for thousands of years. Of some 3,000 languages spoken today, only around 78 have a literature; 'the basic orality of language is permanent' (Ong, 1982). Disembodied voices, however, are a relatively recent phenomenon, present only since the invention of the telephone and radio around the turn of the twentieth century.

A large component of the meaning communicated when speaking to someone is subtly transmitted via micro-gestures and facial expressions that add intent and emphasis. What happens to our perception of meaning when these physical aspects of communication are removed, when the voice is disembodied? And what cues remain to determine whether or not to trust the source of the spoken word, to know whether that source is human or machine?

The attribution of a voice to a digital assistant is one of the few elements of interface design available to garner our trust, and it is a powerful tool for allaying our distrust of these digital interlocutors, since voice is one of humankind's most primal forms of communication.

The shift from screen-based interaction with computers (characterised by the use of keyboards and screens) to speech-based interaction with voice-enabled digital assistants underpinned by Artificial Intelligence (AI) marks a move from a visual-haptic mode of receiving information to a sensory-haptic one: communicating with AI technologies via a voice-enabled digital assistant is a sensory-haptic experience.

The sophistication of voice-enabled digital assistants is due largely to the increased capabilities of Natural Language Processing (NLP) and Natural Language Generation (NLG), two subfields of AI that can now process human language well enough to communicate in it: they can understand messages in human language and respond in human language.

Human-Machine Communication (HMC) is an emerging area of Communication studies focused on the 'meaning that is created in interactions between people and technology and the implications for individuals and society' (Guzman, 2018). What distinguishes HMC from other forms of Communication studies is its focus on human interaction with technologies designed specifically as communicators, as opposed to technologies that serve merely as message channels, such as television and radio.

From an HMC perspective, the study of the social positioning of technology includes how a person interprets what a particular technology is in relation to themselves, the factors contributing to such interpretations and, in turn, how such conceptualisations inform their interactions (Guzman & Lewis, 2020). As the social roles and relationships in human-AI interactions are also sites for the investigation of power dynamics between people and technology (Guzman, 2017), conversations with social AI can provide invaluable information about the potential of human-AI relationships.

In The Blurring Test, Peggy Weil reflects on the conversations people engaged in with MrMind, a chatbot she created and monitored over a sixteen-year period. She argues that chatbots and related forms of conversational AI designed to function in a human-like role have the potential to blur the line between human and machine and, as such, challenge the ontological boundary between people and technology unfolding in and through communication (Guzman & Lewis, 2020).


References:

Ong, W.J. (1982) Orality and Literacy: The Technologizing of the Word. Florence: Taylor & Francis Group.

Guzman, A.L. (2018) 'What is human-machine communication, anyway?', in Guzman, A.L. (ed.) Human-Machine Communication: Rethinking Communication, Technology, and Ourselves. New York: Peter Lang, pp. 1–28.

Guzman, A.L. and Lewis, S.C. (2020) 'Artificial intelligence and communication: a human–machine communication research agenda', New Media & Society, 22(1), pp. 70–86.

Guzman, A.L. (2017) 'Making AI safe for humans: a conversation with Siri', in Gehl, R.W. and Bakardjieva, M. (eds) Socialbots and Their Friends: Digital Media and the Automation of Sociality. New York: Routledge, pp. 69–82.


Adapted from the writing of Walter J. Ong, Orality and Literacy: The Technologizing of the Word (1982).