Anthropomorphization in Natural Language Processing

Level: Fourth-Year Natural Language Processing

Class Time: Two 1-hour classes

Last Modified: Tue 19 March 2024

Topics: Anthropomorphization, Deception

The first class begins with a challenge to students: can they distinguish an artificially generated voice clip from a real one? The computer science instructor then plays a clearly anthropomorphized voice clip and asks students to identify the features that make it seem human, leading into a presentation of the techniques used to make audio and written text appear more human-like.

The philosophy instructor then introduces two psychological factors that lead people to treat systems as human: "effectance" and "sociality". Effectance is our tendency to model systems as human in order to reduce our uncertainty about them; the philosophy instructor leads a discussion of how this tendency might produce benefits and harms in the context of therapy bots. Sociality is our tendency to model systems as human in order to fulfill our social needs; the philosophy instructor discusses how this tendency might be used to help or to exploit users.

The second class centers on two case studies. The first, about the companion chatbot Replika, asks which anthropomorphization techniques would be appropriate for Replika to use, and to what degree. The second, about Google Duplex-powered customer service chatbots, prompts students to consider the dangers of intentional and unintentional deception in commercial chatbots. The module concludes by asking students to propose moral and legal rules to govern the use of anthropomorphization, taking existing laws as templates.

This module was developed by Steven Coyne, Gerald Penn, and Graeme Hirst. Diane Horton and Sheila McIlraith provided feedback on this module.

Materials

Module materials coming soon.