Justine Cassell holds doctorates in linguistics and psychology, is a member of the National Council for Digital Technologies, is Research Professor at Inria (the French National Institute for Research in Digital Science and Technology) in Paris, and holds a chair at the PRAIRIE Institute of Artificial Intelligence in Paris.
What do we mean by “artificial intelligence”?
First of all, you should know that artificial intelligence is an approach, a paradigm; it is not a technology: technologies are developed in the service of artificial intelligence. The term “artificial intelligence” was introduced in 1956 by the American mathematician John McCarthy at a conference at Dartmouth College. He and his colleagues believed that every aspect of human intelligence could be modeled by machines, and that in this way machines would learn to think autonomously. In my view, however, this emphasis on autonomy led the discipline into a dead end. Meanwhile, another group in the United States, in which the anthropologist Margaret Mead played a prominent role, was working in parallel to create machines that could work with humans. That approach, called cybernetics, kept people at the center of its research.
For you, should the machine remain in the service of people?
I decided early on to create a “social artificial intelligence,” one that makes it possible to understand how rapport is built, how empathy for another person is born… This approach no doubt comes from my background in the social sciences: I wrote a dissertation on the role of gesture in children’s language learning. I then looked at hand gestures in adults, as well as facial expressions, head movements, and prosody (the tempo and tone of speech).
All of this plays a largely unconscious role in how we construct our sentences and in how they are perceived. I was looking for ways to better understand the relationship between the verbal and the non-verbal, and I had the idea of creating a program that would resemble a person and could recognize the different kinds of relationships between interlocutors. Thus was born the conversational agent, sometimes called an “avatar” or “chatbot,” with the participation of an incredible team. That was in 1993, but I continue the work today, studying the role of the body in social interaction.
How can you study the body while working with machines that, by definition, have no body?
With the National Council for Digital Technologies, we are preparing a report on the body, for which we are studying the role of artificial intelligence in the future of work, in what we have decided to call “the distribution of bodies between people and machines.” This includes the use of robots in the workplace, with which we unconsciously interact through our bodies, just as we do with humans.
For example, without realizing it, we register every glance, every look to the side, every body movement, hand gesture, and smile, and we draw conclusions about the trustworthiness of the person opposite us, about their involvement in the conversation, about what they think of us. This ability to use the body to get to know one another begins at birth and lasts throughout life: children do it even before they start talking, and older people who lose words as they age use their bodies to communicate. Remarkably, we do exactly the same thing in front of a robot.
Does this require the robot to take on human form?
The more human-like a robot is, the more it elicits these gestures, glances, and head movements from us; the more we like it, the more we include it in our conversation. I work with computer-generated figures projected onto a very large screen, whose appearance can be chosen down to the details of their clothing, and we observe the same phenomenon there. Beyond a certain degree of realism, however, people no longer want to interact with the machine. Hyperrealism produces what is called the “uncanny valley”: we feel something close to disgust toward these creatures that resemble us but are not us.
Beyond robots and conversational agents that serve as interlocutors, there are two other technologies in which one can clearly speak of a distribution of the body between the user and the machine. The first is the telepresence robot, which goes to work in a place in the stead of a person who cannot go there. Such is the case in Tokyo, where café servers are robots operated from home by people with disabilities, with obvious social benefits.
The second technology is the avatar, or virtual person, that exists in virtual worlds such as Mark Zuckerberg’s Metaverse. Researchers have shown that self-esteem can be bolstered by the confidence acquired in places where flaws no longer exist, where a person can choose to be tall or short, yellow or blue. Back in the real world, this technology can thus serve as a support for human interaction.