
'Counterfeit people': The dangers posed by Meta’s AI celebrity lookalike chatbots

Meta announced on Wednesday the launch of chatbots modelled on the personalities of certain celebrities, which users will be able to chat with. Presented as an entertaining evolution of ChatGPT and other forms of AI, this latest technological development could prove dangerous.

Meta (Facebook's parent company) has unveiled 28 chatbots with what the group calls personalities of their own. © France Médias Monde Graphics Studio

Meta (formerly known as Facebook) sees these as "fun" artificial intelligence. Others, however, feel that this latest technological development could mark the first step towards creating "the most dangerous artefacts in human history", to quote American philosopher Daniel C. Dennett's essay on "counterfeit people".

On Wednesday, September 27, the social networking giant announced the launch of 28 chatbots (conversational agents), which supposedly have their own personalities and have been specially designed for younger users. These include Victor, a so-called triathlete who can motivate "you to be your best self", and Sally, the "free-spirited friend who’ll tell you when to take a deep breath".

Internet users can also chat to Max, a "seasoned sous chef" who will give you "culinary tips and tricks", or engage in a verbal joust with Luiz, who "can back up his trash talk". 

A chatbot that looks like Paris Hilton

To reinforce the idea that these chatbots have personalities and are not simply an amalgam of algorithms, Meta has given each of them a face. Thanks to partnerships with celebrities, these robots look like American jet-setter and DJ Paris Hilton, TikTok star Charli D'Amelio and Japanese-American tennis player Naomi Osaka.


And that's not all. Meta has opened Facebook and Instagram accounts for each of its conversational agents to give them an existence outside chat interfaces, and is working on giving them a voice by next year. Mark Zuckerberg's company is also looking for screenwriters who can "write character, and other supporting narrative content that appeal to wide audiences".

Meta may present these 28 chatbots as a harmless way to entertain young internet users on a massive scale, but all these efforts point towards an ambitious project to build AIs that resemble humans as much as possible, writes Rolling Stone.

This race to "counterfeit people" worries many observers, who are already concerned about recent developments in large language model (LLM) research, such as ChatGPT and Meta's counterpart, Llama 2. Without going as far as Dennett, who is calling for people like Zuckerberg to be locked up, "there are a number of thinkers who are denouncing these major groups' deliberately deceptive approach", said Ibo van de Poel, professor of ethics and technology at the Delft University of Technology in the Netherlands.

AIs with personalities are 'literally impossible'

The idea of conversational agents "with a personality is literally impossible", said van de Poel. Algorithms are incapable of demonstrating "intention in their actions or 'free will', two characteristics that are considered to be intimately linked to the idea of a personality".

Meta and others can, at best, imitate certain traits that make up a personality. "It must be technologically possible, for example, to teach a chatbot to act like the person they represent," said van de Poel. For instance, Meta's AI Amber, which is supposed to resemble Hilton, may be able to speak the same way as its human alter ego. 

The next step will be to train these LLMs to express the same opinions as the person they resemble. This is a much more complicated behaviour to programme, as it involves creating a sort of accurate mental picture of all of a person's opinions. There is also a risk that chatbots with personalities could go awry. One of the conversational agents that Meta tested expressed "misogynistic" opinions, according to the Wall Street Journal, which was able to consult internal company documents. Another committed the "mortal sin" of criticising Zuckerberg and praising TikTok.

To build these chatbots, Meta explains that it set out to give them "unique personal stories". In other words, these AIs’ creators have written biographies for them in the hopes that they will be able to develop a personality based on what they have read about themselves. "It's an interesting approach, but it would have been beneficial to add psychologists to these teams to get a better understanding of personality traits", said Anna Strasser, a German philosopher who was involved in a project to create a large language model capable of philosophising.
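
In practice, this "personal story" approach resembles what is commonly called persona prompting, in which a written biography is prepended to every exchange so the model's replies stay in character. The Python sketch below is a minimal illustration under that assumption only; the Victor biography text, the build_messages helper and the some_chat_model call are hypothetical and do not reflect Meta's actual implementation.

```python
# A minimal sketch of conditioning a chat-style LLM on a written biography
# ("persona prompting"). All persona text and the model call are hypothetical.

PERSONA_BIOGRAPHY = (
    "You are Victor, a triathlete who motivates people to be their best self. "
    "Backstory: a hypothetical 'unique personal story' would go here. "
    "Always answer in Victor's upbeat, coaching tone."
)

def build_messages(history: list[dict], user_input: str) -> list[dict]:
    """Prepend the persona biography so every reply is conditioned on it."""
    return (
        [{"role": "system", "content": PERSONA_BIOGRAPHY}]
        + history
        + [{"role": "user", "content": user_input}]
    )

# Usage (the model call itself is a placeholder):
# messages = build_messages([], "I keep skipping my morning runs.")
# reply = some_chat_model(messages)  # hypothetical LLM call
```

Nothing in this technique gives the model opinions or intentions of its own; it only biases the style and content of its output, which is consistent with van de Poel's point that such systems imitate personality traits rather than possess them.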

Meta’s latest AI project is clearly driven by a thirst for profit. "People will no doubt be prepared to pay to be able to talk and have a direct relationship with Paris Hilton or another celebrity," said Strasser.

The more users feel like they are speaking with a human being, "the more comfortable they'll feel, the longer they'll stay and the more likely they'll come back", said van de Poel. And in the world of social media, time spent on Facebook and its ads is money.

Tool, living thing or something in between?

It is certainly not surprising that Meta's first foray into AI with "personality" takes the form of chatbots aimed primarily at teenagers. "We know that young people are more likely to anthropomorphise," said Strasser.

However, the experts interviewed feel that Meta is playing a dangerous game by stressing the "human characteristics" of its AIs. "I really would have preferred it if this group had put more effort into explaining the limits of these conversational agents, rather than trying to make them seem more human", said van de Poel.


The emergence of these powerful LLMs has upset "the dichotomy between what is a tool or object and what is a living thing. These ChatGPTs are a third type of agent that stands somewhere between the two extremes", said Strasser. Human beings are still learning how to interact with these strange new entities, so by making people believe that a conversational agent can have a personality, Meta is suggesting that it be treated more like a fellow human being than a tool.

"Internet users tend to trust what these AIs say" which make them dangerous, said van de Poel. This is not just a theoretical risk: a man in Belgium ended up committing suicide in March 2023 after discussing the consequences of global warming with a conversational agent for six weeks.

Above all, if the boundary between the world of AIs and humans is eventually blurred completely, "this could potentially destroy trust in everything we find online because we won't know who wrote what", said Strasser. This would, as Dennett warned in his essay, open the door to "destroying our civilisation. Democracy depends on the informed (not misinformed) consent of the governed [which cannot be obtained if we no longer know what and whom to trust]".

It remains to be seen if chatting with an AI lookalike of Hilton means that we are on the path to destroying the world as we know it. 

This article has been translated from the original in French
