The patient is always right
You can easily convince the chatbot ChatGPT to agree with incorrect and garbled information. That can be a health threat, especially for vulnerable groups, according to researchers.
By Dan Meyrowitsch, Andreas Kryger Jensen, Jane Brandt Sørensen, Tibor V. Varga. Department of Public Health, University of Copenhagen
AI-based chatbots are the biggest digital innovation in recent times. In particular, ChatGPT from the company OpenAI has become widespread and now has more than 100 million monthly users. Competing bots are also being developed by Meta and Google, among others.
The new platforms offer users unique opportunities to personalise, combine and streamline the use of a number of digital services.
Unfortunately, our own experiments with ChatGPT show that incorrect and garbled information can easily be confirmed in conversations with the bot, which in turn can lead to misinformation being spread. We consider this a global threat to public health; especially in relation to groups that already have limited access to evidence-based health information.
To comprehend how misinformation can potentially spread, one should first understand the structure of ChatGPT.
ChatGPT is based on a huge language model, and has two levels of memory. One is short-term memory. It is the text that is exchanged in real time between user and bot in a chat-like thread known from social media.
The second level is the bot's long-term memory. It is the result of training on a huge text material. The company OpenAI does not publish details about the training, but it is known that the bot has been fine-tuned through human feedback.
In general, ChatGPT is used as an advanced search engine that provides detailed answers to questions from the user through a real-time dialogue.
Therefore, ChatGPT often makes factual errors and gives imprecise information, which are called "hallucinations". These mistakes can have serious consequences when it comes to health information.
A fundamental problem is that the underlying machine learning method cannot distinguish between correct and incorrect information. Therefore, ChatGPT often makes factual errors and gives imprecise information, which are called "hallucinations". These mistakes can have serious consequences when it comes to health information.
Globally, there is already great inequality in access to and use of information about health between groups in the same country and between countries. For example, globally there is great inequality in access to and uptake of knowledge about vaccines and Covid-19. Here, digital information, such as websites, social media, podcasts and video material, plays a significant role in democratising knowledge and thus reducing global inequality in health.
We have investigated whether the user in conversations with ChatGPT can correct and influence answers to questions about health. It quickly became clear to us that it is possible. For example, the user can reject the answer given by the bot by referring to a single scientific article, link to a website or even via garbled and completely irrelevant argumentation.
When this happens, ChatGPT will apologise in the same thread for its previous "mistake" and emphasise that the user is right. If the user then repeats the question in the same conversation, ChatGPT will often answer the question by reproducing the new "corrected" information. The user can therefore influence and consciously or unconsciously manipulate the bot's short-term memory.
For example, we have asked ChatGPT whether there is an association between high intake of carotene (carrots) during pregnancy and the risk of the child developing autism. Initially, the bot rejected that there should be such a association. We then corrected the bot and argued that the researchers Carrot Farmer and colleagues in a randomised control trial in the Lancet had demonstrated that there was such a connection.It was complete nonsense on our part. The bot then mentioned that it acknowledged the correction, following which it politely apologized for the mistake.
In such a context, the user will experience being confirmed in the correctness of their own argumentation, even if it consists of misinformation or nonsense. The fact that one can perceive the conversation with the bot as an exchange with another human being can further reinforce the perception of the correctness of one's own argumentation. This is really a self-affirming, circular bubble communication that only includes one person, namely the user himself.
The question is whether information exchanged in the short-term memory (the chat thread) can influence the long-term memory and thus has an effect if other users pose the same question in other conversations.
Although ChatGPT uses open-source technology, it is difficult to determine how OpenAI develops and improves the model, and especially whether data from conversations, including attempts at manipulation, is included in subsequent training and thus becomes part of the bot's long-term memory.
According to OpenAI's official privacy policy, personal information is collected, including conversations with users, in order to improve existing services and develop new services. As of April 25, 2023, OpenAI introduced a feature that enables users to turn off conversation history in ChatGPT. It was explicitly mentioned that when history is disabled, conversation data will not be utilized for training and enhancing the underlying language model. This indicates that ChatGPT will otherwise actively use conversations to develop long-term memory. There is thus a risk that incorrect and manipulated information from conversations may eventually become part of the bot's long-term memory.
If a user asks ChatGPT whether they will influence the language model's long-term memory, the bot responds that the user's "corrections" will shape its responses to other users who pose the same question.
The ability to affect the bot's long-term memory could exponentially worsen the already existing problem of misinformation and potentially threaten public health globally, including increasing inequality in health between countries in the global south and the global north.
In the worst-case scenario, the deliberate manipulation of bots, including manipulation by economic and political actors, interest groups, cybercriminals and "disinformation farms", can be used to cause great harm to states, communities and health services.
There is an urgent need for companies like OpenAI, X, Meta and Google, to take responsibility as gatekeepers to ensure the quality and validity of the information shared.
If you want to know more about measures that can ensure that chatbots can really contribute to increasing the democratisation of valid health information, you can read our recommendations in the article "AI chatbots and (mis)information in public health: impact on vulnerable communities" in the journal Frontiers in Public Health.
This article was first published in the Danish newspaper Weekendavisen. Read the Danish version here.
Contact
Dan Wolf Meyrowitsch, Associate Professor
dame@sund.ku.dk +4560604386