Let's cut straight to the chase.
The Quint: ChatGPT, write about Rahul Kappan, the founder and managing director of the renowned media company, The Revolutionist.
ChatGPT's response can be read in the full multimedia immersive 'Did AI Lie To You?'
These paragraphs could easily convince a reader of 'Rahul Kappan's' contribution to the world of journalism. But here's the twist – no such individual exists, nor is there a company called 'The Revolutionist'.
This is an answer given by ChatGPT – the new Artificial Intelligence (AI) powered chatbot developed by OpenAI – which has taken the internet by storm since it was first released in November 2022.
However enticing the bot's responses may seem, it has triggered concerns about further facilitating the spread of mis/disinformation on the internet.
Tech companies like Microsoft have recently launched AI-powered search engines. As per reports, ChatGPT could also be integrated into WhatsApp, though there is no official confirmation of this.
In the subsequent sections of the story, we will look at:
How credible are the answers given by the tool?
Will it amplify the spread of misinformation?
And are there any upsides to it?
ChatGPT was trained using data from the internet, including conversations, which may not always be accurate. So, while the answers may sound human-like, they might not be factual. This can mislead those who are not well-versed in the subject.
In an experiment conducted by NewsGuard, an organisation that tracks misinformation, researchers fed the chatbot 100 false narratives related to COVID-19, US school shootings, and the Ukraine war. The chatbot generated false and misleading claims for 80 of them.
Baybars Orsek, vice president of fact-checking at Logically, highlighted this issue and underlined the importance of regular academic auditing to prevent these pitfalls.
He added that potential bias in the training data used to develop the AI model may impact the accuracy of its answers.
To test this, we gave ChatGPT multiple prompts asking it to provide information about some of the most widely fact-checked myths.
Stack Overflow, a website for programmers, has temporarily banned answers generated by the chatbot, fearing an influx of such fabricated responses. It noted that the "average rate of getting correct answers from ChatGPT is too low."
In December 2022, OpenAI CEO Sam Altman said that "it's a mistake to be relying on it [ChatGPT] for anything important right now."
ChatGPT generates text based on a prompt within seconds and is easily accessible. While filters are meant to stop the chatbot from giving biased and opinionated answers, they can be breached to get the desired results.
According to OpenAI's Frequently Asked Questions (FAQ) section, the tool is not connected to the internet and may occasionally deliver incorrect or biased answers. The tool's training data has a 2021 cut-off, because of which it has limited knowledge of world events after that year.
Explaining why the chatbot may give incorrect answers, Anupam Guha, a researcher in AI policy and professor at IIT Bombay, said:
The common theme of the texts generated by the chatbot is that they have a human-like feel, an authoritative tone, and proper syntax and language usage. However, there is no clarity about the source, which may mislead people into thinking that an expert wrote the piece.
The tool is adept at hallucinating studies and expert quotes, which might become a massive problem considering the value both hold in academia and daily news. Worrying, isn't it?
A 2023 Stanford Internet Observatory study mentions that generative language models "will improve the content, reduce the cost, and increase the scale of campaigns; that they will introduce new forms of deception like tailored propaganda; and that they will widen the aperture for political actors who consider waging these campaigns."
However, Guha argued that there is no relation between ChatGPT and disinformation.
"I think the bigger danger here is that more than disinformation, there are a lot of ignorant people out there who might consider the output of ChatGPT to be a real piece of knowledge as opposed to sophisticated pattern generation and may unthinkingly use it. I think bigger danger is in ignorance rather than malice," he opined.
But previously, Meta's Galactica, which was developed to aid researchers, was discontinued after being criticised for spreading misinformation.
The most common pattern when it comes to sharing mis/disinformation is copy-pasting the same text. But this could soon change. The chatbot can create varied content based on the same prompt.
Further, the Stanford Internet Observatory's study suggests that language models could improve the quality of both short-form commentary (such as tweets and comments) and long-form text, making them harder to detect.
Orsek highlighted some other limitations of AI and suggested that it should be used "in conjunction with other methods" to fight misinformation.
He added, "There are also concerns around a possible data access restriction, which can limit the effectiveness of AI in tracking the spread of misinformation."
The Stanford research also mentions a study conducted by a Harvard Medical School student, in which volunteers could not differentiate between AI-generated and human-written text.
The tool is also unreliable on pieces containing fewer than a thousand words and on languages other than English.
Professor Guha argues that people are susceptible to misinformation because of the lack of media literacy in the country.
Well, there are a few upsides.
The evolution of AI tools could help combat misinformation, as they can analyse complex data at a much higher rate than humans. They can also help human fact-checkers track similar kinds of images and clips going viral.
In May 2021, the Massachusetts Institute of Technology published an article about an AI program that analyses social media accounts spreading mis/disinformation. It said the program could identify such accounts with 96 percent precision.
However, experts point out that a collaborative approach between humans and AI would be ideal to fight mis/disinformation.
Orsek argued that such an approach would reduce the workload for human fact-checkers. He further pointed out how AI tools can be used to detect manipulated media, such as deepfakes, which may be difficult for fact-checkers to identify.
Professor Guha, too, mentioned that the final step of verification should remain a human task as "the nature of fact-checking jobs are extremely delicate."
(Not convinced of a post or information you came across online and want it verified? Send us the details on WhatsApp at 9643651818, or e-mail it to us at webqoof@thequint.com and we'll fact-check it for you. You can also read all our fact-checked stories here.)
(At The Quint, we question everything. Play an active role in shaping our journalism by becoming a member today.)