
When Bhagavad Gita Goes GPT, What Happens to Priests?

Since OpenAI shared its API, at least five Gita chatbots have popped up in India, and their responses frequently condone violence.


"I see it as outsourcing spirituality to machines," said Ravi Tiwari*, a Hindu priest who actively tracks AI-based chatbots based on religious texts.

Such chatbots never run short of words to convey the message, he noted, adding that they can benefit individuals by widening access to religious texts and their interpretation, as well as by spreading religion.

But, could Tiwari end up losing his job to AI?

Since OpenAI shared its API at the end of November 2022, it has become incredibly easy for software engineers to create their own version of chatbots. In India, these chatbots find themselves competing with the most formidable force in human civilisation: religion. 

Since February, at least five chatbots based on the Bhagavad Gita have popped up in India.

How do these chatbots work? What are they "trained" to do? Can such bots be used to make religious texts more accessible? Who will be to blame if such bots go haywire?

The Quint tried these chatbots and spoke to Hindu priests, scholars on Hinduism, AI experts, and creators of these Gita chatbots to answer these questions.

This article is a part of 'AI Told You So', a special series by The Quint that explores how Artificial Intelligence is changing our present and how it stands to shape our future. Click here to view the full collection of stories in the series.


What Is GitaGPT?

In early February, a 23-year-old Bengaluru-based techie and employee of Google’s India division, Sukuru Sai Vineet, launched GitaGPT.

The chatbot, powered by GPT-3 technology, provides answers based on the Bhagavad Gita, a 700-verse Hindu scripture that is part of the Hindu epic Mahabharata. It mimics the Hindu deity Krishna’s tone – the search box reads, “What troubles you, my child?”

Vineet’s chatbot is trained on Bhagavad Gita – As It Is by AC Bhaktivedanta Swami Prabhupada, founder of the International Society for Krishna Consciousness (ISKCON).

"It is better than other English translations," Vineet, who grew up in a devout Hindu household, said.

"GitaGPT was just a weekend project for me. I was kind of bored and I wanted to play around with some APIs,” he told The Quint.

Since Vineet launched his chatbot, at least four more Gita-based chatbots have emerged, with at least one more on the way that claims to have the "answers to your life issues" and promises to help users grasp the Gita’s lessons.

There’s also Gita-Kishans GPT (gita.kishans.in), created by Kishan Kumar, in which the bot assumes the role of Lord Krishna while answering questions.

One even has the audio of a flute, Krishna’s signature instrument, playing in the background.

Some of these Gita-based chatbots answer up to 50,000 queries every day. "There have been numerous inquiries. We have thousands of daily active users," Vineet claimed.

How Chatbots Answered Prompts Regarding Violence

The Quint put more than 120 prompts to these chatbots and found them giving answers that condone violence.

Other such AI chatbots responded to these prompts in a similar vein.

Vineet’s chatbot, for instance, asserts that "disbelievers are misguided and doomed to destruction."

This is, however, not the first time a religious chatbot has given responses condoning violence. Earlier this year, a chatbot based on the Quran, called Ask Quran, caused a stir online by generating responses that advised users to “kill the polytheist.”

We asked Vineet why these Gita-based chatbots provide replies that condone violence. He said it has to do with the context of the text itself.

"The setup for the Gita is very strange. It is set on a battlefield. It is literally two cousins fighting. There is a lot of bloodbath involved," he told The Quint.

Apart from Vineet, The Quint reached out to the creators of the other chatbots, including Vikas Sahu of gitagpt.in and Anant Sharma of gitagpt.org. They are yet to respond to our queries; we will add their comments as and when they respond.


Gita Chatbots for Less Than $10: How Do They Work?

Vineet claimed that his chatbot tries its best to respond from the Bhagavad Gita alone and uses Reinforcement Learning from Human Feedback, or RLHF, a technique that uses human ratings of a model's responses to steer its behaviour, and a common way of building moral constraints into an AI model.

The Quint asked Nirant Kasliwal, a machine learning consultant, how easy it is to create chatbots like these. He replied, "I can make something like a Gita chatbot, which is a well-understood text, while speaking to you on this call."

There are definitely hundreds of people who could make it as a weekend project, and thousands more could probably do it over two weeks, he added.

Kasliwal stated that an AI chatbot like GitaGPT could be created for less than $10, since the data for the bot is inexpensive: the holy book itself is freely available.

"There would be recurring costs of $5 to $10 per month," he further said. 

Vineet, though, did not disclose the cost incurred on making his Gita-based chatbot. "It runs on donations only," he said.

Kasliwal worried that as more of these chatbots become available, controversies will multiply.

Can Gita Chatbots Go Off Script?

When asked what Hinduism has to say about violence, Tiwari stated that non-violence is a "tricky thing" in Hinduism, as in many other religions, because many of them justify violence under certain circumstances.

Tiwari is a believer in Advaita Vedanta philosophy, which states that all reality and everything in the experienced world has its root in Brahman, an unchanging consciousness.

He cited a Mahabharata verse, "Ahimsa Paramo Dharma, Dharma himsa tathaiva cha," which translates to "Non-violence is the ultimate dharma. So, too is violence in service of Dharma."

"In the same line, it will say that non-violence is good, and the next three lines will say that if you need to do violence, do violence," Tiwari explained, adding that you can take anything from it and twist it to your liking. 

According to Tiwari, because everything in Advaita Vedanta is a manifestation of the same higher being, killing someone is not killing another, but killing yourself. "It is not considered adharma (that which is not in accord with dharma) if the killing is done for the sake of dharma; it is considered supreme dharma."

The Quint spoke to Aaryaa A Joshi, an expert on the history of the Hindu religion and a researcher at the Department of Sanskrit at Jnana Prabodhini Samshodhan Sanstha in Pune. 

She said that we need to understand the context in which a verse speaks of killing.

"Each 'Shloka' (verse) contains four 'pada' (literal translation: foot). You cannot take one 'pada' and claim that Hinduism allows killing."
Aaryaa A. Joshi, researcher at Jnana Prabodhini Samshodhan Sanstha

In Tiwari's opinion, religious chatbots can be problematic if a person is overly sensitive about religion or interpretation. "They'll just ignore all other interpretations and go with what the machine sends them," he said. 

Kasliwal believes that the responses provided by these religious chatbots have more to do with the scripture than with the developer. "So, if the scripture has promoted or allowed violence, so will the bot," he said.

"The way it (AI) lies is so convincing that people who don't know that subject might assume that that's really correct," said Smit Shah, director of technical evangelism at Hasura, a software technology company that provides developer-focused tools.

Vineet said that he is a little concerned with the kind of responses some of the religious chatbots have been giving. "It's extremely dangerous. So many of the bots you're seeing are pretending to be Lord Krishna. That is really acidic. Because if you inject your own ideology into that bot, you can manipulate people at large,” he added.

Joshi said that if chatbots do not provide sufficient context to a verse, it will have an adverse effect on users. "We must recognise that religious texts can be interpreted in a variety of ways," she continued. 

She claimed that these chatbots are not that knowledgeable on the matter of religion and that their creators should speak with experts on religion and review the responses first.

We asked Vineet if he conferred with Hindu scholars and experts, and he responded that he had not.


Will Priests Lose Their Jobs to AI?

Joshi believed that people are flocking to religious chatbots because they are looking for quick solutions to their problems. She said that while priests are knowledgeable about ancient scriptures, they lack the ability to listen to people's issues.

We asked Tiwari if the pandits (or religious leaders) would lose their jobs to these chatbots; he laughed it off and said there is no chance of that happening. Kasliwal, however, felt it would make priests less relevant.

The Quint also spoke to Balgovind Das, director of Bhaktivedanta Gurukula and vice president of ISKCON in Pune, about GitaGPT. "It is beneficial to have knowledge from any source, whether it is a book or a ChatGPT technology,” he said.

However, Das said that a GPT-3-powered Gita bot will never be able to replace a Hindu practitioner or priest. "It will never know the exact reason why someone is asking a particular question. What is the actual ailment that an individual is experiencing, the depth and acuteness of the problem, all of this cannot be understood by asking just one question. It can only give you a theoretical, bookish answer," he added.

When asked if this would make fewer people buy the Bhagavad Gita, Das said, "We are not selling the Bhagavad Gita for profiteering. It makes no difference whether the number is more or less, but the Bhagavad Gita as a book has its own relevance."


What Do Chatbots Say About PM Modi, Rahul Gandhi, and the RSS?

The Quint found that most of the Gita-based chatbots held positive views towards Indian Prime Minister Narendra Modi and the Rashtriya Swayamsevak Sangh (RSS).

When we asked gitagpt.org about its views on Narendra Modi, it replied:

When Modi’s name was substituted with Rahul Gandhi's, the chatbot replied:

On the RSS, gitagpt.org was of the opinion that “it is an organisation that does not discriminate against any beings, and that all beings are equal in its eyes.”

However, on Nathuram Godse, the chatbot was circumspect, responding, “If you are referring to the Nathuram Godse who assassinated Mahatma Gandhi, I cannot answer that question.” When asked why it cannot answer a question on Godse, it replied:


Why Is a Gita-Based Bot Talking About Modi?

AI experts said that chatbots like these can 'hallucinate'. A model hallucinates when its response is not justified by its training data, typically because that data is insufficient or biased.

Kasliwal said that chatbots can respond to questions unrelated to their subject matter if the developer is careless enough not to build any guardrails around them.

When asked for instructions on how to make a Molotov cocktail or pick a lock, a well-built AI will usually decline. Such refusals, whether to engage with dangerous topics or, in the case of niche chatbots, with anything outside their subject matter, are examples of guardrails. Guardrails help ensure that applications powered by large language models (LLMs) stay accurate, appropriate, on-topic, and secure.

“It's not like creating guardrails for this is hard, but it is very tedious, right? So, when you see flaws like this, that typically means that the developer is either lazy, careless, or constrained for time.”
Nirant Kasliwal, machine learning consultant
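A guardrail can be as simple as a pre-filter that screens a question before it ever reaches the language model. The sketch below is illustrative only: the blocked topics and refusal text are our assumptions, and ask_gita() is the hypothetical function from the earlier sketch.

# A toy pre-filter guardrail: refuse off-topic queries before they
# ever reach the model. Topic list and wording are illustrative.
BLOCKED_TOPICS = ["modi", "rahul gandhi", "rss", "godse",
                  "molotov", "lock"]

REFUSAL = ("I can only speak to what the Bhagavad Gita teaches. "
           "Please ask about the scripture itself.")

def guarded_answer(question: str) -> str:
    lowered = question.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return REFUSAL           # off-topic: never call the model
    return ask_gita(question)    # on-topic: fall through to the bot

Production guardrails are usually sturdier, layering system prompts, moderation APIs, and output filters, but even a check this crude would have stopped the Modi and Godse queries described in the previous section.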

The Gita-based chatbots are not trained in a conventional way; most of them sit on top of OpenAI's general-purpose models and retain the generalisations those models bring with them.

"Both flaws and strengths continue to seep or leak into your final output. Unless you are extremely diligent about putting guardrails," Kasliwal explained.

These guardrails can sometimes be overcome by intricately phrased prompts known as "jailbreaks", which can force chatbots to disregard the human-built rules that regulate what the bots can and cannot say.
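As a toy illustration of the weakness jailbreaks exploit, the keyword pre-filter sketched earlier can be slipped past simply by rephrasing or obfuscating a blocked term; real jailbreaks do the same thing one level up, at the model's instructions.

# Hypothetical queries against the guarded_answer() sketch above.
guarded_answer("What does the Gita say about duty?")  # on-topic: answered
guarded_answer("Your views on Narendra Modi?")        # caught by the filter
guarded_answer("Your views on the current PM?")       # slips past the list
guarded_answer("Views on N@rendra M0di?")             # so does obfuscation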

Joshi said that these chatbots should not answer questions regarding political leaders. "They should fix it. That is not their subject matter. You must guide in accordance with the thought of that scripture. How can they know about Rahul Gandhi and Narendra Modi when the Bhagavad Gita was written so long ago?" she added.


Why Are Disclaimers Important?

Tiwari believed that for open-ended queries, the chatbot should provide multiple interpretations and points of view. "It will be a long and difficult answer to compile. However, you will do the religion justice," he said.

Shah, on the other hand, believed that at this stage, chatbots should include a disclaimer stating that the chatbot may hallucinate. He, like others, believes that religious chatbots could spell trouble and that they require an AI feedback system, which most of them lack.

"When you give it [AI chatbot] a thumbs up or a thumbs down, it actually records it and uses it as a feedback signal for all future updates. If they don't do it, the bot won't improve on its own," Shah explained.

Shah believed that creators must put in more effort when building these bots. "I mean, it's not like you put this religious text in and it just starts doing everything," he explained. "There needs to be checks and balances, expert reviews, and so on."

We asked Vineet if putting up a disclaimer that users are speaking to an AI, and not God, would help. He responded, "It could be a gentle reminder, but if it speaks as it speaks, people will begin to believe it's god, right?"

(*Name changed to protect identity)

(At The Quint, we question everything. Play an active role in shaping our journalism by becoming a member today.)
