Websites Using AI to 'Undress' Women Gain Unprecedented Traction, Study Finds

The study found that these websites received over 24 million unique visits in September alone.

Abhishek Anand
Tech News

The study found over 24 million users visit websites that let them undress an individual using AI.

(Photo: Altered by The Quint)


Websites that use artificial intelligence (AI) to 'undress' people, especially women, received more than 24 million unique visits in September alone, according to a study by the social media analytics firm Graphika.

The findings highlight the rampant misuse of AI and an urgent need for laws to regulate such tools.

The firm analysed around 34 websites that use AI tools to manipulate existing images and videos of people to make them appear nude without their consent, and referred to these sites as providers of "non-consensual intimate imagery" (NCII).

These NCII providers misuse the reach of popular social media platforms to market their "services" and divert traffic to their websites. The study found that the volume of referral link spam on platforms such as X (formerly Twitter) and Reddit has increased by more than 2,000 percent since the beginning of this year.

How Do These Models Operate?

These websites operate on a freemium model: users can generate a few images for free, but once that allowance is exhausted, they are required to purchase additional "credits" to access premium tools or generate more images. The manipulation of existing images raises privacy concerns for everyone, especially women, who can be targeted through such websites.

The study further said, "Users are required to purchase additional 'credits' or 'tokens' to access features such as higher resolution exports, 'age' and 'body trait' customization, and inpainting - a feature where the AI model will replace a highlighted part of the image with requested content, such as removing clothing."

Earlier this year, deepfakes of Bollywood actors Rashmika Mandanna and Kajol went viral on social media platforms. While these clips prompted responses from several public figures, they also highlighted the threats that AI tools pose when misused by bad actors.

Why Are These Websites Emerging?

The study highlighted that the growing accessibility of open-source AI image diffusion models has allowed several websites to provide such realistic NCII at a large scale. How? The availability of these tools has significantly brought down the time and cost of generating such images.

"Without such providers, their customers would need to host, maintain, and run their own custom image diffusion models - a time-consuming and sometimes expensive process," the study said.


What Next? 

Recently, Prime Minister Narendra Modi expressed his concern over the threats posed by AI tools and deepfakes while addressing a gathering. The Quint has analysed how deepfakes could impact the elections, and how bad actors can use them to push narratives and spread disinformation across social media platforms. You can read the analysis here.

The Union government plans to soon introduce regulations to control the spread of deepfakes on the internet. As such tools become more accessible, time will tell whether the regulations address the privacy violations enabled by these websites.

Responding to the study, a Reddit spokesperson told The Quint that the platform strictly prohibits any non-consensual sharing of intimate or sexually explicit media, including depictions that have been faked, as well as links sharing or promoting such content.

"Our global sitewide policies strictly prohibit any non-consensual sharing of intimate or sexually explicit media including depictions that have been faked and including links sharing or promoting such content. As such, we have banned several domains in question and will continue to enforce these policies across our platform. We have dedicated internal Safety teams that use a combination of automated tooling and human review to detect and action non-consensual intimate media (NCIM) as well as violating third-party links,” the statement added.


Published: 11 Dec 2023, 03:45 PM IST
