
Here’s What WhatsApp Can Do to Address the Fake News Problem

The encryption provides privacy for users, but the anonymity of forwarded messages becomes an enabler for the spread of fake news.

Nikhil Pahwa
Opinion
The challenge of dealing with fake news on WhatsApp lies in its encryption, which doesn’t allow even the platform to access messages.
(Photo: Harsh Sahani/The Quint)


With over 200 million users in India, WhatsApp has a massive fake news and misinformation problem. The platform is now rife with all kinds of misinformation, whether political, religious, historical, medical, social or legal.

Political parties use the platform to rally supporters, and “IT Cells” are believed to be using the platform for spreading misinformation and hate speech. These are then forwarded by their supporters to millions of others, from person to person.

Local administrations often don’t know what to do when mobs start gathering and rioting, apart from shutting down Internet access. According to SFLC.in’s Internet shutdowns tracker, India had 70 internet shutdowns in 2017 – and in the first six months of 2018, we’re already at 65.

More recently, mobs have attacked and killed people, following the spread of a video clip warning about gangs kidnapping children. This is going to get worse.

The Platform Solution to This Problem

This is a complex problem with no single solution: there is no silver bullet here. Solutions include counter speech, user education and debunking of misinformation from both the government administration and media. We need strong law enforcement to prevent mobs, as well as speedy justice for the victims (as a deterrent).

The challenge with dealing with fake news and misinformation is that WhatsApp’s end-to-end encryption doesn’t allow even the platform to access messages. While the encryption provides privacy for users when they’re messaging, the anonymity involved in forwarded messages makes the platform an enabler for the spread of misinformation.

The challenge of enforcement in social media, as it was in the Section 66A case earlier this decade, is that messages online are both communication (person-to-person, and hence private) and media (for wider consumption, and hence public).

The solution for WhatsApp as a platform lies in separating the public from the private: give users power over what they make public and allow to be forwarded, and thereby hold them accountable for what they choose to make public.

Here are the changes that I recommend for the platform:

Change #1: Users can make messages either public (media) or private (P2P message). The default setting for all messages should be private. This will impact virality on the platform, but that’s a price it will have to pay for bringing in accountability. This will create a level of friction while forwarding: users will be frustrated when they cannot forward certain messages. WhatsApp could use a slightly different background colour for public messages.

Change #2: The original sender/creator of the message should have the power to allow a message to be forwarded (and made public). This ensures that a message that was meant to be private cannot be made public (by forwarding) without consent. It also attributes intent when an original sender/creator chooses to make a message public. To forward even your own message to multiple people, you would first have to make the message public.

Change #3: When a creator makes a message public, the message gets a unique ID, which gets tagged with the creator’s ID. This means that the message, when public, is “media” and carries proper attribution to the creator every time it is forwarded. A log is kept by WhatsApp only if the message is public. This allows both the platform and law enforcement agencies to trace the message back to the creator. From a platform perspective, this enables two things: WhatsApp is in a position to suspend that particular account, and it can disable the message wherever it has been forwarded.

Change #4: Allow users to report forwarded/public messages as misinformation, which can then be reviewed by WhatsApp. WhatsApp already reviews spam-related complaints. Users should be able to identify the Sender and/or Message ID by selecting the message and tapping on the information (i) icon which appears.
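
To make the mechanics of Changes #1 to #4 concrete, here is a minimal, purely illustrative sketch (in Python) of how the public/private flag, the message ID, creator attribution and reporting could fit together. Every class, field and function name below is my own assumption for the sake of illustration, not a description of WhatsApp’s actual systems.

import uuid
from dataclasses import dataclass
from typing import Optional

# Illustrative sketch of Changes #1-#4; all names are assumptions, not WhatsApp's API.

@dataclass
class Message:
    creator_id: str                   # pseudonymous ID, deliberately not the phone number
    text: str
    is_public: bool = False           # Change #1: every message is private by default
    message_id: Optional[str] = None  # Change #3: assigned only when the message is made public

class Platform:
    def __init__(self):
        self.public_log = {}          # Change #3: log kept only for public messages (message_id -> creator_id)
        self.disabled = set()         # message IDs disabled after review
        self.reports = {}             # Change #4: user reports, keyed by message ID

    def make_public(self, msg: Message) -> Message:
        # Change #2: only the creator takes this step, consenting to forwarding
        msg.is_public = True
        msg.message_id = str(uuid.uuid4())
        self.public_log[msg.message_id] = msg.creator_id  # traceable back to the creator
        return msg

    def forward(self, msg: Message) -> Message:
        # Changes #1 and #2: private or disabled messages cannot be forwarded
        if not msg.is_public or msg.message_id in self.disabled:
            raise PermissionError("only public, non-disabled messages can be forwarded")
        return msg                    # the forwarded copy keeps the same ID and attribution

    def report(self, message_id: str, reporter_id: str) -> None:
        # Change #4: a recipient flags a public message for review
        self.reports.setdefault(message_id, []).append(reporter_id)

    def disable(self, message_id: str) -> None:
        # after review, the message can be disabled wherever it has been forwarded
        self.disabled.add(message_id)

In this sketch, a forwarded copy never gets a new ID, which is what would let both the platform and law enforcement trace a viral message back to its creator, and what would let WhatsApp disable every copy at once.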


A few concerns:

  • Privacy issues: There’s a legitimate concern about whether this unique ID for the creator should be the phone number: in my opinion, it shouldn’t be a mobile number. This also ensures that if law enforcement wants to track down the user, they have to follow due process and get a court order, or go through an MLAT process. However, as long as users are aware that the message can be traced back to their mobile number (and hence to them, owing to KYC requirements in India) via some process, it could lead to people being more careful about the messages they create/send.
  • Accountability of the forwarder? In a way, this approach ensures that both intermediaries in this process – the medium of the message, as well as the person forwarding it – are not held accountable for the message. This method enables intermediary liability protections, since the creator/original sender of the message can potentially be identified. There is also no liability for a forwarder, since no one can make a private message public without explicit consent (unless they’re forwarding a screenshot of a message, in which case they would be liable as a creator).
  • Allow changing of settings from public to private?
    It’s debatable whether a creator should be allowed to make a public message private again, as is allowed on Facebook. Someone creating misinformation with malicious intent could use this setting to hide all traces of the message. But someone who sends a message by mistake, or based on misinformation received in real life, should have the right to correct or recall the message.
  • What about messages to groups?
    Messages to groups should be treated as private by default. However, if a sender/creator wants to enable forwarding, then it needs to be posted as a public message.
  • What do you do about well-meaning messages from misinformed users?
    I’m not sure what one can do about a person who is misinformed about a threat of kidnapping and then proceeds to broadcast it to several others purely out of genuinely good intent, which can then result in a mob breaking the law. This would need to be dealt with legally: is it incitement to violence? It would force people to be more careful and to verify before forwarding.
  • What about the benefits of anonymity that campaigners have?
    In authoritarian regimes, there are benefits to the anonymity that campaigners get on platforms such as Twitter and Facebook, especially when there is a potential threat from the state. In this case, platforms have some leeway when it comes to the legality of complaints, and due process has to be followed. I’m not sure what can be done when due process isn’t good enough to protect the rights of legitimate political speech.
  • Censorship
    This is not to say that there can’t be mass reporting of legitimate and genuine messages, a tactic that has been used both on Facebook and Twitter to censor people. Platforms have been developing systems to deal with such practices, and while these are not perfect, this is not an unsolvable problem.

Attribution doesn’t do much to address political and motivated messages, as my co-panelists on a CNBC-Awaaz TV show argued yesterday while countering my suggestion. Their recommendations were around introspection from political parties and citizens, as well as user education and training of the police force. I don’t think introspection is a short-term solution. Also, to say that the platform isn’t responsible – in the same manner that the Gutenberg press isn’t responsible for what it publishes – is to ignore the fact that enabling broadcast without accountability is a design choice made by WhatsApp. Designs evolve with usage.

However, there is no silver bullet, and even this solution could lead to problems, especially those related to privacy. I’m happy to get feedback, and to rework this: please feel free to let me know about potential issues at nikhil@medianama.com. Also do let me know if I can make your comments public (either attributable or not), when I’m updating this solution.

(This article was first published on Medianama. Nikhil Pahwa is the founder of Medianama. This is an opinion piece and the views expressed above are the author's own. The Quint neither endorses nor is responsible for the same.)



Published: 04 Jul 2018, 10:43 AM IST
