Falsehoods, misleading information, and hate appear everywhere, especially online. Though so-called ‘fake news’ narrowly defined may be less pervasive than is sometimes assumed – one study found that it makes up about 0.15 percent of the average American media diet – misinformation and hate are real and widespread problems, and can cause serious harm, especially to already marginalised or vulnerable communities.
What can we do to limit the harm that different kinds of online misinformation and hate can cause?
This is a defining question of our time, one that we have to confront as citizens, and one that governments are increasingly asking – including the Indian government, which has recently complained to the Supreme Court that there is “absolutely no check on the web-based digital media”, pointing to big digital platforms such as Facebook, Twitter, and YouTube, as well as portals and individual online publishers.
The basic “what can we do” question quickly unfolds into a host of other questions, including who decides what can be said and widely disseminated online, on what basis, with what enforcement, what kinds of transparency, and which kinds of due process?
The status quo in most of the world is essentially this: politicians write laws drawing lines between legal and illegal speech; anyone can, in principle, appeal to the big digital platforms on the basis of these laws (whether around copyright, libel, hate speech, or other potentially illegal forms of expression), and the platforms may decide to act, as they also do when courts rule on a case-by-case basis.
The companies also, to different degrees and in different ways, proactively try to police potentially illegal forms of expression, and often engage in ‘content moderation’ that goes well beyond the letter of the law on the basis of ‘community standards’ and terms of service drawn up by the platforms and enforced more or less as they see fit.
International human rights law and many national constitutions protect our freedom of speech, which is not limited to statements that are deemed ‘correct’ or that governments find acceptable, and also protects forms of expression that are shocking, offensive, and disturbing. It is important to recognise that this is overwhelmingly a ‘negative right’, meant to protect us from government interference, not a ‘positive right’ that requires anyone else to let us express ourselves as we see fit – we have no legal right to say whatever we like on Facebook, Twitter, or YouTube.
As long as they do not break other laws by, for example, systematically discriminating against us on the basis of what are typically called ‘protected characteristics’ like sex, race, or religion, they can pretty much remove or reduce anything they want whenever they want to, something governments, at least in principle, are not supposed to do.
But this status quo clearly struggles to deal with misinformation and hate at great scale and rapid pace (defining features of digital media), struggles to deal with the risk that companies for commercial reasons or in response to political pressures restrict speech, struggles to deal with the grey zone of things that may well be misinformation or hate in a broad sense, but are not necessarily illegal, and struggles to deal with the fact that misleading propaganda and vile attacks on minorities and other groups are often a defining feature of political discourse that in large part plays out online and through the news.
It is clear that with the surge in online falsehoods, misleading information, and hate seen in many countries, the predominant push now is to ‘do more’. But it is important to recognise that, while the most vocal current criticism of the status quo – including often from governments eager to add new tools to their arsenal – is that it is too permissive, human rights organisations and free speech advocates have long argued that we also have the opposite problem.
They have also argued that the notice-and-takedown systems that feed into content moderation by big digital platforms risk creating further incentives for platforms to respond to organised, powerful interests by exercising ‘private censorship’ and over-blocking – removing content that, on closer examination, is in fact legal and perhaps in the public interest, even if also sometimes problematic.
The truth of the matter is that no one has figured out what to do.
Despite years of intense public debate, elected officials in most countries have done nothing to change the rules of the game.
The 2016 US elections saw very significant problems, yet the 2020 US elections are playing out under largely the same rules.
The situation in Europe is more complicated and has seen some small incremental steps, with the European Commission largely following the suggestions of the EU High Level Group on Online Disinformation, which recommended steering clear of direct content regulation in favour of indirect approaches focused on supporting independent journalism, fact checking, and media literacy, forcing the big platforms to be more transparent, and trying to bring different stakeholders together for a collaborative response.
Some governments have gone further. Where they have, civil society organisations and human rights groups have often been overwhelmingly critical, with the UN Special Rapporteur on Freedom of Expression frequently pointing out that the laws passed have very vague definitions of the problem they purport to address, suggest disproportionate responses, and offer little in terms of due process. China is an interesting example here.
Some problems are simply hard to tackle head-on, even for governments who are happy to ban things and punish those who do not toe the line.
We clearly want more action against misinformation and hate online. Many possible responses will probably have to be indirect, and involve civil society, independent news media, fact checkers, as well as greater transparency – if necessary secured through stronger oversight and regulation – from platform companies, especially around their content moderation practices, including what they remove and reduce and why.
Others may involve the government or the judiciary.
The unsatisfying status quo answer to the question “who should decide what we can say online?” is a messy reality where, in democratic countries, politicians make the basic rules, courts enforce them, and individual private companies improvise, however clumsily and half-heartedly, their own further measures in line with their own business interests.
Governments in many countries are increasingly unhappy with this. Some of them prefer a much simpler answer to the question “who gets to decide what we can say online?” They say “we should!” That is the Chinese route, and some other countries seem drawn to it. It is not clear that this in fact reduces online misinformation and hate. But it might achieve other ends.
(Rasmus Kleis Nielsen is Director of the Reuters Institute for the Study of Journalism and Professor of Political Communication at the University of Oxford. He served on the EU High Level Group on Online Disinformation. He tweets @rasmus_kleis. This is an opinion piece. The views expressed above are the author’s own. The Quint neither endorses nor is responsible for them.)