Germany: Disinformation in Pandemic Times
Matthias C. Kettemann
University of Innsbruck
Matthias C. Kettemann is Professor of Innovation, Theory and Philosophy of Law and head of the Department for Theory and Future of Law at the University of Innsbruck, Austria. He is head of the Research Program on private communication orders at the Leibniz Institute for Media Research | Hans-Bredow-Institute, Hamburg, and leads research groups on international law and the internet and on platform governance at the Humboldt Institute for Internet and Society, Berlin, and the Sustainable Computing Lab at the Vienna University of Economics and Business.
Germany responded cautiously to influence operations during the COVID-19 outbreak. In the debates around German platform regulation, hate speech, not misinformation, has historically occupied the forefront. Alongside clear prohibitions on certain categories of serious antisemitism and other qualified hate speech and dehumanizing expressions, the idea that the process of political opinion formation must remain free from state interference is regarded as a crucial component of Germany’s constitutional order. Excessive domestic government influence on the negotiation of the norms of information behavior is considered an even greater risk to the country’s liberal democracy than foreign influence operations.
Nevertheless, the rhetoric of both politicians and the general public reveals a fear that influence operations may undermine the nation’s democratic process. In a June 2021 statement, then Foreign Minister Heiko Maas pointed to “players and states” that “are employing dishonest tactics to meddle in democratic processes as well as election campaigns in other countries.” Similarly, 82 percent of respondents in a Forsa survey conducted for the State Media Authority North Rhine-Westphalia agreed (‘totally’ or ‘somewhat’) that political misinformation endangers Germany’s democracy.
How does Germany define disinformation?
It doesn’t. There is no official definition, and courts have been very reluctant to use the term. No laws regulate “disinformation.” Disinformation is difficult to define neatly. Even accurate information, when stripped of its context or deployed strategically, can be disinformative. Disinformation may even consist mostly of accurate information used selectively to influence political discussions and interpretations. Satire, for example, does not usually constitute deliberate disinformation because its authors primarily seek to entertain rather than influence; nevertheless, satire too can shape political discourse. It therefore makes sense to speak of disinformation only when false information is disseminated deliberately for a strategic goal, such as lowering trust in the media, discrediting individuals or groups, or discouraging participation in electoral processes.
Does disinformation threaten our social order?
No. Lies have always existed; disinformation as a strategic form of online lying is a new phenomenon, but even in the aggregate it does not (so far) lead to a fundamental change in values in our societies. Those who study disinformation are confronted with the problem that the concrete impact of individual pieces of disinformation is difficult to measure. People gain their knowledge from many different sources; their media repertoires have become broader, not narrower, as a result of the Internet; traditional media still play a central role in news reception—except among the young and the young at heart.
But it is true: prejudice and lack of knowledge make us more susceptible to misinformation and disinformation. Because of the human tendency toward confirmation bias, readers prefer to believe information that confirms their opinions and are more likely to reject information that would destabilize their worldview. Disinformation actors exploit this, and they increasingly use the Internet as a vehicle for strategic dissemination. People often judge the value of news by their trust in the medium that carries it, not in the news itself. This reliance on trust is understandable, since people are hardly in a position to verify all the necessary information themselves at any given time; they must be able to trust the good work of journalists. This traditional and fundamental basis of trust, however, must be redefined for news producers in the online world.
The legal dimension of the discussion
States have a primary responsibility to protect human rights and fundamental freedoms, including in the digital environment. The state’s duty to protect does not end at the keyboard or the smartphone. However, things get complicated when these fundamental freedoms conflict, because states have a negative obligation not to violate human rights, but also a positive obligation to protect human rights and put them into practice. Every intervention thus represents a balancing act that must always be evaluated in the specific context. States attempt to fulfill this obligation by creating a legally secure environment on the Internet as well—through their own laws and by monitoring the rules of the platforms.
Online communication spaces are regulated at various levels: international law, regional standards, national laws, and the private law of the platforms. States cannot easily ban most disinformation, since most of it is neither worthy of punishment nor punishable under current law. Disinformation about individuals can be banned if it contains false statements of fact rather than mere opinions; insult and defamation can be prosecuted as criminal offenses. New rules that seek to restrict the right to spread falsehoods, however, must be reconciled with freedom of expression.
Should disinformation on social media be actively combated or ignored?
As the German Federal Court of Justice confirmed as recently as late July 2021, platforms may generally decide for themselves which content to delete under their terms of use or community guidelines, whether or not that content is illegal. (They must, however, provide users with legal recourse quickly after deletion.) Platforms are regularly unable to say reliably whether a posting constitutes disinformation. Deletions should therefore not be carried out excessively under the label “disinformation.” Nor does resorting to algorithmic content regulation always seem effective, at least in the fight against disinformation, because identifying and evaluating this content is too complex for automated processes. Nevertheless, platforms regularly report success against larger networks of “coordinated inauthentic behavior”: actors are identified at great expense and banned from the platforms. Whether these decisions were correct and appropriate is disputed from case to case. A lack of transparency about such decisions and a lack of opportunities to appeal them, however, make these procedures themselves a problem.
In addition to banning actors, social media companies can also influence the spread of misinformation. Once identified as false, information can be downranked within the news feed of the respective platform so that it is seen and shared less often. Users who interact with such information can also be alerted to problematic content or guided by links to trusted studies or articles from established news outlets on the topic. The effect of such notices is contested, however: users may perceive them as lecturing, which can end up reinforcing the disinformation.
It therefore regularly makes little sense to mark content as untrustworthy. Users are more likely to perceive such labels as an attempt to influence them. People readily see their personal freedom violated when an authority tells them not to do or believe something, with the consequence that they share the disinformation all the more. Such ‘boomerang’ effects must be avoided. On the other hand, if a notice is worded too neutrally or ambiguously (e.g., “For more information, go to…”), it may not be clear that the notice is intended to correct the message.
So how should we deal with disinformation?
Truth must always fight for its place in societies. Taking legal action against untruths is generally ruled out, not least because law lacks the scalability and speed to keep up. Such measures are constitutionally justifiable only if there is a sufficiently high probability of imminent danger to individual legal interests, such as life and physical integrity. Information flows before and during elections also require special protection, because untrue statements may not be corrected in time.
In addition to government measures, voluntary labeling and fact-checking procedures are particularly suitable, as are obstacles to the spread of disinformation that can be built into recommendation algorithms. Particularly effective in the fight against disinformation, however, are systemic interventions to raise “information literacy,” i.e., the ability to contextualize and assess information, in all phases of life, inside and outside of formal educational structures. Combating disinformation is a social and societal task, not primarily a legal one.