Why We Need to Think about a “Disinformation CERT”

Stefan Heumann

Stiftung Neue Verantwortung

Stefan Heumann is Co-Director of Stiftung Neue Verantwortung (SNV), a Berlin-based nonprofit think tank working at the intersection of technology and public policy. He has worked and published on a wide range of issues in this field. His opinion pieces and commentary have appeared in German and international media outlets such as The New York Times, Financial Times, Politico, The Economist, Süddeutsche Zeitung, and Spiegel Online. Stefan Heumann is a member of the German Parliament’s Expert Commission on Artificial Intelligence. He is also a member of the advisory board on technology policy assessment of the German National Academy of Science and Engineering (acatech). Stefan holds a PhD in political science from the University of Pennsylvania.

Disinformation should be regarded as a real threat to public debate and democracy. A holistic response is necessary, argues Stefan Heumann in AGI’s new report, “Defending Democracy in the Cybersphere.”

In order to function properly, democracies depend on open deliberations and fact-based discussions. As our public discourses have increasingly moved online, social media platforms have become critical infrastructures for our democracy. Citizens and politicians use these platforms to share information, to engage in discussions, and to inform themselves. If these platforms are used to spread disinformation, manipulate our discourses, and undermine our ability to engage in fact-based conversations, our democracy suffers as a consequence.


How can we better protect our democracy against disinformation on social media? This is a difficult and complex problem. There are no easy solutions or silver bullets. At the core, there is an information problem. Disinformation campaigns are usually identified and publicly exposed when it is too late—when the disinformation has already been widely shared and spread and the damage is done. In many cases, we still do not fully understand the nature and scale of the problem. Social media companies have been reluctant to share information regarding malicious activities on their platforms. Most information we have has come courtesy of government investigations such as those conducted by the U.S. Congress into Russian election interference. And for a long time, independent researchers have complained about their lack of access to relevant, non-personal data from social media platforms, such as Facebook, to study the problem.

This does not mean that social media platforms are not aware of the problem. To varying degrees, they have taken measures to identify disinformation campaigns and to respond with countermeasures. In the run-up to the U.S. midterm elections in 2018, Facebook created a war room to study and analyze disinformation campaigns in real time and to quickly come up with responses to undermine their effectiveness. Twitter has also released data related to Russian and Iranian attempts to manipulate political discourse for independent investigation. The release includes data from 3,841 accounts linked to Russia’s Internet Research Agency. The Atlantic Council’s Digital Forensic Research Lab conducted a detailed analysis of the released data and published the results, showing, for example, how the accounts were used to promote the presidential candidacy of Donald Trump.

The problem is not confined to social media platforms like Twitter or Facebook. Since it launched its Internet search service, Google has observed attempts to manipulate search results. This problem has only grown in scale and complexity given the high impact of search rankings on access to and distribution of information. The affected companies can take a wide range of measures in response, ranging from content take-downs on Facebook to deletion of listings on Google in cases where disinformation violates terms of service or is deemed illegal. Softer approaches do not remove content but decrease its impact through changes in the algorithms that determine its ranking in newsfeeds and search results.

Given the threat of disinformation to the robust public debates on which democracy depends, the public has a strong interest in understanding the disinformation problem and in holding companies accountable for developing and implementing effective responses. But this is easier said than done. Social media platforms are owned and operated by private companies. Currently, these companies make and enforce rules on their platforms and respond to the disinformation problem as they see fit, without much government oversight or scrutiny.[1] However, public exposure of the techniques used on these platforms to amplify disinformation could make the problem even worse, as other actors with malicious intent could gain valuable insights into how to exploit these platforms to maximize the distribution and impact of their own propaganda campaigns.


The nature of this problem is not new. For many years, we have confronted a similar challenge regarding cybersecurity. Information technology (IT) and networked computer systems play a central role in our economy and society. They run our energy systems, organize work plans and treatment in hospitals, and manage our banking operations. All these systems are owned and operated by private companies. Cyberattacks directed at these IT infrastructures could have devastating real-world impacts, such as an extended breakdown of the energy supply or a large-scale manipulation of the financial system. Thus, the detection of cyberattacks and the quick development of effective countermeasures are crucial.

In order to share information about potential threats and provide assistance for effective responses, industry and government agencies have developed a cooperative approach. Computer Emergency Response Teams (CERTs) have been created to facilitate information sharing and the development and implementation of countermeasures across the private sector and the government. The basis for their effectiveness is trust, which facilitates information sharing and cooperation. Information sharing is crucial to quickly identify new threats and to cooperatively develop countermeasures that not only protect individual companies but also raise the level of protection for an entire sector or industry.


We need a similar approach to counter disinformation campaigns. State actors have developed especially sophisticated strategies for disinformation that take advantage of the interconnections between the platforms. For example, social media amplification on Twitter might be used to boost the ranking of certain videos on YouTube, in Google’s search results, or in Facebook’s newsfeed. Thus, we need a more holistic approach to understanding and countering disinformation rather than the siloed approach of looking at each platform separately. Such a holistic approach can only be taken if social media and online platforms share information and coordinate their responses. This is what a “Disinformation CERT” could facilitate.

Implementing the idea of a “Disinformation CERT” is not without challenges. Serious questions need to be addressed and resolved. What kind of information should be shared, and at what level of detail and granularity? Since disinformation campaigns are instigated by and interact with real users, there are also serious privacy concerns that need to be addressed. Sharing information about public campaigns on Facebook or Twitter should not be a problem. But what about information that is gleaned from restricted groups and chats?

What kind of countermeasures can a “Disinformation CERT” take? And what can the platforms share about countermeasures they have taken themselves? Such measures are sensitive and will be highly controversial because of their impact on basic human rights such as freedom of speech. Social media platforms will seek to avoid any cooperation and information sharing that might draw even more public scrutiny to their power to influence digital public discourse. Their reluctance to engage in such cooperation will grow even greater if government agencies participate in the “Disinformation CERT” and learn about serious problems on the platforms.

All these challenges need to be considered earnestly. But we have no choice but to explore the idea and to find a solution. The alternative—doing nothing and leaving the problem to each platform to address as it sees fit—would cause much greater harm. Given what is at stake, we cannot afford to sit back and merely hope that the problem will solve itself. The recent initiatives by the platforms show that they recognize the problem. Setting up a “Disinformation CERT” would show that they are also serious about addressing the problem and willing to accept accountability for it.

If voluntary action does not produce adequate results, governments should consider taking a regulatory approach. Here again, the field of cybersecurity can offer initial lessons on how to do this. In order to facilitate information sharing and to coordinate effective responses to IT threats, governments have deemed certain IT systems critical if they support the functioning of important infrastructures in essential sectors such as transportation, energy, or health. A similar approach could be taken with large social media and online platforms. Based on their importance to public debate and access to information, they could be designated as critical infrastructure, much like energy or transportation companies. This would provide the basis for the government to take a more proactive regulatory approach to the information-sharing problem.

Designation as critical infrastructure has two important implications. First, operators of critical infrastructures have to share information about attacks against their IT systems with the government. This gives the government the ability to better understand the scale and nature of attacks and to craft appropriate responses in collaboration with the private sector. Second, the government can use its knowledge of attack vectors to define higher technical security standards and enforce their implementation.


Experiences from improving cybersecurity need to become part of our discussion about how to address the disinformation problem. The two areas have much in common. Attackers have an advantage over defenders. Both are dynamic problems in which the strategies of attack and defense constantly evolve. Thus, there are no quick fixes. Cybersecurity and the struggle against disinformation both require a long-term approach based on information sharing and cooperation. In cybersecurity, CERTs play a crucial role in this long-term approach. It is time to recognize that we need a similarly long-term approach to the disinformation problem. The challenges should not discourage us. Discussing and developing the design and implementation of a “Disinformation CERT” will be essential to making any long-term strategy against digital propaganda effective.


[1] The Code of Practice on Disinformation that the EU Commission agreed to with Internet platforms to better protect the 2019 European elections is a first step toward increasing accountability. But the Code is voluntary and largely leaves it to the companies to decide how to implement it.

The views expressed are those of the author(s) alone. They do not necessarily reflect the views of the American-German Institute.