Demise of the “Tech Utopia”: Toward Shared Responsibility
Annie Chen
Research Intern
When it comes to regulation, no phrase captures the Silicon Valley ethos better than Ben Horowitz’s maxim that, “in Silicon Valley, when you’re a private company, the entrepreneur can do no wrong.” For years now, Silicon Valley has ridden a wave of success, buoyed by the assumption that big tech was immune to government regulation. But in light of recent events, the problem appears to be less about regulation than about responsibility. After dubious political advertising and foreign bot activity emerged as prominent influences in recent elections, Western governments have been scrambling to check the big social media platforms.
Following the U.S. presidential election last November, Washington has put increasing pressure on social media companies to tighten their filtering mechanisms. Almost a year later, on November 1, tech giants Facebook, Google, and Twitter faced the Senate and House intelligence committees, where they revealed the extent to which their platforms had been a hotbed for Russian trolls and bots. Across the Atlantic, a similar desire to crack down on big tech has produced a more heavy-handed approach. A German social media law, which came into force on October 1, threatens any company that fails to remove hate speech within 24 hours with fines of up to €50 million. One Facebook spokesperson has voiced concern that the new law “provides an incentive to delete content that is not clearly illegal and would have the effect of transferring responsibility for complex legal decisions from public authorities to private companies.”
It is this latter apprehension that is often obscured by an over-emphasis on the free speech debate. The conventional narrative frequently pits tech giants, as guardians of free speech and Internet openness, against a state seeking to encroach upon those rights. Despite legitimate anxieties about censorship, there is little disagreement that something must be done about the ease with which information can be distorted on the Internet. If some parameters for online content must be set, what, then, is the appropriate division of responsibility between the government, the private sector, and individuals?
When Americans are asked who bears the duty to regulate misinformation online, 45 percent say the government bears “a great deal of responsibility,” 42 percent say so for social media corporations, and 43 percent for individuals. Perhaps more revealing is the fact that only 15 percent hold this expectation of all three groups, while 58 percent place the onus on only one or two. Technology experts, corporate practitioners, and government leaders surveyed as part of a Pew report concurred that the burden of addressing misinformation will increasingly fall on governments rather than service providers.
For the social media giants, more responsibility is a tough pill to swallow; their identities as corporations and their obligations as publishing platforms pull in opposite directions. Their reluctance to describe themselves as traditional media or publishing outlets stems from the additional ethical obligations that attach to media companies. Accepting those obligations would ultimately mean moving away from algorithms designed for engagement, which maximize the number of clicks (and therefore profits), toward a “responsible algorithm.” Precisely what this entails in practice is still unclear.

Nonetheless, critics have been right to push for greater accountability. For many years, tech companies have hidden behind the protective veil of Section 230(c) of the Communications Decency Act (CDA), an outdated 1996 telecommunications law exempting tech firms from liability for content posted by their users. However, with 62 percent of U.S. adults now receiving news on social media, the online environment is no longer what it was in 1996. Facebook’s long-touted rationalization that its raison d’être is to connect users, rather than to publish content, is no longer an acceptable excuse. Mark Zuckerberg’s initially defensive position has eroded, and Facebook’s CEO has come to terms with the changing tide of opinion. Over the past few months, Facebook has experimented with creative solutions in an attempt to assuage criticism and deflect overreaching government measures. These efforts have already been met with resistance and denounced as “downright Orwellian,” but some degree of regulation was likely a long time coming.
This is not to say that governments are entirely off the hook. A paradox emerges whereby forcing corporate giants to monitor their own platforms gives rise to worries about the “privatization of state censorship.” While governments have feared the seemingly unrestricted power of big tech, permitting companies to police themselves may in fact strengthen their hand. Edward Snowden, the former NSA contractor who sparked a global debate on Internet privacy, may be correct to warn that “a company should never be deputized to do the work of a government. They have entirely different goals, and when you start crossing those lines that creates unintended consequences at unforeseen costs.” Indeed, the fact that the proliferation of misinformation is a pervasive problem across the tech industry, and not one localized to a single company, means that governments cannot avoid tackling sensitive questions by offloading responsibility onto the corporate world.
Although it is tempting to point fingers, the lesson to be drawn from all this is that online security is a shared responsibility. Public-private cooperation on network security is still a relatively new phenomenon, and it will inevitably take time to settle into a balance agreeable to all parties. A multi-stakeholder approach in the spirit of the Internet Corporation for Assigned Names and Numbers (ICANN) or the European Union Agency for Network and Information Security (ENISA) could be a starting point for distributing responsibility: corporations can be accountable for screening within their own networks, while governments should intervene when wider national-level threats are involved. As for the average Internet user, the revelations of inherent vulnerabilities in cyberspace should be a call to exercise vigilance.