Tech Experts Are Calling for an AI Moratorium, but Is It a Solution?

Axel Spies

Dr. Axel Spies is a German attorney (Rechtsanwalt) in Washington, DC, and co-publisher of the German journals Multi-Media-Recht (MMR) and Zeitschrift für Datenschutz (ZD).

The Future of Life Institute has recently made waves with its Pause Giant AI Experiments: An Open Letter: “We call on all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4. […] AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts.” The open letter has so far been signed by more than 1,400 tech experts, among them Elon Musk, CEO of SpaceX and Tesla; Steve Wozniak, co-founder of Apple; and Yuval Noah Harari, the well-known author and professor.

In a recent interview with PBS, former Google CEO Eric Schmidt comments on the consequences of AI. He believes that AI needs “guardrails,” especially General Purpose AI (GPAI): AI systems that can perform generally applicable functions such as image and speech recognition, audio and video generation, pattern detection, question answering, and translation. GPAI is increasingly used for powerful applications in medicine, healthcare, finance, and the life sciences. Schmidt believes that we are on the cusp of a revolution due to GPAI systems such as ChatGPT, which gained more than 100 million users within a few weeks. In his view, we do not have a “philosophical basis” for how to deal and communicate with GPAI applications and for how we “think about ourselves.” GPAI applications are not “killer robots” but “an intelligence that is different, that comes to answers differently than we do” and that “keeps emerging with new capabilities.” GPAI will be “looking for your attention. […] We have not figured out how powerful this new technology is going to be yet.” While Schmidt is calling for “guardrails” because “things that are very dangerous seem to have been discovered in the raw modules” of AI, he is not calling for an AI moratorium.

Muted German Reaction

A moratorium on GPAI, or on AI more broadly, does not square with the German Coalition Treaty of 2021, which states optimistically that “key emerging fields [for the German Government] are … technological sovereignty and the potentials of digitalization, e.g., in artificial intelligence and quantum technology.” So far, no German politician has called for a moratorium. Rather, the debate in Germany is focused on specific AI issues, such as the use of GPAI for homework or exams in schools and universities.

Critics in Europe also accuse the Future of Life Institute of fueling the hype around this technology with its open letter. Others are concerned that a moratorium on AI would give players outside of Europe an advantage. In an interview with the German daily Frankfurter Allgemeine Zeitung, Christoph Meinel, head of the prestigious Hasso Plattner Institute, is also more guarded: “I often think to myself that if this energy were first put into the constructive building of AI systems, then we could at least have a say from our own experience. We have a bit of a predisposition to theoretically illuminate the boundaries, to grasp the risks very thoroughly, and to think that we can already give permanently valid answers and regulate everything now. That is simply too early; the phenomenon of AI is too new, too complex, and not yet conceivable in all its consequences.”

By contrast, the Italian data protection authority, il Garante, has temporarily “blocked” ChatGPT, citing in particular child-protection concerns. According to the latest press reports, OpenAI is in discussions with il Garante and has pledged to be more transparent about how it processes data and to take measures to address concerns under the GDPR. The company has also stated that it removes personal information from its datasets where possible and will fine-tune its models to reject user prompts asking for such information.

The EU AI Act Is Chugging Along but Covers Only Part of the Issues

As in Germany, the EU institutions are not very keen on a moratorium and are moving forward with the AI Act, the legislation Eric Schmidt explicitly refers to when he calls for “guardrails.” Several additional technical meetings are scheduled in the European Parliament for the coming weeks, and fourteen “compromise batches” covering the whole text have been discussed at the technical level. A political deal on the AI Act seems to be close. There have been some compromises, especially on the definition of AI, but big obstacles remain on GPAI. One of the main obstacles is how to classify it under the AI Act. The answer matters because the AI Act follows a risk-based approach that sorts AI systems into four categories: prohibited AI systems, high-risk AI systems, low-risk AI systems, and minimal-risk AI systems. Each of the latter three categories comes with a specific set of obligations and monitoring requirements for manufacturers, importers, and other actors. How does GPAI fit in? Is it a separate category of its own?

A new German study by the Applied AI Initiative examines the impact of the AI Act’s risk-classification criteria on innovation. It also discusses which questions need to be addressed to create more clarity and planning certainty. One of the study’s main findings is that approximately 18 percent of the AI systems examined fall into the high-risk category and 42 percent into the low-risk category; for the remaining 40 percent, it is unclear whether they fall into the high-risk category or not. The proportion of high-risk AI systems in this sample thus ranges from 18 percent to 58 percent. Most high-risk systems are expected in the following business units: human resources, customer service, accounting and finance, and legal. The classification is difficult but necessary: unclear risk classifications in the AI Act will slow investment and innovation, and the areas of unclear risk ratings are primarily critical infrastructure, workplaces, law enforcement, and product safety. GPAI is not addressed as a risk category of its own and does not fit neatly into any of these silos.

Eric Schmidt’s focus, by contrast, is more on content regulation, which is not covered by the AI Act: “We need to know who is on the platform and where the content is coming from. We need to know whether [the content] is authentic and how the platform makes its own decision.” However, these questions about the authenticity of information, vital for a democracy, will require a separate debate beyond Europe. According to Schmidt, “some form of agreement in America between the government and the industry is required […] to keep the most extreme cases off the platforms.”

The views expressed are those of the author(s) alone. They do not necessarily reflect the views of the American-German Institute.