AI Regulatory Sandboxes
Axel Spies
German attorney-at-law (Rechtsanwalt)
Dr. Axel Spies is a German attorney (Rechtsanwalt) in Washington, DC, and co-publisher of the German journals Multi-Media-Recht (MMR) and Zeitschrift für Datenschutz (ZD).
Germany lagging behind others in Europe
According to its Coalition Treaty, the German government wants to “promote […] Artificial Intelligence (AI), quantum technologies, cybersecurity, distributed ledger technology (DLT), robotics and other innovative technologies.” On April 13, 2021, the German Federal Cabinet approved the so-called experimentation-clause check: “In specialized laws, the possibility of ‘trying things out’ should be increased. […] For this reason, we want to examine, for each future law and within the framework of the departmental principle, whether freedom can be given to innovative services by including an experimentation clause.” However, the government does not seem to be giving so-called AI regulatory sandboxes enough attention. Other countries in Europe are doing better. The draft EU AI Act (already discussed here) calls for such sandboxes in each member state, as there is widespread concern in Europe that AI will infringe on privacy rights and trigger liability under the European Union’s General Data Protection Regulation (GDPR) and other laws.
What is a regulatory sandbox?
The term comes from computer science: a “sandbox” is an isolated environment in which software can be executed to test new technologies, here under regulatory supervision. Examples of this concept can be found above all in the field of financial technology (FinTech). A sandbox enables a company to test a technology and allows the authorities to gain practical insight, while granting the participating companies temporary relief from certain regulatory requirements. In other words, regulatory sandboxes are fertile ground for assessing the (privacy) risks of new technologies at an early stage.
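To make the computer-science origin of the term concrete, here is a minimal Python sketch of the isolation idea: an untrusted snippet runs in a separate process with a throwaway working directory, an empty environment, and a timeout. This is a toy illustration under those assumptions, not a production-grade sandbox (real systems add OS-level isolation such as containers or virtual machines), and the helper’s name is hypothetical.

```python
import subprocess
import sys
import tempfile

def run_in_sandbox(code: str, timeout_s: float = 2.0) -> subprocess.CompletedProcess:
    """Run an untrusted Python snippet with simple process-level isolation.

    Hypothetical helper for illustration only: a real sandbox would add
    OS-level isolation (namespaces, seccomp, containers, or VMs).
    """
    with tempfile.TemporaryDirectory() as workdir:
        return subprocess.run(
            # -I puts Python in isolated mode (ignores env vars and user site-packages)
            [sys.executable, "-I", "-c", code],
            cwd=workdir,        # confine file writes to a throwaway directory
            env={},             # hide the parent environment from the snippet
            capture_output=True,
            text=True,
            timeout=timeout_s,  # stop runaway code
        )

if __name__ == "__main__":
    result = run_in_sandbox("print('hello from the sandbox')")
    print(result.stdout.strip())  # hello from the sandbox
```

A regulatory sandbox transfers this idea to the legal sphere: the “isolation” is a supervised space in which certain regulatory requirements are temporarily relaxed, rather than a technical barrier.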
Why is a regulatory sandbox important for AI?
AI that goes rogue while processing personal data can cause tremendous damage and trigger fines under the GDPR. Backing from the regulators, at least in the form of advice within the sandbox, can be a tremendous boon, especially for smaller companies. The current poster child in Europe is the data protection authority (Datatilsynet) of Norway (not an EU member country) with its sandbox for “responsible AI.” Its guiding principles stem from the “ethics guidelines for trustworthy AI” drawn up by an expert group appointed by the European Commission. One focus was developing a “fair” algorithm in specific cases and establishing a risk management process. Other issues were how the transparency and data minimization required by the GDPR could be achieved and how to select training data. Datatilsynet publishes a detailed application process with all criteria on its website; the process also includes interviews with the applicants, a joint project plan, and workshops. At the end of the process (timeframe: six months), Datatilsynet publishes a detailed exit report.
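As a toy illustration of the data-minimization principle mentioned above, the following Python sketch keeps only the attributes needed for a stated training purpose and drops direct identifiers before records reach an AI pipeline. The field names and the allow-list are hypothetical; a real project would pair this with pseudonymization, a documented purpose, and a legal basis under the GDPR.

```python
# Toy sketch of GDPR-style data minimization before AI training.
# The allow-list and field names are hypothetical examples.

TRAINING_FIELDS = {"age_band", "region", "loan_amount", "repayment_status"}

def minimize(record: dict) -> dict:
    """Keep only the fields needed for the stated training purpose;
    direct identifiers (name, email, ...) are dropped entirely."""
    return {key: value for key, value in record.items() if key in TRAINING_FIELDS}

raw = {
    "name": "Jane Doe",           # direct identifier: dropped
    "email": "jane@example.com",  # direct identifier: dropped
    "age_band": "30-39",
    "region": "NRW",
    "loan_amount": 12000,
    "repayment_status": "on_time",
}

print(minimize(raw))
# {'age_band': '30-39', 'region': 'NRW', 'loan_amount': 12000, 'repayment_status': 'on_time'}
```

In a sandbox, questions such as which fields are genuinely needed, and how to document that choice, are exactly what participants can work through with the regulator.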
What does the (draft) EU AI Act state about it?
On December 6, 2022, the Council of the European Union, representing the EU member states’ governments, adopted its “general approach” to the AI Act. It states that “participation in the AI regulatory sandbox should focus on issues that raise legal uncertainty for providers and prospective providers to innovate, experiment with AI in the Union, and contribute to evidence-based regulatory learning. The supervision of the AI systems in the AI regulatory sandbox should therefore cover their development, training, testing, and validation before the systems are placed on the market or put into service, as well as the notion and occurrence of substantial modification that may require a new conformity assessment procedure.”
The AI Act in its current form thus encourages these sandboxes but does not make them mandatory. The European Commission is not aiming for a single, uniform sandbox (“real laboratory”) for the entire EU but leaves the setup to the competent authorities, if necessary across EU member states. If adopted, the AI Act will provide the legal basis for participants in an AI regulatory sandbox to use personal data collected for other purposes to develop certain AI systems, but it would not override the GDPR. The AI Act would also require an exit report from the participants and the regulatory bodies detailing the activities carried out in the sandbox and making the related results and learning outcomes public.
Where does Germany stand?
The German regulators for AI and the GDPR are lagging behind Norway, behind the French data protection agency CNIL with its EdTech sandbox and five announced “winners,” and behind Spain with its new AI sandbox, which is actively supported by the EU. Switzerland (also not part of the EU) offers AI sandboxes in the Canton of Zurich and has already received various industry applications. German legislators and regulators prefer the term “reallabs” (“Reallabore”) over “sandboxes.” The main driver for such sandboxes in Germany is currently the State of North Rhine-Westphalia (NRW), where an extensive project, Digi-Sandbox.NRW, is under development. Its website lists several reallabs in NRW, but none with a focus on privacy protection and AI. On December 12, 2022, the digital ministers of the German federal states adopted a resolution on the planned AI Act that NRW promoted. The resolution does not contain a specific timetable, but the ministers at least agreed that the provisions in the AI Act allowing personal data to be processed in an AI sandbox are in the public interest. The German Federal Ministry for Economic Affairs and Climate Action has issued its “concept” for a new Reallab Act with the goal of codifying overarching standards for sandboxes and experimentation clauses.
What should industry and regulators do now?
Germany should act quickly. Otherwise, there is a risk that AI developers will move to a country where they can test the privacy impacts of their software products and exchange the results freely without risking fines under the GDPR. Developers need help to understand the (GDPR) rules and to explore their options from the bottom up while developing and implementing AI. A sandbox enhances legal certainty for innovators and improves the competent authorities’ oversight and understanding of the opportunities, emerging risks, and impacts of AI use. It will also accelerate innovators’ access to markets, e.g., by removing barriers for small and medium enterprises (SMEs) and startups. But the process is resource intensive. To be a success, industry needs a central, one-stop concept with someone in charge whom companies can talk to. A framework for cooperation between the relevant authorities involved in the supervision of the sandboxes, as proposed by the EU Council, is likewise important for AI innovators. They also need clear rules on how to apply for participation in a sandbox and on the IP issues connected with it.
The draft AI Act provides that the data protection authorities must be involved in the operation of the AI sandboxes if the innovative AI systems process personal data, but it does not put them in the driver’s seat. The German data protection authorities (DPAs) should be at the forefront of this debate, offer proactive solutions, and assign significant resources, as Norway has done.