On 21 April 2021, the European Commission presented the “Artificial Intelligence Act” (AIA), containing harmonized legislative proposals for the 27 member states. If successful, it could follow the path of the 2016 European GDPR (General Data Protection Regulation), which has become the de facto global standard for privacy protection (an influence known as the “Brussels effect”). The AIA has its origins in the 2019 “Ethics Guidelines for Trustworthy AI” prepared by the Commission’s High-Level Expert Group, which received extensive commentary from the AI Alliance and other civil society members.
The AIA defines as “high-risk” those technologies that collide with or impinge on the fundamental rights of EU citizens. Thus, the use of AI-powered “subliminal techniques” to manipulate people is banned outright, while the deployment of live facial recognition or surveillance software in public places is severely curtailed, except in cases such as searches for missing children. Vendors of AI services in criminal justice (sentencing) or social welfare provision (subsidies) are required to submit documentation proving safety, risk assessment, and the explicability of decision-making, as well as guarantees of human oversight (keeping “humans in the loop”). They also have to serve notice of interaction with non-humans when utilizing chatbots in customer relationship management, for example, or computer-generated materials (“deep fakes”). As with anti-competitive practices in the sector, violations would be liable to fines of €30 million or 6% of global sales, whichever is higher.
The AIA is not without critics, largely for contradictory reasons: “too little”, “too much”, “too late”, or “too soon”. Some claim the language is too vague (what exactly do “trustworthy” or “transparent” mean?); that provisions are not sufficiently “robust”, giving companies latitude to “self-certify” compliance; or that measures are of dubious implementability. Others argue it would have a “chilling effect” on AI technologies and their business potential, with the EU already regulating aggressively on the basis of still merely hypothetical harms. Still others allege it has been too long in coming; consider the victims of the childcare benefits scandal in the Netherlands. Dutch tax authorities wrongly accused more than 25,000 parents of fraudulent claims between 2013 and 2019 due to faulty software, obliging them to return the money. The scandal’s discovery led to the collapse of the national government, triggering elections last March.
Although it may still take years before the AIA is passed into law, the EU is certainly the first mover in comprehensive AI regulation and is well positioned to reap the advantages. As for the above-mentioned criticisms regarding the timeliness and breadth of the proposals, no one really knows for sure what the future holds, and this is true for AI as well. Any piece of legislation can be improved upon, but that will always be easier if you have one already in place. It also helps if you’re backed by a 450 million-strong market of fairly well-heeled consumers, Brexit notwithstanding. Strength lies in numbers. Tech giants and wannabes will have no choice but to toe the line. For example, neither Google nor its parent, Alphabet, can afford to give up their European business as if it were China. Foreseeably, companies will tailor their product strategies first to the European market, trying their best to comply with the rules, and adjust from there.
US commentators are quick to point out that the AIA could just be Europe’s way of “compensating” for being a technological backwater; its regulatory, bureaucratic zeal effectively ensures that it remains one. Indeed, none of the major AI players was born on European soil. In contrast, the US is often said to be friendlier to innovation, adopting a “wait and see” attitude before laying down rules and encouraging firms to “move fast and break things”. Americans prefer to let the market, not the government, decide who the winners ought to be. Consider the current controversy over whether consumers should “pay” a premium for privacy (Apple’s iPhone) or allow data to be gathered and sold to keep Internet services free (Facebook). The US does not have an all-encompassing set of laws governing AI, but not for want of trying. Instead, the approach has been piecemeal. For example, the Federal Trade Commission has issued warnings against algorithmic racial bias in employment, housing, credit, or insurance; California has legislated the Consumer Privacy Act covering personal data access, storage, accuracy, security, use, and third-party sale or sharing; and a motley crew of states and cities have had their share of initiatives restricting facial recognition in policing. This lack of uniformity may spell an opportunity for a few bit players, but for the majority, it’s more of a heavy regulatory burden. The US response may very well be sour grapes at what the supposedly sclerotic EU has been able to accomplish.
So far, there has been no official reaction to these proposals from China, which has been busy reining in its own AI champions such as Alibaba, Tencent, and Ant Group, mainly on anti-competitive grounds. Data collection, privacy, and surveillance practices do not seem to be a concern for the Asian behemoth.

Although laws are local, the digital economy plays by global rules. By moving first, the EU has the chance not only to set the agenda, but also to create the regulatory benchmark for AI. Legislation need not stifle technological innovation; it could even provide a boost by conferring security and respectability (see the development of cryptocurrency exchanges). This is a huge step forward in putting AI, which remarkably extends, augments, and enhances human agency, at the service of genuine flourishing.