Global AI regulation: how the EU’s AI Act is resonating across media, the public, and industry
As AI keeps growing in importance, governments, industry and stakeholders around the globe are asking themselves who should be regulating it, and what those rules and laws should be.
After months of negotiations, in March the European Union plunged head-first into the regulatory arena with its landmark Artificial Intelligence Act (AI Act), aiming to establish a broad framework for how AI is developed, used, and assessed.
🇪🇺 Democracy: 1️⃣ | Lobby: 0️⃣
I welcome the overwhelming support from European Parliament for our #AIAct —the world's 1st comprehensive, binding rules for trusted AI.
Europe is NOW a global standard-setter in AI.
We are regulating as little as possible — but as much as needed! pic.twitter.com/t4ahAwkaSn
— Thierry Breton (@ThierryBreton) March 13, 2024
Regulations set by the EU are highly influential, often becoming the cornerstone of corporate and governmental policies across the world, and this first ambitious act is being framed by the world's press as an important moment in the development of AI.
Every day, AI is getting a little more integrated into people's work and private lives, especially since the launch of ChatGPT in late 2022. But as it gets layered into the platforms we use and becomes a daily companion in our digitally-enabled behaviors, concerns about where AI might harm societies keep mounting.
From deepfakes, to scams and cyber security, to copyright, to AI's impact on employment, to existential threats looming around the rise of AGI (Artificial General Intelligence) – collective worries around artificial intelligence are rippling across society as a whole.
In this context, it's hardly surprising that over 75% of the audience talking about the AI Act perceives it with a significant degree of optimism and positivity.
But not everyone's happy with it. Some working on AI wonder how exactly these rules will hamper developers, as evidenced by a post from a pro-technology subreddit dedicated to Effective Accelerationism.
Others, notably Amnesty International's tech arm, criticized the Act for not doing enough to safeguard human rights.
But regulation is also something discussed early and often by the artificial intelligence industry itself. OpenAI CEO Sam Altman, arguably the most visible and important person in the industry, emphasized the need for AI regulations in May 2023, although critics argued that this push was aimed at creating barriers to entry for competitors.
Sam Altman, CEO of the start-up behind the AI chatbot ChatGPT, agreed with members of the Senate on Tuesday on the need to regulate increasingly powerful AI technology. https://t.co/iFjgQBBdwY pic.twitter.com/NNXJykGJMu
— The New York Times (@nytimes) May 16, 2023
Such statements occur against a backdrop of increasing audience concern, which spans from AI's potential impact on Hollywood and the ethical handling of creators' data…
Seen a lot of people on socials laugh about this short film created by #SoraAI saying things like "you can tell it's Ai lol" or "it just looks stupid" they are all failing to see that this is basically day zero in the Ai movie world
What does 5 years from now look like?We are… pic.twitter.com/Wob3KWvnzE
— BossLogic (@Bosslogic) March 26, 2024
…to the perils of exacerbating social inequalities.
we need AI laws passed right now
:( this is going to ruin so many womens lives https://t.co/eoO2U23RhT— sal 🫐 (@ghostinmypocket) February 15, 2024
The level of interest remained relatively modest until September 2022, when US big tech leaders advocated for ‘balanced’ AI regulation.
Following the Microsoft-Mistral deal, the passage of the EU AI Act sparked a notable 609% surge in Google searches in March 2024 compared to April 2023. This surge signals a significant shift of industry-led discussions into the public domain.
This mounting interest in regulation, and the media's understanding that the EU often acts as a 'first mover' in such cases, helped ensure that the conversation rapidly became a global one.
It was a dynamic that played out in two ways: English-language publications wrote up their reportage and opinion pieces at the same time as Francophone and Germanophone commentators layered their own analysis over reports from mainland Europe.
US giants' new (?) EU law lobbying tactic appears to be tipping funds into a couple of european AI startups who can be positioned as national champions + front 'local' pushback that demands reg carve out for Big AI. Step 3: Profit? https://t.co/p3bVOQi4l1 https://t.co/rKYtxH2Cyh
— Natasha 🧗♀️ (@riptari) December 5, 2023
Some of the editorials emerging from the US in particular not only acknowledged the precedent being set by the AI Act, but also saw it as an instance of the EU leaving other nations and geographies behind.
Both within Europe and outside, a plurality of individuals and communities engaged with the news story as it played out, exchanging news, information and hypotheses on the future of AI regulation.
These different communities can broadly be split across four categories: tech industry insiders in red, politicos in blue, news followers in green, and activists in purple.
Among these segments, tech industry insiders account for nearly 30% of the conversation, while EU political communities, represented by EU Policymakers and Tech Law Pros, contribute approximately 18%.
Naturally, these different communities spoke about the topic in different ways, and with differing degrees of positivity.
EU politicos are most optimistic about the AI Act, amplifying posts that suggest the regulation should provide a global standard. Liberal news followers and activists are similarly positive, although more likely to characterise the Act according to what it prevents.
The European AI Act is established.
Chinese-style citizen monitoring systems that reward or punish citizens for their behavior will be banned.
Also, profiling based on facial recognition is forbidden.
Interesting to follow!https://t.co/UDIUrkPJBp— Timothy Robert (@timingnl) December 9, 2023
US AI entrepreneurs and tech lawmakers, meanwhile, hold a less optimistic outlook on the Act than their EU counterparts. They raise doubts as to its effectiveness, particularly concerning potential loopholes such as the regulation of copyright and foundation models, which remain largely unaffected by the current legislation.
Beyond positivity and negativity, mapping how often each community engages with the different aspects of AI regulation gives us a sense of where their priorities, and concerns, lie.
Activists are notably vocal about deepfakes, cybersecurity and monopoly, with particular attention on the imbalance of power between large tech giants and smaller startups. They fear that the regulation, though intended to mitigate the advantages enjoyed by larger entities, may instead stifle social equality and innovation, especially from individual creators.
The bad thing about this AI regulation is that its effect will be to limit AI business to big venture capital funded companies like openAI, Google, Meta, IBM, Microsoft, maybe SAP and such giants. Smaller innovative companies cannot afford the certification, documentation and…
— Dan Aulkerman (@blafasel42) December 16, 2023
Within industry, on the other hand, there is a focus on open-source versus closed-source AI. The AI Act pushes companies in the direction of the latter, dismaying a number of AI professionals who believe the former creates more transparency and opportunity for start-ups.
The Microsoft-Mistral deal provided something of a flashpoint for these debates, as the French company adopted the closed-source standard of its US-based partner.
In other words, Mistral is no longer (just) a European leader and is backtracking on its much-celebrated open source approach. Where does this leave the start-up vis-à-vis EU policymakers as the AI Act's enforcement approaches? My guess is someone will inevitably feel played. 8/8
— Luca Bertuzzi (@BertuzLuca) February 26, 2024
This deal was viewed in the context of a US-vs-EU race, with some within Europe lambasting the sale of a stake in one of the continent's leading lights.
Not that Mistral and Microsoft were the only companies that ended up attracting audience interest:
OpenAI was thrust into the spotlight following a viral clip in which the company's CTO suggested she didn't know where Sora's training data came from. This helped spark a huge public debate about the urgent need for AI regulation and the importance of reclassifying risk tiers.
Me: What data was used to train Sora? YouTube videos?
OpenAI CTO: I'm actually not sure about that...(I really do encourage you to watch the full @WSJ interview where Murati did answer a lot of the biggest questions about Sora. Full interview, ironically, on YouTube:… pic.twitter.com/51O8Wyt53c
— Joanna Stern (@JoannaStern) March 14, 2024
Tumblr, over the same period, saw a rise in conversational volume of 87.5% (compared to Nov '23), the greatest relative rise of any brand. The revelation that it had sold user data to OpenAI and Midjourney stirred controversy among Tumblr users. Despite the availability of an opt-out feature allowing users to reject data selling for AI training, many are upset about the use of their personal content in this manner.
When the audience categories introduced previously, industry insiders, politicos, news followers and activists, are mapped onto the most mentioned companies, it becomes apparent that different communities relate to brands in very different ways.
Industry insiders' focus, once centered squarely on OpenAI, has now shifted to Google owing to reports that the company was set to reach an agreement with Apple to power iPhone AI features using Gemini.
Even before these discussions, Google's Gemini had already emerged as a focal point of industry conversation.
It happened. AIs are now getting too smart to test openly. They can tell something is "different".
And people wonder why Gemini and Bing act so crazy. It's us. Anyone ever read 2001?
We lie and they know it. 🤫 #AI #llm https://t.co/v3Fe8bh8xv
— Curiouser llc ✡️ (@soi) March 6, 2024
Besides the furore that had surrounded the apparent omission of white men from generated images, the company's apology to the Indian government for unsubstantiated comments made by Gemini, and its own description of the platform as 'unreliable', have fueled industry skepticism.
At the same time, politicos and lawmakers focus on OpenAI, and the salutary example its recent public fallout offers to others within the field.
That’s absolutely right.
The OpenAI saga shows how private ordering and self regulation can be defeated by competitive market responses.
Hence the need for public interest regulation. The AI Act is important. Let’s not botch the details. https://t.co/qhytfNv0B9
— Nicolas Petit (@CompetitionProf) November 21, 2023
Finally, news followers tend to focus on the Microsoft-Mistral deal, a piece of corporate dealmaking conducted in the shadow of the AI Act, which may or may not (in the opinion of commentators) have influenced the precise shape of the legislation itself.
The news of the day is that #Microsoft took a share into #mistralai, creating shadows over France's real objectives when deregulating foundation models in the #AIAct. Was it a question of defending a European champion or guaranteeing a US investment? https://t.co/z2o3YW4j6i
— Innocenzo Genna (@InnoGenna) February 27, 2024