The General-Purpose AI Code of Practice (GPAI CoP) was meant to ensure that trustworthy AI gets built. But a growing number of civil society groups and experts warn it could end up serving the interests of big technology companies instead of protecting the public from AI risks. Can public pressure still change its course?

The Code Under Fire

At the AI Action Summit in Paris, the conversation revolved around economic opportunity rather than AI safety or the protection of fundamental rights. According to many civil society representatives, the same shift is playing out in the drafting of the Code of Practice for general-purpose AI models, the systems behind tools like ChatGPT.

The Code is being drafted by thirteen independent academic experts, with input from nearly a thousand participants from industry, academia, research, and civil society. Its purpose is to spell out how providers of general-purpose AI models should comply with the EU AI Act. Yet some organizations are already considering abandoning the process altogether.

“If this is merely an exercise to create the appearance of a fair process, then it is pointless,” said Karine Caunes, who heads Digihumanism and edits the European Law Journal.

Other human rights organizations involved in the GPAI CoP share her frustration. Some academics and NGO representatives are even considering walking out in protest, convinced that their input is being ignored.

The Risk Taxonomy: Where the Trouble Starts

The core of the Code is its taxonomy of AI risks. In its current form, however, it falls short of what civil rights groups had hoped for.

The very first article of the EU AI Act states that its purpose is to protect fundamental rights, notes Sarah Andrew of the campaign group Avaaz.

Yet the Code relegates risks to fundamental rights to a category of “additional considerations” rather than treating them as core risks. And while items such as large-scale illegal discrimination and harmful manipulation do appear on the risk list, they have been qualified with language that downplays their weight.

If the risk taxonomy is flawed from the outset, the entire risk-management system built on it will be flawed too, Andrew warns.

What’s Missing: Independent Audits and Training-Data Transparency

AI safety organizations point to another gap: the Code does not require independent security audits, nor does it oblige companies to disclose how their models are trained.

Experts warn that if companies are left to evaluate their own models without external scrutiny, the system will be easy to game. Training-data transparency is another contentious point: tech companies are reluctant to disclose details about the data used to train their systems, citing copyright exposure and data-protection concerns.

The Security Coalition, whose signatories include the prominent AI researcher Stuart Russell, sent a letter to the Code’s chairs with four key demands:

  • Make independent evaluations of AI models mandatory.
  • Allow more time to assess high-risk systems before they are deployed.
  • Define clear safety thresholds beyond which a model is considered too dangerous to deploy.
  • Establish procedures for dealing with risks that have not yet been anticipated.

“If these changes were adopted, the chairs would be following the best expert practice on identifying and mitigating risks,” argues one of the letter’s signatories.

Big Tech Pushes Back

While civil society groups are calling for tougher rules, the tech industry sees things very differently. Even before the Paris summit, Meta and Google made clear they oppose the Code in its current form.

Meta’s chief lobbyist, Joel Kaplan, dismissed it as impractical and unworkable, while Google’s Kent Walker called it “a step in the wrong direction.” Both companies argue that the Code imposes obligations that go beyond the EU AI Act itself.

“That is simply not true,” Caunes responds. The Code is not meant merely to restate the law; it is meant to go beyond it, she argues.

Some of these complaints, however, appear to be finding an audience in Brussels, where the European Commission has begun talking about “simplifying rules” to boost economic growth.

A telling moment came at the Paris summit, where Google CEO Sundar Pichai delivered the closing address. That a company lobbying for lighter regulation was given such a prominent slot did not go unnoticed.

Google is quietly pressuring the Code’s chairs to water down its references to EU law, while also getting the last word at the summit, Andrew commented.

What’s Next?

The struggle over the Code of Practice lays bare the clash between the public interest and the interests of big technology companies. Civil society groups are fighting for transparency, safety, and human rights; the companies are pushing for fewer obligations and greater freedom of action.

So far, civil society organizations have not formally withdrawn from the drafting process, but their patience is wearing thin. If their concerns continue to be ignored, they may turn to more visible forms of protest.
