{"id":8973,"date":"2025-05-16T14:07:42","date_gmt":"2025-05-16T14:07:42","guid":{"rendered":"https:\/\/www.pulse-z.eu\/the-ai-code-of-conduct-a-tool-for-trust-or-a-shield-for-big-tech-2\/"},"modified":"2025-06-12T08:59:54","modified_gmt":"2025-06-12T08:59:54","slug":"kodex-spravania-umelej-inteligencie-nastroj-dovery-alebo-stit-pre-velke-technologicke-spolocnosti","status":"publish","type":"post","link":"https:\/\/www.pulse-z.eu\/sk\/kodex-spravania-umelej-inteligencie-nastroj-dovery-alebo-stit-pre-velke-technologicke-spolocnosti\/","title":{"rendered":"The AI Code of Conduct: A Tool for Trust or a Shield for Big Tech?"},"content":{"rendered":"\n<p><span style=\"font-weight: 400\">The <\/span><a href=\"https:\/\/digital-strategy.ec.europa.eu\/en\/policies\/ai-code-practice\" target=\"_blank\" rel=\"noopener\"><span style=\"font-weight: 400\">General-Purpose AI Code of Practice (GPAI CoP)<\/span><\/a><span style=\"font-weight: 400\"> was meant to ensure that trustworthy AI gets built. But a growing number of organisations and experts warn that it may end up serving the interests of large companies rather than protecting the public from AI risks. Will public outrage over this even matter?<\/span><\/p>\n<p><b>The Code Under Fire<\/b><\/p>\n<p><span style=\"font-weight: 400\">At the AI summit in France, the conversation was dominated by economic interests, with little said about AI safety or the protection of fundamental rights. And according to many civil society representatives, the same dynamic is playing out in the drafting of the AI Code for general-purpose models such as ChatGPT.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The Code is being drafted by thirteen academics, with input from a broad range of experts, non-profits, scientists, and companies \u2013 nearly a thousand participants in total. 
It&#8217;s meant to spell out how the providers of general-purpose AI models like ChatGPT should comply with the EU&#8217;s AI rules. But some groups are already considering abandoning the process altogether.<\/span><\/p>\n<p><span style=\"font-weight: 400\">&#8220;If this is just an exercise to make the decision-making look fair, then it&#8217;s pointless,&#8221; said Karine Caunes, who heads Digihumanism and edits the European Law Journal.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Other human rights organisations involved in the GPAI CoP share that view. Several academics and non-profit representatives are even considering walking out in protest, convinced that no one is actually listening to them.<\/span><\/p>\n<p><b>The Risk List: Where Things Get Messy<\/b><\/p>\n<p><span style=\"font-weight: 400\">The heart of the Code is its list of AI risks. But its current form falls short of what civil rights groups had hoped for.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The very first article of the EU&#8217;s AI Act states that its purpose is to protect fundamental rights, notes Sarah Andrew of the group Avaaz.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Yet the Code treats risks to people&#8217;s rights as mere &#8220;additional considerations,&#8221; instead of placing them alongside the core risks. 
What&#8217;s more, items such as &#8220;large-scale illegal discrimination&#8221; or &#8220;harmful manipulation of people&#8221; do appear on the risk list, but with qualifying language that downplays their weight.<\/span><\/p>\n<p><span style=\"font-weight: 400\">If the risk list is flawed from the outset, the entire system for managing AI risks built on it will be flawed too, Andrew warns.<\/span><\/p>\n<p><b>What&#8217;s Missing: Independent Audits and Training-Data Transparency<\/b><\/p>\n<p><span style=\"font-weight: 400\">Groups focused on AI safety point to another gap: companies are not required to undergo independent security audits, nor to be transparent about how they train their AI models.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Experts warn that if companies can simply test their own AI without outside scrutiny, the system will be easy to abuse. Training-data transparency is another sticking point: tech companies are reluctant to reveal details about the data used to train their systems, citing copyright concerns and data confidentiality.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The Security Coalition, whose members include the renowned expert Stuart Russell, sent a letter to the Code&#8217;s drafters with four key demands:<\/span><\/p>\n<ul>\n<li><span style=\"font-weight: 400\">Make independent audits of AI models mandatory.<\/span><\/li>\n<li>Allow more time to assess high-risk systems before they are deployed.<\/li>\n<li>Set clear safety thresholds beyond which an AI model is deemed too dangerous to use.<\/li>\n<li>Create mechanisms for dealing with risks that have not yet been anticipated.<\/li>\n<\/ul>\n<p><span style=\"font-weight: 400\">&#8220;If these changes were adopted, the drafters would be following the best expert practice on identifying and managing risks,&#8221; argues one of the signatories of the 
letter.<\/span><\/p>\n<p><b>Big Tech Pushes Back<\/b><\/p>\n<p><span style=\"font-weight: 400\">While civil society groups want tougher rules, the tech industry sees things very differently. Even before the AI summit, Meta and Google voiced strong objections to the Code in its current form.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Meta&#8217;s chief lobbyist, Joel Kaplan, called it &#8220;impractical and unworkable,&#8221; and Google&#8217;s Kent Walker called it &#8220;a step in the wrong direction.&#8221; The companies argue that the Code imposes obligations beyond what the EU&#8217;s AI Act requires.<\/span><\/p>\n<p><span style=\"font-weight: 400\">&#8220;That&#8217;s not true,&#8221; says Caunes. The Code is not meant merely to restate the law; it is meant to go further, she believes.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Some of these complaints, however, appear to be finding an audience in Brussels. The European Commission has begun talking about &#8220;simplifying the rules&#8221; in the name of economic growth.<\/span><\/p>\n<p><span style=\"font-weight: 400\">A telling moment at the AI summit came when Google&#8217;s CEO, Sundar Pichai, delivered the closing speech. The fact that a company lobbying for lighter rules was given such a prominent slot escaped no one&#8217;s notice.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Google is quietly pressuring the Code&#8217;s drafters to water down its references to EU law, while also getting the last word at the summit, Andrew commented.<\/span><\/p>\n<p><b>What&#8217;s Next?<\/b><\/p>\n<p><span style=\"font-weight: 400\">The tug-of-war over the AI Code shows just how sharply the interests of the public and of big tech companies are clashing. 
While civil society groups are fighting for transparency, safety, and human rights, the big companies are pushing for fewer rules and a freer hand.<\/span><\/p>\n<p><span style=\"font-weight: 400\">So far, the civil society groups have not formally quit work on the Code, but their patience is wearing thin. If their concerns keep going unheard, they may turn to more visible forms of protest.<\/span><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The General-Purpose AI Code of Practice (GPAI CoP) was meant to ensure that trustworthy AI gets built. But a growing number of organisations and experts warn that it may end up serving the interests of large [&hellip;]<\/p>\n","protected":false},"author":158,"featured_media":6211,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[1],"tags":[],"post_formats":[673],"coauthors":[],"class_list":["post-8973","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-general","post_formats-clanky"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.pulse-z.eu\/sk\/wp-json\/wp\/v2\/posts\/8973","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.pulse-z.eu\/sk\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.pulse-z.eu\/sk\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.pulse-z.eu\/sk\/wp-json\/wp\/v2\/users\/158"}],"replies":[{"embeddable":true,"href":"https:\/\/www.pulse-z.eu\/sk\/wp-json\/wp\/v2\/comments?post=8973"}],"version-history":[{"count":1,"href":"https:\/\/www.pulse-z.eu\/sk\/wp-json\/wp\/v2\/posts\/8973\/revisions"}],"predecessor-version":[{"id":8974,"href":"https:\/\/www.pulse-z.eu\/sk\/wp-json\/wp\/v2\/posts\/8973\/revisions\/8974"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.pulse-z.eu\/sk\/wp-json\/wp\/v2\/media\/6211"}],"wp:attachment":[{"href":"https:\/\/www.pulse-z.eu\/sk\/wp-json\/wp\/
v2\/media?parent=8973"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.pulse-z.eu\/sk\/wp-json\/wp\/v2\/categories?post=8973"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.pulse-z.eu\/sk\/wp-json\/wp\/v2\/tags?post=8973"},{"taxonomy":"post_formats","embeddable":true,"href":"https:\/\/www.pulse-z.eu\/sk\/wp-json\/wp\/v2\/post_formats?post=8973"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.pulse-z.eu\/sk\/wp-json\/wp\/v2\/coauthors?post=8973"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}