AI moves from labs to boardrooms

Autonomous AI agents capable of negotiating and signing contracts on their own are no longer confined to experimental labs: they are entering everyday business practice. These systems can now conduct multi-stage negotiations, analyze partner data, propose contract terms, and finalize deals, often without real-time human oversight. The law, meanwhile, is struggling to catch up. According to the Commission’s latest discussion paper, two contrasting regulatory approaches are under consideration.

Global model vs. flexible guidelines

The first option involves adopting a model law developed under UN auspices, finalized in July. This international framework aims to clearly regulate key issues, including the validity of AI-signed contracts, the rules of liability, and dispute resolution. Its advantage lies in legal clarity and alignment with global standards — an invaluable asset in a globalized economy where cross-border transactions are routine.

The alternative is “soft law”: voluntary guidelines, model clauses, or draft recommendations for AI developers. The Commission notes that this approach allows flexibility, since rules would not be locked into statute but could evolve alongside rapidly developing technology. Critics warn, however, that the lack of mandatory rules could create market inequalities or even legal chaos. Can the EU economy, heavily reliant on contractual trust, sustain such an experiment?

Autonomous negotiations in a legal vacuum

The challenge goes beyond a simple “hard vs. soft law” dichotomy. AI agents differ fundamentally from traditional software — they do not simply follow pre-set instructions but can negotiate independently, adapting to the situation. This means contracts may be concluded without full human awareness until finalization. Are such contracts legally valid? The Commission’s document admits that the lack of clear answers undermines legal certainty.

For example, a company might discover that its “digital representative,” acting in good faith, has signed a contract later contested as non-binding. Then there’s the question of liability: if an AI agent signs an unfavorable or even harmful agreement, who is accountable? The algorithm developer, the operator, or the end user? Ambiguous regulations may discourage firms from implementing innovative solutions, potentially slowing the growth of Europe’s AI sector.

The unpredictability of machines: business risk or cost of innovation?

Another issue is unpredictability. Machine-learning systems modify their behavior based on data, meaning their actions may diverge from programmers’ initial assumptions. The Commission warns that an AI agent might conclude contracts contrary to its owner’s interests, or ones that cause significant losses. Consequences could range from lawsuits to industry-wide disruptions.

Does this risk justify strict regulations? Proponents of soft law argue that excessive restrictions could stifle European innovation, pushing AI development to more permissive jurisdictions. Opponents insist that unclear legal frameworks threaten the foundations of private law, whose core is certainty and predictability.

Europe between global standards and its own path

The EU’s dilemma has a geopolitical dimension. The UN model law was developed with active participation from EU Member States, and the General Assembly recommends its adoption. Rejecting it could signal a retreat from global leadership in AI regulation. Conversely, blindly adopting an external model risks eroding European regulatory sovereignty. Will the EU have the courage to create its own ambitious legal framework, or settle for a compromise shaped by global interests?

The outcome will decide

Answers may come soon. The Justice for Growth Forum on October 16 will be the stage for debates between proponents of both approaches. The results will influence not only the shape of future regulations but also whether Europe will lead or follow in the global race for safe and innovative artificial intelligence.

One thing is clear: the stakes involve not just legal security for businesses, but a fundamental question — are societies ready to entrust machines with the authority to sign binding contracts on our behalf?
