The EU is less interested in building the strongest AI than in controlling how AI fits into society.
On January 27, 2026, the European Commission opened formal proceedings against Google under the Digital Markets Act (DMA), requiring the company to open its Android operating system and certain key data to rival AI developers and search engines operating in Europe. Google has six months to dismantle the technical barriers that prevent rival AI assistants from operating fairly on Android devices; in parallel, it must grant other search engine providers access to core search data. (1)
This intervention is not primarily concerned with technological performance, but with the concentration of power. The European Union (EU) seeks to address one of the major challenges facing artificial intelligence today: integrating AI into society without undermining democratic values.
Artificial Intelligence in the Collective Imagination: A Long History of Human Control
When we look at the history of artificial intelligence in the collective imagination, we must go back long before our era. The idea can already be found in the eighteenth century with the Mechanical Turk, an automaton built by Wolfgang von Kempelen for the Habsburg court and dressed as an Ottoman sage, presented as a machine capable of playing chess at an exceptional level and defeating every opponent it faced. This shows that the idea of a machine able to surpass human intelligence has existed in the collective imagination for a very long time. Eventually, however, it was revealed that a human operator was hidden inside the machine.
However, this example remains highly relevant, as it illustrates a fundamental reality that still applies to contemporary artificial intelligence. Behind every intelligent machine, there are human actors who design, operate, and maintain it. Without the massive amounts of data provided by humans, artificial intelligence systems cannot function, learn, or evolve. Consequently, AI cannot become a true conceptual or existential threat on its own; it remains deeply dependent on human intervention, intentions, and control.
For this reason, the regulation of AI markets is essential. Without effective regulatory frameworks, the AI sector risks drifting toward monopolization, in which a small number of dominant actors control technological infrastructures and access to data. In such a scenario, data, the central resource of artificial intelligence, would be concentrated in the hands of a few private entities.
Artificial intelligence first emerged as a philosophical and theoretical question before becoming a technical one.
The European Union’s regulatory approach implicitly assumes that the true danger of artificial intelligence lies not in its technical performance, but in the power it may acquire within human systems. This assumption directly echoes the foundational philosophical question raised in the mid-twentieth century: can machines think, and if so, under what conditions should they be granted decision-making authority?
When we examine the true origins of thinking machines, we are led back to the European intellectual context. In 1950, Alan Turing was the first to explicitly pose the question "Can machines think?" in his article Computing Machinery and Intelligence, published in the journal Mind. The importance of this question lies in issues of power and domination: only if machines possess genuine decision-making capacity can they become a conceptual danger. Otherwise, artificial intelligences remain mere imitations of human ways of thinking.
If we consider the historical context in which artificial intelligence emerged, it becomes clear that its development was closely tied to military needs. From the 1950s onward, the United States military funded research in automatic translation, particularly for translating and decoding Soviet texts during the Cold War. The goal was not merely technical efficiency but strategic advantage: controlling adversary states' information and decision-making processes. From its origins, then, artificial intelligence has been associated with surveillance, strategic dominance, and the management of knowledge as a tool of power.
While the United States continues to consolidate its dominance in artificial intelligence through control over infrastructure and global platforms, the European Union has taken a distinctly different path. Rather than competing for technological supremacy, the EU has positioned itself as a normative power, focused on embedding AI within a framework of democratic values, human rights, and market fairness. The Artificial Intelligence Act, adopted in 2024, is the world's first comprehensive AI regulatory framework. It aims to ensure that AI systems are safe, transparent, and aligned with fundamental rights. The Act classifies AI systems into four risk categories, from minimal to unacceptable, and scales obligations accordingly.
Yet this commitment to ethical governance also reveals Europe's strategic dilemma. By prioritizing regulation over technological power, the EU implicitly acknowledges its lag behind the United States and China. Although the Act provides for regulatory sandboxes and support for startups, Europe's reliance on American cloud providers and AI infrastructure hampers true digital sovereignty.
