The spark

Denmark won’t be appearing in the next Black Mirror season

Imagine waking up one morning and finding out that you gave a press conference, saying things you never imagined, while not even remembering giving it. In the world of Black Mirror, this is just Tuesday. Your image, your voice, even your expressions no longer belong to you; they have become someone else’s intellectual property, material for reproduction, entertainment, or even torture. Because in this universe, the question is not whether technology will betray you, but when. From “USS Callister,” where characters are trapped in digital clones without their consent, to “Joan Is Awful,” where a woman’s life becomes the property of a streaming platform, the series plays with the same idea over and over again: that your image, your voice, even your personality can become a product without you having a say in the matter.

In a world not so far away, in April 2024, Prime Minister Mette Frederiksen of Denmark abolished Easter, Christmas, and several other holidays. At least, that was according to Morten Messerschmidt, leader of the Danish People’s Party, who posted a video of the prime minister making the announcement. The video was, of course, a deepfake, and it was labeled as such, but it went viral and a political storm followed immediately. Almost all parties condemned the act and called on the government to legislate.

An image from the video showing a fake Mette Frederiksen; the image is AI-generated. (Photo: © Screenshot from X)

A year later, Culture Minister Jakob Engel-Schmidt submitted a radical proposal: every Dane would automatically acquire copyright to their own features. The proposal received broad support, with nine of the eleven parties in Parliament backing it, providing the consensus needed for a law that would let every citizen demand the removal of deepfakes of themselves from any platform, even when the material has not been commercially exploited. It also provides for compensation, while maintaining an exception for parody and satire.

At the heart of the ethical debate was the question of what would happen if the same technology were used in a period of crisis, for example to show the prime minister declaring a state of emergency or announcing restrictive measures. The concern was that deepfakes are not just funny or dangerous videos, but tools capable of undermining trust in institutional voices themselves. The innovation of the Danish model is its proactiveness: the victim is not required to prove damage. This changes the status quo, restoring full sovereignty over appearance and identity through an automatic and universal act.

The ability of artificial intelligence to create realistic content has turned the very concept of proof into a battlefield, where deepfakes can be used to bypass facial and voice recognition systems and even to gain illegal access to sensitive data. The problem does not stop at cybersecurity; the social and psychological implications are enormous. An estimated 95% of deepfakes are pornographic, disproportionately affecting women and minors, while the ability to dismiss authentic material as fake, the so-called liar’s dividend, undermines trust in public discourse. In the political arena, a notable example was the deepfake robocalls featuring “Joe Biden’s voice” that urged New Hampshire voters to stay away from the polls, showing how such tools can destabilize even elections. The new reality is no longer about privacy per se, but about its extension into a matter of democratic security.

Denmark’s proposal attempts to close this gap. The country intends to use its upcoming EU presidency to push the issue in Brussels: Engel-Schmidt has already stated his intention to raise it at the European table and press for a common framework that would oblige even the large platforms to comply, under threat of heavy fines. Denmark thus becomes a small laboratory for an intervention that could shape European rules for the protection of digital identity in the age of generative algorithms.

 

The European perspective: Homo Digitalis

Denmark’s proposal was seen as innovative and ambitious, generating both enthusiasm and skepticism. As Lefteris Helioudakis, a lawyer specializing in new technologies and Executive Director of Homo Digitalis, explained, “at first sight, the proposal appears to be a questionable workaround. Copyright functions as a social contract: the creator of an original work contributes something to society, and in return society grants that creator exclusive rights. Extending this logic to biometric information is misaligned with the fundamental principles of intellectual property law.” In other words, Denmark’s ambitious new law breaks new ground by grounding protection in identity rather than creation, a move without precedent.

Homo Digitalis also points out that a legal framework already exists at the European level, from Directive (EU) 2024/1385, which criminalizes the use of deepfakes in sensitive contexts, to the Digital Services Act (DSA), which sets rules on illegal content and platform liability. “Such legality rules are actually made by national parliaments of EU Member States. Thus, Denmark could explore other provisions on personal data, disinformation, defamation, to tackle such use of deepfakes, but it did not.” Going one step further in its critique of the “innovation” of the Danish move, Homo Digitalis argues that such unilateral national initiatives may attract media attention, but they do not address the root of the problem.

“At first sight, the proposal appears to be a questionable workaround. Copyright functions as a social contract: the creator of an original work contributes something to society, and in return society grants that creator exclusive rights. Extending this logic to biometric information is misaligned with the fundamental principles of intellectual property law.”

 

At the same time, when it comes to freedom of expression, Homo Digitalis is categorical: “Exceptions for parody, satire, and political critique are already well established within EU copyright law … Together, these mechanisms provide both the legal safeguards for freedom of expression and the procedural tools for tackling harmful deepfakes without reinventing the underlying balance.” In other words, the framework for protecting satire and political criticism already exists; there is no need for a new balance, only the application of the existing, well-grounded rules.

Their observation on platform accountability brings the DSA’s obligations back to the forefront: large platforms must conduct systematic risk assessments, not only for illegal content but also for material that “may undermine public discourse.” Deepfakes are thus already provided for and fall under mandatory mitigation measures, even through the crisis response mechanism the law establishes. Denmark’s intervention therefore looks more like a political act than a legal necessity, possibly in line with the broader trend of EU security extending into the digital space.

Finally, according to Mr. Helioudakis, the danger is that regulation will be treated as a panacea, when the real challenge is systemic. As he warns, “Too often, commercial interests shape the agenda, pushing rapid adoption despite unresolved ethical, legal, and societal concerns. This lack of readiness and critical scrutiny increases the likelihood that AI will undermine individual rights and democratic processes.” The battle is not only with deepfakes; it is with the way artificial intelligence is invading every area of our lives without sufficient public debate. In short, Homo Digitalis does not reject Denmark’s initiative, but sees it as an opportunity for something broader, making this clear when they tell us that “What is needed now is a critical reflection on existing rules and active experimentation with the remedies they provide. This requires broader citizen engagement… We urgently need more voices and more participation to ensure meaningful oversight and democratic resilience.” The challenge, therefore, is not only to establish our identity legally, but to protect it politically, through collective action and democratic control.

Too often, commercial interests shape the agenda, pushing rapid adoption despite unresolved ethical, legal, and societal concerns. This lack of readiness and critical scrutiny increases the likelihood that AI will undermine individual rights and democratic processes.

Philosophizing Black Mirror

The idea of body copyright is essentially an attempt to legally frame something that used to be self-evident: that our bodies belong to us. Denmark’s legislative initiative makes explicit what, until now, we took for granted. Yet the very need for legal protection reveals that this self-evident truth has collapsed, has been disrupted, and that technology forces us to define what “I” means when our voice, image, and even our thoughts can be copied indefinitely. We are left trying to re-establish ourselves as owners of our own selves through the mediation of a third party: the law.

The problem, since it touches us at the material level, cannot but be ontological. The body is the point of reference for identity, a material boundary that distinguishes the self from the other. When this body can be perfectly reproduced, the boundary collapses: my digital “clone” is no longer a stranger, but not exactly me either. It is a hybrid that carries my voice and my movements but does not obey my will. And here lies the challenge, not in who owns the intellectual property rights, but in how we maintain the concept of subjectivity, of the self, in a world where the self has multiplied.

Body copyright is currently being discussed in market terms: registration, licensing, compensation. The logic is meant to be protective, yet it transforms identity into capital, and that is not part of the solution but part of the problem we are called upon to solve. If my body is an asset, I can rent it, sell it, assign it, or lose it if I cannot pay for its protection. Freedom becomes a privilege, and what looks like empowerment becomes a new form of dependence, where the individual sovereignty that copyright is supposed to defend becomes a prerequisite for participation in society. The body is transformed from a place of experience into an object of management, and society functions not in terms of recognition but in terms of compliance. A compliance that implies a kind of license: if you do not “protect” yourself, you become either invisible or vulnerable. Individual identity is no longer a relationship; it is a contract.

The real response to this cannot simply be more rights; we need a concept of “ownership” that does not define the body as a commodity, but as an extension of human dignity, a return to the roots of human rights. A policy that is not limited to financial compensation but ensures that no use of the image or “clone” can take away the autonomy of the person. The philosophical challenge of body copyright is not to find a fair price for our face, or a fair penalty for its misuse, but to ensure that our face never becomes marketable. Only in this way can technology expand, rather than nullify, human freedom.

Shape the conversation

Do you have anything to add to this story? Any ideas for interviews or angles we should explore? Let us know if you’d like to write a follow-up, a counterpoint, or share a similar story.