It sounds unbelievable, like something a conspiracy theorist might yell deliriously, but technology has now advanced to the point where it can read our minds, and billionaires are investing in it heavily. So, what is neurotechnology, and do we have enough regulation to protect ourselves as more products integrate it?

Neurotechnology, also referred to as neurotech, has already hit the market in the form of non-invasive portable neuroscanning devices like those sold by Emotiv and Muse. Other products are yet to reach the market, such as Elon Musk’s Neuralink “brain chip”, which aims to let users operate a phone or computer through their mind alone. Meta is investing in consumer neurotech, and Apple has filed patents for AirPods that can scan your brain, the data from which can then be analysed by your phone or computer.

These technologies have huge ethical and human rights implications, and yet there is currently no legislation in the EU that directly regulates this burgeoning market. How do we create laws that keep consumers safe, prevent manipulation, limit governments’ power to misuse these technologies, and hold companies accountable for their actions? Chile, for example, has already regulated neurotech and successfully taken a company to court for data misuse, and perhaps it is time the EU followed suit. Given big tech’s history of misconduct, why are we giving these companies access to our brains without guardrails? To prevent harm before it even starts, we need regulation that addresses the ethical issues these technologies raise.

What is neurotechnology?

Neurotech refers to technologies that interact with our nervous systems. Brain imaging devices like fMRI and EEG machines are nothing new and are a standard part of healthcare. What is different now, though, is the use of AI to interpret these scans. This technology can decode the words and sentences that form your inner monologue, or reconstruct the images you are looking at, from brain scans alone.

‘Mind reading’ neurotech has varying degrees of accuracy, but even when it doesn’t work very well, its ability to shape our behaviour is staggering. Research shows that people tend to overestimate the power of neurotech, which leads to the technology being used in life-changing situations without it necessarily revealing the truth. For example, “brain fingerprinting” methods have been used in India for criminal investigations and convictions since 2003, despite neuroscientists arguing that they are unreliable and ineffective. When it comes to neurotechnology, its actual capabilities are only part of the impact; the story we tell about it is enough to change our world.

Beyond deciphering the data from our brains, neurotech can also influence them. Companies like Neuroba are working to bring technologies to market that improve cognitive function or enhance memory. This ‘neuroenhancement’ technology would affect our behaviour, personalities, or moods, raising ethical questions that need to be reflected in our legislation and our political discussions. For example, can someone be held liable for their actions if their mood or behaviour is influenced by the technology? If it makes someone an impulsive spender, who should foot the bill? If someone becomes more aggressive because of the tech, how accountable are they for their own actions? Can you be said to give informed consent under the influence of a machine? Most frightening of all, perhaps: what could happen if a user is “hacked”, a concern that ethicists have already raised? A computer changing how we behave, think, or act is a troubling ethical conundrum at the best of times, made worse when technology companies with financial (rather than ethical) motivations sit at the core of the operation.

These are issues that need to be addressed in our legislation. History shows that sectors are regulated after tragedy has struck; is that something we are willing to wait for?

CT scans. Image by Kiril Ukr from Pixabay

Big tech: a legacy of data misuse

The tech industry has repeatedly shown itself to be negligent with user data and citizen welfare. This subpar track record includes repeated fines for data misuse handed to Apple, Meta, and X by the European Commission, the infamous Cambridge Analytica scandal, in which voters in the UK’s Brexit referendum were illegally targeted through social media data, Meta selling user data to China, and a barrage of land use violations, copyright breaches, and environmental damage by AI companies. The list could go on.

This history of data misuse and irresponsible behaviour by tech companies, their famous ‘move fast and break things’ approach, and their dismissive attitude towards regulators all suggest that if we let big tech into our heads, legal protection is only the start of keeping us safe.

But why read our minds in the first place?

Why, though, are these technologies being made in the first place, and why are billionaires like Elon Musk investing in them so heavily? Many neurotech companies tout the benefits of their technologies for people with disabilities, but the money-making potential of these technologies lies elsewhere.

“Neuromarketing” is being touted as marketing’s next frontier, with neurotech allowing companies to sell to us like never before. Social media algorithms already curate our feeds based on what they think we feel about content, for instance by interpreting how quickly we swipe past a video. Neurotech, however, like Apple’s yet-to-be-released brain-reading AirPods, would create far deeper insights into how you feel and respond to content. For the marketers in the business of making you feel the right emotion to make you hit ‘buy’, this is a gold mine. It is, of course, not a stretch to extrapolate this to the world of politics and governance, where image and emotion are equally important. What does a post-Cambridge Analytica world of politics look like when spin doctors know your brain?

Beyond swaying publics and advertising revenue, neurotech has potential for the other technologies that billionaires are invested in, namely simulation and computer modelling. Take the body movements involved in swinging on a swing set: a simple task for many of us, but a complex one to explain to a computer. Neurotech, particularly in wearable consumer products, can provide highly valuable data that makes these simulations more accurate. This seems to be yet another instance of big tech using our data for its own commercial gain. Should we really accept our neurons being used to train simulation models? The adage of the Facebook era, “if you don’t pay for the product, the product is you”, seems outdated. When it comes to consumer neurotechnology, we are looking at a potential scenario where we pay to be the product.

Human rights include your brain: Chile’s solution

Regulation of neurotechnology is not a pipe dream; it has been done before, and successfully. Chile has made “neurorights” part of its constitution, expanding human rights frameworks to include equal access to neurotech and protection of user data. In 2023, this legislation was used in a landmark court case in which the neurotech company Emotiv was made to delete the data of former senator Guido Girardi, who years earlier had bought and used one of its products but had not paid for the extra licence required to access his own data. This was deemed to violate his neurorights under the constitution. Girardi won the case and Emotiv deleted his neurodata.

The Girardi case proves the potential of neurorights legislation to protect our brains. Whilst this was a landmark case, one can imagine far more severe cases arising if companies do not behave responsibly and legislators do not hold them to account. Sensitive data, such as sexual orientation or feelings about a government, could be life-threatening in some circumstances, and, as mentioned, doesn’t need to be correct to be believed. What happens, then, if that data is hacked? What happens if big tech continues its legacy of selling user data? What happens if it gets into the wrong hands?

If we are to let companies conduct brain scans, inadequate handling of that data by companies is not something we can simply leave up to chance. So, how are we currently protected in the EU?

Does EU regulation protect consumers?

In the EU there is no legislation directly addressing neurotech. Any data obtained through neurotech devices would be covered by the Medical Device Regulation (MDR) and the General Data Protection Regulation (GDPR), but neither actually addresses neurotech specifically. There is also the EU’s AI Act from 2024, but legal scholars argue that the AI Act is too broad to cover neurotech at all. In terms of human rights, whilst the European Union’s Charter of Fundamental Rights touches on relevant themes, neurorights are not explicitly outlined.

Regulators have the power to keep us safe before any violations of our rights, safety, or wellbeing occur. We should have the ability to choose if and how our brains are recorded or influenced, which may not be as simple as it sounds. Apple’s patent suggests that neurotech might be integrated into products that are not explicitly marketed as brain-scanning devices, meaning our ability to “opt out” is murkier than it looks.

We deserve to be fully informed about how new technologies could influence our feelings or behaviour. We deserve regulators that are willing to take a stand against the companies with histories of malfeasance as they venture to profit off our brain waves. We deserve to be kept safe by regulation before the danger even appears.

Shape the conversation

Do you have anything to add to this story? Any ideas for interviews or angles we should explore? Let us know if you’d like to write a follow-up, a counterpoint, or share a similar story.