MEPs’ Proposal
The resolution recently approved by the European Parliament aims to make minors’ online presence safer. According to a report cited by MEPs, 75% of children between the ages of 13 and 17 check their devices at least once an hour, while another study found that as many as one in four minors has “dysfunctional” or “problematic” smartphone use.
The proposed measures aim to tighten safety controls on platforms. Most notably, the resolution recommends banning social media for children under 16 (or under 13 where there is parental consent), alongside a set of further restrictions and rules.
These include a ban on practices that could foster digital addiction, especially among minors, such as infinite scrolling or harmful gamification, meaning the use of game-like elements or mechanics in non-game contexts. The resolution also proposes blocking websites that do not comply with European regulations, along with other systems and methods potentially harmful to children, such as recommendation algorithms that personalize social media content by exploiting minors' engagement. It further emphasizes the need to protect minors from commercial exploitation, for example by banning platforms that encourage "kidfluencing," the practice of using children as influencers.
The European Parliament also intends to strengthen potential sanctions. Beyond blocking non-compliant platforms, MEPs have proposed holding senior managers of web platforms personally liable "in the event of serious and persistent non-compliance," particularly with respect to child protection and the introduction of age verification systems.
The Risks of Artificial Intelligence
Another point concerns the need for "urgent action" to protect minors from the risks posed by generative artificial intelligence tools, which can expose children to serious safety threats. Examples include apps that create deepfakes, including fake nudes, from any photo, and apps that let users build customizable chatbots. In California, a 16-year-old boy died by suicide in August 2025 after spending months chatting with ChatGPT. Following his death, his parents sued OpenAI, blaming its artificial intelligence for their son's death. Although the company denied the allegations and the lawsuit is ongoing, the case has reignited the debate over the risks AI poses to child safety.