Is Meta failing to protect users from the distribution of non-consensual images?

Outrage spread across Italy in mid-August when it emerged that men in a Facebook group named Mia Moglie, translated as My Wife, were circulating photos of their wives alongside degrading remarks.
The group was as easy to access as it was disturbing to scroll through.
Pictures of women—some clearly taken in secret, others likely meant to remain private within a chat or marriage—were followed by offensive comments.
Some were casual but inappropriate, like “congratulations”; others were hypersexual, such as “that mouth is perfect to work”; and some were obscene, like “the best therapy for a woman is always a man’s d**k”.
Created in 2019, the group became active in May 2025, Meta told Euronews Next. On August 18, Mia Moglie came into the spotlight when author and screenwriter Carolina Capria posted a screenshot of the group on Instagram and urged people to report it. At the time, the group counted around 32,000 members.
After several complaints filed with Meta and local authorities, the tech giant shut down the group on August 20 for breaching its Adult Sexual Exploitation policies, a Meta spokesperson told Italian newspapers and later confirmed to Euronews Next.
Under this policy, Meta prohibits, among other things, the sharing or threatened distribution of non-consensual intimate images, whether real or fictional. It also forbids secretly taking and sharing images of commonly sexualised body parts, as well as threatening to share, or expressing an intent to distribute, private sexual conversations without consent.
“We do not allow content that threatens or promotes sexual violence, sexual assault or sexual exploitation on our platforms,” the spokesperson added.
Despite the closure of Mia Moglie, national news outlets reported the appearance of new channels with the same purpose on social media platforms such as Facebook and Telegram, albeit with fewer members for now.
But how long until a new group goes viral? And, most importantly, how is this even possible on platforms as large and accessible as Meta’s?
Is Meta’s policy on non-consensual images effective enough?
Meta’s platforms—Facebook, Instagram, Messenger, and Threads—address issues such as gender-based violence and non-consensual images in several sections of their shared Community Standards. Yet, the reality on the ground often looks different from what is written on paper.
Mia Moglie is not a new, exceptional phenomenon. Back in 2017, Facebook shut down a French-speaking group called Babylone 2.0, where more than 50,000 members were exchanging intimate images of women without their consent.
And in 2024, Meta's semi-independent observer body, the Oversight Board, urged the tech giant to do more to address non-consensual, nude deepfakes on its platforms.
“Meta’s current policy is insufficient,” Silvia Semenzin, digital sociologist and post-doctoral researcher at the University Complutense of Madrid, told Euronews Next.
“Gender violence online has become normalised also on Meta’s platforms, with impunity being the rule rather than the exception,” she added.
A 2024 report by the Center for Countering Digital Hate revealed that Instagram failed to act on 93 per cent of abusive comments directed at well-known United States (US) women politicians.
According to Semenzin, Meta’s approach focuses heavily on child protection, which she considers “important but not sufficient” when addressing gender-based violence.
A similar perspective is shared by lawyer Cathy La Torre, who explained that while Meta is slow to act on non-consensual images of ordinary individuals, it responds quickly to cases involving child sexual abuse or commercial content.
In the first case, rapid action reflects the strong sensitivity of the US — where the tech giant is headquartered — to child sexual exploitation; in the second, it reflects an intent to protect the platform’s economic interests, such as advertising revenue, La Torre explained.
La Torre added that Meta has some important strategies to counter gender violence, but does not promote them enough.
For example, the company relies on “trusted flaggers”, non-profit organisations it recognises to deal with illegal content online. The Italian association Permesso Negato is one such organisation and offers technological and legal support to victims of non-consensual intimate images and online violence. Yet, little information about this trusted flagger appears on Meta’s channels.
“Meta does not say these things, and it does not do that because moderation has a cost, and they’d rather manage these issues with artificial intelligence,” La Torre said.
A hint of the declining importance of moderation on Meta’s platforms emerged in early 2025, when Meta CEO Mark Zuckerberg announced that the company would phase out US fact-checkers.
Euronews Next reached out to Meta for their reaction to these allegations, but the company did not respond at the time of publication.
Meta is not the only one to blame
According to both Semenzin and La Torre, Meta is not the only party at fault: international institutions, as well as the Italian political system, also share some responsibility.
“Regulators need to enforce the Digital Services Act (DSA) and similar frameworks with much stronger oversight, sanctions, and transparency requirements,” Semenzin told Euronews Next.
The European Union’s Digital Services Act sets rules to protect consumers’ rights online. The regulation, which came into force in 2024, has been welcomed by experts as a crucial—but just a first—step toward safeguarding digital rights.
Italy also bears part of the responsibility in this story, according to Cathy La Torre.
“If lawmakers pass laws, then platforms have to comply with them. But in Italy, we don't make these laws because people in the parliament and government have no knowledge of cyberspace”.
This expertise gap is reflected in the wider population as well. According to the 2024 Digital Decade Country Report by the European Commission, in 2023, only 45.8 per cent of people in Italy had at least basic digital skills, a figure well below the EU's 55.6 per cent average.
In recent years, other EU countries have taken strong legislative measures against big social media platforms. For instance, in 2017, Germany passed the Network Enforcement Act (NetzDG), a law obliging social media platforms to identify and remove content that violates German rules on hate speech.
Italian legislation does not appear as assertive in regulating tech giants as Germany’s. Nevertheless, the Italian Competition Authority (AGCM) has not hesitated to fine Meta over issues relating to data management and unfair competition.
Regardless of who is responsible for halting the non-consensual distribution of images, one thing is certain: the problem is not exclusive to Meta’s platforms.
Soon after the case of Mia Moglie, another online forum came under the Italian spotlight: Phica.eu.
Created in 2005, the forum had thousands of subscribers and shared intimate pictures, deepfakes, and other types of images of women both well-known and unknown to the Italian public, including the country’s prime minister Giorgia Meloni.
After years of petitions and complaints, the forum was shut down last week.
Yet, when it comes to cyberspace, one question remains unanswered: what happens to the pictures once they are diffused online?
Today