BRUSSELS — In the 10 years since the Brussels terror attacks, the EU has
tightened its security strategy but the internet is opening up new threats,
according to the bloc’s counterterrorism coordinator.
Daesh is “mutating jihadism,” Bartjan Wegter told POLITICO in an interview on
the eve of the anniversary of the terrorist attacks in Brussels, which pushed
the bloc to bolster border protection and step up collaboration and
information-sharing.
The group has “calculated that it’s much more effective to radicalize people who
are already inside the EU through online environments rather than to organize
orchestrated attacks from outside our borders,” he said. “And they’re very good
at it.”
Ten years ago, two terrorists from Daesh (also known as the so-called Islamic
State) blew themselves up at Brussels Airport. Another explosion tore through a
metro car at Maelbeek station, in the heart of Brussels’ EU district. Thirty-two
people were killed, and hundreds more injured.
The attacks came just months after terrorists killed 130 people in attacks on a
concert hall, a stadium, restaurants and bars in Paris, exposing gaps in
information-sharing in the bloc’s free-travel area. The terrorists had moved
between countries, planning the attacks in one and carrying them out in another,
said Wegter, who is Dutch. “That’s where our vulnerabilities were.”
Today, violent jihadism remains a threat and new large-scale attacks can’t be
ruled out. But the probability is “much, much lower today than it was 10 years
ago,” said Wegter.
In the aftermath of the attacks, the bloc changed its security strategy with a
focus on prevention and a “security reflex” across every policy field, according
to Wegter. It’s also stepping up police and judicial collaboration through
Europol and Eurojust, and it’s putting in place databases — including the
Schengen Information System — so countries can alert each other about
high-risk individuals, as well as an entry/exit system to monitor who enters and
leaves the free-travel area.
But the bloc is facing a new type of threat, as security officials see a gradual
increase in attempted terrorist attacks by lone actors. A lot of that is being
cultivated online, and increasingly, younger people are involved.
“We’ve seen cases of children 12 years old. And, the radicalization process [is]
also happening faster,” Wegter said. “Sometimes we’re talking about weeks or
months.”
In 2024, a third of all arrests connected to potential terror threats were of
people aged between 12 and 20 years old, and France recorded a tripling of the
number of minors radicalized between 2023 and 2024, said Wegter.
“Just put yourself in the shoes of law enforcement … You’re dealing with young
people who spend most of their time online … Who may not have a criminal record.
Who, if they are plotting attacks, may not be using registered weapons. It’s
very hard to prevent.”
Violent jihadism is just one of the threats EU security officials worry are
being cultivated online.
Wegter said there is also an emerging trend of a violent right-wing extremist
narrative online — and to a lesser extent, violent left-wing extremism. There’s
also what he called “nihilistic extremist violence,” a new phenomenon that can
feature elements of different ideologies or a drive to overthrow the system, but
which fundamentally involves minors seeking an identity through violence.
“What we see online, some of these images are so horrible that even law
enforcement needs psychological support to see this kind of stuff,” said Wegter.
Law enforcement’s ability to get access to encrypted data and information on
people under investigation is crucial, he stressed, and he drew parallels with
the steps the EU took to secure the Schengen free movement 10 years ago.
“If you want to preserve the good things of the internet, we also need to make
sure that we have … some key mechanisms to safeguard the internet also.”
Anton, a 44-year-old Russian soldier who heads a workshop responsible for
repairing and supplying drones, was at his kitchen table when he learned last
month that Elon Musk’s SpaceX had cut off access to Starlink terminals used by
Russian forces. He scrambled for alternatives, but none offered unlimited
internet, data plans were restrictive, and coverage did not extend to the areas
of Ukraine where his unit operated.
It’s not only American tech executives who are narrowing communications options
for Russians. Days later, Russian authorities began slowing down access
nationwide to the messaging app Telegram, the service that frontline troops use
to coordinate directly with one another and bypass slower chains of command.
“All military work goes through Telegram — all communication,” Anton, whose name
has been changed because he fears government reprisal, told POLITICO in voice
messages sent via the app. “That would be like shooting the entire Russian army
in the head.”
Telegram would be joining a home screen’s worth of apps that have become useless
to Russians. Kremlin policymakers have already blocked or limited access to
WhatsApp, along with parent company Meta’s Facebook and Instagram, Microsoft’s
LinkedIn, Google’s YouTube, Apple’s FaceTime, Snapchat and X, which like SpaceX
is owned by Musk. Encrypted messaging apps Signal and Discord, as well as
Japanese-owned Viber, have been inaccessible since 2024. Last month, President
Vladimir Putin signed a law requiring telecom operators to block cellular and
fixed internet access at the request of the Federal Security Service. Shortly
after it took effect on March 3, Moscow residents reported widespread problems
with mobile internet, calls and text messages across all major operators for
several days, with outages affecting mobile service and Wi-Fi even inside the
State Duma.
Those decisions have left Russians increasingly cut off from both the outside
world and one another, complicating battlefield coordination and disrupting
online communities that organize volunteer aid, fundraising and discussion of
the war effort. Deepening digital isolation could turn Russia into something
akin to “a large, nuclear-armed North Korea and a junior partner to China,”
according to Alexander Gabuev, the Berlin-based director of the Carnegie Russia
Eurasia Center.
In April, the Kremlin is expected to escalate its campaign against Telegram —
already one of Russia’s most popular messaging platforms, but now in the absence
of other social-media options, a central hub for news, business and
entertainment. It may block the platform altogether. That is likely to fuel an
escalating struggle between state censorship and the tools people use to evade
it, with Russia’s place in the world hanging in the balance.
“It’s turned into a war,” said Mikhail Klimarev, executive director of the
Internet Protection Society, a digital rights group that monitors Russia’s
censorship infrastructure. “A guerrilla war. They hunt down the VPNs they can
see, they block them — and the ‘partisans’ run, build new bunkers, and come
back.”
THE APP THAT RUNS THE WAR
On Feb. 4, SpaceX tightened the authentication system that Starlink terminals
use to connect to its satellite network, introducing stricter verification for
registered devices. The change effectively blocked many terminals operated by
Russian units relying on unauthorized connections, cutting Starlink traffic
inside Ukraine by roughly 75 percent, according to internet traffic analysis
by Doug Madory, an analyst at the U.S. network monitoring firm Kentik.
The move threw Russian operations into disarray, allowing Ukraine to make
battlefield gains. Russia has turned to a workaround widely used before
satellite internet was an option: laying fiber-optic lines from rear areas
toward frontline battlefield positions.
Until then, Starlink terminals had allowed drone operators to stream live video
through platforms such as Discord, which is officially blocked in Russia but
still sometimes used by the Russian military via VPNs, to commanders at multiple
levels. A battalion commander could watch an assault unfold in real time and
issue corrections — “enemy ahead” or “turn left” — via radio or Telegram. What
once required layers of approval could now happen in minutes.
Satellite-connected messaging apps became the fastest way to transmit
coordinates, imagery and targeting data.
But on Feb. 10, Roskomnadzor, the Russian communications regulator, began
slowing down Telegram for users across Russia, citing alleged violations of
Russian law. Russian news outlet RBC reported, citing two sources, that
authorities plan to shut down Telegram in early April — though not on the front
line.
In mid-February, Digital Development Minister Maksut Shadayev said the
government did not yet intend to restrict Telegram at the front but hoped
servicemen would gradually transition to other platforms. Kremlin spokesperson
Dmitry Peskov said this week the company could avoid a full ban by complying
with Russian legislation and maintaining what he described as “flexible contact”
with authorities.
Roskomnadzor has accused Telegram of failing to protect personal data, combat
fraud and prevent its use by terrorists and criminals. Similar accusations have
been directed at other foreign tech platforms. In 2022, a Russian court
designated Meta an “extremist organization” after the company said it would
temporarily allow posts calling for violence against Russian soldiers in the
context of the Ukraine war — a decision authorities used to justify blocking
Facebook and Instagram in Russia and increasing pressure on the company’s other
services, including WhatsApp.
Telegram founder Pavel Durov, a Russian-born entrepreneur now based in the
United Arab Emirates, says the throttling is being used as a pretext to push
Russians toward a government-controlled messaging app designed for surveillance
and political censorship.
That app is MAX, which was launched in March 2025 and has been compared to
China’s WeChat in its ambition to anchor a domestic digital ecosystem.
Authorities are increasingly steering Russians toward MAX through employers,
neighborhood chats and the government services portal Gosuslugi — where citizens
retrieve documents, pay fines and book appointments — as well as through banks
and retailers. The app’s developer, VK, reports rapid user growth, though those
figures are difficult to independently verify.
“They didn’t just leave people to fend for themselves — you could say they led
them by the hand through that adaptation by offering alternatives,” said Levada
Center pollster Denis Volkov, who has studied Russian attitudes toward
technology use. The strategy, he said, has been to provide a Russian or
state-backed alternative for the majority, while stopping short of fully
criminalizing workarounds for more technologically savvy users who do not want
to switch.
Elena, a 38-year-old Yekaterinburg resident whose surname has been withheld
because she fears government reprisal, said her daughter’s primary school moved
official communication from WhatsApp to MAX without consulting parents. She
keeps MAX installed on a separate tablet that remains mostly in a drawer — a
version of what some Russians call a “MAXophone,” a device used solely for that
app and kept free of any other data out of a very real fear that the government
could access it.
“It works badly. Messages are delayed. Notifications don’t come,” she said. “I
don’t trust it … And this whole situation just makes people angry.”
THE VPN ARMS RACE
Unlike China’s centralized “Great Firewall,” which filters traffic at the
country’s digital borders, Russia’s system operates internally. Internet
providers are required to route traffic through state-installed deep packet
inspection equipment capable of controlling and analyzing data flows in real
time.
“It’s not one wall,” Klimarev said. “It’s thousands of fences. You climb one,
then there’s another.”
The architecture allows authorities to slow services without formally banning
them — a tactic used against YouTube before its web address was removed from
government-run domain-name servers last month. Russian law explicitly provides
government authority for blocking websites on grounds such as extremism,
terrorism, illegal content or violations of data regulations, but it does not
clearly define throttling — slowing traffic rather than blocking it outright —
as a formal enforcement mechanism. “The slowdown isn’t described anywhere in
legislation,” Klimarev said. “It’s pressure without procedure.”
In September, Russia banned advertising for virtual private network services
that citizens use to bypass government-imposed restrictions on certain apps or
sites. By Klimarev’s estimate, roughly half of Russian internet users now know
what a VPN is, and millions pay for one. Polling last year by the Levada Center,
Russia’s only major independent pollster, suggests regular use is lower, finding
about one-quarter of Russians said they have used VPN services.
Russian courts can treat the use of anonymization tools as an aggravating factor
in certain crimes — steps that signal growing pressure on circumvention
technologies without formally outlawing them. In February, the Federal
Antimonopoly Service opened what appears to be the first case against a media
outlet for promoting a VPN after the regional publication Serditaya Chuvashiya
advertised such a service on its Telegram channel.
Surveys in recent years have shown that many Russians, particularly older
citizens, support tighter internet regulation, often citing fraud, extremism and
online safety. That sentiment gives authorities political space to tighten
controls even when the restrictions are unpopular among more technologically
savvy users.
Even so, the slowdown of Telegram drew criticism from unlikely quarters,
including Sergei Mironov, a longtime Kremlin ally and leader of the Just Russia
party. In a statement posted on his Telegram channel on Feb. 11, he blasted the
regulators behind the move as “idiots,” accusing them of undermining soldiers at
the front. He said troops rely on the app to communicate with relatives and
organize fundraising for the war effort, warning that restricting it could cost
lives. While praising the state-backed messaging app MAX, he argued that
Russians should be free to choose which platforms they use.
Pro-war Telegram channels frame the government’s blocking techniques as sabotage
of the war effort. Ivan Philippov, who tracks Russia’s influential military
bloggers, said the reaction inside that ecosystem to news about Telegram has
been visceral “rage.”
Unlike Starlink, whose cutoff could be blamed on a foreign company, restrictions
on Telegram are viewed as self-inflicted. Bloggers accuse regulators of
undermining the war effort. Telegram is used not only for battlefield
coordination but also for volunteer fundraising networks that provide basic
logistics the state does not reliably cover — from transport vehicles and fuel
to body armor, trench materials and even evacuation equipment. Telegram serves
as the primary hub for donations and reporting back to supporters.
“If you break Telegram inside Russia, you break fundraising,” Philippov said.
“And without fundraising, a lot of units simply don’t function.”
Few in that community trust MAX, citing technical flaws and privacy concerns.
Because MAX operates under Russian data-retention laws and is integrated with
state services, many assume their communications would be accessible to
authorities.
Philippov said the app’s prominent defenders are largely figures tied to state
media or the presidential administration. “Among independent military bloggers,
I haven’t seen a single person who supports it,” he said.
Small groups of activists attempted to organize rallies in at least 11 Russian
cities, including Moscow, Irkutsk and Novosibirsk, in defense of Telegram.
Authorities rejected or obstructed most of the proposed demonstrations — in some
cases citing pandemic-era restrictions, weather conditions or vague security
concerns — and in several cases revoked previously issued permits. In
Novosibirsk, police detained around 15 people ahead of a planned rally. Although
a small number of protests were formally approved, no large-scale demonstrations
ultimately took place.
THE POWER TO PULL THE PLUG
The new law signed last month allows Russia’s Federal Security Service to order
telecom operators to block cellular and fixed internet access. Peskov, the
Kremlin spokesman, said subsequent shutdowns of service in Moscow were linked to
security measures aimed at protecting critical infrastructure and countering
drone threats, adding that such limitations would remain in place “for as long
as necessary.”
In practice, the disruptions rarely amount to a total communications blackout.
Most target mobile internet rather than all services, while voice calls and SMS
often continue to function. Some domestic websites and apps — including
government portals or banking services — may remain accessible through
“whitelists,” meaning authorities allow certain services to keep operating even
while broader internet access is restricted. The restrictions are typically
localized and temporary, affecting specific regions or parts of cities rather
than the entire country.
Internet disruptions have increasingly become a tool of control beyond
individual platforms. Research by the independent outlet Meduza and the
monitoring project Na Svyazi has documented dozens of regional internet
shutdowns and mobile network restrictions across Russia, with disruptions
occurring regularly since May 2025.
The communications shutdown, and uncertainty around where it will go next, is
affecting life for citizens of all kinds, from the elderly struggling to contact
family members abroad to tech-savvy users who juggle SIM cards and secondary
phones to stay connected. Demand has risen for dated communication devices —
including walkie-talkies, pagers and landline phones — along with paper maps as
mobile networks become less reliable, according to retailers interviewed by RBC.
“It feels like we’re isolating ourselves,” said Dmitry, 35, who splits his time
between Moscow and Dubai and whose surname has been withheld because he fears
government reprisal. “Like building a sovereign grave.”
Those who track Russian public opinion say the pattern is consistent: irritation
followed by adaptation. When Instagram and YouTube were blocked or slowed in
recent years, their audiences shrank rapidly as users migrated to alternative
services rather than mobilizing against the restrictions.
For now, Russia’s digital tightening resembles managed escalation rather than
total isolation. Officials deny plans for a full shutdown, and even critics say
a complete severing would cripple banking, logistics and foreign trade.
“It’s possible,” Klimarev said. “But if they do that, the internet won’t be the
main problem anymore.”
BRUSSELS — France is hurtling toward banning children younger than 15 from
accessing social media — a move that would make it only the second country in
the world to take that step.
The plan comes amid rising concerns about the impacts of apps including
Snapchat, TikTok, Instagram and X on children’s mental health.
After Australia in December kicked kids under 16 off a host of platforms, France
is leading the charge in Europe with a bill that would prohibit social media for
under-15s as soon as this year.
Supported by President Emmanuel Macron and his centrist Renaissance party, the
proposed law passed the French parliament’s lower chamber in the early hours of
Tuesday.
Here are five things to know.
WHEN WILL A BAN KICK IN?
While the timing isn’t finalized, the government is targeting September of this
year.
“As of September 1st, our children and adolescents will finally be protected. I
will see to it,” Macron said in an X post.
The bill now has to be voted on by the French Senate, and Macron’s governing
coalition is aiming for a discussion on Feb. 16.
If the Senate votes the bill through, a joint committee with representatives of
both upper and lower houses of parliament will be formed to finalize the text.
WHICH PLATFORMS WILL BE BANNED?
That decision will lie with France’s media authority Arcom, since the
legislation itself doesn’t outline which platforms will or won’t be covered.
The architect of the bill, Renaissance lawmaker Laure Miller, has said it will
be similar to Australia’s and would likely see under-15s banned from using
Snapchat, TikTok, Instagram and X.
Australia no longer allows children under 16 to create accounts on Facebook,
Instagram, Kick, Reddit, Snapchat, Threads, TikTok, Twitch, X and YouTube.
Australia’s list doesn’t include Discord, GitHub, Google Classroom, LEGO Play,
Messenger, Pinterest, Roblox, Steam and Steam Chat, WhatsApp or YouTube Kids.
Miller has also described plans to come up with a definition that could see the
ban cover individual features on social media platforms.
WhatsApp Stories and Channels — a feature of the popular messaging app — could
be included, as well as the online chat within the gaming platform Roblox, the
French MP said.
WHO WILL ENFORCE IT?
With France set to be the first country within the European Union to take this
step, a major sticking point as the bill moves through parliament has been who
will enforce it.
Authorities have finally settled on an answer: Brussels.
The EU has comprehensive social media rules, the Digital Services Act, which on
paper prohibits countries from giving big platforms additional obligations.
After some back and forth between France and the European Commission, they have
come to an agreement.
France can’t impose additional obligations on platforms, but it can set a
minimum age for accessing social media. It will then be up to the Commission to
ensure national rules are followed.
This is similar to how other parts of the DSA work, such as illegal content.
Exactly what is illegal content is determined by national law, and the
Commission must then make sure that platforms are properly assessing and
mitigating the risks of spreading it.
How exactly the EU will make sure no children in France are accessing sites is
untested.
DSA violations can lead to fines of up to 6 percent of platforms’ annual global
revenue.
WHAT ARE THE TECHNICAL CHALLENGES?
Companies within the industry have been at loggerheads over who should implement
age gates that would render the social media ban possible.
Platform providers including Meta say that operating system services should
implement age checks, whereas OS and app store providers such as Apple say the
opposite.
The Commission has not clearly assigned responsibility to either side of the
industry, but France has interpreted guidance from Brussels as putting the onus
on the service providers. France’s bill therefore puts the responsibility on the
likes of TikTok and Instagram.
Exactly what the technical solution will be to implement a ban is up to the
platforms, as long as it meets requirements for accuracy and privacy.
Some public entities have developed solutions, like the French postal service’s
“Jeprouvemonage,” which the platforms can use. Privately developed tech is also
available.
“No solution will be imposed on the platforms by the state,” the office of the
minister for digital affairs told journalists.
IS THIS HAPPENING IN OTHER EUROPEAN COUNTRIES?
France is not the only European country working on such restrictions.
Denmark’s parliament has agreed on restrictions for under-15s, although parents
can allow children older than 13 to use social media. Denmark hasn’t passed a
formal bill. Austria’s digital minister said an Australia-style ban is being
developed for under-14s.
Bills are going through the Spanish and Italian parliaments, and Greece’s Prime
Minister Kyriakos Mitsotakis has also voiced support for similar plans. Germany
is considering its options. The Dutch government has issued guidance to say kids
younger than 15 should not access social media like TikTok.
Many of these countries as well as the European Parliament have said they want
something done at the EU level.
While the Commission has said it will allow EU countries to set their own
minimum ages for accessing social media, it is also trying to come up with
measures that would apply across the entire bloc.
President Ursula von der Leyen has taken a personal interest in the issue and is
setting up a panel of experts to determine whether an EU-wide ban is desirable
and workable.
BRUSSELS — Online marketplace Shein is rolling out an age-assurance tool to keep
underage users away from inappropriate products, the company’s lawyer told
lawmakers on Tuesday.
The move follows outrage and regulatory pressure on the platform over the sale
of sex dolls in November. The EU executive had demanded information from Shein
on how it checks users’ age to make sure they cannot see inappropriate products.
Shein has deployed a “third-party solution” on its website that is being rolled
out on a “country-by-country” basis, General Counsel Zhu Yinan told the European
Parliament’s internal market committee.
“All age-restricted products” will be behind that layer of age checking, Zhu
said.
The Commission is the primary supervisor of Shein under the Digital Services
Act, the EU law designed to limit the risks of online platforms to users. Shein
is classified as a Very Large Online Platform with over 45 million users and can
face fines up to 6 percent of its global annual revenue for breaches of the
rules.
The Commission did not immediately respond to POLITICO’s request for comment.
Shein is also testing the Commission’s age verification app, or “mini wallet” as
it’s sometimes called, Zhu said. This blueprint for an app to check age online
was developed by the Commission and is currently being tested by six EU
countries.
“Of course it was totally unacceptable what has happened,” Zhu said, referring
to the child-like sex dolls and other illegal content. But it “is not the first
time that happened to a marketplace and it also happened to multiple
marketplaces,” Zhu said.
BRUSSELS — Meta’s WhatsApp will face fresh scrutiny from Brussels after the EU
decided the service falls under its tough regime for the biggest online
platforms.
A decision announced Monday to classify WhatsApp Channels as a Very Large Online
Platform — joining the likes of Facebook, Instagram, X and TikTok — means that
the app will now be held liable for how it handles systemic risks to users.
Platforms that fail to meet regulatory requirements can be fined up to 6 percent
of global annual turnover under the EU’s Digital Services Act.
The verdict also lands as countries such as France are actively discussing
restrictions on social media platforms for children.
The decision focuses particularly on WhatsApp Channels, in which admins can
broadcast announcements to groups of people in a feed, making them distinct from
the messaging feature. WhatsApp’s private messaging service is explicitly
excluded.
WhatsApp was aware that the decision was coming as far back as August, when it
reported that Channels had approximately 51.7 million users in the EU. That
crossed the EU’s threshold for Very Large Online Platforms with over 45 million
users in the EU.
Meta now has four months to assess and mitigate systemic risks on its platform.
Those risks include the spread of illegal content, as well as threats to civic
discourse, elections, fundamental rights and health.
“WhatsApp Channels continue to grow in Europe and globally. As this expansion
continues, we remain committed to evolving our safety and integrity measures in
the region, ensuring they align with relevant regulatory expectations and our
ongoing responsibility to users,” WhatsApp spokesperson Joshua Breckman said in
a statement.
BRUSSELS — It reads like Washington’s worst nightmare: a European tech regulator
independent of the Brussels institutions and armed to crack down on violations
by U.S. companies.
But that’s exactly what some in Brussels say is now needed as the EU struggles
to get a grip on how to implement and enforce its digital laws amid repeated
political attacks from the White House.
The attacks are reviving a long-held goal among EU legislators: to establish an
independent, well-resourced regulator that sits outside EU institutions to
enforce its many tech rulebooks.
While the dream faces hurdles to becoming a reality, the timing of its
resurrection reflects growing concerns that the EU has failed to underpin its
ambition to be the world’s digital policeman with adequate enforcement
structures that can resist U.S. attacks.
After years of lawmaking, Brussels governs through a patchwork of rules and
institutions that clash with the reality of U.S. politics.
The EU’s maze of rules and regulators has also been thrown into sharp focus by
the ongoing Grok scandal, which saw the artificial intelligence tool allow users
of Elon Musk’s X to generate sexualized deepfakes.
“The enforcement is not happening because there’s too much pressure from the
Trump administration,” said Alexandra Geese, a German Greens European Parliament
lawmaker who negotiated the EU’s platform law, the Digital Services Act.
For Geese, it’s an “I told you so” moment after EU legislators floated the
possibility of creating a standalone agency to enforce the digital rulebooks
when they were being negotiated.
A group of EU countries, led by Portugal, also tinkered with the idea late last
year.
BLACKMAIL
The Digital Services Act sits at the center of the U.S.-EU feud over how
Brussels is enforcing its tech rules.
The European Commission is responsible for enforcing these rules on platforms
with over 45 million users in the EU, among them some of the most powerful U.S.
companies including Elon Musk’s X, Mark Zuckerberg’s Meta and Alphabet’s Google.
As the bloc’s executive arm, the Commission also needs buy-in from the White
House for negotiations on tariffs, security guarantees for Ukraine, and a host
of other major political topics.
The Commission last month slapped a €120 million fine on Musk’s X, its first
under the DSA, which prompted a fierce rebuke from Washington. Just weeks later
the U.S. imposed a travel ban on Thierry Breton, a former EU commissioner and
one of the officials behind the law.
It topped off a year in which the U.S. repeatedly attacked the DSA, branding it
“censorship” and treating it as a bargaining chip in trade talks.
This fueled concerns that the Commission was exposed and that digital fines
were, as a result, being delayed or disrupted. Among the evidence was a
last-minute intervention by the EU’s trade chief to delay a Google antitrust
penalty at what would have been a sensitive time for talks. The fine eventually
landed some months later.
“Delegating digital enforcement to an independent body would strengthen the EU’s
bargaining position against the U.S.,” Mario Mariniello, a non-resident fellow
at think tank Bruegel, argued in a September piece on how the Commission could
protect itself against blackmail.
The need to separate enforcement powers is highest for the bloc’s online content
law, he argued. “There, the level of politicization is so high that you would
have a significant benefit.”
“It’s so political, there’s no real enforcement, there’s no independent
enforcement, independent from politics,” Geese said.
Meanwhile, the recent controversy around X’s AI tool Grok, which allowed users
to generate sexualized fakes based on real-life images, has illustrated the
complexity of the EU’s existing structures and laws.
As a platform, X has to address systemic risks arising from the spread of
illegal content under the DSA, while it also faces obligations regarding its AI
tool — such as watermarking deepfakes — under the EU’s AI Act.
National authorities or prosecutors took an interest in the matter alongside
Brussels, because in some countries it’s illegal to share nudes without consent,
and because the spread of child sexual abuse material is governed by separate
laws involving national regulators.
Having a single powerful digital authority could address the fragmented
enforcement carried out by several authorities under different EU rulebooks,
according to Geese.
“It’s absolutely true that the rulebooks are scattered, that enforcement is
scattered [and] that it would be easier to have one agency,” Geese said.
“It would have made sense … to do that right away [when the laws were being
drafted], as an independent agency, a little bit out of the realm of day-to-day
politics,” she added.
“Europe urgently needs a single digital enforcement agency to provide legal
certainty and ensure EU laws work consistently across the Union,” said German
Greens European Parliament lawmaker Sergey Lagodinsky, who added that the
current enforcement landscape is “siloed, with weak coordination.”
HURDLES
A proposal to establish such a regulator would likely face opposition from EU
governments.
Last year Portugal launched a debate on whether EU countries should be able to
appoint a single digital regulator themselves, as they grappled with the
enforcement of several rulebooks.
“The central question is whether a single digital regulator should be
established, at national level, coordinating responsibilities currently spread
across multiple authorities whilst ensuring a more integrated consistent
approach to enforcement,” Portuguese Minister for State Reform Gonçalo Matias
wrote in an invitation for an October summit with 13 countries, seen by
POLITICO.
Although the pitch proved controversial, it received some support in the
summit’s final declaration. “The potential establishment of a single digital
regulator at national or EU level can consolidate responsibilities, ensure
coherent enforcement of EU digital legislation and foster an innovation-friendly
regulatory culture,” the 13 countries said.
That group didn’t include countries that are traditionally skeptical of handing
power to a Brussels-backed agency, such as Hungary, Slovakia and Poland.
Isolating tech enforcement in an independent agency could also limit the
interplay with the Commission’s other enforcement powers, such as on antitrust
matters, Mariniello argued.
Even for advocates such as Geese, there is a potential downside to reopening the
debate at such a critical moment for digital enforcement.
“The world is watching Europe to see how it responds to one of the most
egregious episodes of a large language model perpetuating gender based
violence,” she wrote in a recent opinion.
As for a new agency, “You’re gonna debate this for two or three years, with the
Council, and Hungary and Slovakia are going to say: No way. And in the meantime,
nothing happens, because that becomes the excuse: The agency is going to do it,”
Geese said.
The European Union and the United Kingdom are not ready to let Elon Musk’s Grok
off the hook for creating non-consensual nude deepfakes.
Social media platform X announced late Wednesday it would stop people from
“editing of images of real people in revealing clothing such as bikinis”
following a proliferation of sexualized images created by the Grok artificial
intelligence bot that is integrated into X.
These changes apply only to image requests made by tagging the Grok chatbot in publicly available posts, not to the Grok assistant built into X, which is separate from the public platform feed.
The move by X — which included a fresh promise of geoblocking — came in response
to mounting pressure and at least two app bans in Malaysia and Indonesia, as
well as a formal probe in the U.K.
Yet POLITICO was able to verify that users in Brussels, Paris and London could still generate images of people in bikinis on Thursday morning using the Grok assistant integrated into X, suggesting the move may not meet regulators’ demands.
Regulators said Thursday the jury is still out as to whether the changes are
sufficient.
“We will carefully assess these changes to make sure they effectively protect
citizens in the EU,” European Commission spokesperson Thomas Regnier told
POLITICO.
“Should these changes not be effective, the Commission will not hesitate to use
the full enforcement toolbox of the [Digital Services Act],” he said.
The Commission, responsible for enforcing the EU’s landmark social media
regulation on X, ordered the platform to retain all documents related to the
chatbot in response to the scandal.
Yet it has not announced any formal investigation since widespread nude deepfakes began appearing via Grok more than two weeks ago, despite strong rhetoric from EU leaders.
The EU has called the nonconsensual, sexually explicit deepfakes “illegal” and “disgusting,” with Commission President Ursula von der Leyen describing the episode as “unthinkable behavior.”
France’s digital minister, Anne Le Hénanff, said in response to the announcement
that the pressure from Paris and Brussels is “producing results.”
“Nevertheless, I remain particularly vigilant regarding the proper
implementation of the commitments made by X. This restriction measure must be
effective for all X users (subscribers and non-subscribers alike) in France,”
she said.
The U.K. launched fresh action last week. These changes are a “welcome
development” but “our formal investigation remains ongoing,” an Ofcom
spokesperson said Thursday.
The platform said Wednesday it will geoblock all “nudify” image requests in jurisdictions where such images are illegal.
Just hours before the changes to Grok were announced, Elon Musk denied that the
chatbot was used to generate illegal content.
Mizy Clifton, Océane Herrero and Emile Marzolf contributed to this report.
BRUSSELS — Elon Musk has denied that X’s artificial intelligence tool Grok generates illegal content, after a wave of AI-generated undressed and sexualized images spread on the platform.
In a fresh post Wednesday, X’s powerful owner sought to argue that users — not
the AI tool — are responsible and that the platform is fully compliant with all
laws.
“I[‘m] not aware of any naked underage images generated by Grok,” he said.
“Literally zero.”
“When asked to generate images, [Grok] will refuse to produce anything illegal,
as the operating principle for Grok is to obey the laws of any given country or
state,” he added.
“There may be times when adversarial hacking of Grok prompts does something
unexpected. If that happens, we fix the bug immediately.”
Musk’s remarks follow heightened scrutiny by both the EU and the U.K., with
Brussels describing the appearance of nonconsensual, sexually explicit deepfakes
on X as “illegal,” “appalling” and “disgusting.”
The U.K.’s communications watchdog, Ofcom, said Monday that it had launched an
investigation into X. On Wednesday, U.K. Prime Minister Keir Starmer said the
platform is “acting to ensure full compliance” with the relevant law but said
the government won’t “back down.”
The EU’s tech chief Henna Virkkunen warned Monday that X should quickly “fix”
its AI tool, or the platform would face consequences under the bloc’s platform
law, the Digital Services Act.
The Commission last week ordered X to retain all of Grok’s data and documents
until the end of the year.
Just 11 days ago, Musk said that “anyone using Grok to make illegal content will
suffer the same consequences as if they upload illegal content” in response to a
post about the inappropriate images.
The company’s safety team posted a similar line, warning that it takes action
against illegal activity, including child sexual abuse material.
BRUSSELS — The European Commission’s top tech official has warned Elon Musk’s X
to quickly “fix” its AI tool Grok — or face consequences under the controversial
Digital Services Act.
The fact that Grok allows users to generate pictures that depict women and minors undressed and sexualized is “horrendous,” said Henna Virkkunen, the Commission’s tech chief.
She urged the company to take immediate action.
“X now has to fix its AI tool in the EU, and they have to do it quickly,” she
said in a post on the platform.
If that doesn’t happen, the European Commission is ready to strike under the Digital Services Act, its law governing digital platforms.
“We will not hesitate to put the DSA to its full use to protect EU citizens.”
Under the DSA, platforms like X must address systemic risks, including those related to the spread of illegal content, or face fines of up to 6 percent of their global annual turnover.
Last month the European Commission imposed a €120 million fine on X for minor
transparency infringements, drawing howls of outrage from the Trump
administration.
The Commission ordered X last week to retain all documents and data related to
Grok until the end of this year.
LONDON — U.K. ministers are warning Elon Musk’s X it faces a ban if it doesn’t
get its act together. But outlawing the social media platform is easier said
than done.
The U.K.’s communications regulator Ofcom on Monday launched a formal
investigation into a deluge of non-consensual sexualized deepfakes produced by
X’s AI chatbot Grok amid growing calls for action from U.K. politicians.
It will determine whether the creation and distribution of deepfakes on the
platform, which have targeted women and children, constitutes a breach of the
company’s duties under the U.K.’s Online Safety Act (OSA).
U.K. ministers have repeatedly called for Ofcom, the regulator tasked with
policing social media platforms, to take urgent action over the deepfakes.
U.K. Technology Secretary Liz Kendall on Friday offered her “full support” to
the U.K. regulator to block X from being accessed in the U.K., if it chooses to.
“I would remind xAI that the Online Safety Act includes the power to block services from being accessed in the U.K., if they refuse to comply with U.K. law. If Ofcom decide to use those powers they will have our full support,” she said in a statement.
The suggestion has drawn Musk’s ire. The tech billionaire branded the British
government “fascist” over the weekend, and accused it of “finding any excuse for
censorship.”
With Ofcom testing its new regulatory powers against one of the most
high-profile tech giants for the first time, it is hard to predict what happens
next.
NOT GOING NUCLEAR — FOR NOW
Ofcom has so far avoided its smash-glass option.
Under the OSA it could seek a court order blocking “ancillary” services, like those processing subscription payments on X’s behalf, and ask internet providers to block X from operating in the U.K.
Taking that route would mean bypassing a formal investigation, but it is generally considered a last resort, according to Ofcom’s guidance. To do so, Ofcom would need to prove that the risk of harm to U.K. users is particularly great.
Before launching its investigation Monday, the regulator made “urgent contact”
with X on Jan. 5, giving the platform until last Friday to respond.
Ofcom stressed the importance of “due process” and of ensuring its
investigations are “legally robust and fairly decided.”
LIMITED REACH
The OSA covers only U.K. users, a point ministers have been keen to stress amid concerns that its interaction with the U.S. First Amendment, which guarantees free speech, could become a flashpoint in trade negotiations with Washington. That limit matters for enforcement: it’s not enough for officials or ministers to believe X has failed to protect users generally; Ofcom must show harm to U.K. users specifically.
The most egregious material might not even be on X. The child sexual abuse charity the Internet Watch Foundation said last week that its analysts had found what appeared to be Grok-produced child sexual abuse material (CSAM) on a dark web forum rather than on X itself, so it’s far from self-evident that Ofcom taking the nuclear option against X would have been legally justified.
X did not comment on Ofcom’s investigation when contacted by POLITICO, but
referred back to a statement issued on Jan. 4 about the issue of deepfakes on
the platform.
“We take action against illegal content on X, including Child Sexual Abuse
Material (CSAM), by removing it, permanently suspending accounts, and working
with local governments and law enforcement as necessary. Anyone using or
prompting Grok to make illegal content will suffer the same consequences as if
they upload illegal content,” the statement said.
BIG TEST
The OSA came into force last summer, and until now Ofcom’s enforcement actions
have focused on pornography site providers for not implementing age-checks.
Online safety campaigners have argued this indicates Ofcom is more interested in
going after low-hanging fruit than challenging more powerful tech companies. “It
has been striking to many that of the 40+ investigations it has launched so
far, not one has been directed at large … services,” the online safety campaign
group the Molly Rose Foundation said in September.
That means the X investigation is the OSA’s first big test, and it’s especially
thorny because it involves an AI chatbot. The Science, Innovation and Technology Committee wrote in a report published last summer that the legislation does not provide sufficient protections against generative AI, a point Technology Secretary Liz Kendall herself conceded in a recent evidence session.
POLITICAL RISKS
If Ofcom concludes X hasn’t broken the law there are likely to be calls from OSA
critics, both inside and outside Parliament, to return to the drawing board.
It would also put the government, which has promised to act if Ofcom doesn’t, in
a tricky spot. The PM’s spokesperson on Monday described child sexual abuse
imagery as “the worst crimes imaginable.”
Ofcom could also conclude X has broken the law, but decide against imposing
sanctions, according to its enforcement guidance.
The outcome of Ofcom’s investigation will be watched closely by the White House and is fraught with diplomatic peril for the U.K. government, which has already been criticized by Donald Trump and his allies for implementing the new online safety law.
Foreign Secretary David Lammy raised the Grok issue with U.S. Vice President JD
Vance last week, POLITICO reported.
But other Republicans are readying for a geopolitical fight: GOP Congresswoman
Anna Paulina Luna, a member of the U.S. House foreign affairs committee,
said she was drafting legislation to sanction the U.K. if X does get blocked.