BRUSSELS — In the 10 years since the Brussels terror attacks, the EU has
tightened its security strategy but the internet is opening up new threats,
according to the bloc’s counterterrorism coordinator.
Daesh is “mutating jihadism,” Bartjan Wegter told POLITICO in an interview on
the eve of the anniversary of the terrorist attacks in Brussels, which pushed
the bloc to bolster border protection and step up collaboration and
information-sharing.
The group has “calculated that it’s much more effective to radicalize people who
are already inside the EU through online environments rather than to organize
orchestrated attacks from outside our borders,” he said. “And they’re very good
at it.”
Ten years ago, two terrorists from Daesh (also known as the so-called Islamic
State) blew themselves up at Brussels Airport. Another explosion tore through a
metro car at Maelbeek station, in the heart of Brussels’ EU district. Thirty-two
people were killed, and hundreds more injured.
The attacks came just months after terrorists killed 130 people in attacks on a
concert hall, a stadium, restaurants and bars in Paris, exposing gaps in
information-sharing in the bloc’s free-travel area. The terrorists had moved
between countries, planning the attacks in one and carrying them out in another,
said Wegter, who is Dutch. “That’s where our vulnerabilities were.”
Today, violent jihadism remains a threat and new large-scale attacks can’t be
excluded. But the probability is “much, much lower today than it was 10 years
ago,” said Wegter.
In the aftermath of the attacks, the bloc changed its security strategy with a
focus on prevention and a “security reflex” across every policy field, according
to Wegter. It also stepped up police and judicial collaboration through
Europol and Eurojust, and put in place databases — including the
Schengen Information System — so countries can alert each other about
high-risk individuals, as well as an entry/exit system to monitor who enters and
leaves the free-travel area.
But the bloc is facing a new type of threat, as security officials see a gradual
increase in attempted terrorist attacks by lone actors. Much of that is being
cultivated online, and increasingly younger people are involved.
“We’ve seen cases of children 12 years old. And the radicalization process [is]
also happening faster,” Wegter said. “Sometimes we’re talking about weeks or
months.”
In 2024, a third of all arrests connected to potential terror threats were of
people aged between 12 and 20 years old, and France recorded a tripling of the
number of minors radicalized between 2023 and 2024, said Wegter.
“Just put yourself in the shoes of law enforcement … You’re dealing with young
people who spend most of their time online … Who may not have a criminal record.
Who, if they are plotting attacks, may not be using registered weapons. It’s
very hard to prevent.”
Violent jihadism is just one of the threats EU security officials worry are
being cultivated online.
Wegter said there is also an emerging trend of a violent right-wing extremist
narrative online — and to a lesser extent, violent left-wing extremism. There’s
also what he called “nihilistic extremist violence,” a new phenomenon that can
feature elements of different ideologies or a drive to overthrow the system, but
which is fundamentally about minors seeking an identity through violence.
“What we see online, some of these images are so horrible that even law
enforcement needs psychological support to see this kind of stuff,” said Wegter.
Law enforcement’s ability to get access to encrypted data and information on
people under investigation is crucial, he stressed, and he drew parallels with
the steps the EU took to secure the Schengen free movement 10 years ago.
“If you want to preserve the good things of the internet, we also need to make
sure that we have … some key mechanisms to safeguard the internet also.”
LONDON — The U.K.’s media regulator Ofcom fined 4chan £450,000 on Thursday for
failing to comply with age check requirements under the Online Safety Act.
The regulator also levied two additional fines of £50,000 and £20,000 on the
company for not assessing the risk of users encountering illegal material and
failing to specify in its terms of service how they are to be protected from
such content, respectively.
Ofcom previously fined 4chan £20,000 for failing to respond to requests for
information from the regulator.
4chan has until 2 April to implement age assurance, carry out a “suitable and
sufficient” illegal harms risk assessment, and rewrite its terms of service or
face a daily penalty of £200.
“Companies – wherever they’re based – are not allowed to sell unsafe toys to
children in the U.K. And society has long protected youngsters from things like
alcohol, smoking and gambling. The digital world should be no different,”
Suzanne Cater, Ofcom’s director of enforcement, said in a statement.
4chan did not immediately respond when contacted for comment.
Anton, a 44-year-old Russian soldier who heads a workshop responsible for
repairing and supplying drones, was at his kitchen table when he learned last
month that Elon Musk’s SpaceX had cut off access to Starlink terminals used by
Russian forces. He scrambled for alternatives, but none offered unlimited
internet, data plans were restrictive, and coverage did not extend to the areas
of Ukraine where his unit operated.
It’s not only American tech executives who are narrowing communications options
for Russians. Days later, Russian authorities began slowing down access
nationwide to the messaging app Telegram, the service that frontline troops use
to coordinate directly with one another and bypass slower chains of command.
“All military work goes through Telegram — all communication,” Anton, whose name
has been changed because he fears government reprisal, told POLITICO in voice
messages sent via the app. “That would be like shooting the entire Russian army
in the head.”
Telegram would be joining a home screen’s worth of apps that have become useless
to Russians. Kremlin policymakers have already blocked or limited access to
WhatsApp, along with parent company Meta’s Facebook and Instagram, Microsoft’s
LinkedIn, Google’s YouTube, Apple’s FaceTime, Snapchat and X, which like SpaceX
is owned by Musk. Encrypted messaging apps Signal and Discord, as well as
Japanese-owned Viber, have been inaccessible since 2024. Last month, President
Vladimir Putin signed a law requiring telecom operators to block cellular and
fixed internet access at the request of the Federal Security Service. Shortly
after it took effect on March 3, Moscow residents reported widespread problems
with mobile internet, calls and text messages across all major operators for
several days, with outages affecting mobile service and Wi-Fi even inside the
State Duma.
Those decisions have left Russians increasingly cut off from both the outside
world and one another, complicating battlefield coordination and disrupting
online communities that organize volunteer aid, fundraising and discussion of
the war effort. Deepening digital isolation could turn Russia into something
akin to “a large, nuclear-armed North Korea and a junior partner to China,”
according to Alexander Gabuev, the Berlin-based director of the Carnegie Russia
Eurasia Center.
In April, the Kremlin is expected to escalate its campaign against Telegram —
already one of Russia’s most popular messaging platforms, but now, in the
absence of other social-media options, a central hub for news, business and
entertainment. It may block the platform altogether. That is likely to fuel an
escalating struggle between state censorship and the tools people use to evade
it, with Russia’s place in the world hanging in the balance.
“It’s turned into a war,” said Mikhail Klimarev, executive director of the
Internet Protection Society, a digital rights group that monitors Russia’s
censorship infrastructure. “A guerrilla war. They hunt down the VPNs they can
see, they block them — and the ‘partisans’ run, build new bunkers, and come
back.”
THE APP THAT RUNS THE WAR
On Feb. 4, SpaceX tightened the authentication system that Starlink terminals
use to connect to its satellite network, introducing stricter verification for
registered devices. The change effectively blocked many terminals operated by
Russian units relying on unauthorized connections, cutting Starlink traffic
inside Ukraine by roughly 75 percent, according to internet traffic analysis
by Doug Madory, an analyst at the U.S. network monitoring firm Kentik.
The move threw Russian operations into disarray, allowing Ukraine to make
battlefield gains. Russia has turned to a workaround widely used before
satellite internet was an option: laying fiber-optic lines, from rear areas
toward frontline battlefield positions.
Until then, Starlink terminals had allowed drone operators to stream live video
through platforms such as Discord, which is officially blocked in Russia but
still sometimes used by the Russian military via VPNs, to commanders at multiple
levels. A battalion commander could watch an assault unfold in real time and
issue corrections — “enemy ahead” or “turn left” — via radio or Telegram. What
once required layers of approval could now happen in minutes.
Satellite-connected messaging apps became the fastest way to transmit
coordinates, imagery and targeting data.
But on Feb. 10, Roskomnadzor, the Russian communications regulator, began
slowing down Telegram for users across Russia, citing alleged violations of
Russian law. Russian news outlet RBC reported, citing two sources, that
authorities plan to shut down Telegram in early April — though not on the front
line.
In mid-February, Digital Development Minister Maksut Shadayev said the
government did not yet intend to restrict Telegram at the front but hoped
servicemen would gradually transition to other platforms. Kremlin spokesperson
Dmitry Peskov said this week the company could avoid a full ban by complying
with Russian legislation and maintaining what he described as “flexible contact”
with authorities.
Roskomnadzor has accused Telegram of failing to protect personal data, combat
fraud and prevent its use by terrorists and criminals. Similar accusations have
been directed at other foreign tech platforms. In 2022, a Russian court
designated Meta an “extremist organization” after the company said it would
temporarily allow posts calling for violence against Russian soldiers in the
context of the Ukraine war — a decision authorities used to justify blocking
Facebook and Instagram in Russia and increasing pressure on the company’s other
services, including WhatsApp.
Telegram founder Pavel Durov, a Russian-born entrepreneur now based in the
United Arab Emirates, says the throttling is being used as a pretext to push
Russians toward a government-controlled messaging app designed for surveillance
and political censorship.
That app is MAX, which was launched in March 2025 and has been compared to
China’s WeChat in its ambition to anchor a domestic digital ecosystem.
Authorities are increasingly steering Russians toward MAX through employers,
neighborhood chats and the government services portal Gosuslugi — where citizens
retrieve documents, pay fines and book appointments — as well as through banks
and retailers. The app’s developer, VK, reports rapid user growth, though those
figures are difficult to independently verify.
“They didn’t just leave people to fend for themselves — you could say they led
them by the hand through that adaptation by offering alternatives,” said Levada
Center pollster Denis Volkov, who has studied Russian attitudes toward
technology use. The strategy, he said, has been to provide a Russian or
state-backed alternative for the majority, while stopping short of fully
criminalizing workarounds for more technologically savvy users who do not want
to switch.
Elena, a 38-year-old Yekaterinburg resident whose surname has been withheld
because she fears government reprisal, said her daughter’s primary school moved
official communication from WhatsApp to MAX without consulting parents. She
keeps MAX installed on a separate tablet that remains mostly in a drawer — a
version of what some Russians call a “MAXophone,” a device used solely for that
app and kept free of any other data, out of the (very real) fear that the
government could access it.
“It works badly. Messages are delayed. Notifications don’t come,” she said. “I
don’t trust it … And this whole situation just makes people angry.”
THE VPN ARMS RACE
Unlike China’s centralized “Great Firewall,” which filters traffic at the
country’s digital borders, Russia’s system operates internally. Internet
providers are required to route traffic through state-installed deep packet
inspection equipment capable of controlling and analyzing data flows in real
time.
“It’s not one wall,” Klimarev said. “It’s thousands of fences. You climb one,
then there’s another.”
The architecture allows authorities to slow services without formally banning
them — a tactic used against YouTube before its web address was removed from
government-run domain-name servers last month. Russian law explicitly provides
government authority for blocking websites on grounds such as extremism,
terrorism, illegal content or violations of data regulations, but it does not
clearly define throttling — slowing traffic rather than blocking it outright —
as a formal enforcement mechanism. “The slowdown isn’t described anywhere in
legislation,” Klimarev said. “It’s pressure without procedure.”
In September, Russia banned advertising for virtual private network services
that citizens use to bypass government-imposed restrictions on certain apps or
sites. By Klimarev’s estimate, roughly half of Russian internet users now know
what a VPN is, and millions pay for one. Polling last year by the Levada Center,
Russia’s only major independent pollster, suggests regular use is lower, finding
about one-quarter of Russians said they have used VPN services.
Russian courts can treat the use of anonymization tools as an aggravating factor
in certain crimes — steps that signal growing pressure on circumvention
technologies without formally outlawing them. In February, the Federal
Antimonopoly Service opened what appears to be the first case against a media
outlet for promoting a VPN after the regional publication Serditaya Chuvashiya
advertised such a service on its Telegram channel.
Surveys in recent years have shown that many Russians, particularly older
citizens, support tighter internet regulation, often citing fraud, extremism and
online safety. That sentiment gives authorities political space to tighten
controls even when the restrictions are unpopular among more technologically
savvy users.
Even so, the slowdown of Telegram drew criticism from unlikely quarters,
including Sergei Mironov, a longtime Kremlin ally and leader of the Just Russia
party. In a statement posted on his Telegram channel on Feb. 11, he blasted the
regulators behind the move as “idiots,” accusing them of undermining soldiers at
the front. He said troops rely on the app to communicate with relatives and
organize fundraising for the war effort, warning that restricting it could cost
lives. While praising the state-backed messaging app MAX, he argued that
Russians should be free to choose which platforms they use.
Pro-war Telegram channels frame the government’s blocking techniques as sabotage
of the war effort. Ivan Philippov, who tracks Russia’s influential military
bloggers, said the reaction inside that ecosystem to news about Telegram has
been visceral “rage.”
Unlike Starlink, whose cutoff could be blamed on a foreign company, restrictions
on Telegram are viewed as self-inflicted. Bloggers accuse regulators of
undermining the war effort. Telegram is used not only for battlefield
coordination but also for volunteer fundraising networks that provide basic
logistics the state does not reliably cover — from transport vehicles and fuel
to body armor, trench materials and even evacuation equipment. Telegram serves
as the primary hub for donations and reporting back to supporters.
“If you break Telegram inside Russia, you break fundraising,” Philippov said.
“And without fundraising, a lot of units simply don’t function.”
Few in that community trust MAX, citing technical flaws and privacy concerns.
Because MAX operates under Russian data-retention laws and is integrated with
state services, many assume their communications would be accessible to
authorities.
Philippov said the app’s prominent defenders are largely figures tied to state
media or the presidential administration. “Among independent military bloggers,
I haven’t seen a single person who supports it,” he said.
Small groups of activists attempted to organize rallies in at least 11 Russian
cities, including Moscow, Irkutsk and Novosibirsk, in defense of Telegram.
Authorities rejected or obstructed most of the proposed demonstrations — in some
cases citing pandemic-era restrictions, weather conditions or vague security
concerns — and in several cases revoked previously issued permits. In
Novosibirsk, police detained around 15 people ahead of a planned rally. Although
a small number of protests were formally approved, no large-scale demonstrations
ultimately took place.
THE POWER TO PULL THE PLUG
The new law signed last month allows Russia’s Federal Security Service to order
telecom operators to block cellular and fixed internet access. Peskov, the
Kremlin spokesman, said subsequent shutdowns of service in Moscow were linked to
security measures aimed at protecting critical infrastructure and countering
drone threats, adding that such limitations would remain in place “for as long
as necessary.”
In practice, the disruptions rarely amount to a total communications blackout.
Most target mobile internet rather than all services, while voice calls and SMS
often continue to function. Some domestic websites and apps — including
government portals or banking services — may remain accessible through
“whitelists,” meaning authorities allow certain services to keep operating even
while broader internet access is restricted. The restrictions are typically
localized and temporary, affecting specific regions or parts of cities rather
than the entire country.
Internet disruptions have increasingly become a tool of control beyond
individual platforms. Research by the independent outlet Meduza and the
monitoring project Na Svyazi has documented dozens of regional internet
shutdowns and mobile network restrictions across Russia, with disruptions
occurring regularly since May 2025.
The communications shutdown, and uncertainty around where it will go next, is
affecting life for citizens of all kinds, from the elderly struggling to contact
family members abroad to tech-savvy users who juggle SIM cards and secondary
phones to stay connected. Demand has risen for dated communication devices —
including walkie-talkies, pagers and landline phones — along with paper maps as
mobile networks become less reliable, according to retailers interviewed by RBC.
“It feels like we’re isolating ourselves,” said Dmitry, 35, who splits his time
between Moscow and Dubai and whose surname has been withheld because he fears
government reprisal. “Like building a sovereign grave.”
Those who track Russian public opinion say the pattern is consistent: irritation
followed by adaptation. When Instagram and YouTube were blocked or slowed in
recent years, their audiences shrank rapidly as users migrated to alternative
services rather than mobilizing against the restrictions.
For now, Russia’s digital tightening resembles managed escalation rather than
total isolation. Officials deny plans for a full shutdown, and even critics say
a complete severing would cripple banking, logistics and foreign trade.
“It’s possible,” Klimarev said. “But if they do that, the internet won’t be the
main problem anymore.”
LONDON — Keir Starmer wants the public to know he’s going to move fast and fix
things.
Speaking to an audience of young people last month, the U.K. prime minister said
that unlike the previous Conservative government, which took eight years to pass
the country’s Online Safety Act, Labour will legislate fast enough to keep
up with the breakneck speed of technological change and its associated harms.
“We’ve taken the powers to make sure we can act within months, not years,” he
said.
His words came after the government decried Elon Musk’s X for
allowing deepfaked nude images to flood its platform. “The action we took on
Grok sent a clear message that no platform gets a free pass,” Starmer said.
Labour showcased its bold new approach last week,
tabling two legislative amendments that seek to grant ministers sweeping powers
to change the U.K.’s online safety regime without needing to pass primary
legislation through Parliament — meaning MPs and peers would have next to no
opportunity for scrutiny.
While Labour argues this is necessary to deal with the onslaught of online harms
brought about by technology — particularly AI — digital rights activists and
civil liberties campaigners fear executive overreach, and say Labour is
confusing fast action for good policy, especially as it mulls the possibility of
a social media ban for under-16s.
GOVERNMENT HANDS ITSELF NEW POWERS
The first amendment, to the Crime and Policing Bill, would empower any senior
government minister to amend the Online Safety Act near unilaterally for the
purposes of “minimizing or mitigating the risks of harm to individuals”
presented by illegal AI-generated content.
The second amendment, to the Children’s Wellbeing and Schools Bill, looks to go
even further, giving ministers the ability to alter any piece of primary
legislation to restrict children’s access to “certain internet services.”
The Department for Science, Innovation and Technology (DSIT) has said it wants
to act “at pace” in response to the findings of its consultation, the “key
focus” of which is whether to ban social media for under-16s, a policy idea
which has picked up momentum in multiple countries since Australia introduced a
ban at the end of last year.
Amendments like those tabled this week are commonly referred to as Henry VIII
clauses, which allow ministers to largely bypass Parliament. They are
not entirely new: successive governments since the 1980s have increasingly
relied on statutory instruments for lawmaking, according to the Institute for
Government.
But such clauses bring problems that could last long after Starmer’s
premiership. The government may have good intentions when it comes to online
safety, but the measures proposed are “storing up trouble for years to come at a
very worrying moment where anti-democratic parties [around the world] are
gaining traction,” Anna Cardaso, policy and campaigns officer at civil liberties
organisation Liberty, told POLITICO.
“When you create a law, you have to think about what a future government could
do with those powers. A future government might not be motivated purely by
reducing harms to children, or might have a very different view of what counts
as harm,” agreed James Baker, advocacy manager at digital rights
organisation Open Rights Group.
Baker pointed to steps taken by the Trump administration in the U.S. to target
websites hosting LGBTQ+ content and reproductive health advice.
There are also questions to be asked about proportionality under the Human
Rights Act, he argued, not least because the evidence base on how children are
affected by social media is muddy at best — a DSIT-commissioned study published
in January found little high-quality evidence of a correlation between time
spent on social media and poorer reported mental health, for example.
Although the government hopes its use of Henry VIII powers will speed things
up, the move is vulnerable to challenge in the courts — not only from human
rights campaigners concerned about the impact on privacy and freedom of
expression, but also from tech companies navigating any new regulations.
“The inevitable consequence of such broad regulatory discretion is an explosion
in litigation,” Oliver Carroll, legal director at law firm Bird & Bird, said.
‘FIRE-FIGHTING’
The government has backed away from plans to introduce primary legislation
dedicated to artificial intelligence, with ministers instead looking to regulate
AI at the point of use on a sector-by-sector basis.
Primary legislation on AI would have allowed parliamentarians and other
stakeholders to “debate and hammer out the fundamental principles and a
framework of regulation,” Liberty’s Cardaso said. “But instead, they’ve
dodged the hard thing, and they’re just firefighting emergency by emergency by
statutory instrument.”
The Children’s Wellbeing and Schools Bill amendment gets its first outing in the
House of Commons today, where it stands a good chance of surviving thanks to
Labour’s 158-seat majority. Both amendments will also have to pass the House of
Lords, where they could meet more resistance.
DSIT did not respond when contacted by POLITICO for comment.
LONDON — Labour peer Margaret Hodge is among the candidates vying to be the next
chair of the media regulator Ofcom.
Hodge, who was the MP for Barking until 2024 and has supported stricter social
media regulation, was among the candidates interviewed for the role last week,
according to two people familiar with the appointment process, granted anonymity
because they are not authorized to speak on the record.
Hodge, a veteran Labour politician who has spoken about her experience of online
abuse, would be another political appointment to the £120,000-a-year role at a
crucial time for the independent regulator.
The previous Conservative government appointed Michael Grade, a Tory peer, as
chair in 2022. His term ends on April 26, and the Department for Science,
Innovation and Technology, which is leading the recruitment process, hopes to
announce his replacement before then.
The interview panel, which is made up of civil servants and independent members,
will now hand Technology Secretary Liz Kendall a shortlist of approved
candidates.
Former Conservative Culture Secretary Jeremy Wright is also in the running,
according to the same two people. Wright, one of the architects of
the Online Safety Act (OSA), has been critical of Ofcom’s implementation of the
flagship law.
The Telegraph newspaper has reported that Channel 4’s former chairman, Ian
Cheshire, is also on the shortlist.
Kendall has also been critical of Ofcom for not implementing parts of the OSA
quickly enough. She warned last November that it risks losing public trust.
Ofcom, which also regulates TV and radio, is about to embark on a major review
of the telecoms sector, which is being upended by developments in artificial
intelligence and satellite technology.
A DSIT spokesperson said they were unable to comment on the recruitment process.
Hodge did not immediately respond to a request for comment.
LONDON — U.K. Prime Minister Keir Starmer on Monday said the government “need to
look at” social media design features like infinite scroll as part of action to
encourage healthier habits for children online.
Speaking at an event on Monday morning, Starmer said he was concerned that even
exposure to ostensibly non-harmful online content could be problematic for kids’
development.
The U.K. government is due to launch a consultation in the next few weeks into
children’s online safety which will specifically consider whether to ban
under-16s from social media. The government on Monday announced it would give
itself powers to swiftly enact findings from that consultation, which will last
three months.
Starmer said that “there will be action coming out of this consultation,” even
if that’s not an outright ban, and suggested that specific features including
infinite scrolling and autoplay could be targeted.
“Some of the addictive features on social media that mean you never stop
scrolling, or once you watch one thing, another thing comes up and you’re
on your screen the whole time, we need to look at that, because even if it’s
good stuff, the question is, how do we get people off it and not simply on their
screen?” Starmer said.
He reiterated the point in an interview with Radio 2.
“Yes, there’s the sort of overarching question of whether under-16s should be on
social media at all … There are features within social media that are intended
to make it addictive, so the sort of constant scrolling, the sort of autoplay
for the next thing … all of these are designed to keep young people on-screen, not
off-screen. And we have to tackle that,” Starmer said.
This comes after the European Commission made a preliminary finding earlier this
month that TikTok’s infinite scroll and autoplay features breached Europe’s
Digital Services Act.
MUNICH, Germany — The U.S. is not interfering in European politics, a senior
U.S. State Department official told POLITICO on Saturday, despite reported
efforts by the Trump administration to fund MAGA-aligned organizations on the
continent.
Speaking at the POLITICO Pub on the sidelines of the Munich Security Conference,
U.S. Under Secretary of State Sarah Rogers pushed back on a Financial Times
report that she had backed a program to fund far-right think tanks and
institutes in Europe.
“The idea that we have a slush fund for the far right is a lie,” Rogers said.
“It’s not America’s decision to govern who’s elected in Europe.”
The message from Rogers appeared to be another sign of the Trump administration
trying to send conciliatory signals to Europe, despite the recently published
National Security Strategy calling on the U.S. to “cultivate resistance” to the
political status quo on the continent. And it came just hours after Secretary of
State Marco Rubio called for a “strong and revitalized Europe” on the Munich
stage.
Rogers has courted controversy by taking to her official social media accounts
to launch public attacks, from characterizing immigrants to Germany as “imported
barbarian rapist hordes” to connecting Sweden’s migration policy to instances of
sexual violence, and for her sharp rebukes of social media regulations in the EU
and the United Kingdom.
After U.S. Vice President JD Vance’s searing Munich speech last year criticizing
European democracies for ostensibly pushing back on free speech rights in
efforts to crack down on election interference, Rogers indicated that the U.S.
is still making a list of which allies have been naughty and nice, but used a
gentler tone.
“In terms of who’s a good ally, we certainly have views on that, but whoever’s
elected, we will work with them,” she said.
At Munich, she has faced questions over whether rising far-right European
parties, such as Germany’s Alternative for Germany (AfD) and France’s National
Rally, might share U.S. priorities when it comes to beefing up defense.
Many right-wing parties have qualms over higher military spending and many also
have warm relations with the Kremlin.
Rogers said that despite holding meetings with an AfD spokesperson last year,
she has also talked with the British and French governments.
“I’m a diplomat,” Rogers said. “It’s my job to meet with people that disagree
with us on at least some things.”
The White House also has disagreements with would-be European allies on the
right, she said, and there is some common ground on efforts to crack down on AI
deepfakes and sexual exploitation on social media.
“We certainly don’t disagree that defamatory sexualized deepfakes are a serious
issue, possibly addressable by law,” she added.
LONDON — The U.K.’s data protection watchdog has opened a formal investigation
into Elon Musk’s companies X and xAI, over the use of personal data by the Grok
AI system to generate a flood of sexualized deepfakes.
In a statement on Tuesday, the Information Commissioner’s Office said the
“reported creation and circulation of such content raises serious concerns under
U.K. data protection law and presents a risk of significant potential harm to
the public.”
“These concerns relate to whether personal data has been processed lawfully,
fairly and transparently, and whether appropriate safeguards were built into
Grok’s design and deployment to prevent the generation of harmful manipulated
images using personal data,” it said.
The formal investigation follows an announcement last month that the ICO was
seeking urgent information from X and xAI, amid widespread reports that Grok had
been used to generate sexualized images of children and adults.
William Malcolm, executive director for regulatory risk and innovation at the
ICO, said the reports about Grok “raise deeply troubling questions about how
people’s personal data has been used.”
“Losing control of personal data in this way can cause immediate and significant
harm. This is particularly the case where children are involved,” Malcolm said.
“Where we find obligations have not been met, we will take action to protect the
public.”
While the ICO’s investigation will focus on X and xAI’s compliance with U.K.
data protection law, Malcolm said it would work closely with other regulators in
the U.K. and abroad that are also investigating the issue.
Ofcom, the U.K.’s communications regulator, opened a formal investigation into X
last month under the Online Safety Act. That investigation is ongoing, Ofcom
said on Tuesday. It is progressing “as a matter of urgency” but could take
“months,” Ofcom added, noting that it must follow a “fair process” and “it would
not be appropriate to provide a running commentary.”
Ofcom also said it is not currently investigating xAI, which provides the
standalone Grok AI tool, noting that “it can only take action on online harms
covered by the [OSA].” The act does not apply to AI tools which do not involve
searching the internet, interacting with other social media users, or generating
pornography, it said.
The U.K.’s Technology Secretary Liz Kendall has previously said she is assessing
options to address “gaps” in the OSA.
The European Commission announced its own probe into X last month, while French
authorities searched X’s offices in Paris on Tuesday as part of their own
criminal investigation into Grok, POLITICO reported.
X did not immediately respond when contacted for comment.
LONDON — Pornhub will no longer be fully available in the U.K. from Feb. 2, its
parent company Aylo announced Tuesday, citing the consequences of Britain’s
Online Safety Act.
Aylo said it made an effort to comply after the act’s Children’s Codes came into
force last summer, requiring adult sites to have highly effective age-assurance.
But visitors — both adults and under-18s — are flocking to non-compliant sites
en masse, Alexzandra Kekesi, vice president of brand and community at Aylo,
said.
Despite sharing these findings with the Department for Science, Innovation and
Technology and the U.K.’s communications watchdog Ofcom, “we’re still continuing
to see more of the same,” she said. Aylo says users who go through age assurance
prior to the Feb. 2 cut-off date will still be able to access the site.
During a press conference, Aylo’s lawyers were keen to argue that the blame for
its decision should be put at the government’s feet, rather than Ofcom’s, and
argued only device-based age-assurance by the likes of Google, Apple, and
Microsoft would solve the problem.
“This law, not our regulator, this law by its very nature is pushing both adults
and children alike to the cesspools of the internet, to the most dangerous
material possible,” said Solomon Friedman, a partner at Ethical Capital Partners and
a lawyer representing Aylo.
“And while there [were] six months by Aylo of good faith effort to be part of
this ecosystem, to gather data and share it with the government, the data now
really speaks for itself. This law not only is not protecting children, it’s
putting children and adults in greater danger online,” he added.
LONDON — The U.K. government’s upcoming ban on nudification apps won’t apply to
general-purpose AI tools like Elon Musk’s Grok, according to Tech Secretary Liz
Kendall.
The ban will “apply to applications that have one despicable purpose only: to
use generative AI to turn images of real people into fake nude pictures and
videos without their permission,” Kendall said in a letter to Science,
Innovation and Technology committee chair Chi Onwurah published Wednesday.
Grok, which is made by Musk’s AI company xAI but is also accessible inside his
social media platform X, has sparked a political uproar because it has been used
to create a wave of sexualized nonconsensual deepfakes, many targeting women and
some children.
But Grok can be used to generate a wide range of images and has other
functionalities, including text generation, so does not have the sole purpose of
generating sexualized or nude images.
The U.K. government announced its plan to ban nudification apps in December,
before the Grok controversy took off, but Kendall has given it as an example of
ways that the government is cracking down on AI-generated intimate image abuse.
Kendall said the nudification ban will be put into effect through the Crime and
Policing Bill, which is currently at committee stage.
The Department for Science, Innovation and Technology did not immediately
respond when contacted by POLITICO for comment.
The U.K.’s media regulator Ofcom launched an investigation into X on Monday to
determine whether the platform has complied with its duties under the Online
Safety Act to protect British users from illegal content. The U.K. government
has said Ofcom has its full support to use whatever enforcement tools it deems
fit, which could include blocking X in the U.K. or issuing a fine.