European countries should not rush into social media bans for children, human
rights adviser Michael O’Flaherty told POLITICO.
The comments come as many EU countries push to restrict minors’ access to social
media, citing mental health concerns. In France, the parliament’s upper house is
this week debating restrictions that President Emmanuel Macron has said will be
in place as soon as September.
Such bans are neither “proportionate nor necessary,” said O’Flaherty, the
commissioner for human rights at the Council of Europe, the continent’s top
human rights body, adding that there “are other ways to address the curse of
abusive material online.”
The debate on how to protect children from the harms of social media “goes
straight to bans without looking at all the other options that could be in
play,” he told POLITICO. Restricting access to social media presents “issues of
human rights, because a child has a right to receive information just like
anybody else.”
O’Flaherty’s concerns come amid live discussions on the merits and effectiveness
of bans in Europe. Australia became the first country in the world to ban minors
under 16 from creating accounts on social media platforms like Instagram in late
2025, and Brazil moved forward with its own measures last week.
Now France, Denmark, Spain and Greece are among the EU countries heading toward
bans, albeit on different timelines.
Proponents argue that setting a minimum age for the most addictive social media platforms is vital to protect children’s physical and mental health.
Critics say that bans are ineffective and harm privacy because they require all users to verify their identity online.
O’Flaherty argued that — while children’s rights to access information could be
curtailed if that overall limited their risks — any restrictions need to be
proportionate and necessary.
Any restrictions must follow a serious effort by the EU to tackle illegal and harmful content on social media, he said, which hasn’t happened yet. “We haven’t remotely tried hard enough yet to ensure effective oversight of the platforms.”
The human rights chief praised the EU’s digital laws as world-leading, including
the Digital Services Act, which seeks to protect kids from systemic risks on
online platforms — but said it wasn’t being policed strongly enough.
“We have a very piecemeal enforcement of the Digital Services Act and the other
relevant rulebook right across Europe. It’s very much dependent on the goodwill
and the capacity of the different governments to be serious about it,” he said.
Governments have “an uneven record” in that regard, he said.
EU countries must make sure they have exhausted all other solutions before
heading for the extreme measures of bans, he said. “I don’t see much sign of
that effort.”
The European Commission, in charge of enforcing the DSA on large social media
platforms, is considering its own measures. Countries like Greece have called on
the Commission to go forth with an EU-wide ban to avoid fragmentation across the
bloc.
President Ursula von der Leyen has convened a panel of experts to advise her on
next steps, which is expected to give its results by the summer.
BRUSSELS — In the 10 years since the Brussels terror attacks, the EU has
tightened its security strategy but the internet is opening up new threats,
according to the bloc’s counterterrorism coordinator.
Daesh is “mutating jihadism,” Bartjan Wegter told POLITICO in an interview on
the eve of the anniversary of the terrorist attacks in Brussels, which pushed
the bloc to bolster border protection and step up collaboration and
information-sharing.
The group has “calculated that it’s much more effective to radicalize people who
are already inside the EU through online environments rather than to organize
orchestrated attacks from outside our borders,” he said. “And they’re very good
at it.”
Ten years ago, two terrorists from Daesh (also known as the so-called Islamic
State) blew themselves up at Brussels Airport. Another explosion tore through a
metro car at Maelbeek station, in the heart of Brussels’ EU district. Thirty-two
people were killed, and hundreds more injured.
The attacks came just months after terrorists killed 130 people in attacks on a
concert hall, a stadium, restaurants and bars in Paris, exposing gaps in
information-sharing in the bloc’s free-travel area. The terrorists had moved
between countries, planning the attacks in one and carrying them out in another,
said Wegter, who is Dutch. “That’s where our vulnerabilities were.”
Today, violent jihadism remains a threat and new large-scale attacks can’t be
excluded. But the probability is “much, much lower today than it was 10 years
ago,” said Wegter.
In the aftermath of the attacks, the bloc changed its security strategy with a
focus on prevention and a “security reflex” across every policy field, according
to Wegter. It’s also stepping up police and judicial collaboration through
Europol and Eurojust, and it’s putting in place databases — including the
Schengen Information System — so countries can alert each other about high-risk individuals, as well as an entry/exit system to monitor who enters and leaves the free-travel area.
But the bloc is facing a new type of threat, as security officials see a gradual
increase in attempted terrorist attacks by lone actors. A lot of that is being cultivated online, and increasingly younger people are involved.
“We’ve seen cases of children 12 years old. And, the radicalization process [is]
also happening faster,” Wegter said. “Sometimes we’re talking about weeks or
months.”
In 2024, a third of all arrests connected to potential terror threats were of
people aged between 12 and 20 years old, and France recorded a tripling of the
number of minors radicalized between 2023 and 2024, said Wegter.
“Just put yourself in the shoes of law enforcement … You’re dealing with young
people who spend most of their time online … Who may not have a criminal record.
Who, if they are plotting attacks, may not be using registered weapons. It’s
very hard to prevent.”
Violent jihadism is just one of the threats EU security officials worry are
being cultivated online.
Wegter said there is also an emerging trend of a violent right-wing extremist
narrative online — and to a lesser extent, violent left-wing extremism. There’s
also what he called “nihilistic extremist violence,” a new phenomenon that can
feature elements of different ideologies or a drive to overthrow the system, but
which is fundamentally minors seeking an identity through violence.
“What we see online, some of these images are so horrible that even law
enforcement needs psychological support to see this kind of stuff,” said Wegter.
Law enforcement’s ability to get access to encrypted data and information on
people under investigation is crucial, he stressed, and he drew parallels with
the steps the EU took to secure the Schengen free movement 10 years ago.
“If you want to preserve the good things of the internet, we also need to make
sure that we have … some key mechanisms to safeguard the internet also.”
Europe has split over “mass surveillance,” to quote the European Parliament’s words on the CSAR (Child Sexual Abuse Regulation). The aim is to let tech platforms scan users’ chat and email messages to combat the steadily growing scourge of online child sexual abuse material. That means email under scrutiny, but also messages on WhatsApp, Messenger, Instagram, Signal and Telegram. At stake is the right to privacy of 450 million Europeans.
Given the stakes, the deal between the Parliament and the EU governments gathered in the Council collapsed on March 16: after negotiations opened four days earlier, the trilogue officially failed. What happened? “With their lack of flexibility, the member states deliberately accepted that the interim regulation will expire in April,” said the European Parliament’s German rapporteur Birgit Sippel (S&D, Socialists and Democrats). The result? “Voluntary scanning by providers to counter the online spread of child sexual abuse material will no longer be possible.” In essence, the EU Council deemed the checks authorized by the Parliament too weak, to the point of preferring to scrap them entirely and sink the negotiations with Strasbourg. According to European Parliament sources, EU governments within the Council have very rarely taken such rigid positions.
Potentially, the halt to message scanning is a blow to the fight against sexual crimes targeting minors. Digital rights activists, however, are breathing a sigh of relief. The Five Star Movement is also celebrating, through Gaetano Pedullà: “Protecting children is a sacrosanct duty, but for once we welcome the opposition of the member states, which prevented chat control I from entering into force. The protection of minors cannot turn into a system of mass surveillance of European citizens, with the end of privacy and of the right to confidential communication.”
MESSAGE SCANNING AS A DEROGATION FROM PRIVACY RIGHTS: NEGOTIATIONS UNDERWAY ON THE FINAL REGULATION
The clash between the Parliament and the governments concerns the derogation from the 2002 ePrivacy Directive. For 24 years that European measure has barred intrusions into private online messages in the name of confidentiality. But since 2021 a derogation has been in force to fight online sexual abuse of minors, which is rising sharply. Under the exception, digital platforms may access users’ messages and report suspected cases of sexual abuse to law enforcement. Facebook already scans communications in search of child sexual abuse material: 95 percent of reports come from Zuckerberg’s giant. But the platforms’ choice to look through the “peephole” is voluntary; there is no legal obligation. The derogation from privacy protections was extended in 2024 and will expire in April 2026. After that, content scanning stops. The European Commission had instead pushed for its continuation with the text formally proposed on Dec. 19, 2025, warning of the risks of a halt: “This would make it easier for predators to spread child sexual abuse material, to go unpunished and to groom children in the EU. Proactive detection by online service providers has been fundamental for more than 15 years in saving children from ongoing abuse and bringing perpetrators to justice.” In reality, platforms have been allowed to access messages not for 15 years but for five, since the derogation entered into force in 2021. The following year, the European Commission proposed the CSAR regulation to turn the temporary derogation into lasting law. Digital rights activists, led by Patrick Breyer, a former MEP for the German Pirate Party, rose up against CSAR, dubbing it “chat control 2.0.” Version 1.0, for privacy defenders, is the temporary derogation from confidentiality protections.
THE CHAT CONTROL NEGOTIATIONS: THE PARLIAMENT ON ONE SIDE, THE COMMISSION ON THE OTHER
The Berlaymont’s proposed regulation, signed by the Swedish Social Democrat Ylva Johansson, had already been rejected by the European Parliament in November 2023, branded as “mass surveillance”: scanning would have been mandatory for the platforms, not optional. The Parliament’s text accordingly narrows the scope of message checks considerably. On “chat control 2.0,” for a long time not even the Council could reach agreement, until November 2025, when a deal was struck on the text signed by the Danish Prime Minister Mette Frederiksen, a Social Democrat like Johansson. After three years of talks, the governments found a compromise by adopting the core principle of the interim derogation: no obligation for platforms to scan users’ messages (as the Commission wanted), but a voluntary choice. A step back from the Berlaymont’s text. Trilogues on the final regulation are now underway, but the failure of the “chat control 1.0” negotiations suggests new obstacles on the road to approval.
On one side, the Council and the Commission are pushing to allow message scanning against online child sexual abuse material, at least on a voluntary basis. On the other, the Parliament is trying to temper the platforms’ checks, heeding the privacy concerns raised by jurists and institutions across the continent. On March 11 the Parliament passed the Greens’ amendment, which overhauled the Commission’s text renewing the extension of chat and email checks. The amendment restricted the checks to users “identified by the competent judicial authority” for whom there are “reasonable grounds to suspect a link, even indirect, with child sexual abuse material.” No indiscriminate checks on all users, then. What’s more, end-to-end encrypted services such as WhatsApp and Signal were also excluded from scanning. The result: on March 11 the Parliament renewed the derogation allowing chat checks, at the cost of substantially weakening the Commission’s text. After all, on March 2 the LIBE Committee had already rejected the Berlaymont’s proposal. Nine days later, the “light” version passed in Strasbourg with 458 votes in favor, 103 against and 63 abstentions. A cross-party rejection, then, from right to left, of indiscriminate checks by the (mostly American) tech platforms on European citizens’ messages, even though the aim is to combat online sexual abuse of minors. The EU Council reacted by sinking the trilogue negotiations. In other words: better no checks at all than the Parliament’s watered-down version.
The article “Chat control: Europe splits over mass surveillance in the name of minors. Message scanning to stop in April” originally appeared in Il Fatto Quotidiano.
BRUSSELS — Most Europeans believe the U.S. could pull the plug on technology
that Europe heavily relies on, according to a new poll.
Eighty-six percent of people think a sudden U.S. move to restrict Europe’s
access to digital services is “plausible” and “should not be ruled out,” and 59
percent called it “already a real and concrete risk,” in a survey conducted by
SWG and Polling Europe presented to European Parliament members this week.
European governments are trying to reduce their dependency on U.S. technology
for critical services like cloud, communications and AI.
One fear driving the shift to homegrown tech is that of a “kill switch”: the idea that U.S. President Donald Trump could force the hand of American tech
providers to cease services in Europe. Those fears peaked when the International
Criminal Court’s Chief Prosecutor Karim Khan lost access last year to his
Microsoft-hosted email account after the U.S. imposed sanctions on him.
“During the last year, everybody has really realized how important it is that we
are not dependent on one country or one company when it comes to some very
critical technologies,” the EU’s tech chief Henna Virkkunen told an audience in
Brussels earlier this year, at an event organized by POLITICO.
“In these times … dependencies, they can be weaponized against us,” Virkkunen
said.
The survey quizzed 5,079 respondents across all 27 EU member countries in
January. For 55 percent of those interviewed, charting a “European path” has
become a “central strategic issue.”
The European Parliament and a series of national government institutions have
already taken steps to move away from ubiquitous U.S. tech — though EU capitals
have cautioned the transition won’t happen overnight.
The European Commission is also finalizing a set of proposals due in late May to
reduce reliance on foreign tech, including defining what qualifies as a
sovereign provider and which critical sectors should rely exclusively on them to
safeguard European data and day-to-day operations.
The poll suggests U.S. efforts to debunk and dismiss the “kill switch” scenario
haven’t convinced Europeans.
U.S. National Cyber Director Sean Cairncross told an audience in Munich in
February that the idea that Trump can pull the plug on the internet is not “a
credible argument.”
Microsoft President Brad Smith said in Brussels last year that the “kill switch”
scenario was “exceedingly unlikely” to happen, but acknowledged it’s “a real
concern of people across Europe.” He pledged to push back against any
prospective orders to suspend operations in Europe.
U.S. firms at the same time are rushing to assuage the concerns with safeguards,
like air-gapped solutions that would prove resilient in the case of operational
disruptions.
The FBI is buying up information that can be used to track people’s movement and
location history, Director Kash Patel said during a Senate hearing Wednesday.
It is the first confirmation that the agency is actively buying people’s
data since former Director Christopher Wray said in 2023 that the FBI had
purchased location data in the past but was not doing so at that time.
“We do purchase commercially available information that’s consistent with the
Constitution and the laws under the Electronic Communications Privacy Act, and
it has led to some valuable intelligence for us,” Patel told senators at the
Intelligence Committee’s annual Worldwide Threats hearing.
The U.S. Supreme Court has required law enforcement agencies to obtain a warrant
for getting people’s location data from cell phone providers since 2018, but
data brokers offer an alternative avenue by purchasing the information directly.
Many lawmakers want to end the practice. Sens. Ron Wyden (D-Ore.) and Mike
Lee (R-Utah) introduced the Government Surveillance Reform Act on March 13,
which would require federal law enforcement and intelligence agencies to obtain
a warrant to buy Americans’ personal information.
“Doing that without a warrant is an outrageous end run around the Fourth
Amendment, it’s particularly dangerous given the use of artificial intelligence
to comb through massive amounts of private information,” Wyden said at
Wednesday’s hearing.
The bill has a House counterpart introduced by Reps. Zoe Lofgren (D-Calif.) and Warren Davidson (R-Ohio).
Committee Chair Tom Cotton (R-Ark.) defended the practice at the hearing.
“The key words are commercially available. If any other person can buy it, and
the FBI can buy it, and it helps them locate a depraved child molester or savage
cartel leader, I would certainly hope the FBI is doing anything it can to keep
Americans safe,” he said.
Defense Intelligence Agency Director James Adams told senators at the hearing
that his agency also purchases commercially available information.
Anton, a 44-year-old Russian soldier who heads a workshop responsible for
repairing and supplying drones, was at his kitchen table when he learned last
month that Elon Musk’s SpaceX had cut off access to Starlink terminals used by
Russian forces. He scrambled for alternatives, but none offered unlimited
internet, data plans were restrictive, and coverage did not extend to the areas
of Ukraine where his unit operated.
It’s not only American tech executives who are narrowing communications options
for Russians. Days later, Russian authorities began slowing down access
nationwide to the messaging app Telegram, the service that frontline troops use
to coordinate directly with one another and bypass slower chains of command.
“All military work goes through Telegram — all communication,” Anton, whose name
has been changed because he fears government reprisal, told POLITICO in voice
messages sent via the app. “That would be like shooting the entire Russian army
in the head.”
Telegram would be joining a home screen’s worth of apps that have become useless
to Russians. Kremlin policymakers have already blocked or limited access to
WhatsApp, along with parent company Meta’s Facebook and Instagram, Microsoft’s
LinkedIn, Google’s YouTube, Apple’s FaceTime, Snapchat and X, which like SpaceX
is owned by Musk. Encrypted messaging apps Signal and Discord, as well as
Japanese-owned Viber, have been inaccessible since 2024. Last month, President
Vladimir Putin signed a law requiring telecom operators to block cellular and
fixed internet access at the request of the Federal Security Service. Shortly
after it took effect on March 3, Moscow residents reported widespread problems
with mobile internet, calls and text messages across all major operators for
several days, with outages affecting mobile service and Wi-Fi even inside the
State Duma.
Those decisions have left Russians increasingly cut off from both the outside
world and one another, complicating battlefield coordination and disrupting
online communities that organize volunteer aid, fundraising and discussion of
the war effort. Deepening digital isolation could turn Russia into something
akin to “a large, nuclear-armed North Korea and a junior partner to China,”
according to Alexander Gabuev, the Berlin-based director of the Carnegie Russia
Eurasia Center.
In April, the Kremlin is expected to escalate its campaign against Telegram —
already one of Russia’s most popular messaging platforms, but now in the absence
of other social-media options, a central hub for news, business and
entertainment. It may block the platform altogether. That is likely to fuel an
escalating struggle between state censorship and the tools people use to evade
it, with Russia’s place in the world hanging in the balance.
“It’s turned into a war,” said Mikhail Klimarev, executive director of the
Internet Protection Society, a digital rights group that monitors Russia’s
censorship infrastructure. “A guerrilla war. They hunt down the VPNs they can
see, they block them — and the ‘partisans’ run, build new bunkers, and come
back.”
THE APP THAT RUNS THE WAR
On Feb. 4, SpaceX tightened the authentication system that Starlink terminals
use to connect to its satellite network, introducing stricter verification for
registered devices. The change effectively blocked many terminals operated by
Russian units relying on unauthorized connections, cutting Starlink traffic
inside Ukraine by roughly 75 percent, according to internet traffic analysis
by Doug Madory, an analyst at the U.S. network monitoring firm Kentik.
The move threw Russian operations into disarray, allowing Ukraine to make
battlefield gains. Russia has turned to a workaround widely used before
satellite internet was an option: laying fiber-optic lines from rear areas toward frontline positions.
Until then, Starlink terminals had allowed drone operators to stream live video
through platforms such as Discord, which is officially blocked in Russia but
still sometimes used by the Russian military via VPNs, to commanders at multiple
levels. A battalion commander could watch an assault unfold in real time and
issue corrections — “enemy ahead” or “turn left” — via radio or Telegram. What
once required layers of approval could now happen in minutes.
Satellite-connected messaging apps became the fastest way to transmit
coordinates, imagery and targeting data.
But on Feb. 10, Roskomnadzor, the Russian communications regulator, began
slowing down Telegram for users across Russia, citing alleged violations of
Russian law. Russian news outlet RBC reported, citing two sources, that
authorities plan to shut down Telegram in early April — though not on the front
line.
In mid-February, Digital Development Minister Maksut Shadayev said the
government did not yet intend to restrict Telegram at the front but hoped
servicemen would gradually transition to other platforms. Kremlin spokesperson
Dmitry Peskov said this week the company could avoid a full ban by complying
with Russian legislation and maintaining what he described as “flexible contact”
with authorities.
Roskomnadzor has accused Telegram of failing to protect personal data, combat
fraud and prevent its use by terrorists and criminals. Similar accusations have
been directed at other foreign tech platforms. In 2022, a Russian court
designated Meta an “extremist organization” after the company said it would
temporarily allow posts calling for violence against Russian soldiers in the
context of the Ukraine war — a decision authorities used to justify blocking
Facebook and Instagram in Russia and increasing pressure on the company’s other
services, including WhatsApp.
Telegram founder Pavel Durov, a Russian-born entrepreneur now based in the
United Arab Emirates, says the throttling is being used as a pretext to push
Russians toward a government-controlled messaging app designed for surveillance
and political censorship.
That app is MAX, which was launched in March 2025 and has been compared to
China’s WeChat in its ambition to anchor a domestic digital ecosystem.
Authorities are increasingly steering Russians toward MAX through employers,
neighborhood chats and the government services portal Gosuslugi — where citizens
retrieve documents, pay fines and book appointments — as well as through banks
and retailers. The app’s developer, VK, reports rapid user growth, though those
figures are difficult to independently verify.
“They didn’t just leave people to fend for themselves — you could say they led
them by the hand through that adaptation by offering alternatives,” said Levada
Center pollster Denis Volkov, who has studied Russian attitudes toward
technology use. The strategy, he said, has been to provide a Russian or
state-backed alternative for the majority, while stopping short of fully
criminalizing workarounds for more technologically savvy users who do not want
to switch.
Elena, a 38-year-old Yekaterinburg resident whose surname has been withheld
because she fears government reprisal, said her daughter’s primary school moved
official communication from WhatsApp to MAX without consulting parents. She
keeps MAX installed on a separate tablet that remains mostly in a drawer — a
version of what some Russians call a “MAXophone”: a device used solely for that app and kept free of any other data, out of the (very real) fear that the government could access it.
“It works badly. Messages are delayed. Notifications don’t come,” she said. “I
don’t trust it … And this whole situation just makes people angry.”
THE VPN ARMS RACE
Unlike China’s centralized “Great Firewall,” which filters traffic at the
country’s digital borders, Russia’s system operates internally. Internet
providers are required to route traffic through state-installed deep packet
inspection equipment capable of controlling and analyzing data flows in real
time.
“It’s not one wall,” Klimarev said. “It’s thousands of fences. You climb one,
then there’s another.”
The architecture allows authorities to slow services without formally banning
them — a tactic used against YouTube before its web address was removed from
government-run domain-name servers last month. Russian law explicitly provides
government authority for blocking websites on grounds such as extremism,
terrorism, illegal content or violations of data regulations, but it does not
clearly define throttling — slowing traffic rather than blocking it outright —
as a formal enforcement mechanism. “The slowdown isn’t described anywhere in
legislation,” Klimarev said. “It’s pressure without procedure.”
In September, Russia banned advertising for virtual private network services
that citizens use to bypass government-imposed restrictions on certain apps or
sites. By Klimarev’s estimate, roughly half of Russian internet users now know
what a VPN is, and millions pay for one. Polling last year by the Levada Center,
Russia’s only major independent pollster, suggests regular use is lower, finding
about one-quarter of Russians said they have used VPN services.
Russian courts can treat the use of anonymization tools as an aggravating factor
in certain crimes — steps that signal growing pressure on circumvention
technologies without formally outlawing them. In February, the Federal
Antimonopoly Service opened what appears to be the first case against a media
outlet for promoting a VPN after the regional publication Serditaya Chuvashiya
advertised such a service on its Telegram channel.
Surveys in recent years have shown that many Russians, particularly older
citizens, support tighter internet regulation, often citing fraud, extremism and
online safety. That sentiment gives authorities political space to tighten
controls even when the restrictions are unpopular among more technologically
savvy users.
Even so, the slowdown of Telegram drew criticism from unlikely quarters,
including Sergei Mironov, a longtime Kremlin ally and leader of the Just Russia
party. In a statement posted on his Telegram channel on Feb. 11, he blasted the
regulators behind the move as “idiots,” accusing them of undermining soldiers at
the front. He said troops rely on the app to communicate with relatives and
organize fundraising for the war effort, warning that restricting it could cost
lives. While praising the state-backed messaging app MAX, he argued that
Russians should be free to choose which platforms they use.
Pro-war Telegram channels frame the government’s blocking techniques as sabotage
of the war effort. Ivan Philippov, who tracks Russia’s influential military
bloggers, said the reaction inside that ecosystem to news about Telegram has
been visceral “rage.”
Unlike Starlink, whose cutoff could be blamed on a foreign company, restrictions
on Telegram are viewed as self-inflicted. Bloggers accuse regulators of
undermining the war effort. Telegram is used not only for battlefield
coordination but also for volunteer fundraising networks that provide basic
logistics the state does not reliably cover — from transport vehicles and fuel
to body armor, trench materials and even evacuation equipment. Telegram serves
as the primary hub for donations and reporting back to supporters.
“If you break Telegram inside Russia, you break fundraising,” Philippov said.
“And without fundraising, a lot of units simply don’t function.”
Few in that community trust MAX, citing technical flaws and privacy concerns.
Because MAX operates under Russian data-retention laws and is integrated with
state services, many assume their communications would be accessible to
authorities.
Philippov said the app’s prominent defenders are largely figures tied to state
media or the presidential administration. “Among independent military bloggers,
I haven’t seen a single person who supports it,” he said.
Small groups of activists attempted to organize rallies in at least 11 Russian
cities, including Moscow, Irkutsk and Novosibirsk, in defense of Telegram.
Authorities rejected or obstructed most of the proposed demonstrations — in some
cases citing pandemic-era restrictions, weather conditions or vague security
concerns — and in several cases revoked previously issued permits. In
Novosibirsk, police detained around 15 people ahead of a planned rally. Although
a small number of protests were formally approved, no large-scale demonstrations
ultimately took place.
THE POWER TO PULL THE PLUG
The new law signed last month allows Russia’s Federal Security Service to order
telecom operators to block cellular and fixed internet access. Peskov, the
Kremlin spokesman, said subsequent shutdowns of service in Moscow were linked to
security measures aimed at protecting critical infrastructure and countering
drone threats, adding that such limitations would remain in place “for as long
as necessary.”
In practice, the disruptions rarely amount to a total communications blackout.
Most target mobile internet rather than all services, while voice calls and SMS
often continue to function. Some domestic websites and apps — including
government portals or banking services — may remain accessible through
“whitelists,” meaning authorities allow certain services to keep operating even
while broader internet access is restricted. The restrictions are typically
localized and temporary, affecting specific regions or parts of cities rather
than the entire country.
Internet disruptions have increasingly become a tool of control beyond
individual platforms. Research by the independent outlet Meduza and the
monitoring project Na Svyazi has documented dozens of regional internet
shutdowns and mobile network restrictions across Russia, with disruptions
occurring regularly since May 2025.
The communications shutdown, and uncertainty around where it will go next, is
affecting life for citizens of all kinds, from the elderly struggling to contact
family members abroad to tech-savvy users who juggle SIM cards and secondary
phones to stay connected. Demand has risen for dated communication devices —
including walkie-talkies, pagers and landline phones — along with paper maps as
mobile networks become less reliable, according to retailers interviewed by RBC.
“It feels like we’re isolating ourselves,” said Dmitry, 35, who splits his time
between Moscow and Dubai and whose surname has been withheld for fear of
government reprisal. “Like building a sovereign grave.”
Those who track Russian public opinion say the pattern is consistent: irritation
followed by adaptation. When Instagram and YouTube were blocked or slowed in
recent years, their audiences shrank rapidly as users migrated to alternative
services rather than mobilizing against the restrictions.
For now, Russia’s digital tightening resembles managed escalation rather than
total isolation. Officials deny plans for a full shutdown, and even critics say
a complete severing would cripple banking, logistics and foreign trade.
“It’s possible,” Klimarev said. “But if they do that, the internet won’t be the
main problem anymore.”
Spanish Prime Minister Pedro Sánchez on Wednesday unveiled a new government AI
tool that will rank social media sites based on how much hate speech they host.
“If hate is already dangerous, social networks have turned it into a weapon of
mass polarization that ends up seeping into everyday life,” Sánchez said at an
International Summit against Hate and Digital Harassment. “Today social networks
are a failed state,” he said.
The new system, known as HODIO, will analyze large volumes of publicly available
activity on social media to measure the scale and spread of online hate speech.
The data will be used to track how hateful content evolves and spreads on
platforms, and will feed into a public ranking comparing how much hate speech
circulates on major networks.
The European Union has rolled out laws such as the Digital Services Act to crack
down on illegal and harmful online content. The rules have drawn the ire of the
U.S. administration, which sees them as online censorship.
The new Spanish hate speech tool comes after Sánchez repeatedly clashed with
U.S. President Donald Trump last week over the conflict in Iran.
The Spanish prime minister said the initiative is aimed at holding platforms
accountable for how their algorithms amplify polarizing content, and added that
the government plans to introduce a legal offense for “algorithmic
amplification” of hate speech.
Sánchez launched a broader push for stricter digital regulation last month and
wants to ban social media access for users under 16.
LONDON — Keir Starmer wants the public to know he’s going to move fast and fix
things.
Speaking to an audience of young people last month, the U.K. prime minister said
that unlike the previous Conservative government, which took eight years to pass
the country’s Online Safety Act, Labour will legislate fast enough to keep
up with the breakneck speed of technological change and its associated harms.
“We’ve taken the powers to make sure we can act within months, not years,” he
said.
His words came after the government decried Elon Musk’s X for
allowing deepfaked nude images to flood its platform. “The action we took on
Grok sent a clear message that no platform gets a free pass,” Starmer said.
Labour showcased its bold new approach last week,
tabling two legislative amendments that seek to grant ministers sweeping powers
to change the U.K.’s online safety regime without needing to pass primary
legislation through Parliament — meaning MPs and peers would have next to no
opportunity for scrutiny.
While Labour argues this is necessary to deal with the onslaught of online harms
brought about by technology — particularly AI — digital rights activists and
civil liberties campaigners fear executive overreach, and say Labour is
mistaking fast action for good policy, especially as it mulls a social media ban
for under-16s.
GOVERNMENT HANDS ITSELF NEW POWERS
The first amendment, to the Crime and Policing Bill, would empower any senior
government minister to amend the Online Safety Act near-unilaterally for the
purposes of “minimizing or mitigating the risks of harm to individuals”
presented by illegal AI-generated content.
The second amendment, to the Children’s Wellbeing and Schools Bill, looks to go
even further, giving ministers the ability to alter any piece of primary
legislation to restrict children’s access to “certain internet services.”
The Department for Science, Innovation and Technology (DSIT) has said it wants
to act “at pace” in response to the findings of its consultation, the “key
focus” of which is whether to ban social media for under-16s, a policy idea
which has picked up momentum in multiple countries since Australia introduced a
ban at the end of last year.
Amendments like those tabled this week are commonly referred to as Henry VIII
clauses, which allow ministers to largely bypass Parliament. They are
not entirely new: successive governments since the 1980s have increasingly
relied on statutory instruments for lawmaking, according to the Institute for
Government.
But such clauses bring problems that could last long after Starmer’s
premiership. The government may have good intentions when it comes to online
safety, but the measures proposed are “storing up trouble for years to come at a
very worrying moment where anti-democratic parties [around the world] are
gaining traction,” Anna Cardaso, policy and campaigns officer at civil liberties
organisation Liberty told POLITICO.
“When you create a law, you have to think about what a future government could
do with those powers. A future government might not be motivated purely by
reducing harms to children, or might have a very different view of what counts
as harm,” agreed James Baker, advocacy manager at digital rights
organisation Open Rights Group.
Baker pointed to steps taken by the Trump administration in the U.S. to target
websites hosting LGBTQ+ content and reproductive health advice.
There are also questions to be asked about proportionality under the Human
Rights Act, he argued, not least because the evidence base on how children are
affected by social media is muddy at best — a DSIT-commissioned study published
in January found little high-quality evidence of a correlation between time
spent on social media and poorer reported mental health, for example.
Although the government hopes its use of Henry VIII powers will speed things
up, the move is vulnerable to challenge in the courts — not only from human
rights campaigners concerned about the impact on privacy and freedom of
expression, but also from tech companies navigating any new regulations.
“The inevitable consequence of such broad regulatory discretion is an explosion
in litigation,” Oliver Carroll, legal director at law firm Bird & Bird, said.
‘FIRE-FIGHTING’
The government has backed away from plans to introduce primary legislation
dedicated to artificial intelligence, with ministers instead looking to regulate
AI at the point of use on a sector-by-sector basis.
Primary legislation on AI would have allowed parliamentarians and other
stakeholders to “debate and hammer out the fundamental principles and a
framework of regulation,” Liberty’s Cardaso said. “But instead, they’ve
dodged the hard thing, and they’re just firefighting emergency by emergency by
statutory instrument.”
The Children’s Wellbeing and Schools Bill amendment gets its first outing in the
House of Commons today, where it stands a good chance of surviving thanks to
Labour’s 158-seat majority. Both amendments will also have to pass the House of
Lords, where they could meet more resistance.
DSIT did not respond when contacted by POLITICO for comment.
Germany’s data privacy authority on Thursday warned it cannot properly protect
citizens from surveillance by the country’s intelligence services, just as
Germany moves to hand those agencies sweeping new powers.
“Citizens have virtually no means of defending themselves against intelligence
measures that can deeply intrude on their privacy,” Louisa
Specht-Riemenschneider, the head of the Federal Commissioner for Data Protection
and Freedom of Information (BfDI), warned after a court ruled against the
commissioner’s request to get data on espionage activities.
Germany is drafting laws to give its intelligence services vast new powers, in a
historic shift that breaks with decades of strict limits on its espionage
abilities, rooted in the country’s Nazi and Cold War past.
Berlin’s plan to empower intelligence services comes as European leaders grow
increasingly concerned that U.S. President Donald Trump could move to halt
American intelligence sharing with Europe.
To keep German spies in check, the country’s privacy regulator started a legal
challenge against the Federal Intelligence Service (BND) after it refused to
share details of how it hacked electronic devices of foreigners abroad and
gathered data.
On Thursday, an administrative court ruled the privacy regulator didn’t have
legal standing to pursue the case, redirecting it to file a complaint with
Germany’s chancellery instead.
The ruling means “areas free from oversight will emerge” within German spy
agencies, Specht-Riemenschneider said, calling the agencies’ data processing
practices “secretive.”
Germany’s BND has historically been far more legally constrained than
intelligence agencies elsewhere, due to intentional protections put in place
after World War II to prevent a repeat of the abuses perpetrated by the Nazi spy
and security services Gestapo and SS. The agency was put under the oversight of
the chancellery and bound to a strict parliamentary control mechanism.
Germany’s stringent data protection laws — which are also largely a reaction to
the legacy of the East German secret police, or Stasi — restrict the BND
further. The agency must, for instance, redact personal information in documents
before passing them on to other intelligence services, POLITICO reported.
The German government is now reviewing those constraints and preparing an
overhaul of intelligence powers. Chancellor Friedrich Merz wants to boost and
unfetter his country’s foreign intelligence service, giving it much broader
authority to carry out acts of sabotage, conduct offensive cyber operations and
pursue espionage more aggressively.
Specht-Riemenschneider called on legislators to amend intelligence laws to make
sure her authority can challenge agencies’ data processing, because the spy
agency “can now effectively decide for itself what I am allowed to inspect and
what I can therefore monitor,” she said.
Spy services across Europe have also started to build a shared intelligence
operation to counter Russian aggression. The push for deeper intelligence
cooperation accelerated sharply after the Trump administration abruptly halted
the sharing of battlefield intelligence with Kyiv last March.
The BND did not immediately respond to a request for comment.
DUBLIN — TikTok on Tuesday began a defense of how it handles Europeans’ privacy
and data in a court case that will define how Chinese-owned companies in Europe
deal with Beijing’s spying laws.
The popular social media app is going head to head with the Irish Data
Protection Commission — Europe’s most powerful privacy regulator, which oversees
tech giants including Meta, X and Google.
At stake in the Irish court battle is whether TikTok is allowed to transfer
personal data of Europeans to China.
The company, which is owned by Chinese giant ByteDance, is challenging a €530
million fine imposed by the Irish regulator last year, when officials found it had
allowed Chinese staff to access Europeans’ data — but failed “to verify,
guarantee and demonstrate” that the data was properly protected.
The Irish regulator wants TikTok to shut off data flows to China, unless it can
prove its user information is safe from Beijing’s invasive surveillance and
intelligence laws.
The case is a major test for Europe’s privacy rulebook, the General Data
Protection Regulation (GDPR), and how it protects Europeans when their data is
transferred to China. It comes as Europe is facing transatlantic pressure,
forcing the bloc to revisit trade ties with Beijing, despite long-held security
concerns over the Chinese government’s data snooping practices.
Lawyers faced off Tuesday in Dublin’s top courts building, for the start of a
grueling 10-day hearing, sparring over how to interpret the limits of Chinese
laws and the merits of TikTok’s data practices.
“The consequences of [the Irish regulator’s] decision are immense, even for a
very large organization like TikTok,” the firm’s senior counsel Paul Gallagher
told the court, estimating the cost of complying with the Irish order to run as
high as €5 billion.
If judges side with the Irish regulator, that could ultimately force TikTok to
unplug from China entirely to continue serving European users — just months
after it split off its U.S. operation into a new app, under the control of a
group of investors led by Silicon Valley giant Oracle and investment firms
Silver Lake and MGX, to alleviate long-standing American data security concerns.
TikTok has estimated that it would cost billions for it to comply with the Irish
regulator’s demand to cut off data flows, and would involve relocating thousands
of its workers outside of China.
DATA ACCESS WOES
The Irish regulator slapped TikTok with the privacy fine last May after it found
the platform couldn’t guarantee the data of its 159 million monthly users in
Europe were safe from China’s “problematic” surveillance laws.
“This is all about what TikTok have described as the relevant laws, and what the
[Data Protection Commission, or DPC] have described as the problematic laws,”
said TikTok’s senior counsel Gallagher, who is also a former attorney general
for the Irish government. “We don’t think they are problematic, because we think
they don’t apply. The DPC thinks they are problematic, because it thinks they do
apply.”
The fine was one of the highest the Irish regulator has handed out since it
started enforcing the GDPR in 2018.
It followed years of scrutiny from security and privacy authorities, as Western
governments increasingly viewed TikTok as a threat.
TikTok is owned by Beijing-based ByteDance, and staff in China have remote
access to some European user data stored outside the country. In details shared
with the Irish regulator during the investigation, TikTok said that the kind of
data accessed by staff in China could include usernames and account holder
details, interaction and activity data, and other personal data.
TikTok said it didn’t intend to collect sensitive data about users, but such
data “may be collected incidentally or uploaded” by users, and staff needed
“restricted and limited” access for research, security, analytics and other
services.
TikTok has said Chinese laws don’t apply to its data, which it stores outside of
China, and has said it has never been asked to hand over data to Beijing’s
authorities.
The firm already launched a massive campaign to alleviate European politicians’
security concerns in 2023, when it presented what it called “Project Clover,”
a €12 billion plan designed to store data in Europe, overseen by a European
security company. It mimicked a U.S. campaign called “Project Texas,” which in
2020 promised similar controls to American authorities.
But the moves failed to persuade politicians. The EU already cracked down on
TikTok for its own officials when it banned the app on their phones in 2023, a
move that was followed by many governments across Europe.
CHINA VS. US
The TikTok case is also forcing Europe to deal with a blind spot: data flowing
to China has, so far, been left largely unscrutinized.
The EU has skirmished with American authorities for years over how to protect
Europeans’ personal data from mass surveillance programs uncovered by
whistleblower Edward Snowden in 2013.
Data transfer agreements crafted by the EU and U.S. have been repeatedly wiped
out by Europe’s top court over surveillance concerns.
For data flowing to China, though, few cases have tested how companies protect
Europeans’ data when it comes within reach of Beijing’s surveillance
authorities.
The Irish regulator’s decision to fine TikTok meant the “screw is turning” on
data flows to China, Joe Jones, research director at the International
Association of Privacy Professionals, said after the decision came out.
“We’ve had over a decade of EU-U.K., EU-U.S. fights and sagas on [data flows].
This is the first time we’ve seen anything significant on any other country
outside of that transatlantic triangle — and it’s China,” Jones said.