Meta and TikTok have dealt a blow to the European Commission’s social media
rulebook, pressing the EU executive to codify how it calculates the number of users
on online platforms.
The General Court at the Court of Justice of the European Union sided with the
social media companies on Wednesday in their challenge of an annual supervisory
fee the European Union charges to pay for the enforcement of its tech rulebook,
the Digital Services Act (DSA).
It’s the first major court loss for the Commission over the DSA, which entered
into force in 2022 and can be wielded to fine social media and e-commerce
platforms up to 6 percent of their global annual revenue. The EU has yet to
finalize investigations under the law.
At the heart of the case are platforms’ disagreements with how the EU calculated
the fee. The Commission directly supervises “very large online platforms” with
over 45 million average monthly users in the bloc.
Meta and TikTok challenged the European Commission’s decisions imposing
so-called supervisory fees in 2024. These fees are meant to support the
Commission’s work overseeing the very platforms that pay it — an extension of
the “polluter pays” principle often used in environmental policy — and are
proportionate to the number of users platforms have in the EU.
The EU’s General Court said in its ruling the Commission should have passed a
separate set of rules about how users are calculated before determining the
fees. Judges gave the Commission a year to draft a text on how it calculates
platform users, or else potentially refund the platforms’ 2023 fees.
The EU executive has already been working on such rules, called a delegated act.
The Commission said the court merely ruled against it on procedure and not
substance. “The Court confirms our methodology is sound: no error in
calculation, no suspension of any payments, no problem with the principle of the
fee nor the amount,” said spokesperson Thomas Regnier.
Meta said in a statement that the judgement “will force the European Commission
to reassess the unfair methodology being used to calculate these DSA fees,”
adding it “looks forward to the flaws in the methodology being addressed.”
TikTok “welcomed” the decision and will “closely follow the development” of the
case, company spokesperson Paolo Ganino said.
The United States Congress is amping up criticism of the European Union’s social
media law — and this time, they’ve brought receipts.
The U.S. House of Representatives Judiciary Committee is releasing a report on
Friday that singles out the European Union’s Digital Services Act (DSA) as a
“foreign censorship threat.” The report, shared exclusively with POLITICO,
describes the flagship social media law as a “comprehensive digital censorship
law” that threatens the freedom of speech of American citizens.
The White House and U.S. State Department have been going after the EU’s digital
rulebook for months, accusing the bloc of unfair, burdensome rules that target
American companies and free speech.
On the European side, proponents of these laws want to see them enforced
strictly, including on U.S. technology giants. Some EU officials have argued
that Washington officials are simply fronting the arguments of their homegrown
tech firms.
The Judiciary Committee’s 37-page “interim staff report” is the result of a
five-month, still-ongoing inquiry that started with subpoenas issued by the U.S.
Congress in February to Big Tech companies.
Evidence attached to the report includes correspondence between top European
Commission officials and Jim Jordan, the Republican representative from Ohio and
chairman of the U.S. House Judiciary Committee.
It also includes non-public information about how the European Commission and
national authorities implement the rules, including confidential information
from EU workshops, emails between the EU executive and companies, content
takedown requests in France, Germany and Poland and readouts from Commission
meetings with tech firms.
“On paper, the DSA is bad. In practice, it is even worse,” the report said.
“European censors” at the Commission and EU countries “target core political
speech that is neither harmful nor illegal, attempting to stifle debate on
topics such as immigration and the environment,” it said. Their censorship is
“largely one-sided” against conservatives, it added.
Commission spokesperson Thomas Regnier said in a comment that freedom of
expression “is a fundamental right in the EU. And it is at the heart of our
legislations, including the DSA.” He added: “Absolutely nothing in the DSA
requires a platform to remove lawful content.”
According to the Commission, “more than 99 percent of content moderation
decisions are in fact taken proactively by online platforms to enforce their own
Terms & Conditions” and “content removals based on regulatory authorities’
orders to act against illegal content account for less than 0.001 percent,”
Regnier said.
Judiciary Committee Chairman Jordan is set to lead a bipartisan congressional
delegation to Europe in the coming days, including a stop in Brussels, to
discuss issues of censorship and free speech, a source familiar with the
planning said.
BEHIND CLOSED DOORS
The Commission’s line is that the DSA is apolitical and doesn’t determine what
speech is illegal. But the U.S. committee said that behind closed doors it draws
a line in the sand.
At a workshop, which POLITICO reported on exclusively in March, the Commission
asked Big Tech platforms, regulators and civil society groups how they would react
to different scenarios.
One scenario involved a user being exposed to the phrase “We need to take back
our country.” In this scenario, labeled “illegal content,” the statement appeared
under a photo of a woman in a hijab with the caption “terrorist in disguise,”
the documents show.
The Commission said the user in question would be exposed to “illegal hate
speech.” The committee argued that the phrase “take back our country” is “a
common, anodyne political statement” used by the likes of Kamala Harris.
The Commission also instructed platforms to address memes and satirical content
at the workshop.
According to the U.S. committee’s report, the EU kept the workshop secret
because the “Commission wants to hide its censorship aims.”
The Commission regularly meets with technology companies and other stakeholders
to give information on its policymaking. Some of these meetings — including
those informing its competition cases — occur in public, while others take place
under varying levels of confidentiality.
The committee report also criticized a system of third parties, comprising
trusted flaggers and out-of-court dispute settlement bodies, as being neither
impartial nor independent of authorities.
The report criticized the need for fact-checkers to be approved by regulators.
It stated that it had identified various conflicts of interest stemming from
these organizations’ goals, funding, or litigation against tech platforms.
The report also criticized the regime set up for Very Large Online Platforms,
those with more than 45 million monthly active users in the bloc. VLOP
designations are “used to burden” non-EU firms, while EU firms are afforded
workarounds, it said.
One case that drew the attention of U.S. representatives: Spotify has been
allowed to split its products into music and podcasts, thus avoiding the more
cumbersome VLOP rules. In the first quarter of 2025, the Swedish firm reported
689 million monthly active users with 37 percent of its subscribers based in
Europe.
The European Commission on Monday said countries can implement their own
national bans for minors on social media, in new guidelines under its powerful
Digital Services Act.
The EU executive has been under pressure in recent months to roll out measures
to protect minors online. National governments in France, Denmark, Spain and
elsewhere have called for social media restrictions, with some criticizing the
EU for not acting quickly enough.
France and the Netherlands have supported an outright ban of social media for
minors under 15. Greece has said it thinks parental consent should be required
for children under a certain age. Denmark, which currently holds the rotating
presidency of the Council of the EU, is pushing for stronger EU-level action.
Tech giant Meta has also come out in favor of legal restrictions that would
require parental consent for children under a certain age to use social media.
“Age verification is not a nice to have. It’s absolutely essential,” said
Denmark’s digital minister Caroline Stage Olsen, who presented the guidelines
alongside the Commission’s tech chief Henna Virkkunen.
The Commission’s new guidelines for minor protection online seek to make sure
platforms face a similar set of rules across Europe under the Digital Services
Act (DSA), the bloc’s landmark social media regulation. The guidelines are
non-binding and set the benchmark for companies to interpret requirements under
the DSA.
The Commission on Monday also released technical specifications for an age
verification app that could help verify if users are over 18 by using IDs and
even facial recognition. The app is set to be tested in France, Greece, Spain,
Italy and Denmark, five countries that are also pushing for restrictions and are
working on their own age verification solutions.
EU countries can also use the app should they decide to implement national
restrictions for social media use at a different age threshold, a senior
Commission official said, granted anonymity to disclose details of the plan
ahead of its release.
The guidelines also recommend that high-risk services, such as porn platforms
and online alcohol shops, verify users’ ages.
“It’s hard to imagine a world where kids can enter a store to buy alcohol, go to
a nightclub by simply stating that they are old enough, no bouncers, no ID
checks, just a simple yes, I am over the age of 18,” but this is what “has been
the case online for many years,” said Stage Olsen.
Monday’s guidelines cover how platforms should adapt their systems to better
protect kids across a range of services.
The text suggested that platforms do not use browsing behavior in their
recommender systems; that they turn off features like streaks and read receipts
to decrease the addictiveness of platforms; that they set privacy and security
by default in settings, for example making their accounts invisible to other
users not in their networks; and that they consider turning off some features
like camera access.
The guidelines follow a risk-based approach, meaning platforms can evaluate what
possible threats they pose to minors and adopt measures accordingly.
Tech firms launched a last-minute lobbying push arguing that the guidelines
still allow for cumbersome fragmentation.