Spanish Prime Minister Pedro Sánchez on Wednesday unveiled a new government AI
tool that will rank social media sites based on how much hate speech they host.
“If hate is already dangerous, social networks have turned it into a weapon of
mass polarization that ends up seeping into everyday life,” Sánchez said at an
International Summit against Hate and Digital Harassment. “Today social networks
are a failed state,” he said.
The new system, known as HODIO, will analyze large volumes of publicly available
activity on social media to measure the scale and spread of online hate speech.
The data will be used to track how hateful content evolves and spreads on
platforms, and will feed into a public ranking comparing how much hate speech
circulates on major networks.
The European Union has rolled out laws and regulations like the Digital Services
Act to crack down on illegal and harmful online content. The rules have drawn
the ire of the U.S. administration, which sees them as online
censorship.
The new Spanish hate speech tool comes after Sánchez clashed repeatedly with
U.S. President Donald Trump last week over the conflict in Iran.
The Spanish prime minister said the initiative is aimed at holding platforms
accountable for how their algorithms amplify polarizing content, and added that
the government plans to introduce a legal offense for “algorithmic
amplification” of hate speech.
Sánchez launched a broader push for stricter digital regulation last month and
wants to ban social media access for users under 16.
NEW DELHI — Emmanuel Macron on Wednesday blasted social media platforms and the
tech executives who run them in a fiery dismissal of their claims to be
defending free speech.
The French president used a discussion on university partnerships between India
and France to flay nontransparent platforms and artificial intelligence systems.
“Some of them claim to be in favor of free speech. We are in favor of free
algorithms, totally transparent,” Macron said during his remarks in India. “Free
speech is pure bullshit if nobody knows how you are guided through this.”
“All the algorithms have biases, we know that. There is no doubt,” he said. “And
they are so impactful, when you speak about social media, that having no clue
about how the algorithm is made, how it is tested and where it will guide you —
the democratic biases of this could be huge.”
Since Trump returned to office in 2025, his administration has cast Europe’s
tech rules as a threat to America’s free speech tradition.
While Brussels has spent the past decade designing legislation to rein in Big
Tech through landmark laws like the GDPR, Digital Services Act and Digital
Markets Act, Washington frames many of those efforts as incompatible with U.S.
principles on free expression.
That dispute has triggered a broader political clash, with U.S. officials and
tech companies warning that Europe’s content moderation rules amount to
censorship, while EU leaders insist the measures are necessary to curb illegal
content and platform abuses.
Macron has repeatedly called for restrictions on social media access for
younger users, as a groundswell of European political sentiment builds in
support of his position.
NEW DELHI — A top United States official on Wednesday told the European Union to
focus more on innovation in artificial intelligence — and less on rules.
“I do think the atmosphere in the EU needs to change and be more focused on
innovation, less focused on governance and less focused on doomerism,” said
Sriram Krishnan, the White House’s senior policy adviser on artificial
intelligence, at a Tony Blair Institute event on the sidelines of the
India AI Impact Summit.
Krishnan reiterated the U.S. opposition to the EU’s Artificial Intelligence Act,
which was adopted in 2024 and aims to mitigate risks associated with the
technology.
“The EU AI Act, which I have ranted about before this job, during this job,
maybe after this job … it’s not really conducive to an entrepreneur who wants to
build basic technology,” he said.
One example, Krishnan said, was Peter Steinberger, the Austrian coder behind the
personal AI assistant platform OpenClaw who is moving to the U.S. to join
OpenAI.
Krishnan was much more positive about India’s regulatory approach, which he
praised as “pro-innovation.”
World leaders, including EU tech chief Henna Virkkunen and French President
Emmanuel Macron, will gather on Thursday in New Delhi. A draft of the summit
declaration, seen by POLITICO, didn’t include the word “safety.”
Ever since the first AI Summit in the United Kingdom in 2023, the series of
annual summits has gradually shifted from discussions on AI governance to
business and investment deals between the industry and governments.
Call it “bots on the ground.”
One in three Germans think their country should allow artificial intelligence to
make life-or-death decisions on the battlefield, according to The POLITICO
Poll.
A third of respondents in Germany said they favor using AI systems in weapons
in place of human decision-makers, even if these systems are less transparent,
the poll showed.
The results suggest a cultural shift, as the government of Chancellor Friedrich
Merz no longer explicitly excludes lethal decisions without human checks.
It also puts Germany in a different category than some of its allies: In the
United States, United Kingdom, Canada and France, 26 percent of respondents —
roughly a quarter — said militaries could rely on AI rather than human
decisions.
Forty-seven percent of German respondents still favored human involvement in the
use of weapons, even if they are slower than AI. But that figure was 10
percentage points lower than responses to the same question in the U.K., eight
points lower than in the U.S. and Canada, and five percentage points lower than
in France.
Almost half of respondents in Germany (46 percent) said cybersecurity and
artificial intelligence capabilities mattered as much as traditional military
power to win wars.
The online survey, conducted for POLITICO by the independent London-based
polling company Public First, comes as political leaders, security chiefs and
industry officials gather in Germany for the Munich Security Conference. Part of
their discussions will focus on how technologies like AI are changing the nature
of warfare and national security strategies.
The relatively high acceptance of so-called lethal autonomous weapons systems —
also known as “killer robots” — is surprising when considering Berlin’s slow
uptake of new technologies and its deep cultural attachment to data protection,
which is being put under pressure by new AI applications.
Germany has also had a fiery public debate over killer robots in past years. In
2021, a survey commissioned by an NGO coalition campaigning against killer
robots said only 19 percent of respondents approved of such autonomous weapon
systems, and 68 percent expressed ethical concerns about lethal decisions made
without human control. Three years earlier, in 2018, 72 percent of respondents
were against autonomous weapon systems.
Unlike the center-left coalition government that preceded it, Berlin’s governing
coalition, which took office last year, did not explicitly exclude lethal
decisions without human control in its coalition agreement.
AI-enabled weapons have changed the war in Ukraine, where drones have become a
chief vector for armies to hit critical military and strategic targets, often
operating independently.
Germany is preparing to spend €267.7 million on a new drone system from defense
startup Helsing, but field data from deployments in Ukraine showed its drones
have performed far below expectations, POLITICO reported last month.
United Nations Secretary General António Guterres has long opposed these
weapons, calling them “politically unacceptable and morally repugnant.” But
years of discussions between governments at the U.N. have so far not yielded
clear rules on their use.
The EU has had its AI Act in place since 2024 to deal with the risks stemming
from AI, but those rules don’t apply to military applications, which are a
sovereign competence of member countries.
This edition of The POLITICO Poll was conducted by Public First from Feb. 6 to
9, surveying 10,289 adults online, with at least 2,000 respondents each from the
U.S., Canada, U.K., France and Germany. Results for each country were weighted
to be representative on dimensions including age, gender and geography. The
overall margin of sampling error is ±2 percentage points for each country.
Smaller subgroups have higher margins of error.
The survey is an ongoing project from POLITICO and Public First, an independent
polling company headquartered in London, to measure public opinion across a
broad range of policy areas. You can find new surveys and analysis each month at
politico.com/poll. Have questions or comments? Ideas for future surveys? Email
us at poll@politico.com.
Sam Clark reported from Brussels. Anouk Schlung contributed reporting from
Berlin. Pieter Haeck contributed reporting from Brussels.
BRUSSELS — Doom scrolling is doomed, if the EU gets its way.
The European Commission is for the first time tackling the addictiveness
of social media in a fight against TikTok
that may set new design standards for the world’s most popular apps.
Brussels has told the company to change several key features, including
disabling infinite scrolling, setting strict screen time breaks and changing its
recommender systems. The demand follows the Commission’s declaration that
TikTok’s design is addictive to users — especially children.
The fact that the Commission said TikTok should change the basic design of its
service is “ground-breaking for the business model fueled by surveillance and
advertising,” said Katarzyna Szymielewicz, president of the
Panoptykon Foundation, a Polish civil society group.
That doesn’t bode well for other platforms, particularly Meta’s Facebook and
Instagram. The two social media giants are also under investigation over the
addictiveness of their design.
The findings laid out a week ago mark the first time the Commission has set out
its stance on the design of a social media platform under its Digital Services
Act, the EU’s flagship online-content law that Brussels says is essential for
protecting users.
TikTok can now defend its practices and review all the evidence the Commission
considered — and has said it would fight these findings. If it fails to satisfy
the Commission, the app could face fines of up to 6 percent of its
annual global revenue.
It’s the first time any regulator has attempted to set a legal standard for the
addictiveness of platform design, a senior Commission official said in a
briefing to reporters.
“The findings mark a turning point [because] the Commission is treating
addictive design on social media as an enforceable risk” under the Digital
Services Act, said Lena-Maria Böswald, senior policy researcher at think tank
Interface.
Jan Penfrat, senior policy adviser at civil rights group EDRi, said it would be
“very, very strange for the Commission to not then use this as a template and go
after other companies as well.”
DEFINING RISKS
The Digital Services Act requires platforms like TikTok to assess and mitigate
risks to their users. But these risks are vaguely defined in the law, so until
now it had been unclear exactly where the regulator would draw the line.
Two years after the TikTok probe was launched, the Commission has opted to
strike at the heart of platform design, claiming it poses a risk to the mental
health of users, particularly children. The Commission’s other concerns with
TikTok were settled amicably between the two sides.
At a briefing with reporters, EU tech chief Henna Virkkunen said the findings
signal that the Commission’s work is entering a new stage of maturity when it
comes to systemic risks.
Facebook and Instagram have been under investigation over the addictiveness of
their platforms since May 2024, including whether they endanger children. Just
like TikTok, the design and algorithms of the platforms are under scrutiny.
Meta has mounted a staunch defense in an ongoing California case, in which it is
accused of knowingly designing an addictive social media platform that hurts
users. TikTok and Snap settled the same case before it went to trial.
TikTok spokesperson Paolo Ganino said the Commission’s findings “present a
categorically false and entirely meritless depiction of our platform and we will
take whatever steps are necessary to challenge these findings through every
means available to us.”
THE RIGHT SOLUTION
The Commission could eventually agree with platforms on a wide range of changes
that address addictive design. What they decide will depend on the different
risk profiles and patterns of use of each platform — as well as how each company
defends itself.
That likely means it will take a while for TikTok to make any changes to its
systems, as the platform reviews the evidence and tries to negotiate a solution
with the regulator.
In another, simpler DSA enforcement case, it took the Commission more than a
year after issuing preliminary findings to declare Elon Musk’s X was not
compliant with its obligations on transparency.
TikTok may pursue a series of changes and may push the Commission to adopt a
lighter regulatory approach. The video-sharing giant likely won’t “get it right”
the first time, said EDRi’s Penfrat, and it may take a few tries to satisfy
Brussels.
“It could be anything from changing default settings, to outright prohibiting a
specific design feature, or requiring more user control,” said Peter Chapman, a
governance researcher and lawyer who is associate director at the
Knight-Georgetown Institute.
He expects the changes could differ for each platform: While the findings show
the Commission’s thinking, interventions must be targeted to how design
features are used.
“Multiple platforms use similar design features” but they serve different
purposes and carry different risks, said Chapman, pointing to the example of
notifications that try to draw users back in. Notifications for messages, for
example, carry a different risk of addiction from those alerting a user about a
livestream, he said.
Frank H. McCourt Jr. is an American business executive and civic entrepreneur.
He is the founder of Project Liberty, a global initiative aiming to restore
agency in the digital age by giving people ownership and control of their
personal data.
At the height of the Cold War, a man named Ewald-Heinrich von Kleist-Schmenzin
convened the West’s leading security experts in Munich. A World War II
resistance fighter and member of the Stauffenberg circle, which had attempted to
overthrow Hitler, he had a simple goal: preventing World War III. And he
dedicated the rest of his life to fostering open dialogue, sharing defense
strategies and deescalating tensions.
Tomorrow, as global leaders gather at the annual Munich Security Conference once
again, the threats they face are no less profound than they were some 60 years
ago — though many of them are far less visible.
Yes, wars are raging across continents, alliances are being tested, and tensions
are escalating across borders and oceans. However, I would wager that if von
Kleist-Schmenzin were alive today, he would agree that the most consequential
struggle of our time may not be unfolding on traditional battlefields at all.
Instead, it’s unfolding in the digital realm, where control over personal data —
over our digital personhood — is the central source of power and influence in
the modern world.
When the World Wide Web was born, we were promised an era of democratic
participation — a digital town square for a new millennium. What we have instead
is something far darker: Predatory algorithms shredding civil society, warping
truth and pitting neighbor against neighbor, while a handful of the world’s
richest companies know more about us than any intelligence agency ever could.
Deep down, we all feel the absolute grip of the Internet on society. We feel it
at the national level, as polarization and misinformation continue to fray our
social fabric, upend elections and disrupt the world order. We feel it at our
kitchen tables, as artificial intelligence bots and polarizing voices prey on
the mental and social health of our children.
This crisis is no accident. It’s the world Big Tech has deliberately built.
From the moment Facebook introduced the “like” button, the Internet began its
descent from a boundless repository of knowledge into a system optimized for
rage, addiction and profit — one that rewards division and disregards truth.
The business model is quite straightforward: Algorithms are engineered to
capture our attention and exploit it, rather than inform or connect us. And by
the metric of stock price, this model has been wildly successful. Big Tech
companies have amassed trillions of dollars in record time. And they’ve done so
by accumulating the most valuable resource in human history — our personal data,
acquired through a surveillance apparatus that would make the Stasi blush.
Now, with the rise of AI, these same companies are selling us a new story — that
of a brave new chapter for the Internet that is exponentially more powerful and
ostensibly benevolent. Yet the underlying logic remains the same: These systems
are still designed to extract more data, exert more control and deepen
manipulation, all at an even greater scale.
The threat has particularly escalated with the emergence of the “agentic web,”
where autonomous AI systems are no longer confined to interpreting information
but are empowered to act on it — often with minimal oversight and inadequate
alignment safeguards. OpenClaw — an open-source autonomous AI assistant —
reflects this rapid shift from consumption to delegation perfectly: Individuals
are handing over sweeping permissions, enabling agents to interact and operate
freely with other agents in real time, dramatically amplifying exposure to
real-world harm and coordinated manipulation by bad actors, with even less
human control.
And yet, those who raise concerns about this concentration of power and these
security risks are quickly dismissed as anti-progress, or accused of ceding the
future of AI to China.
Let’s be clear: We won’t beat China by becoming China. Autocratic algorithms,
centralized power and mass surveillance are fundamentally incompatible with
democracy. And were von Kleist-Schmenzin to look at today’s AI frameworks, he’d
likely recognize them as far closer to the east of the Berlin Wall than the
west.
To reverse that reality, we must build alternative systems that respect
individual rights, return ownership and control of personal data to individuals,
and align with democratic principles. The technologies shaping our lives need to
be optimized to protect citizens, not endanger them.
Here’s the good news: This technology is already being built.
Around the world, leading technologists, universities, companies and governments
are working to establish a new paradigm for AI — open-source, transparent
systems governed by the public sector and civil society. My organization,
Project Liberty, is part of this effort, grounded in a simple belief: We can,
and must, build AI technology that’s in harmony with fundamental democratic
values.
Such upgraded AI architecture is designed for human flourishing. It will give
people a voice in how these platforms operate, real choices over how their data
is used, and a stake in the economic value they create online. It will be paired
with policy and governance frameworks that safeguard democracy, freedom and
trust.
As the world’s leaders gather in Munich, I call on them to help build a better
foundation for AI that embeds Western values and protects future generations.
Let them consider the world von Kleist-Schmenzin sought to save, and join us on
the front lines of democracy’s new battleground.
The German competition authority hit Amazon with a €59 million fine on Thursday
after finding the e-commerce giant’s pricing rules for third-party vendors to be
in breach of national and EU competition rules.
The authority determined that Amazon’s practices, and in particular its use of
algorithms to influence pricing by sellers and the enforcement of its Fair
Pricing Policy, are in breach of Germany’s digital dominance rules as well as EU
competition law.
“Amazon competes directly with marketplace sellers on its platform and
influences the prices of its competitors, including through price caps, which is
problematic from a competition standpoint,” Andreas Mundt, president of the
Federal Cartel Office, or Bundeskartellamt, said in a statement.
The agency takes issue with Amazon’s pricing restrictions, and in particular its
pricing cap, on what third-party sellers can charge without being penalized
under the platform’s rules.
“We will vigorously challenge the FCO’s conclusion, which is based on unique
German regulation and directly conflicts with EU competition law consumer
standards,” said Rocco Bräuniger, Amazon’s country manager for Germany, in a
statement.
Per Bräuniger, the agency is forcing Amazon to promote uncompetitive prices to
customers.
The decision follows a preliminary assessment sent to Amazon in June 2025, after
which the company submitted comments.
The Bundeskartellamt had designated Amazon as a company of paramount
significance for competition across markets in July 2022, a finding upheld by
the Federal Court of Justice in April 2024.
Spanish Prime Minister Pedro Sánchez announced Tuesday his government will ban
children under the age of 16 from accessing social media.
“Platforms will be required to implement effective age verification systems —
not just check boxes, but real barriers that work,” Sánchez said during an
address to the plenary session of the World Government Summit in Dubai. “Today
our children are exposed to a space they were never meant to navigate alone … We
will protect [minors] from the digital Wild West.”
The proposed ban, which is set to be approved by the country’s Council of
Ministers next week, will amend a draft bill currently being debated in the
Spanish parliament. Whereas the current version of the legislation seeks to
restrict access to social media to users aged 16 and older, the new amendment
would expressly prohibit minors from registering on platforms.
Spain joins a growing chorus of European countries hardening their approach to
restricting kids online. Denmark announced plans for a ban on under-15s last
fall, and the French government is pushing to have a similar ban in place as
soon as September. In Portugal, the governing center-right Social Democratic
Party on Monday submitted draft legislation that would require under-16s to
obtain parental consent to access social media.
Spain’s ban is included in a wider package of measures that Sánchez argued are
necessary to “regain control” of the digital space. “Governments must stop
turning a blind eye to the toxic content being shared,” he said.
That includes a legislative proposal to hold social media executives legally
accountable for the illegal content shared on their platforms, with a new tool
to track the spread of disinformation, hate speech or child pornography on
social networks. It also proposes criminalizing the manipulation of algorithms
and amplification of illegal content.
“We will investigate platforms whose algorithms amplify disinformation in
exchange for profit,” Sánchez said, adding that “spreading hate must come at a
cost — a legal cost, as well as an economic and ethical cost — that platforms
can no longer afford to ignore.”
The EU’s Digital Services Act requires platforms to mitigate risks from online
content. The European Commission works “hand in hand” with EU countries on
protections for kids online and the enforcement of these measures “towards the
very large platforms is the responsibility of the Commission,” Commission
spokesperson Thomas Regnier said Tuesday when asked about Sánchez’s
announcement.
The EU executive in December imposed a €120 million fine on Elon Musk’s X for
failing to comply with transparency obligations, and a probe into the platform’s
efforts to counter the spread of illegal content and disinformation is ongoing.
BRUSSELS — The European Commission opened a fresh investigation Monday into Elon
Musk’s X following an explosion of non-consensual sexualized deepfakes created
by the artificial intelligence chatbot Grok.
The Commission will decide whether X met EU requirements to protect users when
it integrated Grok into the social media platform and its underlying algorithm.
X is already under investigation on several fronts under the EU’s Digital
Services Act, which regulates social media platforms, and was in December fined
€120 million for lapses in transparency. Penalties can reach up to 6 percent of
X’s annual global revenue.
The new investigation will look into whether the company properly assessed and
mitigated the risks of integrating Grok, particularly those of “manipulated
sexually explicit images” including some that “may amount to child sexual abuse
material,” the Commission said.
But the investigation “is much broader” than these images, a senior Commission
official said during a briefing.
The chatbot may have generated as many as 3 million non-consensual sexual images
and 20,000 child sexual abuse images in the 11 days before it made changes to
stop the spread of such photos, according to an estimate by civil society
groups.
On top of the new investigation, the Commission will expand a 2023 probe to look
into the impact of X’s decision, announced last week, to switch the algorithm
for its social media platform to a Grok-based system.
The Commission said Monday it could take interim steps — for example, order X to
change its algorithms or shut down the chatbot — “in the absence of meaningful
adjustments to the X service,” something the EU has so far shied away from doing
for Musk’s platform.
The threshold for such measures is “really high,” a second senior Commission
official said.
The image-generating feature of Grok went viral just before the end of 2025, as
users instructed the chatbot to alter images of real people. This led to global
outcry and calls from EU lawmakers to ban nudification AI apps as well as crack
down on Grok.
The platform did restrict the chatbot’s image generation abilities in January,
initially by limiting them to paid subscribers of Grok. The Commission said at
the time it was assessing whether changes made to Grok were sufficient.
EU officials found initial changes insufficient and voiced their concerns to the
platform, after which the platform took further steps. “I dare say that without
our interaction, probably none of these kind of changes that they have done
would have appeared,” the second official said.
X did not immediately respond to POLITICO’s request for comment.
The deal creating a majority-American board for TikTok’s U.S. arm puts President
Donald Trump’s allies in charge of yet another driver of American culture.
The wildly popular short-form-video platform now joins CBS and the social media
giant X among the stable of key communication channels that have come under more
Trump-friendly management in recent years. The president has also taken more
modest swings at reshaping the zeitgeist, from placing his stamp on the Kennedy
Center to weighing in on television programming to appointing conservative
actors to be his “eyes” and “ears” in Hollywood.
But TikTok, which is used by over 200 million Americans according to the
company, stands out from the rest because of its huge appeal among teens and
pre-teens who form the next rising blocs of voters. For Trump’s critics, that
means years of worries about TikTok acting as a vector for Beijing’s
propaganda are giving way to fears that its algorithm could soon serve up a
flood of far-right, pro-MAGA content to impressionable users.
“We’ve seen the platform transfer from one set of owners, where there was one
set of concerns about propaganda and privacy, to a new set of owners, where now
there’s a new set of concerns about propaganda and privacy,” said Evan Greer,
director of the progressive tech group Fight for the Future.
Katie Harbath, a tech consultant and former longtime public policy director at
Meta, said Trump recognizes “the importance of trying to have friends in these
different places,” including TikTok. She said the president “understands the
influence it has on what people think — and then ultimately, how people vote.”
Trump himself expressed hope late Thursday that the deal could cement his place
in young voters’ hearts.
TikTok “will now be owned by a group of Great American Patriots and Investors,
the Biggest in the World, and will be an important Voice,” the president wrote
on his social media network Truth Social. “Along with other factors, it was
responsible for my doing so well with the Youth Vote in the 2024 Presidential
Election. I only hope that long into the future I will be remembered by those
who use and love TikTok.”
Spokespeople for TikTok and the White House did not respond to questions about
how the deal could impact TikTok’s algorithm or boost right-leaning content on
the platform.
The long-awaited deal, carefully brokered by the White House, is intended to
satisfy national security concerns about TikTok. A bipartisan law passed in 2024
required the platform’s China-based parent company to sell it to U.S. owners or
face a full-scale ban.
At the forefront of TikTok’s new ownership structure is Larry Ellison,
billionaire co-founder and executive chair of the tech giant Oracle and a close
Trump ally. Oracle first partnered with TikTok during Trump’s first term, when
the president helped broker a deal that tapped Ellison’s company to help run the
app’s U.S. operations. An Oracle spokesperson declined to comment.
Meanwhile, Skydance Media, a media conglomerate led by Ellison’s son David, made
a deal last year that gave it ownership of CBS News, then began making
programming and news decisions widely seen as steering the network in a more
pro-Trump direction. Those included installing new leadership at
CBS and delaying the airing of a report on “60 Minutes” that was critical of
Trump’s immigration policies. A spokesperson for Skydance Media did not respond
to a request for comment.
David Ellison is now vying to purchase the parent company of CNN — and,
according to The Wall Street Journal, offered assurances to Trump administration
officials that he would “make sweeping changes” to the news network.
After Elon Musk purchased Twitter in 2022, he rebranded the social media site as
X and ripped away safeguards meant to stop the spread of disinformation and
hateful content, while reinstating the accounts of far-right users whom the
company had previously banned. (Twitter’s old management had even kicked Trump
himself off its platform following the Jan. 6 Capitol Hill insurrection in
2021.) Several studies have since suggested that Musk’s changes prompted an
increase in hateful content, pro-Trump content and pro-GOP content across the
platform. A spokesperson for X did not respond to a request for comment.
Now, some observers on both sides of the political divide say the same
phenomenon could repeat under TikTok’s new owners.
“What I’m more interested in is just sort of the cultural vibe shift that the
change in ownership will bring,” said Harbath. She said TikTok’s fate could
mirror what happened when Musk took over Twitter — “before he even made changes,
there was kind of a mass exodus of people, particularly on the left, who left
Twitter and went to Bluesky.”
Only time will tell if TikTok goes the way of X under new management. Tilting
its algorithm toward far-right content could cause users to flee the platform,
potentially undermining its profitability — a fate some of TikTok’s new owners
may be keen to avoid.
“I haven’t heard anything to suggest that this is necessarily going to go in the
Elon Musk direction,” said Lindsay Gorman, managing director of the German
Marshall Fund’s technology program. “Many of these investors were previous
investors of TikTok originally.”
Alex Bruesewitz, a Trump political adviser and head of X Strategies — the firm
that manages the Team Trump TikTok account — said the president “has always been
popular on TikTok,” and that people shouldn’t worry that the new owners will
tweak its algorithm to boost Republicans.
“The Democrats are the party that likes to dictate what social media companies
do with their algorithms,” said Bruesewitz. “I don’t think that’s something that
the Trump White House is interested in doing. I don’t think that they want to
tell platforms how to run their businesses.”
Amanda Carey Elliott, a Republican digital consultant, expressed discomfort at
the notion of a “Republican billionaire pulling the levers of TikTok in our
favor,” fearing it could drive moderates and independents off the app.
“That said, you also have to understand where Republicans are coming from on
this,” said Elliott. “For years and years, we were subjected to online
censorship by platforms controlled by liberal Silicon Valley. Expecting to be
censored has literally been built into our DNA, so you’ll probably be
hard-pressed to find any Republican clutching their pearls at the thought of the
left suddenly waking up one day to find themselves on the wrong side of an
algorithm.”
John Hendel contributed to this report.