LONDON — Keir Starmer is off to China to try to lock in some economic wins he
can shout about back home. But some of the trickiest trade issues are already
being placed firmly in the “too difficult” box.
The U.K.’s trade ministry quietly dispatched several delegations to Beijing over
the fall to hash out deals with the Chinese commerce ministry and lay the
groundwork for the British prime minister’s visit, which gets going in earnest
Wednesday.
But the visit comes as Britain faces growing pressure from its Western allies to
combat Chinese industrial overproduction — and just weeks after Starmer handed
his trade chief new powers to move faster in imposing tariffs on cheap,
subsidized imports from countries like China.
For now, then, the aim is to secure progress in areas that are seen as less
sensitive.
Starmer’s delegation of CEOs and chairs will split their time between Beijing
and Shanghai, with executives representing City giants and high-profile British
brands including HSBC, Standard Chartered, Schroders, and the London Stock
Exchange Group, alongside AstraZeneca, Jaguar Land Rover, Octopus Energy, and
Brompton filling out the cast list. Starmer will be flanked on his visit by
Trade Secretary Peter Kyle and City Minister Lucy Rigby.
Despite the weighty delegation, ministers insist the approach is deliberately
narrow.
“We have a very clear-eyed approach when it comes to China,” Security Minister
Dan Jarvis said Monday. “Where it is in our national interest to cooperate and
work closely with [China], then we will do so. But when it’s our national
security interest to safeguard against the threats that [they] pose, we will
absolutely do that.”
Starmer’s wishlist will be carefully calibrated not to rock the boat. Drumming
up Chinese cash for heavy energy infrastructure, including sensitive wind
turbine technology, is off the table.
Instead, the U.K. has been pushing for lower whisky tariffs, improved market
access for services firms, recognition of professional qualifications, banking
and insurance licences for British companies operating in China, easier
cross-border investment, and visa-free travel for short stays.
With China fiercely protective of its domestic market, some of those asks will
be easier said than done. Here’s POLITICO’s pro guide to where it could get
bumpy.
CHAMPIONING THE CITY OF LONDON
Britain’s share of China’s services market was a modest 2.7 percent in 2024 —
and U.K. firms are itching for more work in the country.
British officials have been pushing for recognition of professional
qualifications for accountants, designers and architects — which would allow
professionals to practice in China without re-licensing locally — and visa-free
travel for short stays.
Vocational accreditation is a “long-standing issue” in the bilateral
relationship, with “little movement” so far on persuading Beijing to recognize
U.K. professional credentials as equivalent to its own, according to a senior
industry representative familiar with the talks, who, like others in this
report, was granted anonymity to speak freely.
Britain is one of the few developed countries still missing from China’s
visa-free list, which now includes France, Germany, Italy, Spain, the
Netherlands, Switzerland, Australia, New Zealand, Japan, Saudi Arabia, Russia
and Sweden.
Starmer is hoping to mirror a deal struck by Canadian PM Mark Carney, whose own
China visit unlocked visa-free travel for Canadians.
The hope is that easier business travel will reduce friction and let people explore opportunities on the ground. Visa-free entry would allow British citizens to visit China for tourism, attend business conferences, see friends and family, and take part in short exchange activities.
SMOOTHING FINANCIAL FLOWS
Financial Conduct Authority Chair Ashley Alder is also flying out to Beijing, hoping to secure closer alignment between the two countries’ capital markets. He’ll represent Britain’s financial watchdog at the inaugural U.K.-China Financial Working Group — and bang the drum for better market connectivity between the U.K. and China.
Expect emphasis on the cross-border investment mechanisms known as the Shanghai-London and Shenzhen-London Stock Connect, plus data sovereignty issues
associated with Chinese companies jointly listing on the London Stock Exchange,
two figures familiar with the planning said.
The Stock Connect opened up both markets to investors in 2019 and, according to Alder, has led to listings worth almost $6 billion.
“Technical obstacles have so far prevented us from realizing Stock Connect’s
full potential,” Alder said in a speech last year. Alder pointed to a memorandum
of understanding being drawn up between the FCA and China’s National Financial
Regulatory Administration, which he said is “critical” to allow information to
be shared quickly and for firms to be supervised across borders. But that raises
its own concerns about Chinese use of data.
“The goods wins are easier,” said a senior British business representative
briefed on the talks. “Some of the service ones are more difficult.”
TAPPING INTO CHINA’S BIOTECH BOOM
Pharma executives, including AstraZeneca’s CEO Pascal Soriot, are among those
heading to China, as Britain tries to burnish its credentials as a global life
sciences hub — and attract foreign direct investment.
China, once known mainly for generics — cheaper versions of branded medicine
that deliver the same treatment — has rapidly emerged as a pharma powerhouse.
According to ING Bank’s global healthcare lead, Stephen Farrelly, the country
has “effectively replaced Europe” as a center of innovation.
ING data shows China’s share of global innovative drug approvals jumped from
just 4 percent in 2014 to 27 percent in 2024.
Several blockbuster drug patents are set to expire in the coming years, opening
the door for cheaper generic competitors. To refill thinning pipelines,
drugmakers are increasingly turning to biotech companies. British pharma giant
GSK signed a licensing deal with Chinese biotech firm Hengrui Pharma last July.
“Because of the increasing relevance of China, the big pharma industry and the
U.K. by definition is now looking to China as a source of those new innovative
therapies,” Farrelly said.
There are already signs of progress. Science Minister Patrick Vallance said late
last year that the U.K. and China are ready to work together in
“uncontroversial” areas, including health, after talks with his Chinese
counterpart. AstraZeneca, the University of Cambridge and Beijing municipal
parties have already signed a partnership to share expertise.
And earlier this year, the U.K. announced plans to become a “global first choice
for clinical trials.”
“The U.K. can really help China with the trust gap” when it comes to getting
drugs onto the market, said Quin Wills, CEO of Ochre, a biotech company
operating in New York, Oxford and Taiwan. “The U.K. could become a global gold
stamp for China. We could be like a regulatory bridgehead where [healthcare
regulator] MHRA, now separate from the EU since Brexit, can do its own thing and
can maybe offer a 150-day streamlined clinical approval process for China as
part of a broader agreement.”
SLASHING WHISKY TARIFFS
The U.K. has also been pushing for lowered tariffs on whisky alongside wider
agri-food market access, according to two of the industry figures familiar with
the planning cited earlier.
Talks at the end of 2024 between then-Trade Secretary Jonathan Reynolds and his
Chinese counterpart ended Covid-era restrictions on exports, reopening pork
market access.
But in February 2025 China doubled its import tariffs on brandy and whisky,
removing its provisional 5 percent tariff and applying the 10 percent
most-favored-nation rate.
“The whisky and brandy issue became China leverage,” said the senior British
business representative briefed on the talks. “I think that they’re probably
going to get rid of the tariff.”
It’s not yet clear how China would lower whisky tariffs without breaching World
Trade Organization rules, which say it would have to lower its tariffs to all
other countries too.
INDUSTRIAL TENSIONS
The trip comes as the U.K. faces growing international pressure to take a
tougher line on Chinese industrial overproduction, particularly of steel and
electric cars.
But while the U.K.’s allies in the European Union and the U.S. have imposed
tariffs on Chinese EVs, the U.K. has resisted pressure to do so.
There’s a deal “in the works” between a Chinese EV maker and Jaguar Land Rover, said the senior British business representative briefed on the talks quoted above, with the two “looking for a big investment announcement. But nothing has been agreed.” The deal would see the Chinese carmaker use JLR’s factory in the U.K. to build cars in Britain, the FT reported last week.
“Chinese companies are increasingly focused on localising their operations,”
said another business representative familiar with the talks, noting Chinese EV
makers are “realising that just flaunting their products overseas won’t be a
sustainable long term model.”
It’s unlikely Starmer will land a deal on heavy energy infrastructure, including
wind turbine technology, that could leave Britain vulnerable to China. The U.K.
has still not decided whether to let Ming Yang, a Chinese firm, invest £1.5
billion in a wind farm off the coast of Scotland.
A clash between Poland’s right-wing president and its centrist ruling coalition
over the European Union’s flagship social media law is putting the country
further at risk of multimillion euro fines from Brussels.
President Karol Nawrocki is holding up a bill that would implement the EU’s
Digital Services Act, a tech law that allows regulators to police how social
media firms moderate content. Nawrocki, an ally of U.S. President Donald Trump,
said in a statement that the law would “give control of content on the internet
to officials subordinate to the government, not to independent courts.”
The government coalition led by Prime Minister Donald Tusk, Nawrocki’s rival, warned this further exposed Poland to the risk of EU fines as high as €9.5 million.
Deputy Digital Minister Dariusz Standerski said in a TV interview that, “since
the president decided to veto this law, I’m assuming he is also willing to have
these costs [of a potential fine] charged to the budget of the President’s
Office.”
Nawrocki’s refusal to sign the bill brings back bad memories of Warsaw’s
years-long clash with Brussels over the rule of law, a conflict that began when
Nawrocki’s Law and Justice party rose to power in 2015 and started reforming the
country’s courts and regulators. The EU imposed €320 million in penalties on
Poland from 2021-2023.
Warsaw has been fighting with the Commission over its slow implementation of the tech rulebook since 2024, when the EU executive put Poland on notice for delaying the law’s implementation and for failing to designate a responsible authority. In May last year Brussels took Warsaw to court over the issue.
If the EU imposes new fines over the rollout of digital rules, it would
“reignite debates reminiscent of the rule-of-law mechanism and frozen funds
disputes,” said Jakub Szymik, founder of Warsaw-based non-profit watchdog group
CEE Digital Democracy Watch.
Failure to implement the tech law could in the long run even lead to fines and
penalties accruing over time, as happened when Warsaw refused to reform its
courts during the earlier rule of law crisis.
The European Commission said in a statement that it “will not comment on
national legislative procedures.” It added that “implementing the [Digital
Services Act] into national law is essential to allow users in Poland to benefit
from the same DSA rights.”
“This is why we have an ongoing infringement procedure against Poland” for its
“failure to designate and empower” a responsible authority, the statement said.
Under the tech platforms law, countries were supposed to designate a national
authority to oversee the rules by February 2024. Poland is the only EU country
that hasn’t moved to at least formally agree on which regulator that should be.
The European Commission is the chief regulator for a group of very large online
platforms, including Elon Musk’s X, Meta’s Facebook and Instagram, Google’s
YouTube, Chinese-owned TikTok and Shein and others.
But national governments have the power to enforce the law on smaller platforms
and certify third parties for dispute resolution, among other things. National
laws allow users to exercise their rights to appeal to online platforms and
challenge decisions.
When blocking the bill last Friday, Nawrocki said a new version could be ready
within two months.
But that was “very unlikely … given that work on the current version has been
ongoing for nearly two years and no concrete alternative has been presented” by
the president, said Szymik, the NGO official.
The Digital Services Act has become a flashpoint in the political fight between
Brussels and Washington over how to police online platforms. The EU imposed its
first-ever fine under the law on X in December, prompting the U.S.
administration to sanction former EU Commissioner Thierry Breton and four other
Europeans.
Nawrocki last week likened the law to “the construction of the Ministry of Truth
from George Orwell’s novel 1984,” a criticism that echoed claims by Trump and
his top MAGA officials that the law censored conservatives and right-wingers.
Bartosz Brzeziński contributed reporting.
LONDON — Standing in Imperial College London’s South Kensington Campus in
September, Britain’s trade chief Peter Kyle insisted that a tech pact the U.K.
had just signed with the U.S. wouldn’t hamper his country’s ability to make its
own laws on artificial intelligence.
He had just spoken at an intimate event to celebrate what was meant to be a new
frontier for the “special relationship” — a U.K.-U.S. Technology Prosperity
Deal.
Industry representatives were skeptical, warning at the time the U.S. deal would
make the path to a British AI bill, which ministers had been promising for
months, more difficult.
This month U.K. Tech Secretary Liz Kendall confirmed ministers are no
longer looking at a “big, all-encompassing bill” on AI.
But Britain’s shift from warning the world about runaway AI to abandoning its own attempts to legislate frontier models, such as ChatGPT and Google’s Gemini, goes back much further than that September morning.
GEAR CHANGE
In opposition Prime Minister Keir Starmer promised “stronger” AI
regulation. His center-left Labour Party committed to “binding regulation” on
frontier AI companies in its manifesto for government in 2024, and soon after it
won a landslide election that summer it set out plans for AI legislation.
But by the fall of 2024 the view inside the U.K. government was changing.
Kyle, then tech secretary, had asked tech investor Matt Clifford to write an “AI
Opportunities Action Plan” which Starmer endorsed. It warned against copying
“more regulated jurisdictions” and argued the U.K. should keep
its current approach of letting individual regulators monitor AI in their
sectors.
In October 2024 Starmer described AI as the “opportunity of this
generation.” AI shifted from a threat to be legislated to an answer to Britain’s
woes of low productivity, crumbling public services and sluggish economic
growth. Labour had come to power that July promising to fix all three.
A dinner that month with Demis Hassabis, chief executive and co-founder of
Google DeepMind, reportedly opened Starmer’s eyes to the opportunities of AI.
Hassabis was coy on the meeting when asked by POLITICO, but Starmer got Hassabis
back the following month to speak to his cabinet — a weekly meeting of senior
ministers — about how AI could transform public services. That has been the
government’s hope ever since.
In an interview with The Economist this month Starmer spoke about AI as a binary
choice between regulation and innovation. “I think with AI you either lean in
and see it as a great opportunity, or you lean out and think, ‘Well, how do we
guard ourselves against the risk?’ I lean in,” he said.
ENTER TRUMP
The evolution of Starmer’s own views in the fall of 2024 coincided with the
second coming of Donald Trump to the White House.
In a letter to the U.S. attorney general the month Trump was elected, influential Republican senator Ted Cruz accused the U.K.’s AI Security Institute of hobbling America’s efforts to beat China in the race to powerful AI.
The White House’s new occupants saw AI as a generational competition between
America and China. Any attempt by foreign regulators to hamper its development
was seen as a threat to U.S. national security.
It appeared Labour’s original plan, to force largely U.S. tech companies
to open their models to government testing pre-release, would not go down well
with Britain’s biggest ally.
Instead, U.K. officials adapted to the new world order. In Paris in February
2025, at an international AI Summit series which the U.K. had set up in 2023 to
keep existential AI risks at bay, the country joined the U.S. in refusing to
sign an international AI declaration.
The White House went on to attack international AI governance efforts, with its
director of tech policy Michael Kratsios telling the U.N. that the U.S. wanted
its AI technology to become the “global gold standard” with allies building
their own AI tech on top of it.
The U.K. was the first country to sign up, agreeing
the Technology Prosperity Deal with the U.S. that September. At the signing
ceremony, Trump couldn’t have been clearer. “We’re going to have a lot
of deregulation and a tremendous amount of innovation,” he told a group of
hand-picked business leaders.
The deal, which was light on detail, was put on ice in early December as the
U.S. used it to try to extract more trade concessions from the Brits. Kratsios,
one of the architects of that tech pact, said work on it would resume once the
U.K. had made “substantial” progress in other areas of trade.
DIFFICULT HOME LIFE
While Starmer’s overtures to the U.S. have made plans for an AI bill more
difficult, U.K. lawmakers have further complicated any attempt to introduce
legislation. A group of powerful “tech peers” in the House of Lords have vowed
to hijack any tech-related bill and use it to force the government to make
concessions in other areas where they have concerns, such as AI and copyright, just
as they did this summer over the Data Use and Access Bill.
Senior civil servants have also warned ministers a standalone AI bill could
become a messy “Christmas Tree” bill, adorned with unrelated amendments, according
to two officials granted anonymity to speak freely.
The government’s intention is to instead break any AI-related legislation
up into smaller chunks. Nudification apps, for example, will be banned as part
of the government’s new Violence Against Women and Girls Strategy, AI chatbots
are being looked at through a review of the Online Safety Act, while there will
also need to be legislation for AI Growth Labs — testbeds where companies can
experiment with their products before going to market.
Asked about an AI bill by MPs on Dec. 3, Kendall said: “There are measures
we will need to take to make sure we get the most on growth and deal with
regulatory issues. If there are measures we need to do to protect kids online,
we will take those. I am thinking about it more in terms of specific areas where
we may need to act rather than a big all-encompassing bill.”
The team in Kendall’s department which looks at frontier AI regulation,
meanwhile, has been reassigned, according to two people familiar with the team.
Polling by the Ada Lovelace Institute shows Labour’s leadership is out of
sync with public views on AI, with 9 in 10 wanting an independent AI regulator
with enforcement powers.
“The public wants independent regulation,” said Ada Lovelace Director Gaia
Marcus. “They prioritize fairness, positive social impacts and safety in
trade-offs against economic gains, speed of innovation and international
competition.”
A separate study by Focal Data found that framing AI as a geopolitical
competition also doesn’t resonate with voters. “They don’t want to work more
closely with the United States on shared digital and tech goals because of their
distrust of its government,” the research found.
Political leadership must step in to bridge that gap, former U.K. prime minister
Tony Blair wrote in a report last month. “Technological competitiveness is not a
priority for voters because European leaders have failed to connect it to what
citizens care about: their security, their prosperity and their children’s
futures,” he wrote.
For Starmer, who has struggled to connect with voters, that will be a huge
challenge.
The first Italian lawsuit over alleged AI-related copyright violations has been filed. Reti Televisive Italiane (Rti) and Medusa Film, companies of the Mediaset group, have lodged a complaint with the Civil Court of Rome against the U.S. startup Perplexity AI, accused of using audiovisual and cinematic content “without permission and on a large scale” to train its generative AI systems.
According to Rti and Medusa, the American company’s activity goes beyond simple data scraping: it allegedly constitutes a violation of copyright and other related rights, threatening the cultural and creative industries. With the complaint, the two companies are seeking an immediate block on any unauthorized use, a finding of civil liability and compensation for damages, with a daily penalty to apply in the event of further violations.
The Italian case lands in an already tense international context. In the United States, Perplexity AI has been sued by Encyclopaedia Britannica and Merriam-Webster for reproducing copyrighted articles and definitions, while other publishing groups, including companies linked to News Corp and the BBC, have reported unauthorized use of their content for training AI models. In Japan, newspapers such as the Asahi Shimbun and Nikkei have filed similar complaints.
The dispute touches on one of the crucial questions in the AI debate: how to reconcile technological innovation with the protection of creative content and of journalistic and audiovisual work. Perplexity AI has not yet commented on the legal action, but the Rti-Medusa case could set an important precedent for future regulation in Italy and across Europe.
Aravind Srinivas, CEO of Perplexity AI, said in the past when presenting the Publisher Program, which provides for revenue sharing with publishers: “Perplexity cannot succeed without publishers. We do not take publishers’ content and train models on it. We want to create something different, which is to share our revenues.”
The article “Copyright e AI, prima causa in Italia: Rti e Medusa contro Perplexity AI” originally appeared in Il Fatto Quotidiano.
From The Swap: A Secret History of the New Cold War by Drew Hinshaw and Joe Parkinson. Copyright © 2025 by Drew Hinshaw and Joe Parkinson. Published by Harper, an imprint of HarperCollins Publishers.
In the third week of March 2023, Vladimir Putin dialed onto a video call and
reached for a winning tactic he had been honing since his first weeks as
president. He approved the arrest of another American.
By then, Russia’s president was running the world’s largest landmass from a
series of elaborately constructed, identical conference rooms. As far as the CIA
could tell, there were at least three of them across Russia, each custom-built
and furnished to the exact same specifications, down to the precise positioning
of a presidential pencil holder, engraved with a double-headed eagle, the state
symbol tracing back five centuries, on the lacquered wooden desk. Neither the 10
perfectly sharpened pencils inside nor any other detail in the windowless rooms,
with their beige-paneled walls and a decor of corporate efficiency, offered a
clue to Putin’s true location.
Russia’s president refused to use a cell phone and rarely used the internet.
Instead, he conducted meetings through the glow of a large screen monitor,
perched on a stand rolled in on wheels. The grim-faced officials flickering onto
the screen, many of whom had spent decades in his close company, often were not
aware from which of the country’s 11 time zones their commander in chief was
calling. Putin’s staff sometimes announced he was leaving one city for another,
then dispatched an empty motorcade to the airport and a decoy plane before he
appeared on a videoconference, pretending to be somewhere he was not.
From these Zoom-era bunkers, he had been governing a country at war, issuing
orders to front-line commanders in Ukraine, and tightening restrictions at home.
Engineers from the Presidential Communications Directorate had been sending
truckloads of equipment across Russia to sustain the routine they called Special
Comms, to encrypt the calls of “the boss.” The computers on his desks remained
strictly air-gapped, or unconnected to the web. Some engineers joked nervously
about the “information cocoon” the president was operating in.
But even from this isolation, the president could still leverage an asymmetric
advantage against the country his circle called their “main enemy.” One of the
spy chiefs on the call was proposing an escalation against America. Tall,
mustachioed, and unsmiling, Major General Vladislav Menschikov ranked among the siloviki, or “men of strength” from the security services, who had risen
in Putin’s slipstream. The president trusted him enough to run Russia’s nuclear
bunkers and he played ice hockey with his deputies.
Few people outside a small circle of Kremlinologists had heard of Menschikov,
head of the First Service of the Federal Security Service, or FSB, the successor
to the KGB. But everybody in America had watched the spectacular operation he
had pulled off just a few months earlier. An elite spy agency under his command
orchestrated the arrest of an American basketball champion, Brittney Griner.
Hollywood stars and NBA legends including Steph Curry and LeBron James demanded
President Joe Biden ensure her swift return, wearing “We Are BG” shirts on
court. Menschikov helped oversee her exchange in a prisoner swap for Viktor
Bout, an infamous Russian arms dealer nicknamed “the Merchant of Death,” serving
25 years in an Illinois penitentiary.
This account is based on interviews with former and current Russian, U.S. and
European intelligence officials, including those who have personally been on a
video call with Putin, and the recollections of an officer in the Russian
leader’s Presidential Communications Directorate, whose account of Putin’s
conference call routine matched publicly available information. Those sources
were granted anonymity to discuss the sensitive details of the president’s
calls.
Trading a notorious gunrunner for a basketball player was a stunning example of
Russia’s advantage in “hostage diplomacy,” a form of statecraft that died with
the Cold War only for Putin to resurrect it. In penal colonies across Russia,
Menschikov’s subordinates were holding still more Americans, ready to swap for
the right price. They included a former Marine, mistaken for an intelligence
officer, who had come to Moscow for a wedding, and a high school history teacher
whose students had included the CIA director’s daughter, caught in the airport
carrying medical marijuana. Disappointingly, neither of their ordeals had yet brought the desired offer from Washington.
Menschikov’s proposal was to cross a threshold Moscow hadn’t breached since the
Cold War and jail an American journalist for espionage. A young reporter from
New Jersey — our Wall Street Journal colleague and friend Evan Gershkovich — was
flying from Moscow to Yekaterinburg to report on the increased output of a local
tank factory. If the operation went to plan, the reporter could be exchanged for
the prisoner Putin referred to as “a patriot,” an FSB officer serving a life
sentence in Germany for gunning down one of Russia’s enemies in front of a
Berlin coffee shop called All You Need Is Love. The murderer had told the police
nothing, not even his name.
From the moment Putin gave his assent, a new round of the game of human poker would begin, one that would see a cavalcade of spies, diplomats and wannabe mediators, including oligarchs, Academy Award-winning filmmakers and celebrities, seek to help inch a trade toward fruition. Hillary Clinton and Tucker Carlson, an unlikely pairing, would both step in to advance talks, alongside Saudi Crown Prince Mohammed bin Salman, Turkey’s President Recep Tayyip Erdogan, former Google CEO Eric Schmidt, and Rupert Murdoch, the media mogul who would wrestle with whether to fly to Moscow to personally petition Putin.
All told, CIA officers would fly thousands of miles to orchestrate a deal that
would come to encompass 24 prisoners. On the Russian side, hackers, smugglers, spies and Vadim Krasikov, the murderer Putin had set out to free, were all released. In return, the U.S. and its allies were able to free dissidents, Westerners serving draconian sentences, former Marine Paul Whelan, and journalists including the Washington Post’s Vladimir Kara-Murza, Radio Free Europe’s Alsu Kurmasheva, and our newspaper’s Gershkovich.
Looking back, what is remarkable is how well it all went for the autocrat in the
Kremlin, who would manage to outplay his fifth U.S. president in a contest of
taking and trading prisoners once plied by the KGB he joined in his youth. An
adage goes that Russia, in the 21st century, has played a poor hand well. The
unbelievable events that followed also raise the question of how much blind luck
— and America’s own vulnerabilities — have favored the man in the “information
cocoon.” The prisoner game continues even under President Donald Trump, who in
his second term’s opening months conducted two swaps with Putin, then in May
discussed the prospect of an even larger trade.
It is a lesser-known item of the Russian president’s biography that he grabbed
his first American bargaining chip just eight days after his March 2000
election, when the FSB arrested a former naval officer, Edmond Pope, on
espionage charges. It took a phone call from Bill Clinton for the youthful Putin
to pardon Pope, an act of swift clemency he would never repeat.
Twenty-three years later, on the videoconference call with General Menschikov,
Putin was in a far less accommodating mood. He wanted to force a trade to bring
back the FSB hitman he privately called “the patriot” — he’d been so close to
Krasikov, they’d fired rounds together on the shooting range. Some CIA analysts
believed he was Putin’s personal bodyguard. In the previous months, before he
approved Gershkovich’s arrest, three Russian spy chiefs asked the CIA if they
could trade Krasikov, only to hear that rescuing a Russian assassin from a German jail was a delusional thing to ask of the United States. Days before the call, one of Putin’s aides phoned CIA Director Bill Burns and asked once more for good measure, and was told, again, that the entire idea was beyond the pale.
Menschikov’s officers would test that point of principle. His men would arrest
the reporter, once he arrived in Yekaterinburg.
--------------------------------------------------------------------------------
It was just after 1 p.m. in The Wall Street Journal’s small security office in
New Jersey, and Gershkovich’s tracking app was no longer pinging. The small team
of analysts monitoring signals from reporters deployed across the front lines of
Ukraine and other global trouble spots had noticed his phone was offline, but
there was no need to raise an immediate alarm. Yekaterinburg, where the Russia
correspondent was reporting, was east of the Ural Mountains, a thousand miles
from the artillery and missile barrages pummeling neighboring Ukraine. Journal
staff regularly switched off their phones, slipped beyond the reach of cell
service, or just ran out of battery. The security team made a note in the log.
It was probably nothing.
A text came in to the Journal’s security manager. “Have you been in touch with
Evan?”
The security manager had spent the day monitoring reporters near the Ukrainian
front lines, or others in Kyiv who’d taken shelter during a missile bombardment.
But he noticed Gershkovich had missed two check-ins and ordered the New
Jersey team to keep trying him. “Shit,” he texted back, then fired off a message
to senior editors.
The Journal’s headquarters in Midtown Manhattan looked out through a cold March
sky onto Sixth Avenue. Within minutes, staff gathering in the 45-story News
Corporation Building or dialing in from Europe were scrambling to reach contacts
and piece together what was happening in Russia. The paper’s foreign
correspondents with experience in Moscow were pivoting from finalizing stories
to calling sources who could locate their colleague. One reached a taxi driver
in Yekaterinburg and urged him to stop by the apartment where Gershkovich was
staying. The driver called back minutes later, saying he’d found only dark
windows, the curtains still open. “Let’s hope for the best,” he said.
Though there were still no news reports on Gershkovich’s disappearance nor
official comment from Russia’s government, the data points suggested something
had gone badly wrong. The Journal scheduled a call with the Russian ambassador
in Washington, but when the hour came was told, “He is unfortunately not
available.” The problem reached the new editor-in-chief, Emma Tucker, who
listened quietly before responding in a voice laced with dread. “I understand.
Now what do we do?”
Only eight weeks into the job — in a Manhattan apartment so new it was furnished
with only a mattress on the floor — Tucker was still trying to understand the
Journal’s global org chart, and had met Gershkovich just once, in the paper’s
U.K. office. Now she was corralling editors, lawyers and foreign correspondents
from Dubai to London onto conference calls to figure out how to find him. A
Pulitzer Prize finalist and Russia specialist on her staff made a grim
prediction. If the FSB had him, it wasn’t going to be a short ordeal: “He’s
going to spend his 30s in prison.” And when editors finally located the
Journal’s publisher to inform him of what was going on, they hoped it wasn’t an
omen. Almar Latour was touring Robben Island, the prison off the coast of Cape
Town, South Africa, where Nelson Mandela served 18 of his 27 years of
incarceration.
There was a reporter nobody mentioned, but whose face was engraved into a plaque
on the newsroom wall. Latour had once sat next to Daniel “Danny” Pearl, the
paper’s intrepid and gregarious South Asia correspondent. In 2002, the
38-year-old was lured into an interview that turned out to be his own abduction,
and was beheaded on camera by Khalid Sheikh Mohammed, a mastermind of the
terrorist attacks of September 11, 2001 — leaving behind a pregnant wife and a
newsroom left to report the murder of their friend.
Paul Beckett, the Washington bureau chief and one of the last reporters to see
Pearl alive, had thought of him immediately. He managed to get Secretary of
State Antony Blinken on the phone. America’s top diplomat knew exactly who Evan
was; just that morning he had emailed fellow administration officials the
reporter’s latest front-page article, detailing how and where Western sanctions
were exacting long-term damage on Russia’s economy. It was an example, Blinken
told his office, of the great reporting still being done in Russia.
“Terrible situation,” Blinken told Beckett, before adding a promise America
would pay a steep price to keep: “We will get him back.”
--------------------------------------------------------------------------------
The Biden White House’s first move after learning of Gershkovich’s arrest was to
call the Kremlin — an attempt to bypass the FSB.
The arrest of an American reporter was a major escalation, and if National
Security Advisor Jake Sullivan could reach Yuri Ushakov, Vladimir Putin’s top
foreign policy specialist, he hoped he could convince Ushakov to step back
from the brink. At best, he assessed his odds of success at 10 percent, but this
was a crisis that seemed likely to either be resolved with a quick call or drag
on for who knows how long, and at what cost.
“We’ve got a big problem,” Sullivan told Ushakov. “We’ve got to resolve this.”
The answer that came back was swift and unambiguous.
“This is a legal process,” Ushakov said. There would be no presidential clemency
— only a trial, and if Washington wanted a prisoner trade, it was going to
have to arrange one through what the Russians called “the special channel.” In
other words, the CIA would have to talk to the FSB. Sullivan hung up, and his
team braced themselves to brief the Journal: the newspaper was going to need to
be patient.
The White House was trapped in a rigged game, facing the crude asymmetry between
the U.S. and Russia, whose leader, in power for a quarter-century, could simply
order foreigners plucked from their hotel rooms and sentenced to decades on
spurious charges. Griner, the basketball champion, hadn’t even returned to the
court in the three months since her exchange for “the Merchant of Death,” yet
already the Russians had scooped up another high-profile chip.
The CIA and its European allies had been quietly trying to fight back in this
game of human poker. They had spent enormous energy tracking and rounding up the
Russians Putin valued most: deep-cover spies, or “illegals,” who spent years
building false lives undercover, taking on foreign mannerisms and tongues.
Norwegian police, with U.S. help, had nabbed an agent of Russia’s GRU military
intelligence agency who was posing as a Brazilian professor of Arctic security
in Norway’s far north. Poland had arrested a Spanish-Russian freelance
journalist: His
iCloud held the reports he’d filed for the GRU, on the women — dissidents and
journalists — he’d wooed across Central and Eastern Europe. It had taken the spy
service of the Alpine nation of Slovenia, known as Owl, nearly a year to find,
then jail, a carefully hidden pair of married spies, pretending to be Argentines
running an art gallery — sleeper agents working for Moscow’s SVR foreign
intelligence agency. Not even their Buenos Aires-born children, to whom they
spoke in fluent Spanish, knew their parents’ true nationality or calling.
Yet for all that work, none of these prisoners worked for the agency that
mattered most in Russia and ran the “special channel” — the FSB. Putin himself
had once run Russia’s primary intelligence agency, and now it was in the hands
of his siloviki, the security men he’d known for decades who included
Menschikov. There was, the CIA knew, only one prisoner the FSB wanted back:
Krasikov, the FSB officer serving life in a German prison.
America was stuck. Every stick it could beat Russia with was already being
wielded. The world’s financial superpower was drowning Putin’s elite in
sanctions, and almost every week Sullivan authorized another carefully designed
shipment of weaponry to the battlegrounds of Ukraine, whose government
complained bitterly it was being given just enough to perpetuate a war, not
enough to win. And yet America’s government had to worry about the conflict
tipping into a nuclear exchange.
What else is there in our toolbag? Sullivan asked himself. We’re doing
everything we can. But the game was rigged. Which is why Putin kept playing it.
Senate Commerce Chair Ted Cruz (R-Texas) insisted Tuesday the idea of a 10-year
moratorium on state and local artificial intelligence laws remains alive —
despite a Republican argument that knocked it out of the summer’s budget bill.
“Not at all dead,” Cruz said at POLITICO’s AI & Tech Summit on Tuesday. “We had
about 20 battles, and I think we won 19. So I feel pretty good.”
Cruz said the controversial proposal made it further than conventional wisdom in
Washington suggested it could, ultimately surviving scrutiny from the Senate’s
rules referee thanks to the “very creative” work of his staff.
He took a swipe at the Democratic-led states that have been most aggressive in
passing tech legislation in the past few years: “Do you want Karen Bass and
Comrade Mamdani setting the rules for AI?” he asked, referring to the Los
Angeles mayor and New York City mayoral candidate.
Cruz acknowledged the moratorium fell out due to the opposition of Sen. Marsha
Blackburn (R-Tenn.), who was worried about the fate of her own state’s law
protecting musicians from AI copyright violations.
Cruz suggested the two are not in further talks about a path forward.
“She is doing her own thing,” Cruz said, while saying he was working closely
with the White House.
Many in Washington have long suspected the idea’s legislative prospects were
effectively dead after the GOP budget bill passed without its inclusion. It was
also opposed by a firm bloc of Republicans, including conservatives like
Sen. Josh Hawley (Mo.), Rep. Marjorie Taylor Greene (Ga.) and Steve Bannon.
Cruz has been actively engaged on artificial intelligence issues throughout the
current Congress. Last week, he offered a regulatory “sandbox” proposal that
would effectively loosen the regulatory load on emerging AI technologies.
White House Office of Science and Technology Policy Director Michael Kratsios
formally endorsed Cruz’s new plan during a committee hearing. Rep. Jay
Obernolte (R-Calif.), a leading House voice on AI issues, is preparing his own
legislation and hoping for “legislative oxygen” to advance it by the end of the
year.
Cruz said that “of course” his legislation would ensure certain existing laws,
like consumer safety protections, remain in force — amid concerns from outside
groups and Democrats that it could imperil the ability to enforce current
protections.
He said failing to pass laws unshackling AI would only benefit U.S. adversaries.
“The biggest winner of the status quo with no moratorium is China. Why? Because
we’re going to see contradictory regulations,” Cruz said.
LONDON — MPs enjoyed free hospitality at the U.K.’s top events this summer, with
Prime Minister Keir Starmer accepting tickets to Ascot for his family.
Google paid for several MPs to attend Glastonbury Festival, including Labour’s
Chris Curtis, Dan Aldridge, Leigh Ingham, Steve Race, Jake Richards and Fred
Thomas. Individual hospitality packages were worth up to £4,000.
Chair of the Culture, Media and Sport Committee, Conservative MP Caroline
Dinenage, who has been critical of tech companies using copyrighted material to
train their AI models, also took a Glastonbury ticket and a tent from Google.
Deputy Speaker in the Commons and fellow Tory Nusrat Ghani was also gifted a
ticket by the tech giant.
Other MPs, including Lib Dem tech spokesperson Victoria Collins and Labour MP
James Frith, got free Glastonbury tickets thanks to PRS for Music, a group
lobbying to protect musicians’ copyrighted material.
Rows over freebies caused the prime minister problems last year and he repaid
£6,000 worth of gifts. The register shows him accepting £650 of Ascot tickets
this summer for three family members.
Dozens of Conservative and Labour MPs also took free tickets to Silverstone and
Wimbledon this summer. Enjoying the British Grand Prix were Labour’s Calum
Anderson and Paula Barker, and Conservatives Charlie Dewhirst and Sarah Bool.
MPs have to declare gifts worth over £300 within 28 days of accepting them.
BRUSSELS — Brussels has served the world’s leading artificial intelligence
companies with a tricky summer dilemma.
OpenAI, Google, Meta and others must decide in the coming days and weeks whether
to sign up to a voluntary set of rules that will ensure they comply with the
bloc’s stringent AI laws — or refuse to sign and face closer scrutiny from the
European Commission.
Amid live concerns about the negative impacts of generative AI models such as
Grok or ChatGPT, the Commission on Thursday took its latest step to limit those
risks by publishing a voluntary set of rules instructing companies on how to
comply with new EU law.
The final guidance handed clear wins to European Parliament lawmakers and civil
society groups that had sought a strong set of rules, even after companies such
as Meta and Google had lambasted previous iterations of the text and tried to
get it watered down.
That puts companies in a tough spot.
New EU laws will require them to document the data used to train their models
and address the most serious AI risks as of Aug. 2.
They must decide whether to use guidance developed by academic experts under the
watch of the Commission to meet these requirements, or get ready to convince the
Commission they comply in other ways.
Companies that sign up for the rules will “benefit from more legal certainty and
reduced administrative burden,” Commission spokesperson Thomas Regnier told
reporters on Thursday.
French AI company Mistral on Thursday became the first to announce it would sign
on the dotted line.
WIN FOR TRANSPARENCY
Work on the so-called code of practice began in September, as an extension of
the bloc’s AI rulebook that became law in August 2024.
Thirteen experts embarked on a process focused on three areas: the transparency
AI companies need to show to regulators and customers who use their models; how
they will comply with EU copyright law; and how they plan to address the most
serious risks of AI.
The proceedings quickly boiled down to a few key points of contention.
Industry repeatedly emphasized that the guidance should not go beyond the
general direction of the AI Act, while campaigners complained the rules were at
risk of being watered down amid intense industry lobbying.
On Wednesday, European Parliament lawmakers said they had “great concern” about
“the last-minute removal of key areas of the code of practice,” such as
requiring companies to be publicly transparent about their safety and security
measures and “the weakening of risk assessment and mitigation provisions.”
In the final text put forward on Thursday, the Commission’s experts handed
lawmakers a win by explicitly mentioning the “risk to fundamental rights” on a
list of risks that companies are asked to consider.
Laura Lázaro Cabrera of the Center for Democracy and Technology, a civil rights
group, said it was “a positive step forward.”
Public transparency was also addressed: the text says companies will have to
“publish a summarised version” of the reports filed to regulators before putting
a model on the market.
Google spokesperson Mathilde Méchin said the company was “looking forward to
reviewing the code and sharing our views.”
Big Tech lobby group CCIA, which includes Meta and Google among its members, was
more critical, stating that the code “still imposes a disproportionate burden on
AI providers.”
“Without meaningful improvements, signatories remain at a disadvantage compared
to non-signatories,” said Boniface de Champris, senior policy manager at CCIA
Europe.
He criticized “overly prescriptive” safety and security measures and slammed the
copyright section for containing “new disproportionate measures outside the
Act’s remit.”
SOUR CLIMATE
A sour climate around the EU’s AI regulations and the drafting process for the
guidance will likely affect tech companies’ calculations on how to respond.
“The process for the code has so far not been well managed,” said Finnish
European Parliament lawmaker Aura Salla, a conservative politician and former
lobbyist for Meta, ahead of Thursday’s announcement.
The thirteen experts produced four drafts over nine months, a process that drew
more than 1,000 participants and was debated across several plenaries and four
working groups — often in the evenings, since some of the experts were based in
the U.S. or Canada.
The Commission’s Regnier applauded the process as “inclusive,” but both industry
and civil society groups said they felt they had not been heard.
The U.S. tech companies that must now decide whether to sign the code have also
been critical of the EU’s approach to other parts of its AI regulation.
Tech lobby groups, such as the CCIA, were among the first to call for a pause on
the parts of the EU’s AI Act that had not yet been implemented — specifically,
obligations for companies deploying high-risk AI systems, which are set to take
effect next year.
BRUSSELS — A series of Hitler-praising comments by Elon Musk’s artificial
intelligence chatbot Grok has fired up European policymakers to demand stronger
action against Big Tech companies as the bloc takes another step to enforce its
laws.
Musk’s chatbot this week sparked criticism for making antisemitic posts that
included glorifying Nazi leader Adolf Hitler as the best-placed person to deal
with alleged “anti-white hate,” after X updated its AI model over the weekend.
The latest foul-mouthed responses from the chatbot saw EU policymakers seize the
opportunity to demand robust rules for the most complex and advanced AI models —
such as the one that underpins Grok — in new industry guidance expected
Thursday.
It’s also put a spotlight on the EU’s handling of X, which is under
investigation for violating the bloc’s social media laws.
The Grok incident “highlights the very real risks the [EU’s] AI Act was designed
to address,” said Italian Social-Democrat European Parliament lawmaker Brando
Benifei, who led work on the EU’s AI rulebook that entered into law last year.
“This case only reinforces the need for EU regulation of AI chat models,” said
Danish Social-Democrat lawmaker Christel Schaldemose, who led work on the EU’s
Digital Services Act, designed to tackle dangerous online content such as hate
speech.
Grok owner xAI quickly removed the “inappropriate posts” and stated Wednesday it
had taken action to “ban hate speech before Grok posts on X,” without clarifying
what this entails.
The EU guidance is a voluntary compliance tool for companies that develop
general-purpose AI models, such as OpenAI’s GPT, Google’s Gemini or X’s Grok.
The European Commission last week gave a closed-door presentation seen by
POLITICO that suggested it would remove demands from earlier drafts, including
one requiring companies to share information on how they address systemic risks
stemming from their models.
Lawmakers and civil society groups say they fear the guidance has been weakened
to ensure that frontrunning AI companies sign up to the voluntary rules.
AMMUNITION
After ChatGPT landed in November 2022, lawmakers and EU countries added a part
to the EU’s newly agreed AI law aimed at reining in general-purpose AI models,
which can perform several tasks upon request. OpenAI’s GPT is an example, as is
xAI’s Grok.
That part of the law will take effect in three weeks’ time, on August 2. It
outlines a series of obligations for companies such as xAI, including how to
disclose the data used to train their models, how they comply with copyright law
and how they address various “systemic” risks.
But much depends on the voluntary compliance guidance that the Commission has
been developing for the past nine months.
On Wednesday, a group of five top lawmakers shared their “great concern” over
“the last-minute removal of key areas of the code of practice, such as public
transparency and the weakening of risk assessment and mitigation provisions.”
Those lawmakers see the Grok comments as further proof of the importance of
strong guidance, which has been heavily lobbied against by industry and the U.S.
administration.
“The Commission has to stand strongly against these practices under the AI Act,”
said Dutch Greens European Parliament lawmaker Kim van Sparrentak. But “they
seem to be letting Trump and his tech bro oligarchy lobby the AI rules to shreds
through the code of practice.”
One area of contention in the industry guidance relates directly to the Grok
fiasco.
In the latest drafts, the risk stemming from illegal content has been downgraded
to one that AI companies could potentially consider addressing, rather than one
they must.
That’s prompted fierce pushback. The industry code should offer “clear guidance
to ensure models are deployed responsibly and do not undermine democratic values
or fundamental rights,” said Benifei.
The Commission’s tech chief Henna Virkkunen described work on the code of
practice as “well on track” in an interview with POLITICO last week.
RISKS
The Commission also pointed to its ongoing enforcement work under the Digital
Services Act, its landmark platform regulation, when asked about Grok’s
antisemitic outburst.
While there are no EU rules on what illegal content is, many countries
criminalize hate speech and particularly antisemitic comments.
Large-language models integrated into very large online platforms, which include
X, “may have to be considered in the risk assessments” that platforms must
complete and “fall within the DSA’s audit requirements,” Commission spokesperson
Thomas Regnier told POLITICO.
The problem is that the EU has yet to conclude any action against X under its
wide-reaching law.
The Commission launched a multi-company inquiry into generative AI on social
media platforms in January, focused on hallucinations, voter manipulation and
deepfakes.
In X’s latest risk assessment report, where the platform outlines potential
threats to civic discourse and mitigation measures, X did not outline any risks
related to AI and hate speech.
Neither X nor the Commission responded to POLITICO’s questions on whether a new
risk assessment for Grok has been filed after it was made available to all X
users in December.
French liberal MEP Sandro Gozi said he would ask the Commission whether the AI
Act and the DSA are enough to “prevent such practices” or whether new rules are
needed.