The first Italian lawsuit over alleged AI-related copyright violations has been
filed. Reti Televisive Italiane (Rti) and Medusa Film, companies of the Mediaset
group, have lodged a complaint with the Civil Court of Rome against the U.S.
startup Perplexity AI, accused of having used audiovisual and cinematic content
"without permission and on a large scale" to train its generative AI systems.
According to Rti and Medusa, the American company's activity goes beyond simple
data scraping: it allegedly constitutes a violation of copyright and other
related rights, threatening the cultural and creative industries. With the
complaint, the two companies are seeking an immediate halt to any unauthorized
use, a finding of civil liability and compensation for damages, along with a
daily penalty in the event of further violations.
The Italian case lands in an already tense international context. In the United
States, Perplexity AI has been sued by Encyclopaedia Britannica and
Merriam-Webster for reproducing copyrighted articles and definitions, while
other publishing groups, including companies linked to News Corp and the BBC,
have reported unauthorized use of their content for training AI models. In
Japan, too, newspapers such as the Asahi Shimbun and Nikkei have filed similar
complaints.
The litigation touches on one of the central questions in the debate over
artificial intelligence: how to reconcile technological innovation with the
protection of creative content and of journalistic and audiovisual work.
Perplexity AI has not yet commented on the legal action, but the Rti-Medusa case
could set a significant precedent for future regulation in Italy and Europe.
Aravind Srinivas, chief executive of Perplexity AI, had previously declared,
while presenting the Publisher Program, which provides for revenue sharing with
publishers: "Perplexity cannot succeed without publishers. We don't take
publishers' content and we don't train models on it. We want to build something
different, namely sharing our revenues."
The article "Copyright and AI, first lawsuit in Italy: Rti and Medusa against
Perplexity AI" originally appeared in Il Fatto Quotidiano.
From the SWAP: A Secret History of the New Cold War by Drew Hinshaw and Joe
Parkinson. Copyright © 2025 by Drew Hinshaw and Joe Parkinson. Published by
Harper, an imprint of HarperCollins Publishers.
In the third week of March 2023, Vladimir Putin dialed onto a video call and
reached for a winning tactic he had been honing since his first weeks as
president. He approved the arrest of another American.
By then, Russia’s president was running the world’s largest landmass from a
series of elaborately constructed, identical conference rooms. As far as the CIA
could tell, there were at least three of them across Russia, each custom-built
and furnished to the exact same specifications, down to the precise positioning
of a presidential pencil holder, engraved with a double-headed eagle, the state
symbol tracing back five centuries, on the lacquered wooden desk. Neither the 10
perfectly sharpened pencils inside nor any other detail in the windowless rooms,
with their beige-paneled walls and a decor of corporate efficiency, offered a
clue to Putin’s true location.
Russia’s president refused to use a cell phone and rarely used the internet.
Instead, he conducted meetings through the glow of a large screen monitor,
perched on a stand rolled in on wheels. The grim-faced officials flickering onto
the screen, many of whom had spent decades in his close company, often were not
aware from which of the country’s 11 time zones their commander in chief was
calling. Putin’s staff sometimes announced he was leaving one city for another,
then dispatched an empty motorcade to the airport and a decoy plane before he
appeared on a videoconference, pretending to be somewhere he was not.
From these Zoom-era bunkers, he had been governing a country at war, issuing
orders to front-line commanders in Ukraine, and tightening restrictions at home.
Engineers from the Presidential Communications Directorate had been sending
truckloads of equipment across Russia to sustain the routine they called Special
Comms, to encrypt the calls of “the boss.” The computers on his desks remained
strictly air-gapped, or unconnected to the web. Some engineers joked nervously
about the “information cocoon” the president was operating in.
But even from this isolation, the president could still leverage an asymmetric
advantage against the country his circle called their “main enemy.” One of the
spy chiefs on the call was proposing an escalation against America. Tall,
mustachioed, and unsmiling, Major General Vladislav Menschikov ranked among the
siloviki, or “men of strength” from the security services who had risen
in Putin’s slipstream. The president trusted him enough to run Russia’s nuclear
bunkers and he played ice hockey with his deputies.
Few people outside a small circle of Kremlinologists had heard of Menschikov,
head of the First Service of the Federal Security Service, or FSB, the successor
to the KGB. But everybody in America had watched the spectacular operation he
had pulled off just a few months earlier. An elite spy agency under his command
orchestrated the arrest of an American basketball champion, Brittney Griner.
Hollywood stars and NBA legends including Steph Curry and LeBron James demanded
President Joe Biden ensure her swift return, wearing “We Are BG” shirts on
court. Menschikov helped oversee her exchange in a prisoner swap for Viktor
Bout, an infamous Russian arms dealer nicknamed “the Merchant of Death,” serving
25 years in an Illinois penitentiary.
This account is based on interviews with former and current Russian, U.S. and
European intelligence officials, including those who have personally been on a
video call with Putin, and the recollections of an officer in the Russian
leader’s Presidential Communications Directorate, whose account of Putin’s
conference call routine matched publicly available information. Those sources
were granted anonymity to discuss the sensitive details of the president’s
calls.
Trading a notorious gunrunner for a basketball player was a stunning example of
Russia’s advantage in “hostage diplomacy,” a form of statecraft that died with
the Cold War only for Putin to resurrect it. In penal colonies across Russia,
Menschikov’s subordinates were holding still more Americans, ready to swap for
the right price. They included a former Marine, mistaken for an intelligence
officer, who had come to Moscow for a wedding, and a high school history teacher
whose students had included the CIA director’s daughter, caught in the airport
carrying medical marijuana. Disappointingly, neither of their ordeals had yet
brought the desired offer from Washington.
Menschikov’s proposal was to cross a threshold Moscow hadn’t breached since the
Cold War and jail an American journalist for espionage. A young reporter from
New Jersey — our Wall Street Journal colleague and friend Evan Gershkovich — was
flying from Moscow to Yekaterinburg to report on the increased output of a local
tank factory. If the operation went to plan, the reporter could be exchanged for
the prisoner Putin referred to as “a patriot,” an FSB officer serving a life
sentence in Germany for gunning down one of Russia’s enemies in front of a
Berlin coffee shop called All You Need Is Love. The murderer had told the police
nothing, not even his name.
From the moment Putin gave his assent, a new round of the game of human poker
would begin that would see a cavalcade of spies, diplomats and wannabe mediators
including oligarchs, Academy Award-winning filmmakers and celebrities seek to
help inch a trade towards fruition. The unlikely combination of Hillary Clinton
and Tucker Carlson would both step in to advance talks, alongside the Saudi
Crown Prince Mohammed bin Salman, Turkey’s President Recep Tayyip Erdogan,
former Google CEO Eric Schmidt, and Rupert Murdoch, the media mogul who would
wrestle with whether to fly to Moscow to personally petition Putin.
All told, CIA officers would fly thousands of miles to orchestrate a deal that
would come to encompass 24 prisoners. On the Russian side, hackers, smugglers,
spies and Vadim Krasikov, the murderer Putin had set out to free, were all
released. In return, the U.S. and its allies were able to free dissidents,
westerners serving draconian sentences, former Marine Paul Whelan, and
journalists that included the Washington Post’s Vladimir Kara-Murza, Radio Free
Europe’s Alsu Kurmasheva, and our newspaper’s Gershkovich.
Looking back, what is remarkable is how well it all went for the autocrat in the
Kremlin, who would manage to outplay his fifth U.S. president in a contest of
taking and trading prisoners once plied by the KGB he joined in his youth. An
adage goes that Russia, in the 21st century, has played a poor hand well. The
unbelievable events that followed also raise the question of how much blind luck
— and America’s own vulnerabilities — have favored the man in the “information
cocoon.” The prisoner game continues even under President Donald Trump, who in
his second term’s opening months conducted two swaps with Putin, then in May
discussed the prospect of an even larger trade.
It is a lesser-known item of the Russian president’s biography that he grabbed
his first American bargaining chip just eight days after his March 2000
election, when the FSB arrested a former naval officer, Edmond Pope, on
espionage charges. It took a phone call from Bill Clinton for the youthful Putin
to pardon Pope, an act of swift clemency he would never repeat.
Twenty-three years later, on the videoconference call with General Menschikov,
Putin was in a far less accommodating mood. He wanted to force a trade to bring
back the FSB hitman he privately called “the patriot” — he’d been so close to
Krasikov, they’d fired rounds together on the shooting range. Some CIA analysts
believed he was Putin’s personal bodyguard. In the previous months, before he
approved Gershkovich’s arrest, three Russian spy chiefs asked the CIA if they
could trade Krasikov, only to hear that rescuing a Russian assassin from a
German jail was a delusional thing to ask of the United States. Days before the call,
one of Putin’s aides phoned CIA Director Bill Burns and asked once more for good
measure and was told, again, the entire idea was beyond the pale.
Menschikov’s officers would test that point of principle. His men would arrest
the reporter, once he arrived in Yekaterinburg.
--------------------------------------------------------------------------------
It was just after 1 p.m. in The Wall Street Journal’s small security office in
New Jersey, and Gershkovich’s tracking app was no longer pinging. The small team
of analysts monitoring signals from reporters deployed across the front lines of
Ukraine and other global trouble spots had noticed his phone was offline, but
there was no need to raise an immediate alarm. Yekaterinburg, where the Russia
correspondent was reporting, was east of the Ural Mountains, a thousand miles
from the artillery and missile barrages pummeling neighboring Ukraine. Journal
staff regularly switched off their phones, slipped beyond the reach of cell
service, or just ran out of battery. The security team made a note in the log.
It was probably nothing.
A text came in to the Journal’s security manager. “Have you been in touch with
Evan?”
The security manager had spent the day monitoring reporters near the Ukrainian
front lines, or others in Kyiv who’d taken shelter during a missile bombardment.
But he noticed Gershkovich had missed two check-ins and was ordering the New
Jersey team to keep trying him. “Shit,” he texted back, then fired off a message
to senior editors.
The Journal’s headquarters in Midtown Manhattan looked out through a cold March
sky onto Sixth Avenue. Within minutes, staff gathering in the 45-story News
Corporation Building or dialing in from Europe were scrambling to reach contacts
and piece together what was happening in Russia. The paper’s foreign
correspondents with experience in Moscow were pivoting from finalizing stories
to calling sources who could locate their colleague. One reached a taxi driver
in Yekaterinburg and urged him to stop by the apartment where Gershkovich was
staying. The chauffeur called back minutes later, saying he’d found only dark
windows, the curtains still open. “Let’s hope for the best,” he said.
Though there were still no news reports on Gershkovich’s disappearance, nor any
official comment from Russia’s government, the data points suggested something
had gone badly wrong. The Journal scheduled a call with the Russian ambassador
in Washington but when the hour came was told, “He is unfortunately not
available.” The problem reached the new editor-in-chief, Emma Tucker, who
listened quietly before responding in a voice laced with dread. “I understand.
Now what do we do?”
Only eight weeks into the job — in a Manhattan apartment so new it was furnished
with only a mattress on the floor — Tucker was still trying to understand the
Journal’s global org chart, and had met Gershkovich just once, in the paper’s
U.K. office. Now she was corralling editors, lawyers and foreign correspondents
from Dubai to London onto conference calls to figure out how to find him. A
Pulitzer Prize finalist and Russia specialist on her staff made a grim
prediction. If the FSB had him, it wasn’t going to be a short ordeal: “He’s
going to spend his 30s in prison.” And when editors finally located the
Journal’s publisher to inform him of what was going on, they hoped it wasn’t an
omen. Almar Latour was touring Robben Island, the prison off the coast of Cape
Town, South Africa, where Nelson Mandela served 18 of his 27 years of
incarceration.
There was a reporter nobody mentioned, but whose face was engraved into a plaque
on the newsroom wall. Latour had once sat next to Daniel “Danny” Pearl, the
paper’s intrepid and gregarious South Asia correspondent. In 2002, the
38-year-old was lured into an interview that turned out to be his own abduction,
and was beheaded on camera by Khalid Sheikh Mohammed, a mastermind of the
terrorist attacks of September 11, 2001 — leaving behind a pregnant wife and a
newsroom that had to report the murder of their friend.
Paul Beckett, the Washington bureau chief and one of the last reporters to see
Pearl alive, had thought of him immediately. He managed to get Secretary of
State Antony Blinken on the phone. America’s top diplomat knew exactly who Evan
was; just that morning he had emailed fellow administration officials the
reporter’s latest front-page article, detailing how and where Western sanctions
were exacting long-term damage on Russia’s economy. It was an example, Blinken
told his office, of the great reporting still being done in Russia.
“Terrible situation,” Blinken told Beckett, before adding a promise America
would pay a steep price to keep: “We will get him back.”
--------------------------------------------------------------------------------
The Biden White House’s first move after learning of Gershkovich’s arrest was to
call the Kremlin — an attempt to bypass the FSB.
The arrest of an American reporter was a major escalation and if National
Security Advisor Jake Sullivan could reach Yuri Ushakov, Vladimir Putin’s top
foreign policy specialist, Sullivan hoped he could convince Ushakov to step back
from the brink. At best, he assessed his odds of success at 10 percent, but this
was a crisis that seemed likely to either be resolved with a quick call or drag
on for who knows how long, and at what cost.
“We’ve got a big problem,” Sullivan told Ushakov. “We’ve got to resolve this.”
The answer that came back was swift and unambiguous.
“This is a legal process,” Ushakov said. There would be no presidential clemency
— only a trial, and if Washington wanted a prisoner trade, they were going to
have to arrange it through what the Russians called “the special channel.” In
other words, the CIA would have to talk to the FSB. Sullivan hung up, and his
team braced themselves to brief the Journal: the newspaper was going to need to
be patient.
The White House was trapped in a rigged game, facing the crude asymmetry between
the U.S. and Russia, whose leader, in power for a quarter-century, could simply
order foreigners plucked from their hotel rooms and sentenced to decades on
spurious charges. Griner, the basketball champion, hadn’t even returned to the
basketball court in the three months since her exchange for “the Merchant of
Death,” yet already, the Russians had scooped up another high-profile chip.
The CIA and its European allies had been quietly trying to fight back in this
game of human poker. They had spent enormous energy tracking and rounding up the
Russians Putin valued most: deep-cover spies, or “illegals,” who spent years
building false lives undercover, taking on foreign mannerisms and tongues.
Norwegian police, with U.S. help, had nabbed an agent for Russia’s GRU military
intelligence agency, posing as a Brazilian arctic security professor in Norway’s
far north. Poland had arrested a Spanish-Russian freelance journalist: His
iCloud held the reports he’d filed for the GRU, on the women — dissidents and
journalists — he’d wooed across Central and Eastern Europe. It had taken the spy
service of the Alpine nation of Slovenia, known as Owl, nearly a year to find,
then jail, a carefully hidden pair of married spies, pretending to be Argentines
running an art gallery — sleeper agents working for Moscow’s SVR foreign
intelligence agency. Not even their Buenos Aires-born children, who they spoke
to in fluent Spanish, knew their parents’ true nationality or calling.
Yet for all that work, none of these prisoners worked for the agency that
mattered most in Russia and ran the “special channel” — the FSB. Putin himself
had once run Russia’s primary intelligence agency, and now it was in the hands
of his siloviki, the security men he’d known for decades who included
Menschikov. There was, the CIA knew, only one prisoner the FSB wanted back:
Krasikov, the FSB officer serving life in a German prison.
America was stuck. Every stick it could beat Russia with was already being
wielded. The world’s financial superpower was drowning Putin’s elite in
sanctions, and almost every week Sullivan authorized another carefully designed
shipment of weaponry to the battlegrounds of Ukraine, whose government
complained bitterly it was being given just enough to perpetuate a war, not
enough to win. And yet America’s government had to worry about the conflict
tipping into a nuclear exchange.
What else is there in our toolbag? Sullivan asked himself. We’re doing
everything we can. But the game was rigged. Which is why Putin kept playing it.
Senate Commerce Chair Ted Cruz (R-Texas) insisted Tuesday the idea of a 10-year
moratorium on state and local artificial intelligence laws remains alive —
despite a Republican argument that knocked it out of the summer’s budget bill.
“Not at all dead,” Cruz said at POLITICO’s AI & Tech Summit on Tuesday. “We had
about 20 battles, and I think we won 19. So I feel pretty good.”
Cruz said the controversial proposal made it further than conventional wisdom in
Washington suggested it could, ultimately passing scrutiny with the Senate’s
rules referee thanks to the “very creative” work of his staff.
He took a swipe at the Democratic-led states that have been most aggressive in
passing tech legislation in the past few years: “Do you want Karen Bass and
Comrade Mamdani setting the rules for AI?” he asked, referring to the Los
Angeles mayor and New York City mayoral candidate.
Cruz acknowledged the moratorium fell out due to the opposition of Sen. Marsha
Blackburn (R-Tenn.), who was worried about the fate of her own state’s law
protecting musicians from AI copyright violations.
Cruz suggested the two are not in further talks about a path forward.
“She is doing her own thing,” Cruz said, while saying he was working closely
with the White House.
Many in Washington have long suspected the idea’s legislative prospects were
effectively dead after the GOP budget bill passed without its inclusion. It was
also opposed by a firm bloc of Republicans, including conservatives like
Sen. Josh Hawley (Mo.), Rep. Marjorie Taylor Greene (Ga.) and Steve Bannon.
Cruz has been actively engaged on artificial intelligence issues throughout the
current Congress. Last week, he offered a regulatory “sandbox” proposal that
would effectively loosen the regulatory load on emerging AI technologies.
White House Office of Science and Technology policy director Michael Kratsios
formally endorsed Cruz’s new plan during a committee hearing. Rep. Jay
Obernolte (R-Calif.), a leading House voice on AI issues, is preparing his own
legislation and hoping for “legislative oxygen” to advance it by the end of the
year.
Cruz said that “of course” his legislation would ensure certain existing laws,
like consumer safety protections, remain in force — amid concerns from outside
groups and Democrats that it could imperil the ability to enforce current
protections.
He said failing to pass laws unshackling AI would only benefit U.S. adversaries.
“The biggest winner of the status quo with no moratorium is China. Why? Because
we’re going to see contradictory regulations,” Cruz said.
LONDON — MPs enjoyed free hospitality at the U.K.’s top events this summer, with
Prime Minister Keir Starmer accepting tickets to Ascot for his family.
Google paid for several MPs to attend Glastonbury Festival including Labour’s
Chris Curtis, Dan Aldridge, Leigh Ingham, Steve Race, Jake Richards and Fred
Thomas. The individual hospitality packages were worth up to £4,000.
Chair of the Culture, Media and Sport Committee, Conservative MP Caroline
Dinenage, who has been critical of tech companies using copyrighted material to
train their AI models, also took a Glastonbury ticket and a tent from Google.
Deputy Speaker in the Commons and fellow Tory Nusrat Ghani was also gifted a
ticket by the tech giant.
Other MPs getting free Glastonbury tickets were Lib Dem tech spokesperson
Victoria Collins and Labour MP James Frith thanks to PRS for Music, a group
lobbying to protect musicians’ copyrighted material.
Rows over freebies caused the prime minister problems last year and he repaid
£6,000 worth of gifts. The register shows him accepting £650 of Ascot tickets
this summer for three family members.
Dozens of Conservative and Labour MPs also took free tickets to Silverstone and
Wimbledon this summer. Enjoying the British Grand Prix were Labour’s Calum
Anderson and Paula Barker, and Conservatives Charlie Dewhirst and Sarah Bool.
MPs have to declare gifts worth over £300 within 28 days of accepting.
BRUSSELS — Brussels has served the world’s leading artificial intelligence
companies with a tricky summer dilemma.
OpenAI, Google, Meta and others must decide in the coming days and weeks whether
to sign up to a voluntary set of rules that will ensure they comply with the
bloc’s stringent AI laws — or refuse to sign and face closer scrutiny from the
European Commission.
Amid live concerns about the negative impacts of generative AI models such as
Grok or ChatGPT, the Commission on Thursday took its latest step to limit those
risks by publishing a voluntary set of rules instructing companies on how to
comply with new EU law.
The final guidance handed clear wins to European Parliament lawmakers and civil
society groups that had sought a strong set of rules, even after companies such
as Meta and Google had lambasted previous iterations of the text and tried to
get it watered down.
That puts companies in a tough spot.
New EU laws will require them to document the data used to train their models
and address the most serious AI risks as of Aug. 2.
They must decide whether to use guidance developed by academic experts under the
watch of the Commission to meet these requirements, or get ready to convince the
Commission they comply in other ways.
Companies that sign up for the rules will “benefit from more legal certainty and
reduced administrative burden,” Commission spokesperson Thomas Regnier told
reporters on Thursday.
French AI company Mistral on Thursday became the first to announce it would sign
on the dotted line.
WIN FOR TRANSPARENCY
Work on the so-called code of practice began in September, as an extension of
the bloc’s AI rulebook that became law in August 2024.
Thirteen experts embarked on a process focused on three areas: the transparency
AI companies need to show to regulators and customers who use their models; how
they will comply with EU copyright law; and how they plan to address the most
serious risks of AI.
The proceedings quickly boiled down to a few key points of contention.
Industry repeatedly emphasized that the guidance should not go beyond the
general direction of the AI Act, while campaigners complained the rules were at
risk of being watered down amid intense industry lobbying.
On Wednesday, European Parliament lawmakers said they had “great concern” about
“the last-minute removal of key areas of the code of practice,” such as
requiring companies to be publicly transparent about their safety and security
measures and “the weakening of risk assessment and mitigation provisions.”
In the final text put forward on Thursday, the Commission’s experts handed
lawmakers a win by explicitly mentioning the “risk to fundamental rights” on a
list of risks that companies are asked to consider.
Laura Lázaro Cabrera of the Center for Democracy and Technology, a civil rights
group, said it was “a positive step forward.”
Public transparency was also addressed: the text says companies will have to
“publish a summarised version” of the reports filed to regulators before putting
a model on the market.
Google spokesperson Mathilde Méchin said the company was “looking forward to
reviewing the code and sharing our views.”
Big Tech lobby group CCIA, which includes Meta and Google among its members, was
more critical, stating that the code “still imposes a disproportionate burden on
AI providers.”
“Without meaningful improvements, signatories remain at a disadvantage compared
to non-signatories,” said Boniface de Champris, senior policy manager at CCIA
Europe.
He criticized “overly prescriptive” safety and security measures and slammed a
copyright section containing “new disproportionate measures outside the Act’s
remit.”
SOUR CLIMATE
A sour climate around the EU’s AI regulations and the drafting process for the
guidance will likely affect tech companies’ calculations on how to respond.
“The process for the code has so far not been well managed,” said Finnish
European Parliament lawmaker Aura Salla, a conservative politician and former
lobbyist for Meta, ahead of Thursday’s announcement.
The thirteen experts produced a total of four drafts over nine months, a process
that garnered the attention of over 1,000 participants and was discussed in
several iterations of plenaries and four working groups — often in the evenings
since some of the experts were based in the U.S. or Canada.
The Commission’s Regnier applauded the process as “inclusive,” but both industry
and civil society groups said they felt they had not been heard.
The U.S. tech companies that must now decide whether to sign the code have also
shown themselves critical of the EU’s approach to other parts of its AI
regulation.
Tech lobby groups, such as the CCIA, were among the first to call for a pause on
the parts of the EU’s AI Act that had not yet been implemented — specifically,
obligations for companies deploying high-risk AI systems, which are set to take
effect next year.
BRUSSELS — A series of Hitler-praising comments by Elon Musk’s artificial
intelligence chatbot Grok has fired up European policymakers to demand stronger
action against Big Tech companies as the bloc takes another step to enforce its
laws.
Musk’s chatbot this week sparked criticism for making antisemitic posts that
included glorifying Nazi leader Adolf Hitler as the best-placed person to deal
with alleged “anti-white hate,” after X updated its AI model over the weekend.
The latest foul-mouthed responses from the chatbot saw EU policymakers seize the
opportunity to demand robust rules for the most complex and advanced AI models —
such as the one that underpins Grok — in new industry guidance expected
Thursday.
It’s also put a spotlight on the EU’s handling of X, which is under
investigation for violating the bloc’s social media laws.
The Grok incident “highlights the very real risks the [EU’s] AI Act was designed
to address,” said Italian Social-Democrat European Parliament lawmaker Brando
Benifei, who led work on the EU’s AI rulebook that entered into law last year.
“This case only reinforces the need for EU regulation of AI chat models,” said
Danish Social-Democrat lawmaker Christel Schaldemose, who led work on the EU’s
Digital Services Act, designed to tackle dangerous online content such as hate
speech.
Grok owner xAI quickly removed the “inappropriate posts” and stated Wednesday it
had taken action to “ban hate speech before Grok posts on X,” without clarifying
what this entails.
The EU guidance is a voluntary compliance tool for companies that develop
general-purpose AI models, such as OpenAI’s GPT, Google’s Gemini or X’s Grok.
The European Commission last week gave a closed-door presentation seen by
POLITICO that suggested it would remove demands from earlier drafts, including
one requiring companies to share information on how they address systemic risks
stemming from their models.
Lawmakers and civil society groups say they fear the guidance will be weakened
to ensure that front-running AI companies sign up to the voluntary rules.
AMMUNITION
After ChatGPT landed in November 2022, lawmakers and EU countries added a part
to the EU’s newly agreed AI law aimed at reining in general-purpose AI models,
which can perform several tasks upon request. OpenAI’s GPT is an example, as is
xAI’s Grok.
That part of the law will take effect in three weeks’ time, on August 2. It
outlines a series of obligations for companies such as xAI, including how to
disclose the data used to train their models, how they comply with copyright law
and how they address various “systemic” risks.
But much depends on the voluntary compliance guidance that the Commission has
been developing for the past nine months.
On Wednesday, a group of five top lawmakers shared their “great concern” over
“the last-minute removal of key areas of the code of practice, such as public
transparency and the weakening of risk assessment and mitigation provisions.”
Those lawmakers see the Grok comments as further proof of the importance of
strong guidance, which has been heavily lobbied against by industry and the U.S.
administration.
“The Commission has to stand strongly against these practices under the AI Act,”
said Dutch Greens European Parliament lawmaker Kim van Sparrentak. But “they
seem to be letting Trump and his tech bro oligarchy lobby the AI rules to shreds
through the code of practice.”
One area of contention in the industry guidance relates directly to the Grok
fiasco.
In the latest drafts, the risk stemming from illegal content has been downgraded
to one that AI companies could potentially consider addressing, rather than one
they must.
That’s prompted fierce pushback. The industry code should offer “clear guidance
to ensure models are deployed responsibly and do not undermine democratic values
or fundamental values,” said Benifei.
The Commission’s tech chief Henna Virkkunen described work on the code of
practice as “well on track” in an interview with POLITICO last week.
RISKS
The Commission also pointed to its ongoing enforcement work under the Digital
Services Act, its landmark platform regulation, when asked about Grok’s
antisemitic outburst.
While there are no EU rules on what illegal content is, many countries
criminalize hate speech and particularly antisemitic comments.
Large-language models integrated into very large online platforms, which include
X, “may have to be considered in the risk assessments” that platforms must
complete and “fall within the DSA’s audit requirements,” Commission spokesperson
Thomas Regnier told POLITICO.
The problem is that the EU is yet to conclude any action against X through its
wide-reaching law.
The Commission launched a multi-company inquiry into generative AI on social
media platforms in January, focused on hallucinations, voter manipulation and
deepfakes.
In X’s latest risk assessment report, where the platform outlines potential
threats to civic discourse and mitigation measures, X did not outline any risks
related to AI and hate speech.
Neither X nor the Commission responded to POLITICO’s questions on whether a new
risk assessment for Grok has been filed after it was made available to all X
users in December.
French liberal MEP Sandro Gozi said he would ask the Commission whether the AI
Act and the DSA are enough to “prevent such practices” or whether new rules are
needed.