
The UK government is safety testing AI toys
LONDON — Civil servants in Britain’s business department are testing AI-enabled toys to determine their safety ahead of potential new restrictions.

The testing is being carried out by the little-known Office for Product Safety & Standards, part of the Department for Business and Trade, and involves officials putting the toys through real-life scenarios to see how they respond, according to one person involved granted anonymity because they weren’t authorized to discuss the work.

AI toys integrate chatbots, which can engage in human-like conversations with the user, into physical toys designed for children — and many are already on the market, even as researchers warn we don’t know much about the risks they might pose to kids.

If a toy were determined to be unsafe, the government could intervene through the Product Safety and Metrology Act passed last year, which grants it increased powers to impose regulations on consumer products put on the U.K. market, including those sold online. The government has also said it will consult shortly on “major reforms” to the U.K.’s product safety framework to tackle the prevalence of unsafe products sold to Brits and increase the regime’s enforcement powers.

In a written statement in December, Digital Economy Minister Liz Lloyd said the government was committed to reviewing the regulations for toys, which would “examine whether changes are needed to detailed safety requirements to reflect modern challenges, such as the use of AI in toys.”

It comes amid warnings from researchers and consumer and parent groups over the safety of AI toys and their impact on children. A study by University of Cambridge researchers this month warned that AI toys are already being marketed to children despite a lack of robust studies about how they could impact early years development. The researchers called for stricter regulation and labeling requirements to help inform parents.
Testing one toy, the researchers found that it often misunderstood children and reacted inappropriately to emotions. In one instance a toy reacted to a five-year-old boy saying “I love you” with “please ensure interactions adhere to the guidelines provided.”

In an open letter issued before Christmas, U.K.-based campaign group set@16 declared the marketing of AI toys to British toddlers a “national and international emergency” and demanded an “immediate moratorium on sales and an urgent product recall.”

Some experts have suggested that a “product safety” approach — whereby the onus is on those marketing a product to demonstrate that it meets consumer safety standards — could provide a blueprint to regulate AI more broadly. Some within Labour have heard that message.

Speaking at a conference in London last week, Labour MP Tom Collins argued that a product safety approach could provide a more familiar framework for regulating the novel technology than sweeping regulation. Product safety is “a really good benchmark that we can all agree on,” he said.
4chan hit with £450,000 UK fine over age checks
LONDON — The U.K.’s media regulator Ofcom fined 4chan £450,000 on Thursday for failing to comply with age check requirements under the Online Safety Act.

The regulator also levied two additional fines of £50,000 and £20,000 on the company for not assessing the risk of users encountering illegal material and failing to specify in its terms of service how users are to be protected from such content, respectively. Ofcom previously fined 4chan £20,000 for failing to respond to requests for information from the regulator.

4chan has until 2 April to implement age assurance, carry out a “suitable and sufficient” illegal harms risk assessment, and rewrite its terms of service or face a daily penalty of £200.

“Companies – wherever they’re based – are not allowed to sell unsafe toys to children in the U.K. And society has long protected youngsters from things like alcohol, smoking and gambling. The digital world should be no different,” Suzanne Cater, Ofcom’s director of enforcement, said in a statement.

4chan did not immediately respond when contacted for comment.
FBI is buying data that can be used to track people, Patel says
The FBI is buying up information that can be used to track people’s movement and location history, Director Kash Patel said during a Senate hearing Wednesday. It is the first confirmation that the agency is actively buying people’s data since former Director Christopher Wray said in 2023 that the FBI had purchased location data in the past but was not doing so at that time.

“We do purchase commercially available information that’s consistent with the Constitution and the laws under the Electronic Communications Privacy Act, and it has led to some valuable intelligence for us,” Patel told senators at the Intelligence Committee’s annual Worldwide Threats hearing.

The U.S. Supreme Court has required law enforcement agencies to obtain a warrant before getting people’s location data from cell phone providers since 2018, but data brokers offer an alternative avenue by purchasing the information directly. Many lawmakers want to end the practice.

Sens. Ron Wyden (D-Ore.) and Mike Lee (R-Utah) introduced the Government Surveillance Reform Act on March 13, which would require federal law enforcement and intelligence agencies to obtain a warrant to buy Americans’ personal information. “Doing that without a warrant is an outrageous end run around the Fourth Amendment, it’s particularly dangerous given the use of artificial intelligence to comb through massive amounts of private information,” Wyden said at Wednesday’s hearing. The bill has a House counterpart introduced by Reps. Zoe Lofgren (D-Calif.) and Warren Davidson (R-Ohio).

Committee Chair Tom Cotton (R-Ark.) defended the practice at the hearing. “The key words are commercially available. If any other person can buy it, and the FBI can buy it, and it helps them locate a depraved child molester or savage cartel leader, I would certainly hope the FBI is doing anything it can to keep Americans safe,” he said.
Defense Intelligence Agency Director James Adams told senators at the hearing that his agency also purchases commercially available information.
Elon Musk steps into the UK energy crisis
LONDON — Elon Musk has been granted a license to supply energy in the U.K.

Ofgem announced Thursday morning it has issued Musk-owned Tesla Energy Ventures with a license to provide electricity to U.K. businesses and households. It brings a fresh contender into the supplier market, amid fears the global energy crisis will force up household bills.

The decision comes at the end of a seven-month approval process. Musk’s bid to enter the U.K. market has been highly controversial, after the world’s richest man and ally of U.S. President Donald Trump publicly criticized Prime Minister Keir Starmer and his government’s handling of the grooming gangs scandal.

Musk appeared last year via video link at a rally organized by the far-right activist Tommy Robinson, where he warned that “violence is going to come” to the British people “whether you choose violence or not.”

Energy Secretary Ed Miliband responded at the Labour Party conference in September: “We have a message for Elon Musk. Get the hell out of our politics and our country.” Miliband said Musk “incites violence on our streets.”

But Miliband would not be drawn at the time on whether Tesla Ventures should be granted an energy license. He insisted it was a matter for Ofgem and had to “go through the proper process.” Miliband has faced calls from the centrist Liberal Democrats, and from some of Labour’s own MPs, to block the license. After Musk’s comments about violence, Labour backbencher Clive Lewis said in September: “Elon Musk shouldn’t be allowed anywhere near our critical infrastructure.”

The news comes at a critical time for the domestic retail market, with industry warnings that customer debts have hit £5.5 billion. Disruption of key trade routes in the Gulf has pushed up wholesale gas and oil prices sharply.

Ofgem’s license for Tesla Ventures took effect on Wednesday, the regulator said.
It said the company must comply with all licensing conditions, including requirements for treating customers fairly, financial responsibility, operational capability, billing, information provision and consumer protection.

Ofgem will have assessed whether Musk was a “fit and proper” person to lead a U.K. energy supplier, although experts have previously said that assessment is unlikely to take political statements into account. Ed Miliband’s Department for Energy Security and Net Zero has been approached for comment.
EU set to ban AI nudification apps in wake of Grok scandal
BRUSSELS — Artificial intelligence systems that can generate sexualized deepfakes of real people would be banned in the EU under proposals seen by POLITICO. The push comes after X’s AI tool Grok allowed users to generate millions of images of real people in bikinis or fully nude, including images of children.

A proposal set to be approved by EU ambassadors on Friday would make it illegal to market in Europe any artificial intelligence system that can generate non-consensual sexualized videos, images or audio files involving real people. European Parliament lawmakers backed a ban in separate talks on Wednesday.

The plans — which could kick in as early as this summer after negotiations between EU countries and the Parliament — raise questions about the future of a host of apps that allow users to create fake nude images of people from real-life pictures, including Elon Musk’s tool. The EU is already looking into whether X properly mitigated the risks of integrating Grok into its platform to prevent harm from sexually explicit images.

“This is not only about Grok,” said German Greens Member of Parliament Sergey Lagodinsky, one of multiple lawmakers who backed a ban. “It is about how much power we are willing to give AI to degrade people.”

PULLING THE TRIGGER

The image-generating capabilities of Grok went viral at the end of 2025. The chatbot may have generated as many as 3 million non-consensual sexual images and 20,000 child sexual abuse images in the 11 days before changes were made to stop the spread of such photos, according to a civil society estimate.

The platform took steps to restrict the feature on Jan. 9 and again on Jan. 14. Announcing those changes, X said: “We remain committed to making X a safe platform for everyone and continue to have zero tolerance for any forms of child sexual exploitation, non-consensual nudity, and unwanted sexual content.” The EU is investigating whether these steps were sufficient.
Dozens of lawmakers first called for a ban on AI nudification apps and tools in mid-January. EU legislators now intend to make that a reality through a plan to amend the EU’s AI rulebook.

Presented by the European Commission in November, the proposal was originally intended to scale back restrictions on artificial intelligence companies and reduce the regulatory burden. That changed after the discovery that Grok users were undressing women and children, putting the issue top of mind among EU legislators and surpassing items originally seen as sensitive, including plans to delay restrictions on high-risk artificial intelligence.

Cyprus, which holds the rotating presidency of the Council of the EU and is charged with finalizing a common position among EU countries, for weeks did not include a ban on AI nudification systems in several rounds of drafting. That changed Tuesday when the Cypriots floated a near-final text that backs a ban on AI systems that can generate images, video or audio “of an identifiable natural person’s intimate parts or of an identifiable natural person engaged in sexually explicit activities.” The inclusion of a ban is a win for countries such as Spain that had strongly pushed for it. EU ambassadors are set to greenlight the text on Friday.

European Parliament lawmakers agreed Wednesday to include language to ban an “AI system that alters, manipulates or artificially generates realistic images or videos so as to depict sexually explicit activities or the intimate parts of an identifiable natural person, without that person’s consent.” However, the agreement reached in a political meeting Wednesday notes a ban would not apply to companies “who have put effective safety measures [in place] to prevent the generation of such depictions and to avoid misuse.” The text is not yet final, with the Parliament’s lead committees set to vote on it March 18.
The Parliament and Council will then meet to agree a final version before a ban becomes law.

On Tuesday the Parliament also called upon the Commission to “investigate measures to protect individuals against the dissemination of manipulated and AI-generated digital image, audio or video content” as part of a separate report on AI and copyright.

“What is maybe a joke for one for 10 seconds, can bring lasting damage to a victim,” said Dutch Greens lawmaker Kim van Sparrentak on Monday. “High time to ban all of these apps.”
UK government pivots its digital ID pitch to war on red tape
LONDON — The U.K. government published its long-awaited digital ID consultation Tuesday, claiming it will make public services “quicker, easier and more secure to access.”

It marks a shift in tone from Prime Minister Keir Starmer’s initial pitch last September, which framed the proposal as a way to curb illegal working and, by extension, unauthorized migration. Now, digital ID is all about helping Brits interact with the state.

“People too often dread their interactions with public services. Endless telephone calls, complicated printed forms and having to tell their story multiple times to different parts of government,” Chief Secretary to the Prime Minister Darren Jones said.

“Supermarkets, banks and shops have all chosen to move their services online because it delivers a better customer experience, and other countries like Estonia fully digitized public services years ago. We need to catch up,” Jones said.

The U.K. government has gradually pivoted in its approach to digital ID since Keir Starmer first announced it. In September, Starmer said: “You will not be able to work in the United Kingdom if you do not have digital ID,” but that’s no longer the case. In January, the Cabinet Office abandoned plans to make government-issued digital ID mandatory for proving Right to Work by 2029 amid public outcry and private sector lobbying.

Workers will be able to choose between a government-issued credential, private sector offerings, and physical documents like passports, meaning the only aspect of the process that is necessarily “digital” is on the employer’s end.

At the same time, the government wants to set out a much broader – and altogether more positive – vision for digital ID, based on the idea of “government by app,” per a Cabinet Office press release.

Alongside the consultation process, the government will create a “People’s Panel” that “brings together people across the country from different backgrounds” to share their perspectives.
The consultation will run for eight weeks, until May 5.
Anthropic sues Trump admin over supply-chain risk label
Anthropic on Monday sued the Trump administration for declaring the artificial intelligence company a risk to the Defense Department’s supply chain, a step that further escalates a standoff over the ethical limits on increasingly powerful AI.

In a lawsuit filed in the U.S. District Court for the Northern District of California, Anthropic accused the government of violating its First Amendment rights, exceeding the legal scope of the supply-chain risk statute and circumventing the process through which the president and cabinet secretaries are allowed to cancel government contracts. “Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation,” the company’s lawyers wrote in the California filing.

The lawsuit names several federal agencies and cabinet officials as defendants, including the Defense Department and Defense Secretary Pete Hegseth. The company said on a briefing call with reporters that it would also file a lawsuit against the Trump administration in the U.S. Court of Appeals for the D.C. Circuit.

Spokespeople for the White House did not respond to a request for comment. A Pentagon spokesperson said the department does not comment on ongoing litigation.

The lawsuit is the latest development in a tumultuous dispute between the Pentagon and Anthropic over the company’s restrictions on the military’s use of its technology. At a meeting with Hegseth last month, Anthropic CEO Dario Amodei said he would not allow Claude to be used to surveil American citizens or empower autonomous weapons. In response, Hegseth threatened to label the company a supply-chain risk — an unprecedented designation historically reserved for companies with ties to U.S. adversaries. On Wednesday, the Pentagon told Anthropic it was formally designating the company a supply-chain risk.
Days earlier, President Donald Trump sent a social media post ordering all federal agencies to stop using Anthropic’s Claude AI model, which has reportedly been used by the Pentagon in ongoing combat operations in Iran.

Anthropic’s lawsuit highlights statements made by Trump and Hegseth in the course of their spat with the company to argue that the government seeks to suppress its constitutionally protected speech. “The Constitution confers on Anthropic the right to express its views — both publicly and to the government — about the limitations of its own AI services and important issues of AI safety,” the company’s lawyers wrote.

The company in the lawsuit also accuses the government of exceeding the legal scope of the supply-chain risk designation statute. Anthropic argues in court documents that the law is narrow and meant to address the risk that foreign adversaries could sabotage or subvert a national security system — a determination it says the government has not made about the company.

Anthropic also argues that Trump and Hegseth exceeded their authority by attempting to cancel the startup’s government contracts without following correct procurement procedures. Finally, the company claims the government violated the Administrative Procedure Act and Anthropic’s Fifth Amendment right to due process, according to court documents.

Kyle Cheney contributed to this report.
UK eyes sweeping powers to regulate tech without parliamentary scrutiny
LONDON — Keir Starmer wants the public to know he’s going to move fast and fix things.

Speaking to an audience of young people last month, the U.K. prime minister said that unlike the previous Conservative government, which took eight years to pass the country’s Online Safety Act, Labour will legislate fast enough to keep up with the breakneck speed of technological change and its associated harms. “We’ve taken the powers to make sure we can act within months, not years,” he said.

His words came after the government decried Elon Musk’s X for allowing deepfaked nude images to flood its platform. “The action we took on Grok sent a clear message that no platform gets a free pass,” Starmer said.

Labour showcased its bold new approach last week, tabling two legislative amendments that seek to grant ministers sweeping powers to change the U.K.’s online safety regime without needing to pass primary legislation through Parliament — meaning MPs and peers would have next to no opportunity for scrutiny.

While Labour argues this is necessary to deal with the onslaught of online harms brought about by technology — particularly AI — digital rights activists and civil liberties campaigners fear executive overreach, and say Labour is confusing fast action for good policy, especially as it mulls the possibility of a social media ban for under-16s.

GOVERNMENT HANDS ITSELF NEW POWERS

The first amendment, to the Crime and Policing Bill, would empower any senior government minister to amend the Online Safety Act near unilaterally for the purposes of “minimizing or mitigating the risks of harm to individuals” presented by illegal AI-generated content.
The second amendment, to the Children’s Wellbeing and Schools Bill, looks to go even further, giving ministers the ability to alter any piece of primary legislation to restrict children’s access to “certain internet services.”

The Department for Science, Innovation and Technology (DSIT) has said it wants to act “at pace” in response to the findings of its consultation, the “key focus” of which is whether to ban social media for under-16s, a policy idea that has picked up momentum in multiple countries since Australia introduced a ban at the end of last year.

Amendments like those tabled this week are commonly referred to as Henry VIII clauses, which allow ministers to largely bypass Parliament. They are not entirely new: successive governments since the 1980s have increasingly relied on statutory instruments for lawmaking, according to the Institute for Government.

But such clauses bring problems that could last long after Starmer’s premiership. The government may have good intentions when it comes to online safety, but the measures proposed are “storing up trouble for years to come at a very worrying moment where anti-democratic parties [around the world] are gaining traction,” Anna Cardaso, policy and campaigns officer at civil liberties organisation Liberty, told POLITICO.

“When you create a law, you have to think about what a future government could do with those powers. A future government might not be motivated purely by reducing harms to children, or might have a very different view of what counts as harm,” agreed James Baker, advocacy manager at digital rights organisation Open Rights Group. Baker pointed to steps taken by the Trump administration in the U.S. to target websites hosting LGBTQ+ content and reproductive health advice.
There are also questions to be asked about proportionality under the Human Rights Act, he argued, not least because the evidence base on how children are affected by social media is muddy at best — a DSIT-commissioned study published in January found little high-quality evidence of a correlation between time spent on social media and poorer reported mental health, for example.

Although the government hopes its use of Henry VIII powers will speed things up, the move is vulnerable to challenge in the courts — not only from human rights campaigners concerned about the impact on privacy and freedom of expression, but also from tech companies navigating any new regulations. “The inevitable consequence of such broad regulatory discretion is an explosion in litigation,” Oliver Carroll, legal director at law firm Bird & Bird, said.

‘FIRE-FIGHTING’

The government has backed away from plans to introduce primary legislation dedicated to artificial intelligence, with ministers instead looking to regulate AI at the point of use on a sector-by-sector basis.

Primary legislation on AI would have allowed parliamentarians and other stakeholders to “debate and hammer out the fundamental principles and a framework of regulation,” Liberty’s Anna Cardaso said. “But instead, they’ve dodged the hard thing, and they’re just firefighting emergency by emergency by statutory instrument.”

The Children’s Wellbeing and Schools Bill amendment gets its first outing in the House of Commons today, where it stands a good chance of surviving thanks to Labour’s 158-seat majority. Both amendments will also have to pass the House of Lords, where they could meet more resistance.

DSIT did not respond when contacted by POLITICO for comment.
World’s money launderers are shifting to crypto, report warns
LONDON — Western governments are being urged to clamp down on cryptocurrency as new research suggests $350 billion has been laundered by criminals and hostile states using the technology in the past two decades.

A new report for the Henry Jackson Society think tank, shared with POLITICO, finds that worldwide money laundering has shifted dramatically towards cryptocurrency in recent years — with the United States, Russia and Britain seeing the highest number of confirmed cases.

The report draws on a database of 164 publicly identified and documented money laundering cases between 2005 and 2025. It was compiled by Alexander Browder, son of American-British financier and anti-corruption campaigner Bill Browder. Alexander Browder said that the true figure could even be “many multiples” higher than the hundreds of billions that have been identified.

The study also sheds light on lax enforcement of money laundering powered by crypto. It finds that 79 percent of cases have resulted in no convictions, while only 29 percent of funds have been recovered by authorities. The researchers, based in the U.K., call on the British government to set up a new Cryptocurrency Asset Recovery Office, which would hold recovered funds to transfer back to their rightful owners.

Chris Coghlan, a member of the House of Commons Treasury Select Committee, told POLITICO: “The sophistication and speed of crypto currency money launderers is much higher and faster than our government’s ability to react. As a result, our sanctions and law enforcement are in an increasingly weak position to stop it. This report highlights the need for a robust policy response to this pressing issue.”

POLITICAL ISSUE

Cryptocurrency is increasingly becoming a regulatory battleground in both the U.K. and the U.S. In America, President Donald Trump has come under fire for his ties to the industry. In April last year the U.S. disbanded a Department of Justice unit tasked with investigating crypto-related fraud.
In Britain, Nigel Farage’s right-wing Reform UK became the first major British political party to accept crypto donations. The British government is considering a ban on political donations through crypto. But cryptocurrency exchanges will not be regulated by the country’s Financial Conduct Authority until 2027.

Much of Britain’s concern about crypto comes from Russia’s recent embrace of the currency as an alternate means of financing its war economy following the invasion of Ukraine. Browder said Russia is now successfully evading sanctions using cryptocurrency — and that it is becoming a global epicenter for its illicit use.

“Half of the illicit exchanges identified in the database have been based in Russia. Four out of five major ransomware groups in the database have been based in Russia. It is the home to crypto darknet marketplaces such as Hydra — one of the largest in the world, which had processed over $5 billion in illicit funds through the sale of harmful drugs and other illegal services,” he warned.

Browder added that British, American and EU policymakers have so far been unable to tackle the problem: “Criminals and rogue regimes are basically running circles around U.K., U.S. and EU prosecutors. Criminals are able to escape without legal consequences, and victims are left without redress and adequate compensation.”
City’s AI czar says financial services need protection from unpredictable Trump
The U.K. government must move to protect the financial services industry from the potential costs of an unpredictable Trump administration, the City of London’s newly appointed artificial intelligence czar told POLITICO.

City firms which are “heavily reliant on U.S. technology” face the “risk” of changes beyond their control due to the climate of uncertainty stemming from U.S. President Donald Trump’s government, said Harriet Rees, who is one of two appointments by the U.K. Treasury to champion artificial intelligence adoption in financial services.

“I definitely see a geopolitical risk right now when it comes to our relationship with U.S. technology, our reliance on it,” said Rees, who serves as the chief information officer at Starling Bank. She added: “Within my role as AI champion, I will be looking for some more confidence for the industry as to what the government is doing to protect firms, or what mitigations the industry needs to be put in place, so that we’ve got the confidence that we won’t be out of pocket for the things that we don’t have any input over.”

Her warnings come as multiple sectors are eyeing ways to diversify away from the U.S., particularly in the EU, in the wake of Trump’s ongoing tariff war and threat to use force to take Greenland. In financial services, the focus is on creating a new payments system to replace U.S. card heavyweights Visa and Mastercard. Aurore Lalucq, a left-leaning member of the European Parliament, said last month: “The urgency is our payment system. Trump can cut us off from everything.”

In Britain, banks will meet in mid-March to discuss account-to-account payments, a system which would also bypass Visa and Mastercard by allowing payments directly between bank accounts. But regulators in the U.K. insist plans are about “resilience” rather than an intention to cut out the U.S.

Industry plans should take into account this eventuality, Rees argued. “We see that the U.S. is prepared to make changes, be it tariffs, be it the way trade operates between countries and so where we are reliant … on exports from the U.S. we need to make sure that we understand the risks,” she said, adding that it’s key to “have plans in place as an industry to be able to cope with that, should that eventuality happen, that we have the government really lobbying on our side to make sure that that is an unlikely risk to crystallize.”

British firms’ reliance on American cloud service providers poses a particular risk, Rees said, with U.S. tech giants Amazon, Microsoft and Google dominating in the cloud computing space. She called on regulators to ensure the providers are adhering to legislation. Any outage of these cloud providers could cause “significant disruption” for the financial services industry, Rees said, and Britain should “ensure that we hold those technologies to the same standards as we would any other critical infrastructure here in the U.K.” A bug in automation software took down Amazon Web Services, the largest cloud provider in the world, in October last year, causing outages for thousands of sites and applications.

Last month, MPs criticized the government for not acting decisively enough on cloud service providers. New rules for “critical third parties” — firms, such as cloud providers, whose disruption could impact Britain’s financial stability — came into effect in January 2025. They give the U.K.’s City regulators new powers of investigation and enforcement over providers designated as critical. Despite the regime being in place for a year, no providers have been handed the designation. MPs on the Treasury Committee queried why the government “has been so slow to use the new powers at its disposal.”