
10 years after Brussels attacks, threat has moved online, says EU terror chief
BRUSSELS — In the 10 years since the Brussels terror attacks, the EU has tightened its security strategy but the internet is opening up new threats, according to the bloc’s counterterrorism coordinator.

Daesh is “mutating jihadism,” Bartjan Wegter told POLITICO in an interview on the eve of the anniversary of the terrorist attacks in Brussels, which pushed the bloc to bolster border protection and step up collaboration and information-sharing. The group has “calculated that it’s much more effective to radicalize people who are already inside the EU through online environments rather than to organize orchestrated attacks from outside our borders,” he said. “And they’re very good at it.”

Ten years ago, two terrorists from Daesh (also known as the so-called Islamic State) blew themselves up at Brussels Airport. Another explosion tore through a metro car at Maelbeek station, in the heart of Brussels’ EU district. Thirty-two people were killed, and hundreds more injured.

The attacks came just months after terrorists killed 130 people in attacks on a concert hall, a stadium, restaurants and bars in Paris, exposing gaps in information-sharing in the bloc’s free-travel area. The terrorists had moved between countries, planning the attacks in one and carrying them out in another, said Wegter, who is Dutch. “That’s where our vulnerabilities were.”

Today, violent jihadism remains a threat and new large-scale attacks can’t be excluded. But the probability is “much, much lower today than it was 10 years ago,” said Wegter. In the aftermath of the attacks, the bloc changed its security strategy with a focus on prevention and a “security reflex” across every policy field, according to Wegter.
It’s also stepping up police and judicial collaboration through Europol and Eurojust, and it’s putting in place databases — including the Schengen Information System — so that countries can alert each other about high-risk individuals, as well as an entry/exit system to monitor who enters and leaves the free-travel area.

But the bloc is facing a new type of threat, as security officials see a gradual increase in attempted terrorist attacks by lone actors. Much of that radicalization is being cultivated online, and increasingly, younger people are involved. “We’ve seen cases of children 12 years old. And the radicalization process [is] also happening faster,” Wegter said. “Sometimes we’re talking about weeks or months.”

In 2024, a third of all arrests connected to potential terror threats were of people aged between 12 and 20, and France recorded a tripling in the number of minors radicalized between 2023 and 2024, said Wegter. “Just put yourself in the shoes of law enforcement … You’re dealing with young people who spend most of their time online … Who may not have a criminal record. Who, if they are plotting attacks, may not be using registered weapons. It’s very hard to prevent.”

Violent jihadism is just one of the threats EU security officials worry are being cultivated online. Wegter said there is also an emerging trend of a violent right-wing extremist narrative online — and, to a lesser extent, violent left-wing extremism. There’s also what he called “nihilistic extremist violence,” a new phenomenon that can feature elements of different ideologies or a drive to overthrow the system, but which is fundamentally about minors seeking an identity through violence. “What we see online, some of these images are so horrible that even law enforcement needs psychological support to see this kind of stuff,” said Wegter.
Law enforcement’s ability to get access to encrypted data and information on people under investigation is crucial, he stressed, and he drew parallels with the steps the EU took to secure the Schengen free movement 10 years ago. “If you want to preserve the good things of the internet, we also need to make sure that we have … some key mechanisms to safeguard the internet also.”
Data
Social Media
Politics
Law enforcement
Online safety
Why Europe can’t defend what it can’t connect
Europe enters a more contested decade than any since the end of the Cold War. Yet the frontline shaping its security is no longer limited to land, sea, air or even space. It runs directly through the digital backbone that powers modern life: the networks, data infrastructures and connectivity systems on which governments, economies and armed forces depend.

But Europe will not be secure until it takes this digital backbone’s security seriously, and governs its openness through risk-based, verifiable sovereignty rather than isolationism or complacency.

A digital frontline that remains dangerously exposed

Hybrid threats no longer sit at the margins of European security. In reality, they cut straight through its core systems. Hospitals, energy grids, transport networks, financial markets and military command-and-control all rely on constant, resilient connectivity.

Joakim Reiter, group chief external and corporate affairs officer, Vodafone. | Via Vodafone

And when those systems falter, nations falter. Recent blackouts in Portugal and Spain revealed what this means in practice. A ‘digital failure’ is not an IT incident. It is a national security event.

Adversaries have already drawn the lesson. Subsea cables carrying 95 percent of the world’s internet traffic face mounting sabotage risks. Satellites have become open theaters of geopolitical competition. And cyberattacks now routinely target both critical national infrastructure and the commercial networks that underpin defense readiness.

Despite this, much of Europe’s digital backbone is still approached as a utility, not a strategic asset. Market forces, on their own, cannot deliver the resilience, redundancy and diversity that modern deterrence requires.
Piecemeal upgrades and fragmented responsibilities across civil, military and regulatory silos leave avoidable gaps that adversaries will inevitably exploit.

Europe must therefore elevate secure connectivity to the level of defense preparedness — politically, financially and operationally. That requires moving beyond incrementalism to a coordinated framework that fosters and defends critical digital infrastructure — one that enables governments and operators to plan, train and respond together before, not during, the next crisis.

Sovereignty is about control, not isolation

Connectivity alone is not the issue. Europe’s strategic vulnerability also stems from how it governs the technologies on which its digital backbone depends. And while digital sovereignty is one pillar of Europe’s wider resilience agenda — spanning critical value chains such as defense, automotive, chemicals and energy — it is the pillar without which none of the others can function.

Europe cannot attain digital sovereignty by continuing its excessive dependence on a small number of non-European providers. But it also cannot achieve it by walling itself off from global innovation. Both extremes weaken resilience.

That’s why sovereignty done right means governing openness on Europe’s terms. Europe must keep critical operations in trusted European hands while maintaining access to the scale, performance and innovation that global platforms can provide.

This approach starts with understanding sovereignty across three dimensions:

— Data sovereignty: who has lawful access to information.
— Operational sovereignty: who runs and can intervene in critical systems.
— Technological sovereignty: which capabilities Europe must own or control.

The false choice between ‘ban foreign tech’ and ‘do nothing’ is a trap. The real path forward is risk-based, proportionate and verifiable.
We must define what truly requires European control and work with like-minded international partners to build a trusted technology ecosystem. Sovereignty needs to be demonstrated in practice, not merely asserted in policy. This approach would also enable Europe to pool industrial capacity with trusted partners such as Japan, Canada, Australia, the United Kingdom and South Korea. This is cooperation that strengthens Europe rather than diluting control.

From principles to verifiable control

Europe should reject blanket bans based on EU borders that raise costs, slow next-generation deployment and fail to deliver true control. Instead, sovereignty must be translated into concrete, auditable mechanisms that strengthen resilience. To deliver it, Europe should follow four core principles:

1. Harden the backbone: Europe must create a much better business case for investing in resilient fiber, advanced 5G technologies and future networks built with defense-grade security. And it must fortify subsea cables, satellite systems and cross-border infrastructure against hybrid threats. This is defense spending by another name.

2. Engineer sovereignty into operations: Ensure Europe retains verifiable control over access to sensitive systems and require European oversight of critical operations. Authorities must be able to verify who operates critical systems, where data is processed and which legal jurisdiction applies.

3. Certify ‘Trusted European Operators’: Establish an EU-wide certification enabling European-anchored providers to manage access to global platforms within EU-governed environments. Make interoperability and portability mandatory to prevent lock-in and ensure resilience.

4. End ‘sovereignty washing’: Providers claiming sovereign capabilities must prove it. Europe must require auditable disclosures and rigorous, risk-based assessments. If claims cannot be verified, they should not determine Europe’s critical infrastructure decisions.
In parallel, Europe should adopt a single EU framework defining practical sovereignty levels across the data, operational and technological dimensions. This would give CIOs, regulators and public bodies clarity and consistency.

From doctrine to delivery

As the dust settles on the annual Munich Security Conference, Europe faces a defining choice. It can carry on treating its digital backbone as regulatory plumbing and watch vulnerabilities compound. Or it can recognize this backbone for what it is — a core line of defense.

The real test of seriousness is whether governments and operators can plan together, train together and respond together when systems are stressed. And this depends on whether investment, procurement and certification systems finally move at the speed security demands.

The way forward lies neither in dependence nor in fantasies of self-sufficiency. It must be grounded in risk-based sovereignty, delivered through verifiable control, modernized infrastructure and deeper public-private cooperation, aligned with trustworthy allies.

Ultimately, Europe cannot defend what it cannot connect, and it cannot compete if it closes itself off. Europe will fail this critical strategic test if the regulatory agenda for connectivity — the Digital Networks Act, Cybersecurity Act and merger guidelines revisions — does little to strengthen the very networks its security depends on.

If Europe gets this right, it can build a digital backbone capable of deterring adversaries, supporting allies, protecting citizens and powering innovation for decades to come.
Disclaimer: POLITICAL ADVERTISEMENT

* The sponsor is Vodafone Group plc.
* The ultimate controlling entity is Vodafone Group plc.
* The political advertisement is linked to EU-level security and digital policy, with particular focus on the Digital Networks Act, Cybersecurity Act, merger guidelines and broader digital sovereignty strategy.
Data
Energy
Cooperation
Military
Security
Ireland launches ‘large-scale inquiry’ into Musk’s AI bot Grok
Ireland’s powerful data protection regulator has launched an investigation into Elon Musk’s social media firm X over the wave of sexualized deepfakes generated by its AI tool Grok.

The Irish Data Protection Commission is investigating Grok’s “apparent creation” of “potentially harmful, non-consensual intimate and/or sexualised images” by processing the personal data of Europeans, including children, the regulator said in a statement.

The Irish probe is the latest in a string of investigations into X since Grok, which is integrated in the social media platform, started generating a wave of non-consensual sexualized deepfakes at the end of last year, some of which depicted minors. It could trigger another fight between the U.S. and the EU over enforcement of Europe’s tech regulations. Top officials in the administration of U.S. President Donald Trump blasted a move by the European Union in December to fine X €120 million over violations of the bloc’s content moderation rulebook, the Digital Services Act.

The Irish regulator is in charge of enforcing the European Union’s General Data Protection Regulation (GDPR) on many of the world’s tech giants that have their European headquarters in Ireland. It has the power to impose fines on X as large as 4 percent of global annual turnover.

The Data Protection Commission “has commenced a large-scale inquiry which will examine compliance [of X’s international entity] with some of their fundamental obligations under the GDPR in relation to the matters at hand,” Deputy Commissioner Graham Doyle said. Doyle said the Data Protection Commission had been engaging with X since media reports about the sexualized deepfakes emerged “a number of weeks ago.”

The European Commission in January launched an investigation into the Grok deepfakes under the Digital Services Act.
The U.K.’s privacy watchdog has also opened a formal inquiry, while French authorities are pursuing a criminal investigation and Californian authorities are also probing the issue.

Grok’s image-generation feature went viral at the end of 2025, particularly due to its ability to undress people. Rights groups have estimated that Grok created 3 million sexualized images over 11 days in January, including 23,000 of children. The platform took some steps to restrict the feature on Jan. 9 and again on Jan. 14. The latest measures stopped all users from using Grok to generate such images.
Artificial Intelligence
Technology
Services
Privacy
Platforms
Personal data is the new battleground for democracy
Frank H. McCourt Jr. is an American business executive and civic entrepreneur. He is the founder of Project Liberty, a global initiative aiming to restore agency in the digital age by giving people ownership and control of their personal data.

At the height of the Cold War, a man named Ewald-Heinrich von Kleist-Schmenzin convened the West’s leading security experts in Munich. A World War II resistance fighter and member of the Stauffenberg circle, which had attempted to overthrow Hitler, he had a simple goal: preventing World War III. And he dedicated the rest of his life to fostering open dialogue, sharing defense strategies and deescalating tensions.

Tomorrow, as global leaders gather at the annual Munich Security Conference once again, the threats they face are no less profound than they were some 60 years ago — though many of them are far less visible. Yes, wars are raging across continents, alliances are being tested, and tensions are escalating across borders and oceans. However, I would wager that if von Kleist-Schmenzin were alive today, he would agree that the most consequential struggle of our time may not be unfolding on traditional battlefields at all. Instead, it’s unfolding in the digital realm, where control over personal data — over our digital personhood — is the central source of power and influence in the modern world.

When the World Wide Web was born, we were promised an era of democratic participation — a digital town square for a new millennium. What we have instead is something far darker: predatory algorithms shredding civil society, warping truth and pitting neighbor against neighbor, while a handful of the world’s richest companies know more about us than any intelligence agency ever could.

Deep down, we all feel the absolute grip of the internet on society. We feel it at the national level, as polarization and misinformation continue to fray our social fabric, upend elections and disrupt the world order.
We feel it at our kitchen tables, as artificial intelligence bots and polarizing voices prey on the mental and social health of our children.

This crisis is no accident. It’s the world Big Tech has deliberately built. From the moment Facebook introduced the “like” button, the internet began its descent from a boundless repository of knowledge into a system optimized for rage, addiction and profit — one that rewards division and disregards truth. The business model is quite straightforward: Algorithms are engineered to capture our attention and exploit it, rather than inform or connect us.

And by the metric of stock price, this model has been wildly successful. Big Tech companies have amassed trillions of dollars in record time. And they’ve done so by accumulating the most valuable resource in human history — our personal data — acquiring it through a surveillance apparatus that would make the Stasi blush.

Now, with the rise of AI, these same companies are selling us a new story — that of a brave new chapter for the internet that is exponentially more powerful and ostensibly benevolent. Yet the underlying logic remains the same. These systems are still designed to extract more data, exert more control and deepen manipulation, at an even greater scale.

The threat has particularly escalated with the emergence of the “agentic web,” where autonomous AI systems are no longer confined to interpreting information but are empowered to act on it — often with minimal oversight and inadequate alignment safeguards. OpenClaw — an open-source autonomous AI assistant — reflects this rapid shift from consumption to delegation perfectly: Individuals are handing over sweeping permissions, enabling agents to interact and operate freely with other agents in real time, dramatically amplifying exposure to real-world harm and coordinated manipulation by bad actors, with even less human control.
And yet, those who raise concerns about this concentration of power and these security risks are quickly dismissed as anti-progress, or accused of ceding the future of AI to China.

Let’s be clear: We won’t beat China by becoming China. Autocratic algorithms, centralized power and mass surveillance are fundamentally incompatible with democracy. And were von Kleist-Schmenzin to look at today’s AI frameworks, he’d likely recognize them as far closer to the east of the Berlin Wall than the west.

To reverse that reality, we must build alternative systems that respect individual rights, return ownership and control of personal data to individuals, and align with democratic principles. The technologies shaping our lives need to be optimized to protect citizens, not endanger them.

Here’s the good news: This technology is already being built. Around the world, leading technologists, universities, companies and governments are working to establish a new paradigm for AI — open-source, transparent systems governed by the public sector and civil society. My organization, Project Liberty, is part of this effort, grounded in a simple belief: We can, and must, build AI technology that’s in harmony with fundamental democratic values.

Such upgraded AI architecture is designed for human flourishing. It will give people a voice in how these platforms operate, real choices over how their data is used, and a stake in the economic value they create online. It will be paired with policy and governance frameworks that safeguard democracy, freedom and trust.

As the world’s leaders gather in Munich, I call on them to help build a better foundation for AI that embeds Western values and protects future generations.
Let them consider the world von Kleist-Schmenzin sought to save, and join us on the front lines of democracy’s new battleground.
Data
Security
Rights
Artificial Intelligence
Technology
EU closes deal to slash green rules in major win for von der Leyen’s deregulation drive
BRUSSELS — More than 80 percent of Europe’s companies will be freed from environmental-reporting obligations after EU institutions reached a deal on a proposal to cut green rules on Monday.

The deal is a major legislative victory for European Commission President Ursula von der Leyen in her push to cut red tape for business, one of the defining missions of her second term in office. However, that victory came at a political cost: The file pushed the coalition that got her re-elected to the brink of collapse and led her own political family, the center-right European People’s Party (EPP), to team up with the far right to get the deal over the line.

The new law, the first of many so-called omnibus simplification bills, will massively reduce the scope of corporate sustainability disclosure rules introduced in the last political term. The aim of the red tape cuts is to boost the competitiveness of European businesses and drive economic growth.

The deal concludes a year of intense negotiations between EU decision-makers, investors, businesses and civil society, who argued over how much to reduce reporting obligations for companies on the environmental impacts of their business and supply chains — all while the effects of climate change in Europe were getting worse.

“This is an important step towards our common goal to create a more favourable business environment to help our companies grow and innovate,” said Marie Bjerre, Danish minister for European affairs. Denmark, which holds the presidency of the Council of the EU until the end of the year, led the negotiations on behalf of EU governments.

Proposed by the Commission last February, the omnibus is designed to address businesses’ concerns that the paperwork needed to comply with EU laws is costly and unfair.
Many companies have blamed Europe’s overzealous green lawmaking, and the restrictions it places on doing business in the region, for low economic growth and job losses, saying it prevents them from competing with U.S. and Chinese rivals.

But green and civil society groups — and some businesses too — argued this backtracking would put environmental and human health at risk. That disagreement reverberated through Brussels, disturbing the balance of power in Parliament as the EPP broke the so-called cordon sanitaire — an unwritten rule that forbids mainstream parties from collaborating with the far right — to pass major cuts to green rules. It set a precedent for future lawmaking in Europe as the bloc grapples with the at-times conflicting priorities of boosting economic growth and advancing its green transition.

The word “omnibus” has since become a mainstay of the Brussels bubble vernacular, with the Commission putting forward at least 10 more simplification bills on topics like data protection, finance, chemical use, agriculture and defense.

LESS PAPERWORK

The deal struck by negotiators from the European Parliament, EU Council and the Commission includes changes to two key pieces of legislation in the EU’s arsenal of green rules: the Corporate Sustainability Reporting Directive (CSRD) and the Corporate Sustainability Due Diligence Directive (CSDDD).

The rules originally required businesses large and small to collect and publish data on their greenhouse gas emissions, how much water they use, the impact of rising temperatures on working conditions, chemical leakages and whether their suppliers — which are often spread across the globe — respect human rights and labor laws.

Now the reporting rules will only apply to companies with more than 1,000 employees and €450 million in net turnover, while only the largest companies — with 5,000 employees and at least €1.5 billion in net turnover — are covered by supply chain due diligence obligations.
Companies also no longer have to adopt transition plans detailing how they intend to adapt their business model to reach targets for reducing greenhouse gas emissions. Importantly, the decision-makers got rid of an EU-level legal framework that allowed civilians to hold businesses accountable for the impact of their supply chains on human rights or local ecosystems.

MEPs will have another say on whether the deal goes through, with a final vote on the file slated for Dec. 16. That means lawmakers have a chance to reject what the co-legislators have agreed to if they consider it to be too far from their original position.
Data
Defense
MEPs
Negotiations
Parliament
Von der Leyen drifts right with new digital deregulation plans
BRUSSELS — A fresh proposal by European Commission President Ursula von der Leyen to reform digital laws on Wednesday was welcomed by lawmakers on the right but shunned on the left. It signals a possible repeat of a pivotal parliamentary clash last week in which von der Leyen’s center-right European People’s Party sided with the far right to pass her first omnibus proposal on green rules — sidelining the centrist coalition that voted the Commission president into office last year.

The EU executive on Wednesday presented plans to overhaul everything from its flagship General Data Protection Regulation to data rules and its fledgling Artificial Intelligence Act. The reforms aim to help businesses using data and AI, in an effort to catch up with the United States, China and other regions in the global tech race. Drafts of the plans obtained by POLITICO caused an uproar in Brussels in the past two weeks, as everyone from liberal to left-leaning political groups and privacy-minded national governments rang the alarm.

Von der Leyen sought to extend an olive branch with last-minute tweaks to her proposal, but she’s still a long way away from center-left groups, with the Progressive Alliance of Socialists and Democrats, the Greens and The Left all slamming the plans in recent days.

Tom Vandendriessche, a Belgian member of the far-right Patriots for Europe group, said the GDPR is not “untouchable,” and that there needs to be simplification “to ensure our European companies can compete again.” He added: “If EPP supports that course, we’re happy to collaborate on that.”

Charlie Weimers, a Swedish member of the right-wing European Conservatives and Reformists, welcomed the plan for “cleaning up overlapping data rules, cutting double reporting and finally tackling the cookie banner circus.” Weimers argued von der Leyen could go further, saying the plan falls short of being “the regulatory U-turn the EU actually needs” to catch up in the AI race.
Those early rapprochements on the right are what Europe’s centrists and left fear most. The digital omnibus “should not be a repetition of omnibus one,” German Greens lawmaker Sergey Lagodinsky told reporters on Wednesday. Lagodinsky warned EPP leader Manfred Weber that “there should be no games with anti-democratic and anti-European parties.”

BIG REFORMS, SMALL CONCESSIONS

The Commission’s double-decker digital omnibus package includes one plan to simplify the EU’s data-related laws (including the GDPR as well as rules for nonpersonal data), and another specifically targeting the AI Act. A Commission official, briefing reporters without being authorized to speak on the record, said the omnibus’ impact on the GDPR was subject to “intense discussion” internally in the run-up to Wednesday’s presentation, after its rough reception from some parliament groups and privacy organizations. Much in the EU executive’s final text remained unchanged.

Among the proposals, the Commission wants to insert an affirmation into the GDPR that AI developers can rely on their “legitimate interest” to legally process Europeans’ data. That would give AI companies more confidence that they don’t always have to ask for consent. It also wants to change the definition of personal data in the GDPR to allow pseudonymized data — where a person’s details have been obscured so they can’t be identified — to be more easily processed. The omnibus proposals also aim to reduce the number of cookie banners that crop up across Europe’s internet.

To assuage privacy concerns, Commission officials scrapped a hotly contested clause that would have redefined what is considered “special category” data — like a person’s religious or political beliefs, ethnicity or health data — which is afforded extra protections under the GDPR. The new cookie provision will also contain an explicit statement that website and app operators still need to get consent to access information on people’s devices.
SEEKING POLITICAL SUPPORT

The final texts will now be scrutinized by the Parliament and the Council of the European Union. Von der Leyen’s center-right EPP welcomed the digital simplification plans as “a critical boost for Europe’s industrial competitiveness.”

Parliament’s group of center-left Socialists and Democrats came out critical of the reforms. Birgit Sippel, a prominent German member of the group, said in a statement the Commission “wants to undermine its own standards of protection in the area of data protection and privacy in order to facilitate data use, surveillance, and AI tools ‘made in the U.S.’”

On the EPP’s immediate left, the liberal Renew group cited “important concerns” about the final texts but said it was “delighted” that the Commission backtracked on changing the definition of sensitive data, one idea in the leaked drafts that triggered a backlash. Renew said it would “support changes in the digital omnibus that will make life easier for our European companies.”

If von der Leyen goes looking for votes for her digital omnibus among far-right groups, she will find support, but it might not be a united front. German lawmaker Christine Anderson of the Alternative for Germany party, part of the far-right Europe of Sovereign Nations group, warned the digital omnibus could end up boosting “the ability to track and profile people.” Weaker privacy rules would “enable enhanced surveillance architecture,” she said, adding her party had “always opposed” such changes. “On these issues, we find ourselves much closer to the groups on the left in the Parliament,” she said.

Pieter Haeck contributed reporting.
Data
Intelligence
Social Media
Far right
Negotiations
Parliament’s center pans von der Leyen’s draft digital reforms
BRUSSELS — Ursula von der Leyen hasn’t even published her plans to overhaul the EU’s digital laws yet and already the European Parliament is signaling: This shall not pass.

Political groups to the left of von der Leyen’s center-right European People’s Party are coming out against draft proposals for digital omnibus legislation that reveal how the EU executive is looking to loosen privacy rules, amend its artificial intelligence law, and overhaul data legislation to the benefit of industry — not least American tech giants.

In letters to the European Commission, political groups from center to left barreled into the draft reforms, calling them “extremely worrying,” asking the executive to “reverse course,” and slamming it for what they see as a capitulation to U.S. demands.

The backlash puts von der Leyen in a bind. She could opt to change her proposals ahead of the formal presentation next Wednesday, or else she’ll have to seek votes on the far right — yet again — to pass a key part of her political platform. The EPP is already expected to lean on right-wing support to pass its green rules simplification legislation on Thursday due to a lack of support in the center. The Commission also backed down on its budget plans to avert a rebellion of centrist groups in the Parliament, POLITICO reported Sunday.

The digital omnibus draft proposals, obtained by POLITICO last week, showed how the EU executive is looking to ease rules on AI firms under the flagship General Data Protection Regulation (GDPR). It’s looking to create exceptions for AI companies that would allow them to legally process data linked to people’s religious or political beliefs, ethnicity or health to train and operate their tech, and also wants to redefine categories of personal data, which would relieve swaths of data from the privacy protections they currently enjoy.
The proposals also envision tweaks to the EU’s landmark AI law, like delays on fines for watermarked content and exemptions for small businesses. The drafts drew the ire of the center and the left in the Parliament in recent days. Such outcries are exceptional: Parliament groups often refrain from taking a position until a proposal is formally presented. The Greens group, liberal Renew and the Socialists and Democrats have all drawn up letters slamming the Commission. The Greens addressed von der Leyen and the Commission’s tech chief Henna Virkkunen, asking them to “reverse course and focus on actual simplification” of tech laws, in a letter shared with POLITICO. Alexandra Geese, a prominent German member of the Greens group, said the Commission’s plans would “dismantle the protection of European citizens for the benefit of U.S. tech giants.” She said “the Commission should focus on real simplification and streamlining of definitions rather than bending their knee to the U.S. administration.” The Renew group voiced “strong opposition to certain changes” and called some of the draft tweaks “extremely worrying.” “We would strongly ask you to remove and reconsider those proposed changes before presenting the official proposals,” the group wrote in its letter to von der Leyen and key commissioners, shared with POLITICO. Italian S&D MEP Brando Benifei, the Parliament’s lead negotiator on the AI Act, said he was “deeply skeptical of reopening the AI Act before it’s fully in force and without impact assessment.” Two dozen lawmakers from The Left, the Greens and S&D also backed a written question drawn up by French left-wing MEP Leïla Chaibi that will be filed this week.
It follows reports that the EU executive was “engaging” with the Donald Trump administration in the lead-up to the omnibus proposal. In the written question, lawmakers said: “The European Commission’s apparent willingness to yield to pressure from the White House in this way raises serious concerns about the European Union’s digital sovereignty.” The S&D came out swinging in a letter on Tuesday, warning the Commission that it will oppose “any attempt” to weaken the foundations of the EU’s privacy framework that would “lower the level of personal data protection, or narrow the GDPR’s scope.” The group said Europe’s digital laws at large have “inspired international partners and positioned Europe as a normative power in global tech governance.” RIGHT TO THE RESCUE? Von der Leyen’s EPP hasn’t yet issued a united statement about the draft digital simplification plans. Finnish center-right lawmaker Aura Salla — who previously led Meta’s Brussels lobbying office — said earlier she would “warmly” welcome the proposal “if done correctly,” as it could bring legal certainty for AI companies. The center right, which holds the most seats in the Parliament, could seek support to its right from the right-wing European Conservatives and Reformists and the far-right Europe of Sovereign Nations (ESN) and Patriots for Europe. Piotr Müller, a Polish ECR member, welcomed the Commission’s draft texts: “After years of excessive legislation that has stifled progress, it is five to midnight: We need ambitious deregulation now.” Further to the right, French lawmaker Sarah Knafo of the ESN said the proposal would be a “breath of fresh air for our businesses,” lamenting that “Europe has locked itself into absurd over-regulation in the technology sector, which stifles all innovation.” On the issue of privacy, though, some right-wing lawmakers could turn against the draft idea. The right has previously defended personal privacy and personal freedoms over industry’s interests in some legislative fights.
“We need to let our tech players move forward, while remaining vigilant about sovereignty and control over our data,” Knafo said. Lawmakers on both the left and the right will be under pressure from powerful privacy lobbyists. Civil society campaigners have sounded the alarm in recent days after the drafts leaked. The Commission is “secretly trying to overrun everyone else in Brussels,” Max Schrems, founder of Austrian privacy group Noyb and a prominent European privacy campaigner, said previously. The proposals also have to make their way through the Council of the EU, where countries are equally divided on whether to touch privacy rules. Documents seen by POLITICO show that at least four countries — Estonia, France, Austria and Slovenia — are firmly against any rewrite of the GDPR. Germany, usually seen as one of the most privacy-minded countries, came out in favor of big changes to help AI blossom.
Data
Far right
Regulation
Rights
Artificial Intelligence
Commission says no power to take action on Ireland’s tech regulator appointment
BRUSSELS — The European Commission said it is “not empowered to take action” amid concerns about the appointment of a former tech lobbyist to Ireland’s privacy regulator. The Irish Council for Civil Liberties — a non-profit transparency campaign group — on Tuesday filed a complaint calling on the Commission to launch an inquiry into how Niamh Sweeney was appointed to co-lead the Irish Data Protection Commission. Citing reporting from POLITICO, the complaint alleges the appointment process “lacked procedural safeguards against conflicts of interest and political interference.” It’s the first formal challenge to the decision after Sweeney took up her role as one of three chief regulators at Ireland’s top data regulator this month. Her prior experience as a lobbyist for Facebook and WhatsApp reignited concerns that the regulator is too close to Big Tech. In response to the complaint, Commission spokesperson Guillaume Mercier said that “it is for the member states to appoint members to their respective data protection authorities.” The Commission “is not involved in this process and is not empowered to take action with respect to those appointments,” Mercier told a daily press briefing Tuesday. He emphasized that countries do need to respect requirements set out in EU law — that the appointment process must be “transparent,” and that those appointed should “have the qualifications, the experience, the skills, in particular in the protection of personal data, required to perform their duties and to exercise their powers.” The complaint asked the Commission to look into the appointment as part of its duties to oversee the application of EU law, claiming these responsibilities had not been met by Ireland. Sweeney was appointed by the Irish government on the advice of the Public Appointments Service, the authority that provides recruitment services for public jobs, which has previously expressed its full confidence in the process.
Data
Regulation
Technology
Privacy
Platforms
Big Tech lawyer played key role in picking Ireland’s new privacy regulator
A corporate lawyer who has worked for Big Tech played a key role in picking a former lobbyist for Facebook and WhatsApp as one of Europe’s most powerful privacy regulators. Niamh Sweeney will take up her role as one of three chief regulators at Ireland’s powerful Data Protection Commission (DPC) next week. Her previous experience as a lobbyist for Facebook and WhatsApp has reignited concerns that Ireland’s top data regulator is too close to Big Tech. Now, new details about her appointment process seen by POLITICO show that a lawyer representing tech giants at a prominent law firm in Ireland was a member of a small panel that picked Sweeney. The inclusion of that lawyer on the panel triggered a conflict-of-interest complaint from a candidate who competed with her for the job earlier this year. The Irish Data Protection Commissioner enforces Europe’s mighty General Data Protection Regulation (GDPR) on many of the world’s largest technology companies, including Meta, X, Google, TikTok and others that have their European headquarters in Ireland. For years, the Irish authority has faced criticism for being too soft on tech giants, with critics pointing to Ireland’s heavy reliance on Big Tech for its domestic economy. After the GDPR took effect in 2018, it took years before the DPC started imposing sizable fines on tech giants. Commissioners at the Irish DPC are appointed by the Irish government on the advice of the Public Appointments Service, the authority that provides recruitment services for public jobs. The authority is known as publicjobs. In a confidential letter dated May 14 and seen by POLITICO, publicjobs said it had assembled a selection panel of five people to pick the newest privacy chief.
According to the letter, that panel included consultant Shirley Kavanagh as chair, Department of Justice Deputy Secretary Doncha O’Sullivan, the head of Ireland’s ComReg communications watchdog Garrett Blaney, publicjobs recruitment specialist Louise McEntee, and Leo Moore, a partner at law firm William Fry. Moore heads the firm’s technology group. He has advised domestic and multinational companies, including “several ‘Big Tech’ and social media companies,” the law firm’s own website states. The law firm advised Microsoft in a landmark court case where U.S. authorities wanted to access data on Irish servers, it said in a 2016 press release. Irish media also reported that the firm had advised the Irish government in a case in which the government pushed back on collecting almost €14 billion in back taxes from Apple. Moore did not respond to POLITICO’s requests for comment. William Fry did not provide a comment in time for publication. The chair of the panel, Kavanagh, has previously worked in senior leadership roles in the pharma, financial services, retail and public sectors, including with Inizio, Axa, Primark and Ireland’s central bank, she stated on her website. The site said she has also worked with “technology companies” as a “coach and senior team facilitator.” Kavanagh declined POLITICO’s request for comment, directing questions to the publicjobs service and the Irish justice department. REVOLVING DOOR COMPLAINT Sweeney is set to take office Oct. 13 alongside co-Commissioners Des Hogan and Dale Sunderland. The DPC switched to having three top commissioners after former Data Protection Commissioner Helen Dixon (who carried out the role alone) left office in 2024.
Sweeney worked as Facebook’s head of public policy in Ireland from 2015 to 2019, then as EMEA director of public policy for WhatsApp until 2021, followed by a year as head of communications for financial technology firm Stripe. She was a director at lobby firm Milltown Partners until this summer, her LinkedIn page showed. Sweeney’s appointment as co-commissioner raised concerns among privacy activists when it was announced in September. Austrian privacy group Noyb described it as Ireland “kissing US Big Tech’s backside” and said it left companies like Meta to regulate themselves. A candidate competing with Sweeney for the commissioner role submitted a complaint about the process in April, publicjobs’ May letter seen by POLITICO showed. The complainant’s name was redacted from the documents. The complainant questioned the inclusion of tech lawyer Moore on the panel that selected the former Meta official. They alleged that Moore had a conflict of interest given his role “as a corporate lawyer who represents clients whose business practices are regulated by the very agency this role oversees,” according to the letter, which responded to the complaint. Publicjobs in the letter defended the independence and expertise of the board that it had assembled and said it was “assured that Mr Moore’s professional role was not considered to conflict with his role on the Board.” The complainant also argued that no member of the panel had enough technological expertise to make a fair assessment of applications. In the letter, publicjobs highlighted Moore’s “extensive” expertise in data protection and cybersecurity.
GOVERNMENT STANDS BY APPOINTMENT Publicjobs said in the letter that it found “no evidence that the Board convened was inappropriate, or incapable of assessing candidates against the key requirements of the role in question.” In a written comment to POLITICO, a spokesperson for publicjobs said the authority has “full confidence in the composition, independence, expertise and qualifications of the chosen Assessment Board” to recruit a third data protection commissioner, and that the complaint submitted about the competition had been “fully addressed” by the service’s review process. They said publicjobs works to ensure assessment boards for senior roles are “balanced, diverse and not conflicted, with all panelists required to complete a confidentiality agreement and a conflict-of-interest form.” Boards at this level are approved by the service’s Chief Executive Margaret McCabe and Head of Recruitment Talent Strategy Michelle Noone, the spokesperson added. A spokesperson for Ireland’s Department of Justice, Home Affairs and Migration told POLITICO the ministry is “fully satisfied with the appointment process.” The Irish Data Protection Commission declined to comment, saying it was not involved in the appointment process. Blaney declined to comment, directing POLITICO to publicjobs and Ireland’s justice department. McEntee did not immediately respond to a request for comment.
Data
Social Media
Technology
Privacy
Platforms
Naming and shaming doping athletes is against EU law, says top lawyer
Publishing the name of a professional athlete online because they have broken anti-doping rules is against the EU’s privacy laws, a top EU lawyer has said. The fresh opinion from Advocate General Dean Spielmann weighs in on a case from Austria, where four professional athletes who have broken anti-doping rules are arguing that publication of their details online would breach the EU’s General Data Protection Regulation. Austrian law requires details including the athletes’ names, sporting discipline, duration of their exclusion and the reasons for that exclusion to be published on the websites of the Austrian anti-doping agency and an associated legal committee. Spielmann said he had “serious doubts” about the need to publish all those details online, according to a court press release, on the basis that any national law requiring personal data to be published has to be proportionate. He said publishing pseudonymized details on the internet would still deter athletes from doping and prevent offenders from circumventing doping rules, while also protecting the individual’s privacy. The opinion is not binding but will inform the final decision at the Court of Justice of the EU.
Data
Sport
Privacy
Cybersecurity and Data Protection
Data / privacy