Week of 2025-4-21

IAPP publishes AI Governance Profession Report 2025

Richard Sentinella | Ashley Casovan | Joe Jones | Evi Fuelle | IAPP

The IAPP's AI Governance Profession Report 2025, developed in collaboration with Credo AI, highlights the growing emphasis organizations place on AI governance. Approximately 47% of surveyed organizations identified AI governance as a top-five strategic priority, with 77% actively developing governance programs. Notably, even among organizations not yet utilizing AI, 30% are proactively establishing governance frameworks, indicating a trend toward 'governance-first' approaches. The report also underscores a significant talent gap, with 23.5% of respondents citing challenges in finding qualified AI governance professionals. Typically, AI governance responsibilities are distributed across privacy, legal, IT, and data governance teams, reflecting the interdisciplinary nature of the field. As AI technologies and regulations continue to evolve, organizations are adapting by integrating AI governance into existing structures and prioritizing cross-functional collaboration.

Report explores the emergence of UK's responsible AI professionals

Tess Buckley | Sue Daley | Tech UK

techUK's recent report, Mapping the Responsible AI Profession: A Field in Formation, examines the evolving role of Responsible AI (RAI) practitioners in the UK. As AI becomes more integrated across sectors, RAI professionals are emerging as essential figures in ensuring ethical, safe, and fair AI development and deployment. The report identifies three critical gaps hindering the effectiveness of RAI practitioners: clearly defined roles and organizational placement, structured career pathways, and standardized skills and training frameworks. These gaps pose tangible business risks, including inconsistent ethical implementation, potential regulatory non-compliance, and barriers to establishing stakeholder trust. To address these challenges, the report provides a roadmap for cultivating the professional ecosystem needed to keep AI development in the UK both innovative and aligned with societal values and ethical standards.

CNIL releases report on AI regulatory sandbox findings

CNIL

France’s data protection authority, the CNIL, has released a report summarizing its 2023–2024 AI sandbox program, which supported eight public-sector AI projects. The initiative offered tailored guidance to help these projects balance technological innovation with fundamental rights protections. Projects included a virtual assistant for civil servants, an AI-driven video system for the RATP that avoids personal data collection, and an AI tool in Nantes to raise awareness about water use. CNIL’s key recommendations emphasized meaningful human oversight, data minimization, and safeguards against algorithmic bias, aligning with GDPR principles. The report provides a roadmap for responsibly deploying AI in the public sector while maintaining privacy and non-discrimination standards.

Dozens of tech firms now let their AI agents work together on complex tasks

Murad Hemmadi | The Logic

Major tech companies, including Google, Cohere, and ServiceNow, are collaborating on a new system that enables their AI agents to communicate and work together on complex tasks. This initiative, known as the Agent2Agent protocol, is already being utilized by firms like PayPal, Salesforce, and Workday. By allowing AI assistants to interact seamlessly, the protocol aims to enhance efficiency and coordination in various applications. This development marks a significant step forward in integrating AI more deeply into workplace operations, facilitating more sophisticated and collaborative automated solutions.

Meta blocks livestreaming by teenagers on Instagram

Dan Milmo | The Guardian

Meta has introduced new safety measures on Instagram that block users under 16 from livestreaming unless they have parental consent. The company also now requires parental approval for minors to disable automatic blurring of suspected nudity in direct messages. These updates expand existing teen protections to Facebook and Messenger, with teen accounts featuring built-in parental controls and usage limits in the US, UK, Canada, and Australia. The changes align with upcoming obligations under the UK’s Online Safety Act, which mandates stronger protections for minors online. While praised by child safety groups like the NSPCC, critics warn that broader enforcement and proactive content moderation remain necessary.

Shocked Lyft rider gets mysterious text with a transcript of her convo

Michael Crider | PC World

A Toronto Lyft passenger was shocked to receive a text containing a full transcript of her conversation during an eight-minute ride with friends. Initially, a Lyft representative suggested it was part of a pilot program, but the company later attributed the incident to unauthorized recording by the driver, stating that appropriate action had been taken. Lyft clarified that its official audio recording pilot is limited to select U.S. markets with strict opt-in protocols and is not active in Canada. Privacy experts have raised concerns, emphasizing that under Canadian law, companies must obtain informed consent before recording or sharing personal information. The incident has sparked broader discussions about transparency and surveillance in ride-sharing services.

'Appalling' Pembina Trails hack could cause a lot of damage, privacy expert says

Arturo Chang | CBC News

The Pembina Trails School Division in Manitoba experienced a significant cyberattack in late 2023, which disrupted access to internal systems and email services. The division confirmed that the attack involved ransomware, a type of malware that encrypts data and demands payment for its release. While specific details about the extent of the data breach were not disclosed, officials have stated that they are working with cybersecurity experts and law enforcement to investigate the incident and restore affected systems. The school division has also notified the Manitoba Ombudsman and is taking steps to enhance its cybersecurity measures to prevent future attacks. This incident highlights the growing threat of ransomware attacks on educational institutions and the importance of robust cybersecurity protocols.

Episode 231: Sara Bannerman on How Canadian Political Parties Maximize Voter Data Collection and Minimize Privacy Safeguards

Michael Geist

In Episode 231 of the Law Bytes podcast, Michael Geist interviews Sara Bannerman, Canada Research Chair in Communications Policy and Governance at McMaster University, about Canadian political parties' extensive collection and use of voter data. Bannerman highlights how parties operate as data-driven organizations, amassing detailed profiles on potential supporters, often without transparent consent or oversight. She discusses the significant gap between Canadians' expectations of privacy and the actual practices of political parties, which are largely exempt from federal privacy laws. This lack of regulation raises concerns about accountability and the ethical use of personal information in political campaigns. The conversation underscores the need for stronger privacy safeguards and greater transparency in how political parties handle voter data.

In Canada’s North, communities reckon with a melting world

Rhiannon Russel | The Logic

In Canada's North, communities like Nain, Nunatsiavut, are confronting rapid climate change, with the region warming approximately three times faster than the global average. This has led to unpredictable weather patterns, such as unseasonal rain and slushy sea ice, disrupting traditional practices like snowmobiling for hunting and firewood collection. The thawing permafrost has also caused infrastructure failures, including the collapse of communication towers, leaving residents without internet and phone services. To adapt, these communities are blending traditional knowledge with locally developed technologies to navigate the changing environment. This integration of Indigenous expertise and innovation is crucial for resilience in the face of accelerating climate impacts.

An Algorithm Deemed This Nearly Blind 70-Year-Old Prisoner a “Moderate Risk.” Now He’s No Longer Eligible for Parole

Richard A. Webster | ProPublica

In Louisiana, a new law has shifted parole decisions from human discretion to an algorithm called TIGER (Targeted Interventions to Greater Enhance Re-entry). This system assesses inmates' risk of reoffending based solely on unchangeable factors like past criminal history, age at first arrest, and employment background, without considering rehabilitation efforts made during incarceration. As a result, individuals like Calvin Alexander—a 70-year-old, nearly blind man in a wheelchair with a clean disciplinary record—have been deemed "moderate risk" and are now ineligible for parole. Critics argue that the algorithm disproportionately affects Black inmates and those from disadvantaged backgrounds, raising concerns about fairness and potential constitutional violations. Louisiana is currently the only state to use such a risk assessment tool as the sole determinant for parole eligibility, impacting nearly half of its prison population.

Europe’s GDPR privacy law is headed for red tape bonfire within ‘weeks’

Ellen O’Regan | Politico

The European Commission, led by President Ursula von der Leyen, is preparing to propose reforms to the General Data Protection Regulation (GDPR) in an effort to reduce regulatory burdens on businesses. The GDPR, implemented in 2018, is considered one of the EU's most complex laws, imposing strict data management and compliance requirements on companies operating within Europe. The proposed changes aim to simplify these regulations to enhance competitiveness, particularly against rivals in the U.S. and China. While the Commission emphasizes that privacy remains a priority, critics express concern that easing GDPR provisions could weaken data protection standards and compromise individual rights. This initiative is part of a broader EU strategy to streamline regulations and stimulate economic growth.

Speeding ‘a serious problem,’ says Chow, as Toronto doubles speed cameras from 75 to 150

Jermaine Wilson | CTV News

Toronto is doubling its automated speed enforcement cameras from 75 to 150 in response to what Mayor Olivia Chow describes as a "serious problem" with speeding. The expansion aims to enhance road safety and reduce traffic-related injuries and fatalities. While the city has seen a decrease in speeding violations in areas with existing cameras, officials note that vandalism of these devices remains an ongoing issue. The initiative is part of Toronto's broader Vision Zero strategy to eliminate traffic deaths and serious injuries.

Inside a Powerful Database ICE Uses to Identify and Deport People

Jason Koebler | 404 Media

A recent investigation by 404 Media reveals that U.S. Immigration and Customs Enforcement (ICE) utilizes a powerful database called Investigative Case Management (ICM) to identify and deport individuals. Developed by Palantir, ICM allows ICE agents to filter and search through hundreds of specific categories, including visa status, physical characteristics, criminal affiliations, and location data. The system integrates data from various federal agencies, such as the DEA, FBI, ATF, and CIA, raising concerns among privacy advocates about potential overreach and lack of transparency. Critics argue that the use of such comprehensive surveillance tools without clear oversight could lead to the targeting of individuals for minor infractions or based on profiling. This development underscores the growing debate over the balance between national security and individual privacy rights.

Social media report sparks heated debate between Halton Hills councillors

Herb Garbutt | The Trillium

A recent Halton Hills Council meeting saw a heated debate over the town's social media strategy, particularly the proposed shift from X (formerly Twitter) to Bluesky. The town's communications team recommended maintaining both platforms but suggested prioritizing Bluesky due to concerns about X's increasingly toxic environment. Councillor Clark Somerville supported the move, citing his own departure from X over its negativity and lack of proper verification processes. Conversely, Councillor D’Arcy Keene criticized the report as biased and an attack on free speech, questioning the staff's authority to implement such changes without council approval. The communications director clarified that the report aimed to inform council about emerging platforms and that no immediate actions were being taken without further discussion.

New Ontario government ethics watchdog appointed

The Trillium

Cathryn Motherwell has been appointed as Ontario's new Integrity Commissioner, succeeding J. David Wake. Motherwell, who previously served as the province’s assistant deputy attorney general, brings extensive legal and public service experience to the role. Her appointment comes at a time when the Integrity Commissioner's office is handling several high-profile investigations, including matters related to the Greenbelt land removals and lobbying activities. Motherwell's leadership is expected to play a crucial role in upholding ethical standards and ensuring accountability within the provincial government.

UK Regulator Issues Three Million GBP Monetary Penalty in Connection with Ransomware Attack

Nikolaos Theodorakis | Tom Evans | Laura Brodahl | Matthew Nuding | Wilson Sonsini

In March 2025, the UK Information Commissioner's Office (ICO) fined Advanced Computer Software Group £3.07 million following a 2022 ransomware attack that compromised the personal data of 79,404 individuals, including sensitive health records. The breach, which disrupted NHS services, occurred due to security lapses such as incomplete multi-factor authentication (MFA) coverage and inadequate patch management. The ICO emphasized that customer resistance to MFA implementation did not excuse the company's failure to secure sensitive data. This penalty marks the ICO's first fine against a data processor under the UK GDPR, highlighting that processors, not just data controllers, bear direct responsibility for data protection. Advanced's proactive engagement with authorities and agreement not to appeal contributed to a reduced fine from an initial £6 million.

Privacy on the Map: How States Are Fighting Location Surveillance

Rindala Alajaji | Electronic Frontier Foundation

The Electronic Frontier Foundation’s new report highlights how U.S. states are responding to the growing threat of location surveillance. With smartphones and apps constantly collecting GPS data, individuals are increasingly vulnerable to tracking by advertisers, law enforcement, and data brokers. Some states have begun enacting laws that require a warrant for location tracking or ban the sale of location data outright. The EFF's interactive map tracks these legislative efforts and identifies which states offer the strongest protections. As more location data is collected without user consent, the report urges stronger privacy safeguards to ensure people can move through public life without constant surveillance.

French CNIL Issues Draft Guidance On The Use of Location Data From Connected Vehicles

Kristof Van Quathem | Alix Bertrand | Covington

The French data protection authority (CNIL) has issued draft guidelines on the use of location data from connected vehicles, now open for public consultation until May 20, 2025. These guidelines emphasize that accessing vehicle geolocation data typically requires the driver's explicit consent, aligning with France's implementation of the ePrivacy Directive. This stance challenges the use of legitimate interest as a legal basis for processing such data, even in contexts like theft prevention or fleet management. The CNIL provides detailed recommendations on data minimization, anonymization, and security measures, including encryption and access controls. These guidelines are particularly relevant for manufacturers, rental companies, telematics providers, and data aggregators involved in the connected vehicle ecosystem.

Users of ‘phishing-as-a-service’ site hit with fines, police lectures

David Reevely | The Logic

Canadian law enforcement agencies, led by the RCMP, have initiated over 118 enforcement actions against individuals suspected of using LabHost, a now-defunct phishing-as-a-service platform dismantled in April 2024. LabHost offered customizable phishing tools for cryptocurrency payments, enabling users to create tailored cyber scams. The current enforcement measures include fines, warning letters, and in-person engagements, but no criminal charges have been filed. This approach reflects a broader strategy to deter cybercrime by holding users of illicit digital services accountable, even in the absence of formal charges.

Microsoft rolls out AI screenshot tool dubbed 'privacy nightmare'

Imran Rahman-Jones | BBC

Microsoft has relaunched its controversial Copilot+ Recall feature, which takes periodic snapshots of users' screens to help them search past activity across emails, files, photos, and websites. The tool is now available in preview mode to select users in the Windows Insider testing program and will roll out globally later in 2025—though EU users will have to wait. Microsoft insists the feature is opt-in, stores data locally, and offers controls like app exclusions and deletion of stored snapshots. However, privacy advocates remain concerned that Recall could inadvertently capture sensitive content from other people without consent and pose risks if a device is compromised. The UK’s Information Commissioner’s Office is monitoring the rollout and expects Microsoft to maintain transparency and compliance with data protection laws.

UN: New resolution on human rights defenders

Article 19

On April 4, 2025, the UN Human Rights Council adopted a landmark resolution focused on the protection of human rights defenders in the digital age. Led by Norway and co-sponsored by over 50 countries, the resolution addresses the challenges posed by new and emerging technologies, including biometric surveillance, internet shutdowns, and spyware. Notably, it is the first UN resolution to call on governments to refrain from using biometric technologies for mass surveillance, emphasizing the need for such tools to align with international human rights laws. The resolution also mandates the Office of the UN High Commissioner for Human Rights to convene regional workshops to assess additional risks digital technologies create for human rights defenders and to prepare a report.

Quebec's privacy regulator puts spotlight on employee hiring practices

Carly Meredith | François Tremblay | Dimka Markova | DLA Piper

Québec’s privacy regulator has issued guidance highlighting compliance expectations under Law 25, especially around employee hiring practices and data handling. As of September 22, 2024, individuals in Québec now have the right to data portability, requiring organizations to provide their computerized personal data in a structured, commonly used format upon request. The regulation applies only to digital data collected directly from the individual—not inferred or obtained from third parties. Organizations must ensure secure transmission, verify requesters’ identities, and respond within 30 days. This marks a significant step in strengthening individual privacy rights and underscores the importance of transparent data practices in HR and beyond.
