Week of 2025-02-21

Ugly efforts at cutting off access-to-information requests

Ken Rubin | The Hill Times

Ken Rubin highlights growing government efforts to hinder access to public records through false claims of non-existent records, accusations of vexatious requests, and allegations of malicious intent. Departments like National Defence, Finance, and Procurement Canada have denied possessing records that clearly exist, raising concerns about transparency. The Information Commissioner rarely discloses cases where departments attempt to label requesters as vexatious, limiting public awareness of suppression tactics. Rubin also details his own legal battle after being falsely accused of malicious intent for filing FOI requests, as well as journalist David Pugliese facing baseless accusations of Russian ties for his investigative reporting. These tactics, Rubin argues, threaten the public's right to information and obstruct accountability.

‘Engine of inequality’: delegates discuss AI’s global impact at Paris summit

Dan Milmo | The Guardian

At the recent AI Action Summit in Paris, global leaders and experts discussed the dual impact of artificial intelligence on environmental sustainability and economic inequality. French special envoy Anne Bouverot highlighted AI's significant energy consumption, warning that its "current trajectory is unsustainable." Christy Hoffman of UNI Global Union cautioned that without proper oversight, AI could become an "engine of inequality," exacerbating economic disparities. Scientist Max Tegmark expressed concerns about the rapid development of AI, comparing the situation to the film "Don't Look Up," where existential threats are ignored until it's too late. The summit underscored the need for sustainable and equitable AI development to prevent further global disparities.

U.S. claims AI for its own as Canada makes push for global co-operation

Murad Hemmadi | The Logic

The Paris AI Action Summit brought together global leaders to discuss the ethical and economic challenges of artificial intelligence, with Canada playing a significant role in the discussions. François-Philippe Champagne, Canada’s Minister of Innovation, emphasized the importance of international collaboration to ensure AI development aligns with democratic values and human rights. The summit highlighted concerns over AI's environmental impact, global inequality, and workforce disruption, with Canada advocating for stronger regulations and accountability measures. Some experts warned that Canada's AI sector risks being overshadowed by dominant players like the U.S. and China if it does not take a proactive stance. The event underscored the urgency of balancing AI innovation with responsible governance to prevent further global disparities.

15 state attorneys general condemn DOGE access to US Treasury

Rob Bonta | State of California Department of Justice

California Attorney General Rob Bonta, along with 14 other state attorneys general, has expressed serious concerns over the U.S. Department of the Treasury granting Elon Musk and his Department of Government Efficiency (DOGE) access to sensitive payment systems containing Americans' personal information. In a statement, the coalition emphasized that such unauthorized access is "unlawful, unprecedented, and unacceptable," highlighting the potential risks to individuals' privacy and the security of critical federal payments. The attorneys general announced plans to file a lawsuit to halt this access and protect citizens' private data.

BC, Canada Ban Public Sector Staff from Using Chinese-Owned DeepSeek AI

Andrew MacLeod | The Tyee

The British Columbia and federal governments have banned public sector staff from using DeepSeek, a Chinese AI chatbot, due to security and privacy concerns. The ban follows growing scrutiny of the company’s data collection practices, with both U.S. and European regulators investigating its handling of personal information. Officials cited risks of unauthorized data access and potential foreign influence as key reasons for the decision. Cybersecurity experts warn that AI tools handling sensitive government data require stricter oversight. The move aligns with broader efforts to limit reliance on foreign AI technologies deemed high-risk.

As federal government explores new uses for AI in the public service, experts call for caution

The Canadian Press | CTV News

As the federal government expands its use of AI in public services, experts are urging caution due to concerns over transparency, accountability, and bias. AI is being tested in areas like administrative automation and decision-making, but critics warn that improper implementation could lead to unfair outcomes or privacy risks. Experts emphasize the need for strong regulations, independent oversight, and public consultation to ensure AI systems align with ethical and legal standards. Some warn that AI’s role in government must not erode human oversight in critical services. The government maintains that AI adoption aims to improve efficiency while adhering to existing privacy and ethical guidelines.

UK government releases public sector AI playbook

UK Government

The UK government has released the Artificial Intelligence Playbook, providing comprehensive guidance for civil servants and public sector organizations on the safe, effective, and secure use of AI technologies. This playbook builds upon the previous Generative AI Framework for HMG and has been expanded to encompass a broader range of AI applications. It outlines ten key principles for AI deployment, covering aspects such as understanding AI capabilities and limitations, ensuring data protection, and engaging with stakeholders. Additionally, the playbook offers practical advice on selecting, procuring, and implementing AI solutions within government settings. To support continuous learning, the government has also introduced a series of AI courses available through Civil Service Learning and other training platforms. These resources aim to equip public sector employees with the necessary skills and knowledge to harness AI responsibly, enhancing public service delivery while safeguarding public trust.

Students sue Education Department, allege DOGE is accessing private data

Zachary Schermele | USA Today

A federal judge has denied a request to block Elon Musk's Department of Government Efficiency (DOGE) from accessing the U.S. Department of Education's internal systems, which contain sensitive student financial aid information. The University of California Student Association filed the lawsuit, arguing that DOGE's access violated the Privacy Act of 1974 and threatened student data privacy. However, Judge Randolph Moss ruled that there was no evidence of misuse or improper disclosure by DOGE personnel, allowing their access to continue. Despite this setback, the litigation is ongoing as concerns about data security and privacy persist.

Cyber attack delayed cancer treatment at NHS hospital

Nicole Kobie | ITPro

In November 2024, Wirral University Teaching Hospital Trust in Merseyside experienced a significant cyber attack that disrupted its systems, leading to the cancellation of all outpatient appointments from November 25 to December 4. This incident notably increased wait times for cancer treatments and diagnostics, with the Trust acknowledging that recovery would take several months. The attack forced a shift to paper-based operations, as hackers accessed systems through an online appointment portal. Despite these challenges, staff implemented business continuity plans to maintain essential services and prioritize affected patients. This event underscores the growing vulnerability of healthcare institutions to cyber threats, as the sector faces attacks at a rate four times higher than the global average across industries.

Summerside hospital worker fired for improperly accessing patient files, says Health P.E.I.

Stephen Brun | CBC News

In February 2025, Prince County Hospital in Prince Edward Island reported a significant privacy breach involving an employee who accessed over 300 patient records without authorization. The breach was discovered during a routine audit, leading to the employee's suspension and an ongoing investigation. Affected patients were notified, and the hospital emphasized its commitment to patient privacy, implementing additional training and reviewing policies to prevent future incidents. This event highlights the critical importance of robust internal controls and regular audits in safeguarding sensitive patient information within healthcare institutions.

Amazon Sued in First 'My Health, My Data' Privacy Dispute

Tonya Riley | Bloomberg Law

In February 2025, Amazon.com Inc. faced a lawsuit under Washington state's "My Health, My Data" Act, marking the first legal action under this legislation. The lawsuit alleges that Amazon violated federal wiretap laws and state privacy regulations by collecting location data from tens of millions of consumers without proper consent. This case underscores the increasing legal scrutiny tech companies face regarding data privacy practices, especially concerning sensitive health-related information. The outcome of this lawsuit could set a significant precedent for future enforcement of the "My Health, My Data" Act and similar privacy laws.

First Nations and Artificial Intelligence Research Paper

Chiefs of Ontario

The "First Nations and Artificial Intelligence Research Paper," published by the Chiefs of Ontario, examines the multifaceted implications of AI technology for First Nations communities in Ontario. It provides a comprehensive overview of AI, including its history, types, and current applications, and delves into proposed regulatory frameworks, such as Canada's Artificial Intelligence and Data Act (AIDA), analyzing their potential impacts on First Nations. The paper highlights critical concerns, including algorithmic bias, data sovereignty, and the necessity for First Nations' involvement in AI development and governance. Additionally, it explores opportunities where AI can support First Nations, such as in language revitalization, enhancing self-governance, and environmental stewardship. The document emphasizes the importance of integrating First Nations' perspectives and rights into AI discourse to ensure ethical and equitable outcomes.

What’s on our Radar in 2025: Canada’s Privacy and AI Landscape

Lisa R. Lifshitz | Laura Crimi | Torkin Manes

In February 2025, Torkin Manes LLP published an article discussing the current state of Canada's privacy and artificial intelligence (AI) legislation. The article highlights that the federal reform efforts, particularly Bill C-27—which aimed to replace the Personal Information Protection and Electronic Documents Act (PIPEDA) with the Artificial Intelligence and Data Act (AIDA) and the Consumer Privacy Protection Act—have been paused due to the recent prorogation of Parliament, leaving the future of federal AI regulation uncertain. Despite this, several provinces have enacted their own legislation: Ontario's Bill 194 mandates public sector entities to disclose their use of AI systems and implement risk management and accountability frameworks; Alberta's Bills 33 and 34 amend existing privacy laws, requiring public bodies to ensure accuracy in personal information used for decision-making and to provide notice when such information is collected for automated systems; and Québec's Law 25 imposes obligations on organizations to inform individuals when automated decision-making systems are used and to allow access to the information and criteria involved in such decisions. These provincial initiatives underscore a fragmented approach to AI governance in Canada in the absence of cohesive federal legislation.

Understanding the EU’s Cyber Solidarity Act: Key Takeaways

Cédric Burton | Demian Ahn | Laura Brodahl | Matthew Nuding | Wilson Sonsini

The EU Cyber Solidarity Act (CSA), which took effect on February 4, 2025, aims to strengthen cybersecurity cooperation across Member States. It establishes a European Cybersecurity Alert System to enhance threat detection, a Cybersecurity Emergency Mechanism to support coordinated crisis response, and an EU Cybersecurity Reserve of trusted service providers for incident recovery. The Act also tasks ENISA with reviewing major cyber incidents to improve future preparedness. While companies aren't directly obligated, those in critical sectors can voluntarily participate in cybersecurity initiatives and gain insights into emerging threats. The CSA reflects the EU's broader push for collective cyber resilience.

Trade war or not, Canada will keep working with the U.S. on cybersecurity

David Reevely | The Logic

Despite rising trade tensions, Canada and the U.S. remain committed to cybersecurity cooperation to protect shared critical infrastructure. Canadian officials stress that cyber threats transcend borders, making collaboration essential for national security. Both nations continue to work on intelligence sharing and joint defense strategies despite political and economic disputes. However, regulatory differences and concerns over data sovereignty could pose challenges to deeper integration. The evolving landscape will test whether cybersecurity remains a unifying priority amid broader trade conflicts.

Intimate images shared after hacking impact 117 Canada, U.S., overseas victims, maybe more: Thunder Bay police

Michelle Allen | CBC News

In February 2025, Thunder Bay Police Service reported a significant privacy breach involving the unauthorized distribution of intimate images. The incident affected at least 117 individuals across Canada, the U.S., and other countries, with the possibility of more victims emerging as the investigation continues. The breach underscores the critical need for robust cybersecurity measures and public awareness to protect personal data from malicious attacks. Authorities are urging individuals to exercise caution when sharing sensitive information online and to report any suspicious activities to law enforcement agencies promptly. This event highlights the pervasive risks associated with digital data and the importance of proactive steps to safeguard personal privacy.

Trump fired me. Now it will be easier for the government to spy on Americans

Travis LeBlanc | The Guardian

In early February 2025, President Donald Trump dismissed 18 inspectors general from various federal agencies, including the Departments of Defense, Energy, and State, without providing the legally required 30-day notice to Congress. This action has been widely criticized as a violation of federal law and an attempt to undermine independent oversight. Notably, Adam Schiff, a Democratic senator, described the firings as a "clear violation of law," emphasizing the threat they pose to transparency and accountability within the government. The removals have raised concerns about the potential appointment of political loyalists in place of impartial watchdogs, potentially leading to increased waste, fraud, and corruption. Legal challenges are anticipated to contest the legality of these dismissals and to uphold the integrity of governmental oversight mechanisms.

EDPB adopts statement on age assurance, creates a task force on AI enforcement and gives recommendations to WADA

European Data Protection Board

In February 2025, the European Data Protection Board (EDPB) adopted a statement outlining ten principles for age assurance, aiming to protect minors online while ensuring data protection compliance. The EDPB also expanded its ChatGPT task force to encompass broader AI enforcement and plans to establish a quick response team for urgent privacy matters. Additionally, the Board provided recommendations on the 2027 World Anti-Doping Agency (WADA) Code, emphasizing the need to safeguard athletes' personal data during anti-doping processes. These initiatives reflect the EDPB's commitment to addressing emerging digital challenges through coordinated efforts.

Global DPAs reaffirm commitment to 'privacy-protective AI'

Haksoo Ko | Marie-Laure Denis | John Edwards | Dale Sunderland | Carly Kind | Data Protection Commission

On February 11, 2025, data protection authorities from Ireland, Australia, Korea, France, and the United Kingdom issued a joint statement emphasizing the need for trustworthy data governance frameworks to foster innovative and privacy-protective artificial intelligence (AI). The statement highlights the importance of embedding privacy-by-design principles into AI systems from their inception and implementing robust data governance measures throughout their lifecycle. Recognizing the complexity of AI data processing, the authorities commit to fostering a shared understanding of lawful data processing grounds, exchanging information on proportionate safety measures, and collaborating with various stakeholders to monitor AI's technical and societal impacts. This collaborative effort aims to balance the transformative potential of AI with the protection of fundamental rights, ensuring that AI development aligns with data protection and privacy norms.

EU's new AI Act restricts emotion recognition systems in workplaces

Dexter Tilo | Human Resources Director

In February 2025, the European Union's Artificial Intelligence Act introduced significant restrictions on the use of emotion recognition systems in workplaces. The legislation prohibits AI systems that infer emotions from biometric data—such as facial expressions, voice patterns, keystrokes, body postures, or movements—unless used for medical or safety purposes. This includes banning the use of AI to monitor employees' emotional states during recruitment, probation periods, or through tools like webcams and voice recognition systems in settings like call centers. The Act aims to address privacy concerns and potential biases in AI-driven emotion detection, protecting workers' dignity and preventing discriminatory practices. While the guidelines offer insights into the Commission's interpretation, they are non-binding, with authoritative interpretations reserved for the Court of Justice of the European Union.

Company Ordered to Cease Using Facial Recognition Technology to Monitor Access to its Facilities: Overview of Quebec Privacy Regulator’s Decision

Amir Kashdaran | McMillan

In September 2024, Quebec's Commission d’accès à l’information (CAI) ordered Transcontinental Printing Inc. to cease using facial recognition technology for facility access control and to destroy all previously collected biometric data. The CAI determined that the company's practices violated the Act respecting the protection of personal information in the private sector, as the collection of sensitive biometric information was deemed unnecessary and disproportionate to the intended security objectives. This decision underscores the stringent requirements for collecting and processing biometric data in Quebec's private sector, emphasizing the need for organizations to consider less intrusive alternatives and to ensure that their data practices align with privacy regulations.
