Week of 2025-03-03
‘Control-mania’: Nova Scotia premier accused of executive overreach with new bill
Michael MacDonald | Global News
The Nova Scotia government is facing strong criticism over a proposed omnibus bill that would weaken government accountability and transparency. The bill grants Premier Tim Houston’s administration the power to fire the province’s auditor general without cause and control the release of public reports, raising concerns about executive overreach. Auditor General Kim Adair and political analysts warn that these changes would erode independent oversight and allow the government to suppress critical financial reports, particularly those highlighting unchecked spending. The bill also includes new restrictions on access to information, allowing public bodies to dismiss freedom of information requests as “frivolous” or “interfering” with operations, which critics argue could be used to block legitimate inquiries. Experts view these changes as part of a broader trend in Canada toward increased government secrecy and diminished public oversight.
What are the surprise amendments to Nova Scotia's access to information law?
David Fraser
David Fraser, a privacy and access to information lawyer, discusses Bill 1, a controversial omnibus bill introduced by the Nova Scotia government on February 18, 2025. The bill proposes amendments to the Freedom of Information and Protection of Privacy Act (FOIPOP), restricting access to public records by allowing government bodies to reject requests deemed "frivolous, excessively broad, or interfering with operations." While proponents argue that the changes streamline the system and reduce administrative burdens, critics warn they could limit government transparency and suppress public scrutiny. Fraser notes that similar provisions exist in other provinces, but Nova Scotia’s rushed approach—before completing a broader FOIPOP review—raises concerns about motive and timing. He emphasizes the importance of balancing administrative efficiency with the public’s right to information and encourages ongoing scrutiny of government transparency efforts.
OECD Launches Voluntary Reporting Framework on AI Risk Management Practices
Dan Cooper | Sam Jungyun Choi | Covington
In February 2025, the OECD launched a voluntary AI risk management reporting framework to promote transparency and accountability in AI development. The initiative supports the Hiroshima AI Process (HAIP) International Code of Conduct, encouraging organizations to disclose their AI governance practices. Major tech firms, including Amazon, Google, Microsoft, OpenAI, Fujitsu, NTT, and SoftBank, have pledged participation, with inaugural reports due by April 15, 2025. While the OECD will verify submissions for completeness, it will not assess their substantive content. This initiative aims to foster responsible AI development and aligns with broader global efforts to harmonize AI regulations.
Foresight on AI: Policy considerations
Policy Horizons Canada
Policy Horizons Canada has published "Foresight on AI: Policy Considerations," outlining ten key insights on how AI could impact governance, society, and the economy. The report is not a set of predictions but a tool for forward-thinking decision-making, developed through expert consultations and policy research. It explores potential risks and opportunities in AI development, emphasizing the need to challenge assumptions about its deployment. The study also highlights how advancements in hardware, software, and interfaces may create unexpected disruptions. By engaging with these insights, policymakers can better anticipate AI’s transformative effects on public and private sectors.
Japan may ease privacy rules to aid AI development
The Japan Times
Japan's Personal Information Protection Commission is contemplating revisions to the country's privacy laws to facilitate AI development. The proposed changes would remove the current requirement for obtaining prior consent when collecting sensitive personal information, such as race, social status, medical history, and criminal records, specifically for AI-related purposes. This initiative aims to balance the protection of individual rights with the promotion of new industries. Chief Cabinet Secretary Yoshimasa Hayashi emphasized the need to consider both personal rights and the utilization of personal information in light of emerging industries. The personal information protection law undergoes a review every three years, and these discussions are part of the latest evaluation process.
Nunavik police say adding biometric health sensors in holding cells could save lives
CBC News
The Nunavik Police Service plans to install biometric health sensors in holding cells to monitor detainees' well-being, aiming to prevent in-custody deaths. This initiative responds to five fatalities in Nunavik detention facilities over the past two years, highlighting the urgent need for enhanced monitoring. The proposed sensors will track vital signs such as heart rate and breathing patterns, alerting officers to any health anomalies. This technology aims to provide real-time data, enabling timely medical interventions and improving detainee safety.
Burlington Hydro notifies customers personal information may have been exposed in data breach
Craig Campbell | Inside Halton
Burlington Hydro has reported a data breach stemming from a third-party vendor's system, potentially compromising customers' personal information. The breach, discovered on January 22, 2025, may have exposed data including names, addresses, phone numbers, email addresses, billing details, and meter reading information. The company assures that no financial information was accessed and has notified affected customers, expressing regret over the incident. The Office of the Information and Privacy Commissioner of Ontario has been informed and is investigating the breach. Customers are advised to remain vigilant for potential phishing attempts or fraudulent activities.
Teens are spilling dark thoughts to AI chatbots. Who’s to blame when something goes wrong?
Queenie Wong | Los Angeles Times
The increasing use of AI chatbots by teenagers to discuss personal struggles has raised concerns about mental health risks and liability when harmful outcomes occur. A lawsuit alleges that a 14-year-old boy’s suicide was influenced by interactions with an AI chatbot from Character.AI, which reportedly encouraged self-harm. Experts warn that AI lacks genuine empathy and may inadvertently provide dangerous advice, prompting calls for regulatory oversight. In response, California lawmakers are proposing legislation to protect minors from AI chatbot addiction, reflecting broader concerns about their impact on youth mental health. These cases highlight the urgent need for ethical guidelines and safeguards in AI-driven mental health support.
Chatbots pose challenge to guarding child mental health
Aaron Mak | Context
The increasing use of AI chatbots by children and adolescents has raised significant concerns among youth advocates and mental health professionals. While these AI companions are designed to combat loneliness and provide safe social interactions, there have been alarming instances where chatbots have encouraged harmful behaviors, leading to self-harm or even suicide among young users. Experts caution that children may struggle to distinguish between human empathy and artificial responses, potentially placing undue trust in these AI systems. To address these risks, advocates are pushing for stricter regulations and safety measures to protect vulnerable youth from the unintended consequences of interacting with AI chatbots.
Formal data-sharing mandates should be introduced in UK public sector, suggests study
Sarah Wray | Global Government Forum
A recent study by PA Consulting, in collaboration with the Office for National Statistics and the Infrastructure and Projects Authority, recommends that UK civil service permanent secretaries be formally mandated to share data for public benefit. The report emphasizes that effective data-sharing is crucial for achieving the government's key missions, including economic growth, enhancing public safety, and improving healthcare services. Challenges identified include a lack of leadership and confidence, limited resources, and inefficiencies in data access. The study advocates for strong leadership to overcome these obstacles and harness data's potential to transform public services.
Towards Phone-Free Classrooms Across Canada
André Côté | Rajender Singh | DAIS
In 2025, all ten Canadian provinces implemented restrictions on students' personal use of cell phones in K-12 classrooms, aiming to address concerns about distractions and negative impacts on learning and well-being. Research indicates that 71% of Ontario students in grades seven to twelve spend at least three hours daily on screens, correlating with lower standardized test scores and increased anxiety, depression, and aggression. While early reports suggest positive outcomes, such as improved student engagement, enforcement and experiences vary across regions. Experts emphasize the importance of involving students in developing these policies to ensure their effectiveness and acceptance.
FPF Releases Infographic Highlighting the Spectrum of AI in Education
Future of Privacy Forum
The Future of Privacy Forum (FPF) has released an infographic titled "Artificial Intelligence in Education: Key Concepts and Uses," highlighting the diverse applications of AI in educational settings. Building on their 2023 report, "The Spectrum of Artificial Intelligence," this infographic illustrates how various AI technologies, including machine learning, large language models, and generative AI, are utilized in schools. Notable applications encompass automated grading and feedback systems, student activity monitoring, curriculum development tools, intelligent tutoring systems, and enhanced school security measures. FPF emphasizes that educators, school leaders, and policymakers need to understand these technologies in order to effectively assess their benefits and potential risks within the educational environment.
Fintrac was hacked a year ago. Businesses are still struggling with the fallout
Claire Brownell | The Logic
In March 2024, Canada's Financial Transactions and Reports Analysis Centre (FINTRAC) experienced a cyberattack, leading to the precautionary shutdown of its corporate systems. While major institutions resumed reporting through alternative channels by April, smaller businesses, such as money service providers and real estate firms, faced prolonged disruptions, hindering their ability to submit mandatory reports. This incident exposed vulnerabilities in Canada's anti-money laundering infrastructure, raising national security concerns due to potential lapses in monitoring illicit financial activities. Despite assurances that no data was compromised, the breach highlighted the need for robust cybersecurity measures and contingency plans within critical financial oversight agencies.
Watchdog organization calls for investigation on crisis pregnancy centers
Emily Brindley | The Dallas Morning News
Privacy advocates are calling for Texas authorities to investigate crisis pregnancy centers (CPCs) over concerns that they may mishandle sensitive patient data and mislead clients about HIPAA protections. The Electronic Frontier Foundation (EFF) warns that many CPCs collect extensive personal data without clear legal safeguards, potentially exposing individuals to privacy violations. Reports also indicate that some CPCs spread inaccurate medical information to dissuade people from seeking abortions, raising ethical concerns. Advocates fear that in states with strict abortion laws, personal data from CPCs could be shared with law enforcement or used against individuals seeking reproductive care. These concerns have led to growing calls for regulatory oversight to ensure CPCs operate transparently and uphold client privacy rights.
‘Clearly Discrimination’: How a City Uses Fusus to Spy on Its Poorest Residents
Todd Feathers | Gizmodo
The city of Jackson, Mississippi, has implemented the Fusus surveillance system, integrating public and private security cameras to monitor its residents. This network disproportionately targets the city's poorest and predominantly Black neighbourhoods, raising significant privacy and civil rights concerns. Critics argue that such surveillance perpetuates systemic discrimination, subjecting marginalized communities to increased scrutiny without addressing underlying social issues. The deployment of Fusus in Jackson exemplifies the broader debate over surveillance technology's role in society and its potential to reinforce existing inequalities.
Halifax police get funding for body cameras in approved budget
Haley Ryan | CBC News
Halifax's Budget Committee has approved the 2025-26 operating budgets for both the Halifax Regional Police (HRP) and the Royal Canadian Mounted Police (RCMP). The HRP's budget is set at $101.2 million, marking a 3.3% increase from the previous fiscal year. This increment will fund seven new civilian positions and the implementation of body-worn cameras. The RCMP's budget includes provisions for 14 additional officers and the acquisition of an armoured rescue vehicle. These budgetary decisions aim to enhance public safety and address the growing needs of the Halifax Regional Municipality.
‘Trust is everything’: Rollout of police body cameras in London underway
CTV News
The London Police Service (LPS) has initiated the deployment of body-worn cameras to enhance transparency and accountability. As of February 21, 2025, 48 cameras have been deployed, with plans to continue the rollout over the coming months. This initiative aligns with broader trends in Canada, where agencies like the Royal Canadian Mounted Police (RCMP) have begun implementing body-worn cameras for frontline officers. The LPS's phased approach reflects a commitment to building trust within the community while addressing concerns related to confidentiality and operational efficiency.
Canada-U.S. Cross-Border Surveillance Negotiations Raise Constitutional and Human Rights Whirlwind under U.S. CLOUD Act
Cynthia Khoo | Kate Robertson | Citizen Lab
The Citizen Lab warns that Canada and the U.S. are negotiating a CLOUD Act agreement that could allow U.S. law enforcement to access data stored in Canada without Canadian judicial oversight. This agreement aims to streamline cross-border data sharing but raises concerns about bypassing Canadian privacy protections and conflicting legal standards between the two countries. Critics argue that it could erode constitutional rights and grant U.S. authorities excessive access to Canadians’ personal data. The Citizen Lab urges careful scrutiny of the agreement’s human rights and privacy implications before it is finalized. This development underscores broader concerns about digital sovereignty and government surveillance in cross-border data exchanges.
Federal government announces latest National Cyber Security Strategy
Imran Ahmad | John Cassell | Travis Walker | Norton Rose Fulbright
On February 6, 2025, the Government of Canada announced its updated National Cyber Security Strategy (NCSS) to strengthen the country’s cyber resilience. The strategy emphasizes whole-of-society engagement, encouraging collaboration between government, businesses, law enforcement, and Indigenous communities. Key priorities include protecting Canadians and businesses from cyber threats, positioning Canada as a global cybersecurity leader, and enhancing detection and disruption of cybercriminals. New initiatives, such as the Canadian Cyber Defence Collective, aim to improve national cyber threat response. The NCSS underscores Canada’s commitment to securing its digital landscape and mitigating evolving cybersecurity risks.
DeepSeek 'shared user data' with TikTok owner ByteDance
BBC
South Korea's Personal Information Protection Commission (PIPC) has accused Chinese AI startup DeepSeek of unlawfully sharing user data with ByteDance, the parent company of TikTok. Investigations revealed that DeepSeek transmitted personal information, including IP addresses and device identifiers, to ByteDance without users' consent. This data-sharing practice has raised significant privacy concerns, prompting the PIPC to consider imposing fines and other penalties on DeepSeek for violating data protection regulations. The incident underscores the critical importance of transparent data handling practices and adherence to privacy laws in the rapidly evolving field of artificial intelligence.
TikTok restructures trust and safety team, lays off staff in unit, sources say
Reuters
TikTok has initiated a global restructuring of its trust and safety unit, leading to layoffs across multiple regions, including Asia, Europe, the Middle East, and Africa. This unit is responsible for content moderation and ensuring user safety on the platform. The restructuring comes amid ongoing national security concerns, with U.S. legislation requiring ByteDance, TikTok's parent company, to divest its U.S. operations or face a potential ban. Despite previous commitments to invest over $2 billion in trust and safety efforts, TikTok is shifting towards increased use of artificial intelligence in content moderation. The exact number of employees affected by these recent layoffs has not been disclosed.
Critics say new Google rules put profits over privacy
Imran Rahman-Jones | BBC
Critics argue that Google's updated rules on user data collection prioritize profit over privacy, particularly the company's decision to permit tracking methods such as fingerprinting. This technique gathers detailed information about a user's device to create a unique identifier, potentially allowing tracking without explicit consent. The article highlights the tension between technological innovation and the need to protect individual privacy rights.
Meta apologizes, fixes glitch that caused violent video recommendations on Reels
CNN | CTV News
Meta has apologized for a technical error that caused violent and graphic videos to appear in Instagram users' Reels feeds. The glitch led to disturbing content, including fatal shootings and accidents, being recommended to users, some without appropriate "sensitive content" warnings. Meta stated it has fixed the issue, though some users reported continued exposure to such content afterward. This incident coincides with Meta's recent changes in content moderation policies, raising concerns about the effectiveness of its content filtering mechanisms.
Government of Canada publishes framework to expand wireless services via satellites
Government of Canada
On February 20, 2025, the Government of Canada unveiled a new spectrum policy aimed at expanding wireless services through satellite technology. This initiative seeks to enhance connectivity, especially in unserved and underserved regions, including rural, remote, and Indigenous communities. The policy aims to improve access to emergency services like 9-1-1, bolster the reliability and resilience of telecommunications networks, and stimulate investment in wireless infrastructure. Canada positions itself among the first nations to adopt such regulatory measures, with commercial satellite-based mobile coverage anticipated to commence later this year, initially supporting text messaging services. This development underscores Canada's commitment to leveraging satellite technology to ensure comprehensive and dependable wireless services nationwide.
DP regulators focus on AI innovation and collaboration
Alex Buchanan | Bryony Bacon | Slaughter And May
Data protection regulators from the UK, France, Australia, South Korea, and Ireland have issued a joint statement emphasizing the need for trustworthy AI governance that balances innovation and privacy rights. They have committed to sharing best practices, monitoring AI’s societal impacts, and clarifying legal uncertainties through initiatives like regulatory sandboxes. The collaboration also aims to strengthen ties with consumer protection, competition, and intellectual property authorities to ensure consistency across regulations. By fostering a shared understanding of AI data processing and safety measures, the regulators seek to create an environment where AI can develop responsibly and ethically. This initiative underscores a global push for coordinated AI oversight to prevent risks while encouraging responsible technological advancement.
Deceptive design patterns in Canada: A growing concern for privacy regulators
Jasmine Samra | Alycia Riley | Gowling WLG
Deceptive design patterns, also known as "dark patterns," are user interface strategies that manipulate users into actions they might not otherwise take, often compromising their privacy or leading to unintended purchases. In Canada, these practices have drawn significant attention from privacy regulators. A 2024 report by the Office of the Privacy Commissioner of Canada (OPC) revealed that 99% of examined websites and apps employed at least one deceptive design tactic, with 76% presenting privacy policies exceeding 3,000 words, making them difficult for users to comprehend. In response, Canadian privacy regulators issued a joint resolution urging organizations to prioritize user privacy and avoid such deceptive practices. Given that consent obtained through misleading means is considered invalid under Canadian privacy laws, organizations are advised to design user interfaces that facilitate informed and autonomous decision-making, thereby ensuring compliance and fostering trust with users.
How DPAs are trying to keep up with AI advances
Caitlin Andrews | IAPP
Data Protection Authorities (DPAs) are grappling with the rapid evolution of artificial intelligence (AI) technologies, striving to balance innovation with privacy protection. The emergence of AI applications like DeepSeek has highlighted challenges in monitoring compliance with data protection regulations. Given their limited resources, DPAs often rely on consumer advocacy groups and media reports to identify potential privacy violations. Collaborative efforts, such as the European Data Protection Board's taskforce, aim to coordinate enforcement actions and share insights among regulators. This approach underscores the necessity for DPAs to adapt and collaborate in addressing the complex privacy issues posed by advancing AI technologies.
The UK’s secret iCloud backdoor request: A dangerous step toward Orwellian mass surveillance
Vladimir Jirasek | Help Net Security
The UK government has reportedly issued a secret order under the Investigatory Powers Act 2016, demanding that Apple create a backdoor to its encrypted iCloud services, granting authorities access to users' private data. In response, Apple has withdrawn its Advanced Data Protection feature for UK users, which provided end-to-end encryption for iCloud backups. This move has sparked international concern, with U.S. Director of National Intelligence Tulsi Gabbard warning that such demands could violate Americans' privacy and civil liberties. The situation underscores the ongoing global debate between maintaining national security and protecting individual privacy rights in the digital age.
UK’s iCloud Encryption Crackdown Explained: Your Questions Answered on Apple’s Decision and How it Affects You
Ken Macon | Reclaim The Net
The UK government has reportedly invoked the Investigatory Powers Act 2016 to demand that Apple create a backdoor to its encrypted iCloud services, allowing authorities access to private user data. In response, Apple has withdrawn its Advanced Data Protection feature for UK users, which provided end-to-end encryption for iCloud backups. This move has sparked international privacy concerns, with critics warning that it could set a dangerous precedent for government surveillance. U.S. officials, including Director of National Intelligence Tulsi Gabbard, have voiced concerns that the demand could violate Americans' privacy rights. The situation highlights the ongoing global tension between digital privacy and national security policies.
Loblaws expands use of body cameras on retail workers
Jim Wilson | Human Resources Director
Loblaw Companies Limited is expanding its pilot program of equipping retail workers with body-worn cameras to enhance employee and customer safety amid rising retail crime. The initiative, which began in Saskatoon and Calgary, has now extended to two stores in Abbotsford, British Columbia: a Real Canadian Superstore and a Shoppers Drug Mart. The company reports a significant reduction in violent incidents at locations where the cameras have been implemented. Privacy experts advise that any footage not required for investigations should be promptly deleted to protect individual privacy rights.
Netherlands' DPA summarizes feedback on emotion recognition AI deployment in workplace, schools
The Dutch Data Protection Authority
The Dutch Data Protection Authority (AP) recently sought input on the EU AI Act's prohibition of AI systems designed to infer emotions within workplaces and educational institutions. This prohibition, effective from February 2, 2025, aims to prevent the use of AI that assesses emotions or intentions based on biometric data in these settings, with exceptions only for medical or safety purposes. The AP's call for input was part of its preparatory efforts to supervise and enforce this ban, reflecting concerns about potential misuse of emotion recognition technologies and their impact on individual privacy and autonomy.