Week of 2025-03-17
Information watchdog investigating top department's record keeping in 'green slush fund' appointment
Christopher Nardi | National Post
Sustainable Development Technology Canada (SDTC), established in 2001, was a Canadian foundation dedicated to funding clean technology initiatives. In November 2023, whistleblower allegations surfaced, accusing SDTC of mismanaging funds and fostering a toxic workplace environment. Subsequent investigations revealed that Annette Verschuren, then Chairperson of SDTC, had approved grants totaling over $200,000 to her own company, NRStor Inc., during the COVID-19 pandemic. This led to her resignation in November 2023. A report by the Auditor General in June 2024 highlighted governance lapses and improper fund allocations within SDTC. Consequently, the federal government dissolved SDTC as an independent entity, integrating its programs into the National Research Council Canada to ensure better oversight and accountability.
Nova Scotia NDP urging public to weigh in on the government’s ‘overreaching’ bills
Keith Doucette | CityNews
Nova Scotia's New Democratic Party (NDP) leader, Claudia Chender, is urging the public to participate in the upcoming Public Bills Committee meeting on Monday, where seven government bills will be reviewed. Among these is legislation proposing to lift longstanding bans on fracking and uranium mining, which Chender describes as "a suite of overreaching and concerning laws." She emphasizes the importance of public input to ensure that such significant policy changes undergo proper consultation. Premier Tim Houston has faced criticism over proposals affecting the independence of the auditor general and public access to government records, leading to his commitment to amend or withdraw certain contentious provisions. However, Chender notes that specific details of these amendments have yet to be disclosed, underscoring the need for public engagement in the legislative process.
B.C. watchdog launches police misconduct database
Lisa Steacy | CTV News
The Office of the Police Complaint Commissioner (OPCC) of British Columbia has launched an online database detailing substantiated allegations of police misconduct and corresponding disciplinary measures. This initiative aims to enhance transparency and accountability within the province's municipal police forces. The Discipline Decisions Digest allows the public to search through cases involving the 12 municipal police departments under the OPCC's jurisdiction, as well as the Metro Vancouver Transit Police, Stl'atl'imx Tribal Police Service, and the Combined Forces Special Enforcement Unit – BC. While the database does not disclose officers' names, it provides comprehensive information on the nature of the misconduct and the disciplinary actions taken. This development aligns with recommendations from previous reviews to improve consistency in disciplinary decisions and supports public confidence in policing.
Ottawa shakes up AI advisory group with star researchers and big companies
Murad Hemmadi | The Logic
Canada is advancing its AI regulatory framework through the Artificial Intelligence and Data Act (AIDA), part of Bill C-27, which aims to establish risk-based AI governance. The government has also launched the Safe and Secure AI Advisory Group and refreshed its Advisory Council on AI to provide guidance on AI risks and responsible adoption. A Voluntary Code of Conduct has been introduced to encourage ethical AI development while broader regulations are finalized. Together, these initiatives reflect Canada's effort to balance innovation with safety and accountability as it positions itself as a leader in ethical AI governance and seeks to maintain public trust in responsible AI use.
Ireland adopts sectoral 'distributed model of implementation' for AI Act
Minister of State for Trade Promotion, Artificial Intelligence and Digital Transformation
Ireland has approved a distributed regulatory model for enforcing the EU Artificial Intelligence (AI) Act, assigning oversight to eight sectoral regulators, including the Data Protection Commission and the Central Bank of Ireland. Minister Peter Burke emphasized that this approach will ensure AI is developed safely and ethically while driving innovation and productivity. Minister Niamh Smyth highlighted that leveraging existing regulators will make compliance easier for businesses operating within the EU Digital Single Market. The government aims to position Ireland as a leader in AI governance, ensuring trust in AI systems while fostering growth in the sector. Additional regulatory bodies will be designated in the future to support the comprehensive implementation of the AI Act.
Who’s hacking CRA accounts?
Harvey Cashore | Eva Uguen-Csenge | Mark Kelley | CBC News
The Canada Revenue Agency (CRA) has experienced a series of account breaches, with thousands of Canadians reporting unauthorized access to their online profiles. Investigations reveal that cybercriminals are exploiting previous data breaches from other organizations, using stolen credentials to infiltrate CRA accounts—a tactic known as credential stuffing. This method is effective when individuals reuse passwords across multiple platforms. The compromised accounts have been linked to fraudulent activities, including unauthorized applications for COVID-19 relief benefits. The CRA has acknowledged these incidents and temporarily suspended online services to enhance security measures. Experts advise Canadians to adopt robust cybersecurity practices, such as using unique passwords for different accounts and enabling multi-factor authentication, to mitigate the risk of such attacks.
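To make the credential-stuffing mechanic concrete: attackers replay username-password pairs leaked in unrelated breaches, so a password that appears in any public breach corpus is effectively burned. Below is a minimal, hypothetical Python sketch (not CRA's actual tooling, which the article does not describe) showing how a service or individual might check a password against the public Have I Been Pwned "Pwned Passwords" range API, which uses a k-anonymity scheme so only the first five characters of the password's SHA-1 hash are ever sent over the network.

```python
# Minimal sketch: check whether a password appears in a known breach corpus
# via the Have I Been Pwned "Pwned Passwords" range API (k-anonymity model).
# Only the first five hex characters of the SHA-1 hash leave the machine.
# Illustrative defensive control against credential stuffing; not the CRA's method.

import hashlib
import requests


def breach_count(password: str) -> int:
    """Return how many times `password` appears in the HIBP breach corpus."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]

    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}", timeout=10)
    resp.raise_for_status()

    # Each response line has the form "<hash suffix>:<count>"
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0


if __name__ == "__main__":
    hits = breach_count("correct horse battery staple")  # example password
    if hits:
        print(f"Found in {hits} breaches; reusing it makes credential stuffing trivial.")
    else:
        print("Not found in known breach corpora (multi-factor authentication still advised).")
```

A check like this complements, rather than replaces, the advice in the article: unique passwords per site remove the reuse that credential stuffing depends on, and multi-factor authentication blocks the attack even when a password has leaked.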
Recklessness as a Willful Violation of Privacy: B.C. Court of Appeal Decision has Implications for Private and Public Sector Organizations
Connor Bildfell | Curtis Chance | Juliet Watts | McCarthy Tétrault
The British Columbia Court of Appeal has recently ruled that reckless conduct can constitute a willful violation of privacy under the province's Privacy Act. This decision carries significant implications for both private and public sector organizations, as it broadens the scope of what may be considered a willful privacy breach. Organizations are now advised to exercise heightened diligence in handling personal information to avoid actions that could be deemed reckless, thereby mitigating potential legal risks associated with privacy violations.
NIST guidelines outline differential privacy best practices, pitfalls
Joseph Near | David Darais | Naomi Lefkovitz | Gary Howarth | Computer Security Resource Center
The National Institute of Standards and Technology (NIST) has released Special Publication 800-226, titled "Guidelines for Evaluating Differential Privacy Guarantees." This document aims to assist practitioners in understanding and assessing differential privacy—a mathematical framework that quantifies privacy loss when individual data is included in a dataset. The publication introduces a "differential privacy pyramid," outlining multiple factors for consideration, and highlights common pitfalls, referred to as "privacy hazards," that can occur when implementing differential privacy in practice. The guidelines are designed to help policymakers, business owners, product managers, IT technicians, software engineers, data scientists, researchers, and academics evaluate promises made (and not made) when deploying differential privacy, including for privacy-preserving machine learning.
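For readers unfamiliar with the underlying mathematics, the following is a minimal sketch of the Laplace mechanism, the textbook building block behind the guarantees NIST SP 800-226 evaluates: a counting query with sensitivity 1 (adding or removing one person changes the count by at most 1) satisfies epsilon-differential privacy when Laplace noise with scale sensitivity/epsilon is added to the exact answer. The epsilon value and query below are illustrative assumptions, not recommendations from the publication.

```python
# Minimal sketch of the Laplace mechanism for a counting query.
# Noise scale = sensitivity / epsilon gives epsilon-differential privacy.
# Parameter choices are illustrative only.

import numpy as np


def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng: np.random.Generator) -> float:
    """Release `true_value` with Laplace noise calibrated to sensitivity/epsilon."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)


if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    exact_count = 1_042        # e.g. number of records matching a query
    epsilon = 0.5              # smaller epsilon: stronger privacy, noisier answer
    noisy = laplace_mechanism(exact_count, sensitivity=1.0, epsilon=epsilon, rng=rng)
    print(f"exact={exact_count}, noisy={noisy:.1f}, epsilon={epsilon}")
```

The "privacy hazards" the guidelines flag often arise around sketches like this one, for example underestimating a query's true sensitivity or spending the privacy budget across repeated releases without accounting for composition.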
The digital scandal that could bring down the Quebec government
Martin Patriquin | The Logic
The SAAQclic scandal has significantly impacted Quebec's government, leading to the resignation of Éric Caire, the Minister of Cybersecurity and Digital Technology. The project, intended to modernize the Société de l'assurance automobile du Québec's (SAAQ) digital services, faced severe issues, including cost overruns and implementation failures. Reports suggest that Caire was aware of these escalating costs and may have assisted in concealing a $222-million overrun. In response, Premier François Legault appointed former Charbonneau Commission prosecutor Denis Gallant to lead a public inquiry into the fiasco. This scandal has raised concerns about the government's transparency and project management, potentially affecting its standing with the public.
The 50-Year-Old Law That Could Stop DOGE in Its Tracks—Maybe
Eric Geller | Wired
The Department of Government Efficiency (DOGE), informally led by Elon Musk, is facing multiple lawsuits alleging violations of the Privacy Act of 1974 due to its extensive access to sensitive federal data. This Act, established post-Watergate, restricts how federal agencies collect, use, and share personal information. At least eight lawsuits have been filed, including those by federal employee unions and privacy advocacy groups, challenging DOGE's access to databases from agencies like the Office of Personnel Management (OPM) and the Department of the Treasury. Plaintiffs argue that DOGE staff, many with prior affiliations to Musk's private enterprises, lack legitimate need for such access and have not undergone standard security vetting. The outcomes of these legal challenges could significantly impact DOGE's operations and set precedents for federal data privacy practices.
AI should replace some work of civil servants, Starmer to announce
Rowena Mason | The Guardian
UK Prime Minister Keir Starmer has announced plans to integrate artificial intelligence (AI) into government operations to enhance efficiency and reduce costs. He emphasized that tasks which AI can perform better and faster should transition from civil servants to digital solutions, aiming to save an estimated £45 billion. The initiative includes recruiting 2,000 tech apprentices to bolster digital capabilities within the civil service. While this move seeks to modernize public services, it has raised concerns among trade unions about potential job losses and the need for genuine transformation rather than attributing inefficiencies solely to civil servants.
Ontario school boards clear hurdle in lawsuits against Meta, Snapchat, TikTok
Rianna Lim | CBC News
Several Ontario school boards have filed lawsuits against major social media companies—Meta, Snapchat, and TikTok—alleging that the platforms negatively impact student learning and well-being. The boards claim the platforms are designed to be addictive, contributing to decreased academic performance and mental health challenges among students. They are seeking compensation for the resources diverted to address these harms and are calling for the platforms to be redesigned to prioritize student safety. Recently, an Ontario Superior Court judge dismissed the defendants' motion to strike the claims, allowing the cases to proceed. The litigation reflects growing concern about the influence of social media on youth and the responsibilities of tech companies in safeguarding user well-being.
Municipalities will delve into benefits of artificial intelligence during forum
The Trillium
The Federation of Northern Ontario Municipalities (FONOM) will host a conference on May 5-6, 2025, in North Bay, focusing on AI in municipal operations. Olya Sanakoev, Chief Technology Officer at Rogers Bank, will deliver a keynote on AI-driven municipal innovation, covering practical applications, ethical considerations, and resource-efficient adoption strategies. The event will gather municipal leaders, provincial ministers, and policymakers to explore how AI can enhance public services and efficiency. Ontario’s Information and Privacy Commissioner has stressed the need for transparency and fairness in AI adoption, while municipalities like Innisfil are already using AI for waste management and flood prevention. This conference reflects a growing movement toward integrating AI into governance while balancing innovation with ethical considerations.
Speed cameras resulted in 12,796 tickets issued last year
Tyler Clarke | The Trillium
In 2024, Greater Sudbury’s six automated speed enforcement (ASE) cameras issued 12,796 tickets, generating approximately $1.3 million in fines, with a net revenue of $753,003 after expenses. The funds are being reinvested into traffic safety initiatives, including expanding the bollard traffic-calming program and implementing gateway speed limits. Additionally, $500,000 has been allocated to the Roads and Transportation Asset Management Plan to assess and maintain city roads. Other Ontario municipalities, like Toronto and York Region, have also seen significant reductions in excessive speeding due to ASE programs. These results highlight the effectiveness of speed cameras in promoting safer driving and reducing traffic violations.
City of Brampton invests in cutting-edge technology to enhance public safety
Rowena Santos
Brampton is enhancing public safety by installing 360-degree cameras and license plate recognition technology at 50 traffic intersections to assist Peel Regional Police in crime investigations. So far, 19 intersections have been equipped, with the full rollout expected by year-end. These cameras do not issue tickets but provide crucial data to improve traffic and community safety. Additionally, Brampton has launched Ontario’s largest Automated Speed Enforcement (ASE) Processing Centre to deter speeding and enhance road safety. These investments highlight the city's commitment to using advanced technology to protect residents and improve public security.
Mozilla is already revising its new Firefox terms to clarify how it handles user data
Jay Peters | The Verge
Mozilla recently introduced its first Terms of Use for Firefox, which included language suggesting the company had broad rights over user data. This led to criticism and concerns among users. In response, Mozilla clarified that the intent was to outline the necessary permissions for Firefox's functionality, not to claim ownership of user data. The company has since revised the terms to more accurately reflect its data practices, emphasizing that it processes user data as described in the Firefox Privacy Notice and does not own users' content. Mozilla also addressed the complexities surrounding the legal definitions of "selling data," reaffirming its commitment to user privacy while ensuring Firefox remains commercially viable.
Google reports scale of complaints about AI deepfake terrorism content to Australian regulator
Byron Kaye | Reuters
Google disclosed to Australia's eSafety Commissioner that, between April 2023 and February 2024, it received 258 user reports alleging its AI software, Gemini, was used to create deepfake content related to terrorism or violent extremism, along with 86 reports concerning AI-generated child exploitation material. These disclosures offer a rare glimpse into the potential misuse of AI technologies. The eSafety Commissioner, Julie Inman Grant, emphasized the necessity for companies to integrate effective safeguards into AI products to prevent the generation of harmful content. Google reiterated its policies against creating or distributing content that facilitates violent extremism, terrorism, child exploitation, or other illegal activities.
Federal privacy watchdog heads to court over Pornhub operator's consent practices
Jim Bronskill | CBC News
Canada's Privacy Commissioner, Philippe Dufresne, has initiated legal action against Aylo Holdings, the Montreal-based operator of Pornhub and other adult websites, seeking a Federal Court order to enforce compliance with Canadian privacy laws. This action follows a 2024 investigation that found Aylo allowed the sharing of intimate images without the direct consent of those depicted. Despite some changes to its privacy practices, Aylo's measures are still deemed insufficient to ensure meaningful consent from all individuals featured in its content. Aylo disputes these findings, asserting that it has implemented significant safeguards, including mandatory uploader verification and proof of consent from all participants. This case underscores the ongoing challenges in balancing online content sharing with individual privacy rights.
Do age-verification laws work? Not according to this study
Anna Iovine | Mashable
A New York University study found that age-verification laws restricting minors' access to adult content may be ineffective, as users often seek ways to bypass them. In states with such laws, searches for compliant sites like Pornhub dropped by 51%, while searches for non-compliant platforms like XVideos rose by 48.1%. Additionally, VPN searches increased by 23.6%, indicating that users are actively circumventing restrictions. Researchers warn that these laws push users toward less regulated platforms, potentially exposing them to greater privacy and security risks. The study suggests policymakers should reconsider enforcement strategies to avoid unintended consequences while still protecting minors online.
UK quietly scrubs encryption advice from government websites
Carly Page | TechCrunch
The UK government has quietly removed encryption guidance from its official cybersecurity webpages, previously advising high-risk individuals to use tools like Apple’s Advanced Data Protection (ADP). This change coincides with reports that the UK Home Office issued a technical capability notice compelling Apple to provide backdoor access to encrypted iCloud data. In response, Apple has discontinued ADP for UK users, preventing new activations and eventually requiring existing users to disable it. Privacy advocates warn that these actions undermine digital security, making personal data more vulnerable to cyber threats. The move has sparked concerns about government overreach and the long-term risks of weakening encryption.