Week of 2025-02-12
Federal government advised Ontario to reroute Highway 413: documents
Emma McIntosh | Global News
Recently disclosed government documents show that federal officials advised Ontario to reroute its proposed Highway 413, raising concerns about transparency in the project’s planning. The advised route diverges from the plans Ontario has presented publicly, would affect different communities, and could increase costs. Critics argue the province’s routing choices may reflect political or developer interests rather than environmental and urban planning considerations. Advocacy groups and local officials are demanding greater transparency and consultation on the project’s impacts. The Ontario government maintains that its chosen route optimizes traffic flow and minimizes environmental disruption.
Thousands of datasets from Data.gov have disappeared since Trump's inauguration. What's going on?
Cecily Mauran | Mashable
Following Donald Trump’s inauguration, thousands of datasets have disappeared from Data.gov, the federal government’s open data portal, raising alarms among researchers, journalists, and transparency advocates. The missing data spans multiple agencies and includes climate change reports, healthcare statistics, and economic indicators that were previously available to the public. Experts argue that these removals hinder research, policymaking, and public accountability, particularly in areas like environmental policy and social justice. While some agencies cite routine updates or security concerns, critics suspect a broader effort to limit access to politically sensitive information. Digital archivists and watchdog groups are now racing to preserve and restore lost datasets to ensure continued public access.
Some Census Bureau data now appears to be unavailable to the public
Steve Liesman | Jesse Pound | CNBC
Certain Census Bureau data sets that were previously accessible to the public now appear to be unavailable, raising concerns about government transparency and data accessibility. Researchers, policymakers, and journalists have reported difficulty accessing demographic, economic, and community statistics that were once crucial for analysis and decision-making. The Census Bureau has not provided a clear explanation for the removals, though some officials cite privacy concerns and security measures as reasons for restricting access. Critics argue that these changes could hinder social research, economic planning, and public accountability, particularly in areas such as voter demographics, income inequality, and public health trends. Advocacy groups are urging the administration to restore public access to these vital data sets.
Police in Ejaz Choudry death tried to keep their identities secret. Now we know who they are
Shanifa Nasser | CBC
The identities of the police officers involved in the fatal shooting of Ejaz Choudry, a 62-year-old Mississauga man with schizophrenia, have come to light despite efforts to keep them secret. Choudry was shot in his home in June 2020 after officers responded to a mental health crisis call. Ontario’s Special Investigations Unit (SIU) cleared the officers, citing a lack of reasonable grounds for criminal charges; its report found that police used force believing Choudry posed an imminent threat, as he was armed with a knife and refused to drop it despite multiple warnings. The decision sparked outrage from Choudry’s family and community advocates, who argue that police escalated the situation instead of using de-escalation tactics. Calls for mental health-focused crisis responses and police accountability reforms have intensified since the ruling.
Through a Glass, Darkly: Transparency and Military AI Systems
Branka Marijan | CIGI
A new CIGI report examines the transparency issues surrounding military AI systems, emphasizing the risks posed by opaque decision-making in autonomous weapons. The report highlights how governments and defense agencies are increasingly integrating AI into military operations, but lack clear frameworks for accountability and oversight. It warns that black-box AI models—which operate with minimal human understanding—could lead to unpredictable or unintended consequences in combat scenarios. The study calls for greater transparency in AI development, urging policymakers to establish international norms and regulations to ensure ethical and responsible use of AI in military applications.
If DeepSeek wants to be a real disruptor, it should go much further on data transparency
Ben Snaith | Neil Majithia | Elena Simperl | The ODI
A new ODI analysis critiques DeepSeek, the Chinese AI startup, for its lack of transparency in data practices, arguing that true disruption requires greater openness in AI training datasets. The blog post highlights how DeepSeek’s secrecy over its data sources and model training methods raises concerns about bias, regulatory compliance, and ethical AI development. The article suggests that companies serious about responsible AI innovation should provide clear documentation on dataset origins, licensing, and governance. It concludes that without significant improvements in transparency, DeepSeek risks facing trust and adoption barriers—especially in global markets where data provenance and AI accountability are increasingly scrutinized.
Google drops pledge not to use AI for weapons or surveillance
Nitasha Tiku | Gerrit De Vynck | The Washington Post
Google has quietly removed its commitment not to use artificial intelligence for weapons or surveillance, updating its AI principles for the first time since 2018. The previous policy explicitly prohibited pursuing AI applications for weapons, surveillance, or those likely to cause harm, but these restrictions are now absent. In a blog post, AI chief Demis Hassabis and senior executive James Manyika cited geopolitical competition and national security concerns as reasons for the change, emphasizing the importance of democratic countries leading in AI development. Google now states it will ensure AI use aligns with international law and human rights through human oversight and testing. The move follows past employee protests against Google's work with the Pentagon on Project Maven, which had led the company to distance itself from military AI applications.
EU sets out guidance on banning harmful AI uses
Tech Xplore
The European Union has released guidance clarifying the AI Act’s ban on harmful AI uses, giving businesses and regulators clearer rules on which practices are prohibited. The guidance focuses on ensuring AI systems are safe, transparent, and aligned with EU values, particularly concerning privacy, fairness, and accountability, and provides recommendations on risk assessment, data governance, and compliance for organizations deploying AI. It is seen as a step toward enforcing the AI Act, which regulates high-risk AI applications while aiming to preserve room for innovation. The move is part of the EU’s broader strategy to lead global AI regulation and set standards for responsible AI use.
EU ban on ‘unacceptable’ AI comes into force with crucial details unresolved
Masha Borak | Biometric Update
The European Union’s ban on “unacceptable” artificial intelligence applications has officially come into force, marking a significant step in AI regulation. The ban prohibits AI systems that pose a clear threat to fundamental rights, such as biometric surveillance in public spaces, emotion recognition in workplaces and schools, and AI-driven social scoring. However, key details on enforcement and exemptions, including how national governments will handle security-related AI applications, remain unresolved. Critics argue that gaps in the regulation could lead to inconsistent implementation across EU member states. The regulation is part of the broader AI Act, which aims to set global standards for ethical AI use.
Ontario’s new public sector cybersecurity and AI law now in force – what public AND private sector organizations need to know
Jaime Cardy | Dentons Data
Ontario’s new public sector cybersecurity and AI law has officially come into force, introducing new security and AI governance requirements for public sector organizations. The law mandates stronger cybersecurity measures, risk assessments, and clear guidelines for AI deployment in government services. It requires public institutions to develop AI use policies, ensure transparency, and mitigate potential biases in automated decision-making. While the legislation is primarily aimed at the public sector, private sector entities contracting with government agencies may also need to comply with its cybersecurity and AI governance rules. Organizations must now review their AI systems and data protection practices to ensure compliance with the law.
The benefit of PETs (and pseudonymisation): new UK and EU guidance
Bryony Bacon | Tayla Byatt | The Lens
The UK and EU have released new guidance on privacy-enhancing technologies (PETs) and pseudonymization, aiming to strengthen data protection while enabling innovation. PETs, including encryption, anonymization, and differential privacy, help organizations process data securely while reducing risks of exposure. The guidance highlights pseudonymization as a key tool for compliance with GDPR and the UK Data Protection Act, allowing businesses to analyze datasets without directly identifying individuals. Regulators emphasize that PETs should be built into systems from the start to enhance security and minimize data breaches. Organizations handling sensitive or large-scale personal data are encouraged to adopt these techniques to balance privacy with usability.
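To make the idea concrete, here is a minimal Python sketch of keyed pseudonymization (our illustration, not an example from the guidance): hashing an identifier with a secret key yields a stable token, so records stay linkable for analysis while re-identification requires the separately stored key.

```python
import hashlib
import hmac

def pseudonymize(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a stable pseudonym.

    HMAC-SHA256 maps the same input to the same token, so records can
    still be joined and analyzed, while reversing the mapping requires
    the secret key (which should be stored apart from the data).
    """
    return hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical example: pseudonymize an email address before analysis.
key = b"a-securely-managed-secret-key"  # in practice, fetched from a key vault
record = {"email": "jane.doe@example.com", "purchases": 12}
record["email"] = pseudonymize(record["email"], key)
print(record)  # the email field is now an opaque 64-character hex token
```

Note that under the GDPR, pseudonymized data of this kind remains personal data, since the key permits re-identification; that is why regulators treat pseudonymization as a risk-reducing safeguard rather than as anonymization.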
Major tech figures get into politics with launch of Build Canada
Catherine McIntyre | Murad Hemmadi | The Logic
A group of prominent Canadian technology leaders has launched Build Canada, a platform designed to generate policy ideas aimed at fostering growth, innovation, and prosperity in the country. The initiative is led by tech figures including Ben Parry, Daniel Debow, Lucy Hargreaves, and Melody Kuo, and is supported by entrepreneurs such as Tobi Lütke (Shopify), Michael Katchen (Wealthsimple), and Ivan Zhang (Cohere). Build Canada utilizes a combination of expert insights and artificial intelligence to research and draft policy proposals, which are then reviewed by policy experts before publication. The platform has already released policy memos on topics like expanding health record access and immigration reform, with plans to publish more in the coming weeks. While the initiative is non-partisan and not a lobbying group, it aims to influence policy discussions ahead of the upcoming federal election.
A 25-Year-Old With Elon Musk Ties Has Direct Access to the Federal Payment System
Vittoria Elliott | Dhruv Mehrotra | Leah Feiger | Tim Marchman | Wired
Elon Musk associate Marko Elez, a 25-year-old engineer, has been granted administrator access to U.S. Treasury payment systems, raising security concerns. These systems process crucial federal payments, including Social Security and tax refunds. A federal judge has temporarily blocked Musk’s Department of Government Efficiency (DOGE) from accessing these systems, citing cybersecurity risks. However, Treasury Secretary Scott Bessent and Senate-confirmed officials are exempt from the block. The situation remains controversial, with ongoing legal challenges and concerns over financial system integrity.
Musk's DOGE crew wants to go all-in on AI
Scott Rosenberg | Axios
Elon Musk's Department of Government Efficiency (DOGE) initiative wants to push AI-driven automation across federal processes, drawing scrutiny over safeguards and oversight. Critics warn that DOGE’s rapid implementation could weaken transparency and security, especially in sensitive areas like federal payments and AI governance. Supporters argue it will streamline bureaucracy and reduce government inefficiencies. Lawmakers and outside experts have pushed for stronger regulatory checks, while Musk and his allies advocate for minimal government intervention. The debate continues as policymakers assess potential risks and benefits.
Kept in the Dark: Meet the Hired Guns Who Make Sure School Cyberattacks Stay Hidden
Mark Keierleber | The 74
A new investigation reveals that U.S. school districts are failing to inform parents when their children’s personal data is compromised in cyberattacks. Many schools lack transparency, withholding breach details even when sensitive student information is at risk. The report highlights cases where families only learned of breaches through news reports or third-party sources. Experts warn that poor communication and weak cybersecurity practices leave students vulnerable to identity theft and exploitation. Advocates are calling for stronger regulations to ensure schools disclose breaches promptly and improve data protection measures.
A roadmap for protecting our democracies in the age of AI
University of Ottawa
A new research roadmap from the University of Ottawa outlines strategies to protect democracies from AI-driven threats. The report highlights AI’s role in disinformation, electoral manipulation, and mass surveillance, urging governments to prioritize regulation and transparency. Researchers emphasize the need for international cooperation to counter AI misuse while preserving freedom of expression. The roadmap also calls for public education initiatives to improve AI literacy and combat misinformation. Policymakers are encouraged to develop ethical AI guidelines that align with democratic values.
Majority of Canadian companies are integrating artificial intelligence into finance: KPMG report
Jacqueline So | Lexpert
A KPMG report reveals that the majority of Canadian companies are integrating artificial intelligence (AI) into their financial operations. The study found that 74% of businesses are already using or planning to implement AI-driven solutions to enhance financial decision-making, risk assessment, and fraud detection. Despite adoption, data security and regulatory compliance remain top concerns for executives. The report also highlights a skills gap, with many organizations struggling to find qualified AI professionals. KPMG urges companies to invest in AI governance frameworks to ensure ethical and effective implementation.
World-leading AI trial to tackle breast cancer launched
UK Government
The UK government has launched a world-leading AI trial to improve breast cancer detection and diagnosis. The trial will use artificial intelligence to analyze mammograms, aiming to enhance early detection rates and reduce pressure on radiologists. The initiative, backed by £16 million in funding, is expected to improve diagnostic accuracy and speed, ultimately benefiting patient outcomes. AI tools will be tested across multiple NHS sites, with results helping to determine their long-term role in cancer screening programs. If successful, the technology could revolutionize breast cancer care and be expanded nationwide.
Teen Mental Health App Sent Kids’ Data Straight to TikTok
Todd Feathers | Gizmodo
A mental health app for teenagers was found to be sending users' sensitive data directly to TikTok, raising major privacy and child safety concerns. The app, intended to support teen mental health, shared information like device IDs and browsing activity with the social media giant without proper disclosure. Privacy advocates warn this violates children's data protection laws and could lead to targeted advertising or exploitation. The discovery underscores the risks of unregulated data sharing in health-related apps, prompting calls for stricter enforcement of privacy regulations. Experts urge parents to be cautious about the apps their children use, especially those claiming to offer mental health support.
Canadian police expand use of facial recognition with new Idemia contract
Lu-Hai Liang | Biometric Update
Canadian police are expanding their use of facial recognition technology through a new contract with IDEMIA, a global biometrics firm. The agreement will provide law enforcement with advanced facial recognition tools to aid in identifying suspects and solving crimes. While police argue the technology improves public safety and investigative efficiency, privacy advocates warn of potential misuse, bias, and surveillance concerns. The deal reflects a growing trend of biometric adoption in Canada, despite calls for stronger regulations on facial recognition use. Critics urge the government to establish clear oversight measures to prevent privacy violations and wrongful identifications.
Feds publish list of ‘sensitive technologies,’ new cybersecurity strategy
David Reevely | The Logic
The Canadian government has published a list of sensitive technologies as part of its new cybersecurity strategy, aiming to protect critical industries from cyber threats and foreign interference. The list includes AI, quantum computing, semiconductors, and advanced communications among key areas requiring heightened security measures. The strategy aligns with global efforts to strengthen national cybersecurity frameworks amid growing concerns about cyber espionage and data security risks. Officials stress the importance of collaborating with private sector partners to safeguard Canada’s technological and economic interests. The move reflects Canada’s broader push to modernize its cybersecurity defenses in an era of increasing digital threats.
Canada announces new strategy to increase country's cybersecurity
Dean Daley | Daily Hive National
The Canadian government has unveiled a new strategy to strengthen cybersecurity, aiming to protect businesses, critical infrastructure, and citizens from evolving cyber threats. The plan includes increased investment in cybersecurity resilience, enhanced collaboration with the private sector, and stronger regulatory frameworks. Officials emphasize the growing risks from state-sponsored cyberattacks, ransomware, and data breaches, which have intensified in recent years. The strategy also seeks to bolster public awareness and education on cybersecurity best practices. This initiative aligns with global efforts to counter cyber threats and safeguard digital ecosystems.
South Korea privacy watchdog to ask DeepSeek about personal information use
Reuters
South Korea’s privacy watchdog is investigating AI startup DeepSeek over concerns about its use of personal data. The Personal Information Protection Commission (PIPC) plans to question DeepSeek on whether it collected or processed personal information without proper consent. This follows broader global scrutiny of AI companies regarding data privacy, training datasets, and regulatory compliance. DeepSeek, a Chinese AI firm, recently gained attention for its advancements in large language models, sparking concerns over data sourcing and cross-border privacy protections. The investigation aligns with South Korea’s efforts to enforce strict privacy regulations in AI development.
French privacy watchdog to quiz DeepSeek on AI, data protection
Reuters
France’s privacy watchdog, CNIL, is investigating DeepSeek AI over concerns about its data protection practices. The Chinese AI startup is being questioned on how it collects and processes personal data, particularly in training its models. This inquiry follows a broader global trend of regulators scrutinizing AI firms for potential privacy violations. CNIL’s move reflects growing European efforts to enforce strict AI compliance under GDPR and upcoming AI Act regulations. DeepSeek has not yet publicly responded to the probe.
Chair of UK's competition regulator removed by government
Sarah Taaffe-Maguire | Sky News
The UK government has removed Marcus Bokkerink, chair of the Competition and Markets Authority (CMA), over concerns that the regulator was hindering economic growth. Bokkerink, appointed in 2022, had been leading efforts to scrutinize major mergers and regulate digital markets, which some officials believed stifled business expansion. The move signals a shift in government priorities, emphasizing a more business-friendly regulatory approach. Critics argue the decision could weaken competition enforcement and favor large corporations. The government is now seeking a replacement to align the CMA’s work with economic growth objectives.
Canadian Cybersecurity Network releases 2025 State of Cybersecurity Report
Angelica Dino | Lexpert
The Canadian Cybersecurity Network has released its 2025 State of Cybersecurity Report, highlighting increased cyber threats, evolving attack methods, and regulatory challenges. The report emphasizes ransomware, supply chain vulnerabilities, and AI-driven threats as key risks facing Canadian organizations. It also calls for stronger cybersecurity policies, better public-private collaboration, and enhanced workforce training to combat growing digital threats. Additionally, the report notes rising compliance burdens as businesses adapt to new federal and provincial cybersecurity regulations. Experts stress the need for proactive investment in cybersecurity infrastructure to protect Canada’s digital landscape.
GeoSpy can now find your location from even an indoor photo
Joseph Cox | 404 Media
GeoSpy, a powerful AI tool that can geolocate a photo in seconds, even one taken indoors, is raising concerns about its potential use by law enforcement, private investigators, and even stalkers. The tool, which infers where a photo was taken from visual cues, has no formal oversight or regulation, making it vulnerable to misuse. While some argue it could assist in criminal investigations and missing persons cases, privacy advocates warn of major risks to personal security and anonymity. Experts fear the technology could be weaponized for surveillance, doxxing, or harassment, amplifying concerns about AI-powered privacy invasions.
Apple ordered to open encrypted user accounts globally to UK spying
Dominic Preston | The Verge
The UK government has reportedly ordered Apple to give it access to encrypted iCloud user data, a demand that applies to accounts worldwide rather than only in the UK. The order was issued under the Investigatory Powers Act (IPA), dubbed the “Snoopers’ Charter” by critics, which allows the government to require tech companies to provide law enforcement with access to user data. Apple has previously resisted demands for backdoor access to its encryption and may withdraw fully encrypted iCloud backups from the UK rather than build one. Privacy advocates warn the order undermines digital security and sets a dangerous precedent for other governments to demand similar access.
Artificial Intelligence, Real Consequences? Legal Considerations for Canadian Employers Using AI Tools in Hiring
Andrew Shaw | Matthew De Lio | Labour and Employment Law
Canadian employers using AI tools in hiring face growing legal risks as concerns about bias, privacy, and compliance with human rights laws increase. AI-driven recruitment must align with employment standards, anti-discrimination laws, and data privacy regulations to avoid legal challenges. Regulators are paying closer attention to how AI impacts hiring decisions, particularly regarding transparency and fairness. Employers should audit AI systems, ensure human oversight, and maintain clear documentation to mitigate risks. As AI regulations evolve, companies must stay updated on legal obligations to avoid potential penalties and litigation.