Week of 2025-04-07
FOI documents reveal St. Catharines Mayor Mat Siscoe used municipal tax dollars for Ford campaign event
Ed Smith | The Trillium
Documents obtained through a Freedom of Information request reveal that St. Catharines Mayor Mat Siscoe used municipal resources and staff to coordinate his attendance at a campaign event for Ontario Premier Doug Ford during the recent provincial election. This included utilizing his city-issued cellphone and email, as well as involving senior staff members, actions that appear to violate the city's Code of Conduct, which prohibits the use of city assets for election-related activities. The event, held on January 31, saw Siscoe and other local mayors publicly endorse Ford. Following the event, Siscoe's office received multiple emails from residents expressing concern over his participation and the use of public resources. Several complaints have been filed with the city's integrity commissioner regarding these actions.
'It's the Wild West': How AI is creating new frontiers for crime in Canada
CTV News
Artificial intelligence (AI) is increasingly being exploited by cybercriminals in Canada, leading to a surge in sophisticated scams and fraudulent activities. The Better Business Bureau (BBB) reports that AI tools, combined with the dark web, have become primary instruments for scammers, enabling them to craft highly convincing deceptions. For instance, AI-driven voice cloning technology allows fraudsters to mimic the voices of victims' relatives, making scams more persuasive and harder to detect. The Canadian Anti-Fraud Centre has observed a significant uptick in such AI-enhanced schemes, prompting authorities to adapt their investigative approaches to counter these evolving threats. Experts emphasize the need for public awareness and education on the potential misuse of AI, advocating for proactive measures to safeguard against these technologically advanced crimes.
CIBC signs federal government’s voluntary code of conduct for generative AI
The Globe and Mail
CIBC has become the first major Canadian bank to adopt the federal government's Voluntary Code of Conduct for Generative AI, underscoring its commitment to the responsible development and deployment of artificial intelligence technologies. This code, introduced by Innovation, Science and Economic Development Canada (ISED), outlines principles to ensure AI systems are safe, transparent, and uphold human rights. CIBC's adherence to these guidelines reflects its proactive approach to integrating ethical AI practices within its operations. The bank has been actively exploring AI applications, launching pilot programs last year to enhance customer service and operational efficiency. By aligning with the voluntary code, CIBC aims to foster trust and accountability in its AI initiatives, setting a precedent for other financial institutions in Canada.
Examining Canada’s AI Policy Network: Where Does the Power Lie?
Elia Rasky | CIGI
A new report from CIGI by researcher Elia Rasky explores how power is distributed within Canada's AI policy network. It finds that government departments and public research institutions dominate the policy-making landscape, while tech companies have moderate influence and civil society organizations (CSOs) are largely sidelined. This imbalance raises concerns about whether social and ethical perspectives are being adequately considered in shaping national AI strategies. Rasky recommends integrating CSOs into influential bodies like the Advisory Council on AI and decentralizing policy authority currently concentrated in ISED. These changes would help make Canada’s AI governance more inclusive, democratic, and reflective of public interest.
UK's first permanent facial recognition cameras installed in South London
Iain Thomson | The Register
The Metropolitan Police plans to install its first permanent live facial recognition (LFR) cameras in Croydon, South London, this summer, aiming to enhance crime-fighting efforts. These two cameras will be mounted on buildings and lamp posts along North End and London Road and will only be activated when officers are nearby to make immediate arrests if necessary. This initiative follows a two-year trial using mobile LFR units, which reportedly led to hundreds of arrests. However, privacy advocates, such as Big Brother Watch, have raised concerns, warning that this move could lead to an expansion of the surveillance state and infringe on individual privacy rights. They emphasize the need for legislative safeguards to regulate the use of such technology.
Who's protecting the 'beautiful, happy children' growing up online in influencer videos?
Natalie Stechyson | CBC News
As the popularity of child influencers grows, concerns about their exploitation have prompted calls for legislative action in Canada. Currently, no specific federal or provincial laws regulate the earnings or working conditions of child influencers, leaving them vulnerable to potential abuse. Advocates suggest modeling new regulations after existing child labor laws in entertainment, which mandate provisions like setting aside a portion of earnings in trust. Such measures aim to protect the financial interests and well-being of minors in the digital sphere.
Banning us from social media is ‘neither practical nor effective’, UK teenagers say
Rachel Hall | Sally Weale | The Guardian
The UK Youth Parliament's Youth Select Committee, composed of individuals aged 14 to 19, has expressed that banning teenagers from social media is neither practical nor effective in addressing youth violence. Instead, they advocate for stronger regulations to hold social media companies accountable for promoting violent and inappropriate content. The committee emphasizes the benefits of social media, such as learning opportunities and social connections, and notes that age-based bans are easily circumvented. They recommend involving young people in policymaking and suggest creating a youth advisory panel for Ofcom to ensure that youth perspectives are considered in online safety measures.
India wants backdoors into clouds, email, SaaS, for tax inspectors
Simon Sharwood | The Register
India's government has proposed legislation granting tax authorities the power to access private digital records, including emails, social media accounts, and cloud storage, by overriding access codes if necessary, a move that has raised concerns about potential overreach and privacy violations. The same Register roundup also reports that Malaysia's government is collaborating with Arm to invest $250 million over the next decade in developing local AI chip designs, aiming to train 10,000 semiconductor designers and boost the nation's tech exports. Meanwhile, NTT Communications in Japan reported unauthorized access to its systems, potentially compromising data of nearly 18,000 corporate customers, including contact details and service usage information. Additionally, Samsung's labor union staged its first-ever strike, seeking improved working conditions, though the action was limited to a single day and did not significantly impact production.
Microsoft wants Windows 11 installs to use a Microsoft Account — confirms removal of popular setup bypass
Zac Bowden | Windows Central
Microsoft has confirmed it will now require all new Windows 11 installations to use a Microsoft Account, removing the popular "bypassnro" command that previously allowed users to set up the OS offline with a local account. This change applies only to new installations and not to existing systems already configured with local accounts. While advanced users can still attempt complex workarounds like using an unattend.xml file, these options are no longer straightforward. Microsoft claims the move enhances security and user experience, but it has drawn criticism from users who prefer privacy and more control over their setups. The decision reflects a broader push to integrate Microsoft's cloud services more tightly into the Windows ecosystem.
Kink and LGBT dating apps exposed 1.5m private user images online
Joe Tidy | BBC
A major privacy lapse exposed nearly 1.5 million explicit images from five specialist dating apps developed by M.A.D Mobile, including BDSM People, Chica, Pink, Brish, and Translove. These photos, some sent privately or previously deleted, were left unprotected and unencrypted online, accessible to anyone with the link. Ethical hacker Aras Nazarovas discovered the vulnerability in January and repeatedly warned the company, but they only acted after being contacted by the BBC in March. The breach poses serious risks, particularly to users in countries hostile to LGBTQ+ communities, and raises concerns of potential extortion or targeted abuse. While M.A.D Mobile has now fixed the flaw, they haven’t clarified why the issue remained unresolved for so long, nor have they ruled out the possibility that other malicious actors accessed the data.
White House executive order seeks to eliminate 'information silos'
The White House
On March 20, 2025, President Trump signed an executive order aimed at eliminating government inefficiency and fraud by breaking down data silos across federal agencies. The order directs agency heads to ensure prompt access to unclassified records, systems, and data for officials pursuing efforts to identify waste, fraud, and abuse. Agencies must revise or rescind any internal rules that block information sharing and review regulations and classification policies that hinder transparency without serving national security. The order also mandates unfettered federal access to state-level data from federally funded programs, particularly emphasizing unemployment and payment records. Overall, it sets an aggressive timeline for implementation and reflects a broader commitment to data-driven governance and accountability.
Ontario Human Rights Commission calls for 'bold, systemic action' to tackle anti-Black racism in education
The Trillium
The Ontario Human Rights Commission (OHRC) has called for bold, systemic action to address anti-Black racism in the province's education system. This initiative aims to dismantle systemic barriers and promote equitable opportunities for Black students. The OHRC emphasizes the need for comprehensive strategies that include policy reforms, enhanced accountability measures, and community engagement to create an inclusive and supportive educational environment. The commission's recommendations highlight the importance of collaborative efforts among educators, policymakers, and communities to effectively combat racial discrimination in schools.
Elections Canada has been in touch with social media platforms about election misinformation
Darren Major | CBC News
Elections Canada is proactively engaging with social media platforms, including X (formerly Twitter) and TikTok, to combat misinformation ahead of the upcoming federal election. Chief Electoral Officer Stéphane Perrault emphasized the importance of these collaborations in ensuring the integrity of electoral processes. Both platforms have shown a willingness to monitor and remove harmful misinformation related to civic and electoral procedures. This initiative reflects Elections Canada's commitment to safeguarding democratic processes in the digital age.
Canada Proud is dominating Facebook ahead of the election
James Temperton | The Logic
Canada Proud, a conservative advocacy group, is significantly influencing Facebook discussions leading up to the April 28 federal election. Their posts are reaching a vast audience, filling the void left by Meta's ongoing ban on news content in Canada. This development underscores the shifting dynamics of political discourse on social media platforms during the election period.
FTC concerned about privacy protections in 23andMe bankruptcy
Jody Godoy | Reuters
The U.S. Federal Trade Commission (FTC) has expressed concerns regarding the potential sale or transfer of personal data by 23andMe, following the company's recent bankruptcy filing. FTC Chairman Andrew Ferguson emphasized that any purchaser of 23andMe's assets should adhere to the company's existing privacy policies to protect consumer information. The genetic testing firm filed for bankruptcy protection on March 23, citing decreased demand for its ancestry kits. This development has raised alarms among officials, including California Attorney General Rob Bonta, who urged customers to delete their genetic data due to potential privacy risks. These concerns are amplified by a 2023 data breach that exposed personal information of nearly 7 million customers, further damaging the company's reputation and highlighting vulnerabilities in handling sensitive genetic data.
Traffic to 23andMe's Website Soars as Users Race to Delete DNA Data
Michael Kan | PC Magazine
Following 23andMe's bankruptcy filing on March 23, 2025, there has been a significant surge in users attempting to delete their genetic data from the company's platform. On March 24 alone, the website experienced a 526% increase in traffic, with approximately 1.5 million visits, including 376,000 directed at data deletion help pages. This reaction stems from growing concerns over the potential sale of sensitive genetic information during bankruptcy proceedings. Authorities, including the New York Attorney General, have advised customers to remove their data to safeguard their privacy. The Federal Trade Commission (FTC) has also emphasized that any purchaser of 23andMe's assets must adhere to existing privacy policies to protect consumer information.
With 23andMe filing for bankruptcy, what happens to consumers’ genetic data?
Rosehana Amin | Dave Dhillon | Lexology
In October 2023, 23andMe, a biotechnology company offering direct-to-consumer genetic testing services, experienced a significant data breach where hackers accessed sensitive personal data, including genetic information, of approximately 6.9 million customers. The breach involved the exposure of names, dates of birth, relationship labels, DNA-related analyses, ancestry reports, and self-reported locations. Hackers employed a technique known as credential stuffing, exploiting reused passwords from other breaches to gain unauthorized access to customer accounts. In response, on March 24, 2025, the UK's Information Commissioner's Office (ICO) issued a Notice of Intent to fine 23andMe £4.59 million for failing to implement adequate safeguards to protect this sensitive information. This action underscores the critical importance of robust data security measures, especially for organizations handling highly sensitive genetic data.
European Health Data Space Regulation enters into force
Jana Grieb | Lorraine Maisner-Boché | Caroline Noyrez | Katharina Hoffmeister | Lea Hachmeister | McDermott Will & Emery
On March 26, 2025, the European Health Data Space (EHDS) Regulation entered into force, aiming to revolutionize health data management across EU member states. This regulation enhances individuals' access to and control over their electronic health data and establishes a framework for the secure secondary use of such data in research and innovation. Healthcare providers are now required to collect and share electronic health records (EHRs) in a standardized format, facilitating cross-border healthcare services. Manufacturers of EHR systems must comply with new testing and documentation standards to ensure interoperability and security. The EHDS seeks to balance improved healthcare delivery with robust data protection, fostering advancements in medical research while safeguarding patient privacy.
Oracle Health breach compromises patient data at US hospitals
Lawrence Abrams | Bleeping Computer
In early 2025, Oracle Health (formerly Cerner) experienced a significant data breach affecting multiple U.S. healthcare organizations and hospitals. Unauthorized access to legacy servers occurred after January 22, 2025, leading to the exfiltration of patient data. Oracle Health became aware of the breach around February 20, 2025. The FBI is investigating the incident, which appears to be part of an extortion scheme targeting medical providers. This breach underscores the critical need for robust cybersecurity measures in protecting sensitive healthcare information.
NHS processor fined £3m after ransomware data breach
Ellie Ludlam | Louise Fullwood | Pinsent Masons
In August 2022, Advanced Computer Software Group Ltd, an IT provider for the NHS, suffered a ransomware attack that compromised the personal data of nearly 80,000 individuals. The breach disrupted critical services, including NHS 111, and exposed sensitive information such as medical records and home entry details for patients receiving care. The attackers gained access through a customer account lacking multi-factor authentication (MFA), highlighting significant security lapses. In response, the UK's Information Commissioner's Office (ICO) fined Advanced over £3 million for failing to implement adequate security measures. This marks the first time the ICO has penalized a data processor under the UK GDPR, underscoring the imperative for robust cybersecurity practices in handling sensitive health data.
Canadian police partner with AI in arms race against criminals. But at what cost?
Brieanna Charlebois | CHEK News
Canadian law enforcement agencies are increasingly integrating artificial intelligence (AI) technologies into their operations to enhance efficiency and effectiveness. For instance, the RCMP's National Child Exploitation Coordination Centre employs AI to swiftly analyze vast amounts of digital evidence, such as surveillance footage and criminal records, aiding in the detection of child sexual abuse material. Similarly, police departments in Ontario utilize AI-driven facial recognition tools to match suspect images with mug shots, while those in New Brunswick are piloting AI systems to draft reports from body camera recordings. However, this growing reliance on AI has raised concerns among ethicists and civil liberties organizations. They caution that AI applications, particularly in predictive policing and facial recognition, may inadvertently perpetuate biases and infringe upon individual privacy rights. The absence of comprehensive legislative oversight further complicates the ethical deployment of AI in policing, highlighting the need for transparent policies and accountability measures.
Alberta takes steps to further privacy laws despite stagnation of federal reform efforts
Krista Schofer | Robyn MacDonald | Gowling WLG
Alberta is moving to strengthen its privacy laws, aligning more closely with global standards such as the EU’s GDPR and Canada’s proposed federal Consumer Privacy Protection Act (CPPA). The province is exploring updates to its Personal Information Protection Act (PIPA) to address modern data challenges, including the use of AI, biometrics, and cross-border data flows. Proposed changes include enhanced individual rights, stricter consent requirements, and mandatory privacy management programs for organizations. Alberta also aims to bolster enforcement powers, potentially introducing higher penalties for non-compliance. These efforts reflect a broader trend in Canada toward modernizing privacy frameworks to protect citizens in an increasingly digital economy.
The U.S. Wants Canada to Become A Police State
Ronald Deibert | Maclean's
Citizen Lab director Ron Deibert issues a stark warning about the growing threat of a surveillance-driven police state emerging in North America, particularly under a second Trump presidency. He details how Canada, in an effort to preemptively align with Trump's aggressive border policies, has rapidly expanded its surveillance infrastructure, deploying drones, surveillance towers, helicopters, and advanced detection tools. Deibert argues that such technologies disproportionately harm vulnerable groups and often function with minimal oversight, echoing civil liberties concerns around bias, algorithmic discrimination, and privacy violations. The essay connects the North American surveillance expansion to global authoritarian trends and unregulated spyware markets, noting that marginalized individuals, including migrants, refugees, and dissidents, are at greatest risk. Deibert urges Canada not to emulate the U.S.'s trajectory but to bolster independent oversight, reinforce privacy rights, and support those targeted by digital authoritarianism.
Canada’s Surveillance Paradox: How Privacy Laws Fuel Racialized Monitoring
Tisya Raina | NATO Association
An article from the NATO Association of Canada highlights a paradox in Canada's approach to privacy laws, which, while designed to protect citizens, are reportedly being used to justify extensive surveillance measures that disproportionately affect racialized communities. The piece discusses how agencies like the Canada Border Services Agency (CBSA) utilize legal exemptions to collect biometric data, monitor social media, and track travel histories under the pretext of national security. This expansion of surveillance is said to lead to increased scrutiny and potential discrimination against non-Western travelers, refugees, and immigrants. The article calls for a reassessment of privacy laws to ensure they serve their intended purpose of safeguarding civil liberties without enabling systemic discrimination.
Is it safe to travel with your phone right now?
Gaby Del Valle | The Verge
U.S. Customs and Border Protection (CBP) agents are legally permitted to search travelers' electronic devices at airports and border crossings without a warrant, under the "border search exception." These searches can include both basic inspections of content and more invasive "advanced" searches involving external data extraction tools. While some courts, such as in New York, have recently pushed back by requiring warrants for phone searches, the rules vary by jurisdiction and remain largely in CBP’s favor nationwide. Legal experts and civil liberties groups warn that this authority poses serious privacy concerns, particularly as travelers may be compelled to unlock devices. Travelers are advised to take protective steps like limiting sensitive data, using encryption, and disabling biometric access.
Exclusive: Trump administration is pointing spy satellites at US border
Marisa Taylor | Jeffrey Dastin | Reuters
In March 2025, the Trump administration directed the National Geospatial-Intelligence Agency (NGA) and the National Reconnaissance Office (NRO) to utilize spy satellites for surveillance of the U.S.-Mexico border. This initiative aims to intensify efforts against illegal immigration and drug cartels, reflecting a broader strategy to militarize border enforcement. The deployment of such military-grade surveillance tools raises concerns about potential overreach and the inadvertent monitoring of U.S. citizens, prompting debates on legal and ethical safeguards. Additionally, defense contractors are in discussions to support this "digital wall" concept, integrating advanced technologies like artificial intelligence and sensor systems to enhance border security.
Blurred Lines: Civilian Oversight at Canada’s Digital Borders
Jamie Duncan | CIGI
The Canada Border Services Agency (CBSA) has historically operated without an independent civilian oversight body, distinguishing it from other major federal law enforcement agencies in Canada. The passage of the Public Complaints and Review Commission Act seeks to address this gap by establishing the Public Complaints and Review Commission (PCRC), tasked with investigating complaints against both CBSA and Royal Canadian Mounted Police (RCMP) personnel, as well as conducting program-level reviews of their operations. Given the CBSA's dual mandate encompassing law enforcement and national security, the PCRC faces unique challenges in ensuring effective oversight, particularly in the context of increasingly digitized border security measures. To enhance accountability, it's essential for the PCRC to develop investigative methodologies tailored to the border's complex environment and to establish a formal framework for collaboration with the National Security and Intelligence Review Agency (NSIRA).
Twitter (X) Hit by 2.8 Billion Profile Data Leak in Alleged Insider Job
Waqas | HackRead
In March 2025, reports emerged of a significant data breach involving approximately 2.8 billion user profiles from X (formerly Twitter), allegedly resulting from an insider's actions during company layoffs. The leaked data, totaling around 400GB, encompasses user IDs, screen names, profile descriptions, location settings, and other metadata, but notably excludes email addresses. Analysts suggest that the dataset may include inactive or bot accounts, given that X's active user base is significantly smaller. As of now, X has not issued an official response to these allegations.
Canada’s privacy regulator releases results of age assurance consultation
Joel R. McConvey | Biometric Update
In March 2025, Canada's Office of the Privacy Commissioner (OPC) released findings from its consultation on age assurance technologies, which are used to verify or estimate individuals' ages online. The consultation highlighted the diverse forms and applications of age assurance, emphasizing the need for clear definitions and policies. Key concerns included potential harms, such as privacy infringements and restricted access to online resources, particularly for marginalized groups. Stakeholders debated who should bear responsibility for implementing these systems and discussed the merits and risks of methods like age estimation versus age verification. The OPC intends to use this feedback to develop guidance that balances protecting young users online with upholding privacy rights.
Privacy Commissioner launches breach risk self-assessment tool for organizations
Office of the Privacy Commissioner of Canada
Privacy Commissioner of Canada Philippe Dufresne has unveiled a new web-based Privacy Breach Risk Self-Assessment Tool to assist businesses and federal institutions in determining whether a privacy breach poses a “real risk of significant harm” to individuals. The tool walks users through a series of questions to evaluate both the sensitivity of the compromised data and the likelihood of its misuse. Based on the outcome, organizations can decide on necessary next steps, including reporting to the Office of the Privacy Commissioner and notifying affected individuals, as required under PIPEDA. Significant harm may include identity theft, financial loss, reputational damage, and emotional distress. Dufresne emphasized that the tool responds to the growing scale and complexity of breaches, offering a structured and accessible way to manage data incidents responsibly.
Apple hit with $162 million French antitrust fine over privacy tool
Florence Loeve | Foo Yun Chee | Reuters
In March 2025, the French Competition Authority fined Apple €150 million ($162.4 million) for abusing its dominant position in mobile app advertising through its App Tracking Transparency (ATT) tool. Introduced in 2021, ATT allows iPhone and iPad users to control which apps can track their activity, a move that has been criticized by digital advertisers and mobile gaming companies for making targeted advertising more challenging and costly. The French regulator's decision marks the first antitrust penalty against Apple concerning ATT, highlighting concerns that the tool disproportionately affects smaller publishers reliant on third-party data. Despite the fine, Apple is not required to modify ATT but must publish the ruling on its website for a week. Apple expressed disappointment with the decision, emphasizing that no changes to ATT were mandated.
WhatsApp earns backing in EU privacy fine dispute
James Francis Whitehead | Courthouse News
In March 2025, Advocate General Tamara Ćapeta of the European Court of Justice issued a non-binding opinion supporting WhatsApp's right to challenge a €225 million fine imposed by Ireland's Data Protection Commission in 2021. The fine was increased following the European Data Protection Board's (EDPB) intervention, which WhatsApp contends exceeded the EDPB's authority. Ćapeta recommended that the case be referred back to the General Court for a decision on its merits. While the European Court of Justice often aligns with such opinions, a final ruling is anticipated in the coming months.
Madison Square Garden Bans Fan After Surveillance System IDs Him as a Critic of Its CEO
AJ Dellinger | Gizmodo
In March 2025, Frank Miller was denied entry to Radio City Music Hall, part of Madison Square Garden (MSG) properties, after being identified by the venue's surveillance system. Miller, a graphic designer, had previously created a "Ban Dolan" T-shirt in response to the 2017 incident involving Knicks owner James Dolan and former player Charles Oakley. Although Miller hadn't attended an MSG event in years, his association with the shirt led to his inclusion on a ban list. This incident underscores concerns about the use of facial recognition technology by private entities to monitor and potentially exclude individuals based on past criticisms or affiliations.
Ontario's information and privacy commissioner releases workplace surveillance report
Bernise Carolino | Lexpert
In March 2025, the Information and Privacy Commissioner of Ontario (IPC) released a comprehensive report examining modern workplace surveillance technologies and their implications. The report outlines how contemporary tools enable continuous monitoring of employees' locations, activities, biometrics, and even emotions, extending beyond traditional workplace boundaries. While such surveillance can enhance safety, training, and policy compliance, the report raises concerns about potential infringements on privacy, autonomy, and human rights. It emphasizes that the fusion of scientific management principles with advanced digital surveillance could blur work-life boundaries and intrude into employees' private lives. The IPC underscores the necessity for organizations to balance operational benefits with the protection of employee rights in the evolving digital workplace.
Nearly half of organisations entering employee info into GenAI: survey
Dexter Tilo | Human Resources Director
A recent survey by Cisco's 2025 Data Privacy Benchmark Study reveals that 46% of security and privacy professionals acknowledge inputting employee names or information into generative AI (GenAI) applications. This practice persists despite significant privacy concerns, with 64% of respondents fearing that data entered into GenAI tools could be exposed to the public or competitors. The increasing use of GenAI underscores the need for robust AI governance frameworks to manage risks and protect stakeholder interests. Experts advocate for balancing the opportunities presented by AI with potential privacy risks to ensure responsible deployment.
These Canadian founders are never going back to the office
Mihika Agarwal | The Logic
In April 2025, The Logic reported that several Canadian startup founders have opted to permanently close their physical offices, embracing remote work as the new standard. Paul Vallée, founder of Ottawa-based remote work platform Tehama, described returning to the company's office post-pandemic as akin to visiting "your grandmother’s house after she passed away," noting the space felt underutilized and obsolete. This shift reflects a broader trend among Canadian tech companies leveraging remote work to access a wider talent pool and reduce operational costs. However, this transition has also contributed to increased office vacancies, prompting major pension funds to reconsider their investments in commercial real estate.