Week of 2025-01-28

EDPB's 2024 coordinated enforcement offers window into right of access

Lexie White | IAPP

The European Data Protection Board (EDPB) has announced that its 2024 Coordinated Enforcement Framework (CEF) will focus on individuals' right of access under the General Data Protection Regulation (GDPR). The initiative aims to assess how organizations handle access requests and whether they make it easy for individuals to exercise this right effectively. National data protection authorities across the EU will conduct joint investigations and share insights to ensure consistency in enforcement. The findings are expected to highlight compliance gaps, best practices, and potential areas for regulatory improvement. This enforcement push reflects the growing emphasis on user empowerment and transparency in data processing.

All porn sites must 'robustly' verify UK user ages by July

Tom Gerken | BBC

The UK’s media regulator, Ofcom, has announced that all websites hosting pornographic content, including social media platforms, must implement "robust" age verification measures by July under the Online Safety Act. These measures include requiring photo ID, credit card checks, or facial age estimation to prevent children from accessing explicit material, as research shows many are exposed as early as age nine. While supporters argue the rules create a safer internet, critics, including Pornhub’s parent company Aylo, warn that such verification methods will push users toward riskier, unregulated platforms. Ofcom has ruled out self-declared age verification, listing acceptable methods such as open banking and mobile network checks. Privacy advocates caution that the measures could introduce security risks and digital exclusion while failing to offer a foolproof solution for protecting children online.

Americans Use AI in Everyday Products Without Realizing It

Ellyn Maese | Gallup

A Gallup poll reveals that many Americans unknowingly use AI-powered products in their daily lives. Despite AI being integrated into smart assistants, navigation apps, and streaming services, a significant portion of users do not recognize that these tools rely on artificial intelligence. The survey highlights a gap in public awareness, with many respondents underestimating AI’s presence in common technologies. Experts suggest this lack of awareness could impact public perception and policymaking around AI regulation. As AI continues to evolve, increasing consumer understanding may become essential for addressing ethical concerns and fostering informed discussions on AI’s role in society.

Trump rescinds Biden's AI safety executive order

Caitlin Andrews | IAPP

In January 2025, President Donald Trump rescinded former President Joe Biden's Executive Order 14110, which focused on establishing safety measures and ethical guidelines for artificial intelligence (AI) development and use. Trump's new directive, titled "Removing Barriers to American Leadership in Artificial Intelligence," aims to promote AI innovation by eliminating perceived regulatory obstacles and emphasizes developing AI free from ideological bias. The order mandates a review of existing policies to identify and revoke those conflicting with its objectives, and it requires the creation of an action plan within 180 days to maintain U.S. leadership in AI, focusing on human flourishing, economic competitiveness, and national security. 

Police use of experimental biometrics needs oversight, guidance

Masha Borak | Biometric Update

A new report highlights the urgent need for oversight and clear guidance on law enforcement's use of experimental biometric technologies, such as facial recognition and voice identification. The study warns that without proper regulations, these tools could lead to biased outcomes, privacy violations, and wrongful accusations. It calls for independent audits, transparency requirements, and public accountability to ensure biometric technologies are used ethically and effectively. Law enforcement agencies are urged to adopt strict policies and limit their reliance on unproven biometric systems until comprehensive guidelines are in place. The report also emphasizes the importance of balancing public safety with individual rights to prevent misuse of these powerful tools.

Australian states back national plan to ban children younger than 16 from social media 

Rod McGuirk | The Independent

Australian state and territory leaders have unanimously backed Prime Minister Anthony Albanese's plan to ban children younger than 16 from social media, clearing the way for federal legislation. Under the proposal, the onus for enforcing the age limit would fall on platforms such as Facebook, Instagram, and TikTok rather than on children or their parents. Tasmania would have preferred a cutoff of 14 but agreed to 16 for the sake of national consistency. The government argues the measure is needed to protect children's mental health and safety online, while some experts warn a blanket ban could isolate vulnerable young people or push them toward less regulated corners of the internet. Details of how platforms would verify users' ages remain to be worked out.

UK to introduce digital driving licences to ‘transform public services’

Geneva Abdul | The Guardian

The UK government is set to introduce digital driver's licenses as part of a broader effort to modernize public services. The move aims to streamline access to government services by allowing citizens to use a secure digital ID for various transactions. The digital licenses will be available via a mobile app and will serve as an alternative to physical cards. Officials argue this shift will improve efficiency and reduce fraud, though privacy advocates have raised concerns about data security and potential misuse. The rollout is expected to begin later this year, with further expansion planned for other forms of digital identification.

Cyberattack affecting school boards spotlights the need for better EdTech regulation in Ontario and beyond

Michael J. S. Beauvais | Yan Shvartzshnaider | The Conversation

A recent cyberattack on school boards in Canada has highlighted the urgent need for stronger regulation of educational technology (EdTech). The breach, which affected student and staff data, underscores vulnerabilities in school systems that rely on third-party software. Experts argue that Ontario and other regions must implement stricter oversight and security measures to protect sensitive information. Current regulations do not adequately address the rapid adoption of EdTech, leaving schools exposed to potential data breaches. Moving forward, policymakers must prioritize cybersecurity in education to ensure student privacy and system resilience.

Big banks keep using screen scraping—despite the security risks

Claire Brownell | The Logic

Despite acknowledged security risks, major Canadian banks continue to employ screen scraping—a method where third-party applications collect customer data by mimicking user logins—to facilitate data sharing with financial technology (fintech) firms. This practice is widespread in Canada, with an estimated nine million Canadians having used services that rely on screen scraping. Recognizing the associated vulnerabilities, the Canadian government is moving towards eliminating screen scraping. Finance Canada has proposed phasing out this data-harvesting method in forthcoming open-banking legislation, aiming to replace it with more secure and efficient data-sharing frameworks. The persistence of screen scraping highlights the current gaps in secure data-sharing solutions between traditional banks and fintech companies. As Canada advances towards implementing open banking by early 2026, the focus is on developing standardized and secure methods for data exchange to enhance both innovation and consumer protection in the financial sector.
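The mechanics behind the practice are worth spelling out: because no sanctioned data-sharing interface exists, the aggregator holds the customer's actual banking credentials, logs in as the customer, and parses account details out of the bank's web pages. The sketch below illustrates only the parsing half of that pattern; the markup, class names, and account figures are all hypothetical, and no real banking site or API is involved.

```python
from html.parser import HTMLParser

# Hypothetical HTML a screen scraper might receive after replaying the
# customer's login against a bank's website. Real pages are far messier.
SAMPLE_STATEMENT_PAGE = """
<html><body>
  <div class="account"><span class="label">Chequing</span>
    <span class="balance">1,204.56</span></div>
  <div class="account"><span class="label">Savings</span>
    <span class="balance">8,900.00</span></div>
</body></html>
"""

class BalanceScraper(HTMLParser):
    """Collects the text of every <span class="balance"> element."""

    def __init__(self):
        super().__init__()
        self._in_balance = False
        self.balances = []

    def handle_starttag(self, tag, attrs):
        # Flag that the next text node belongs to a balance span.
        if tag == "span" and ("class", "balance") in attrs:
            self._in_balance = True

    def handle_data(self, data):
        if self._in_balance:
            # Strip thousands separators before converting to a number.
            self.balances.append(float(data.strip().replace(",", "")))
            self._in_balance = False

scraper = BalanceScraper()
scraper.feed(SAMPLE_STATEMENT_PAGE)
print(scraper.balances)  # [1204.56, 8900.0]
```

The fragility is visible in the code itself: if the bank renames the `balance` class or restructures the page, the scraper silently breaks. That brittleness, combined with the need to store real login credentials with a third party, is why the article's open-banking frameworks favor tokenized, purpose-built data-sharing interfaces instead.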

5,000 cancer patients in Quebec have lost access to their medical data

Martin Patriquin | The Logic

The McGill University Health Centre (MUHC) has shut down its OPAL app, a patient portal designed to provide access to medical records and test results, after concerns were raised about its security and data privacy compliance. The app, developed by researchers and patient advocates, aimed to improve patient engagement by allowing direct access to health information, particularly for oncology patients. However, the Quebec health ministry and the province’s data protection authority found that OPAL did not meet provincial data security standards, leading to its abrupt suspension. Critics argue that the shutdown represents a setback for digital health innovation in Quebec, highlighting ongoing challenges in integrating patient-centered technologies within the province’s centralized health system. The situation underscores broader concerns about balancing privacy regulations with the need for patient access to digital health tools.

HHS-OCR Announces Proposed Modifications to the HIPAA Security Rule

Tracy Shapiro | Haley Bavasi | Demian Ahn | Colin Black | Wilson Sonsini

The U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) has proposed modifications to the HIPAA Security Rule to enhance cybersecurity and patient data protection. The proposed changes strengthen risk assessment requirements, require covered entities and business associates to implement multi-factor authentication (MFA), and mandate more frequent security evaluations. The rule also emphasizes incident response planning and the necessity of maintaining audit logs for at least six years. HHS aims to align HIPAA security standards with evolving cyber threats, particularly in the wake of increased ransomware attacks on healthcare organizations. Public comments on the proposal will be accepted before a final rule is issued.

Trump administration looks to expel Democrat members from PCLOB

Alex LaCasse | IAPP

The Trump administration is reportedly seeking to remove Democratic members from the Privacy and Civil Liberties Oversight Board (PCLOB), a U.S. government agency that oversees national security policies' impact on privacy and civil liberties. The move is seen as an effort to reshape the board’s composition and limit dissenting opinions on surveillance and data privacy matters. Critics argue that this could undermine bipartisan oversight and weaken privacy protections, particularly as government surveillance programs face renewed scrutiny. PCLOB, which has been central to discussions on FISA surveillance and AI-related privacy concerns, may see significant shifts in its role if the administration succeeds in its efforts. The decision could have broader implications for privacy law enforcement and national security policy.

Canada’s digital services tax is right in Trump’s crosshairs

David Reevely | The Logic

Canada's implementation of a Digital Services Tax (DST) has escalated trade tensions with the United States. The DST, enacted on June 28, 2024, imposes a 3% levy on revenue from digital services provided to Canadian users by firms with global annual income exceeding $1.1 billion and Canadian revenues over $20 million. This measure is projected to generate approximately $5.9 billion over five years. The U.S. government perceives the DST as disproportionately affecting American technology companies and has responded with threats of retaliatory tariffs. President Donald Trump has proposed imposing a 25% tariff on imports from Canada and Mexico, citing various grievances, including trade imbalances and policy disputes.

Predicting the “digital superpowers” we could have by 2030

Louis Rosenberg | Big Think

The article explores the potential digital superpowers humans could develop by 2030, driven by advancements in artificial intelligence, brain-computer interfaces, and quantum computing. It envisions a world where people may have enhanced cognitive abilities, including instant knowledge downloads and AI-powered decision-making assistants that function like a second brain. Augmented reality (AR) and virtual reality (VR) could allow seamless integration of digital and physical experiences, making real-world navigation, learning, and work highly efficient. Brain-computer interfaces (BCIs) may unlock telepathic communication and allow direct interaction with devices via thoughts alone. However, these advancements also raise ethical concerns, particularly regarding privacy, surveillance, and dependence on AI-driven enhancements.

Will you have to report paying a ransom? New UK rules proposed

Natalie Donovan | The Lens

The UK government is considering new mandatory reporting requirements for companies that pay ransoms to cybercriminals, as part of broader efforts to disrupt ransomware payments. The proposal, which is still under consultation, aims to increase transparency and deter payments, as paying ransoms often fuels further cybercrime. If implemented, businesses would have to disclose ransom payments to authorities, potentially facing penalties for non-compliance. However, concerns have been raised that such rules could complicate crisis management for organizations, especially if victims believe that disclosing payments might lead to legal or reputational risks. The government hopes that by tracking payments, it can develop better policies to combat cyber extortion while discouraging companies from paying attackers.

Looking ahead: the Canadian privacy and AI landscape without Bill C-27

Nic Wall | Molly Reynolds | Rosalie Jetté | Julie Himo | Lauren Nickerson | Mavra Choudhry | Torys

The failure of Bill C-27 to pass before the prorogation of Parliament has left Canada's privacy and AI regulation in limbo, raising questions about the future of data protection, AI governance, and digital rights. The bill, which aimed to modernize Canada's privacy laws and introduce AI-specific regulations, now faces an uncertain fate, particularly if a new government takes a different regulatory approach. Experts predict that, in the absence of C-27, provincial privacy laws may take on greater importance, with provinces like Québec already leading on stricter regulations. Businesses must navigate fragmented compliance requirements, while international trade agreements—such as Canada's adequacy decision with the EU—could be impacted by the lack of federal data protection reform. The regulatory vacuum also leaves AI governance unaddressed, potentially delaying clear oversight of AI systems and accountability measures in Canada.

CNIL releases 2025-28 strategic plan

CNIL

The French data protection authority (CNIL) has published its strategic plan for 2025-2028, focusing on AI governance, cybersecurity, and digital rights for minors. The plan aims to strengthen AI oversight, ensuring compliance with privacy regulations like the GDPR while fostering responsible AI innovation. CNIL is also prioritizing cybersecurity, particularly for critical infrastructures and personal data protection, in response to rising cyber threats. Another major initiative is enhancing protections for minors, addressing risks related to social media, online tracking, and digital well-being. The strategy also includes expanding regulatory enforcement and public guidance, ensuring that individuals and businesses better understand and comply with evolving data protection laws.

A New Jam-Packed Biden Executive Order Tackles Cybersecurity, AI, and More

Eric Geller | Wired

U.S. President Joe Biden’s latest executive order focuses on enhancing cybersecurity and AI governance to address national security risks and data privacy concerns. The order expands AI safety measures, requiring developers of powerful AI models to report safety tests and ensure compliance with federal security standards. It also strengthens cybersecurity rules for critical infrastructure, mandating improved incident reporting and threat detection. The initiative seeks to bolster data protection by enforcing stricter government procurement rules for software and cloud services. Additionally, the order promotes international cooperation on AI safety and cybersecurity standards, aligning the U.S. with global regulatory efforts.

Global cyber attacks jumped 44% last year

Emma Woollacott | IT Pro

A new report reveals that global cyber attacks surged by 44% in 2024, driven by ransomware campaigns, supply chain breaches, and AI-powered cyber threats. Attackers increasingly targeted critical infrastructure, financial institutions, and cloud services, exploiting zero-day vulnerabilities and weak cybersecurity defenses. The rise in state-sponsored attacks and hacktivist activities also contributed to the escalation, with geopolitical tensions fueling cyber espionage. Experts warn that cybercriminals are becoming more sophisticated, leveraging automation, deepfake technology, and AI-generated phishing scams. Organizations are urged to prioritize proactive security strategies, including enhanced threat detection, employee training, and stricter regulatory compliance to mitigate risks.

Trump administration fires members of cybersecurity review board in ‘horribly shortsighted’ decision

Lorenzo Franceschi-Bicchierai | Tech Crunch

The Trump administration has fired multiple members of the Cyber Safety Review Board (CSRB), a decision experts are calling “horribly shortsighted” given escalating cyber threats. The CSRB, established in 2021, was modeled after the National Transportation Safety Board and tasked with investigating major cyber incidents, including ransomware attacks, supply chain breaches, and nation-state hacking campaigns. Critics argue that removing key cybersecurity experts undermines national security at a time when cyber threats are at an all-time high, with a 44% increase in global attacks last year. Some believe the move signals a shift away from government-led cybersecurity oversight, possibly favoring a more private-sector-driven approach. The administration has not yet outlined plans for replacing the dismissed members or restructuring the board’s oversight role.

UN Security Council members meet on spyware for first time

Suzanne Smalley | The Record

The UN Security Council held a meeting on the growing threat posed by commercial spyware, highlighting concerns about state and non-state actors abusing these tools for surveillance and cyber espionage. Officials discussed how spyware, such as Pegasus and Predator, has been used to target journalists, dissidents, and government officials, often without accountability. The meeting underscored the need for stronger international regulations to prevent misuse while balancing legitimate security needs. Some members pushed for a global framework to curb the unchecked proliferation of spyware, emphasizing the risk to democratic institutions, privacy, and human rights. However, divisions remain on how to regulate the industry without stifling national security capabilities or legitimate cybersecurity tools.

Trump administration directs all federal diversity, equity and inclusion employees be put on leave

CBC News

The Trump administration is moving to dismantle diversity, equity, and inclusion (DEI) programs within the U.S. government, aiming to eliminate staff positions and funding associated with these initiatives. The administration argues that DEI programs promote divisiveness and ideological bias, a stance supported by conservative lawmakers and think tanks. Critics, however, warn that rolling back DEI efforts could lead to less diverse hiring, increased workplace discrimination, and setbacks in equity-driven policies. Some government agencies are expected to resist the changes, but Trump officials are reportedly exploring ways to enforce compliance, including budget cuts and executive orders. The move is part of a broader conservative push to limit DEI policies across education, corporations, and public institutions.

Privacy considerations upon the death of an employee

Alex Ferraté | IAPP

The death of an employee raises complex privacy issues related to their personal data, workplace communications, and access to company resources. Employers must carefully navigate legal obligations, ensuring compliance with privacy laws that govern the handling of personal information posthumously. Key considerations include restricting access to the employee’s email and files, determining who has the legal right to retrieve information, and balancing organizational needs with respect for the deceased’s privacy. Data retention policies should be reviewed to clarify when and how personal data can be deleted or transferred. Employers may also need to consult legal and HR professionals to ensure sensitive data is managed appropriately while respecting the privacy rights of the deceased and their family.
