Week of 2025-02-07

What is DeepSeek, the Chinese AI startup that shook the tech world?

CNN | CTV News

DeepSeek, a Chinese AI startup, has quickly gained global attention after unveiling powerful large language models (LLMs) capable of competing with those of leading Western AI firms like OpenAI and Google DeepMind. The company recently introduced DeepSeek-R1, a reasoning-focused model that demonstrates strong reasoning and coding abilities while reportedly costing far less to train than comparable Western systems. This breakthrough comes amid China's growing investment in AI and efforts to reduce reliance on U.S. technology following sanctions restricting access to advanced semiconductor chips. DeepSeek's success has fueled discussions about China's AI ambitions and its potential to challenge U.S. dominance in artificial intelligence development. However, challenges remain, including computational resource constraints and geopolitical tensions influencing AI research and collaboration.

China's DeepSeek AI is watching what you type

Kevin Collier | NBC News

Chinese AI startup DeepSeek has emerged as a major player in artificial intelligence, developing models that rival ChatGPT and other leading Western AI systems. However, data privacy and security concerns have been raised, particularly regarding how DeepSeek gathers and processes information. Given China's strict data laws and history of state surveillance, critics worry that user data could be subject to government access or misuse. Unlike OpenAI, which adheres to Western data protection frameworks, DeepSeek operates in a regulatory environment that lacks strong independent oversight. As DeepSeek continues to expand, questions about transparency, security, and AI governance remain, prompting calls for greater scrutiny of global AI competition and cross-border data flows.

Competition Bureau issues report summarizing feedback on artificial intelligence and competition

Government of Canada

The Competition Bureau of Canada has released a report summarizing the feedback it received on its March 2024 discussion paper on artificial intelligence and competition. Stakeholders, including businesses, academics, and legal practitioners, weighed in on how markets for AI inputs such as compute, data, and talent could become concentrated, the potential for algorithmic collusion and other anti-competitive conduct, and the risk of AI-enabled deceptive marketing practices. The report consolidates these perspectives rather than announcing new enforcement policy, and it is expected to inform the Bureau's approach to AI in merger review, enforcement, and marketplace deception cases. The Bureau signalled that it will continue to monitor AI's effects on Canadian markets and engage with stakeholders as the technology evolves.

Trump announces a $500 billion AI infrastructure investment in the US

Clare Duffy | CNN

President Donald Trump has announced Stargate, a joint venture between OpenAI, Oracle, and SoftBank that pledges up to US$500 billion over four years to build AI infrastructure in the United States, beginning with an initial investment of roughly US$100 billion. The project centers on the data centers, computing capacity, and energy needed to train and run advanced AI models, with backers framing it as essential to keeping the U.S. competitive with China. OpenAI is expected to be the primary user of the new capacity, while Oracle and SoftBank provide infrastructure and financing. The announcement signals the administration's intent to accelerate AI development through large-scale private investment and could shape regulatory approaches for emerging AI technologies. However, concerns about favoritism and corporate influence in shaping AI policy remain prominent.

Pope warns Davos summit that AI could worsen ‘crisis of truth’

Graeme Wearden | The Guardian

In a written message to the World Economic Forum in Davos, Pope Francis warned that AI could deepen the global "crisis of truth" if not regulated responsibly. He emphasized that AI's ability to generate misinformation and manipulate reality poses serious ethical risks to democracy and social cohesion. The Pope urged leaders to prioritize human dignity, fairness, and transparency in AI development and implementation. He also called for international cooperation to establish safeguards against AI-driven deception and exploitation. His message underscores growing concerns that unchecked AI advancements could accelerate societal fragmentation and misinformation.

International AI Safety Report 2025

UK Government

The UK government has published the International AI Safety Report 2025, an independent review of advanced AI risks chaired by AI researcher Yoshua Bengio and compiled with input from around 100 experts. The report emphasizes concerns over malicious uses of AI, including cyberattacks and misinformation, as well as risks from system malfunctions and loss of control, urging stronger international cooperation to establish safety standards. It highlights the need for transparency in AI development, ethical safeguards, and regulatory oversight to prevent harm. The report calls for collaboration among governments, researchers, and tech companies to ensure AI benefits society while minimizing risks. Its publication follows ongoing efforts to position the UK as a leader in AI governance and safety regulation.

Use of biometric data: Québec’s Privacy Commissioner keeps the bar high

Caroline Deschênes | Iona Bois-Drivet | Antoine Rancourt | Marc-Alexandre Hudon | Langlois

Quebec’s Commission d’accès à l’information (CAI) continues to enforce strict regulations on biometric data collection and use, emphasizing high privacy standards under Law 25. The CAI mandates that organizations must demonstrate necessity and proportionality when handling biometric information, particularly in authentication and identification processes. Recent decisions highlight the requirement for explicit consent, secure storage, and clear justification for collecting such data. Businesses must also conduct privacy impact assessments (PIAs) before implementing biometric technologies. These regulations align with global concerns over biometric security risks, data breaches, and privacy rights, reinforcing Quebec’s leadership in data protection and AI governance.

New Federal Children’s Privacy Requirements Are Not Child’s Play: FTC Amends COPPA Rule, Imposing New Obligations on Child-Directed Services

Libby Weingarten | Tracy Shapiro | Rebecca Weitzel Garcia | Wilson Sonsini

The Federal Trade Commission (FTC) has finalized amendments to the rule implementing the Children's Online Privacy Protection Act (COPPA), introducing stricter requirements for child-directed online services and apps. The new rules impose stronger data minimization obligations, limiting the collection and retention of children's personal information. Behavioral advertising restrictions have been reinforced, preventing companies from tracking children's online activity without verifiable parental consent. Additionally, the amendments require enhanced security measures and transparency, ensuring parents have greater control over their children's data. These changes reflect growing concerns over child privacy, AI-driven content personalization, and online safety, signaling tighter regulatory scrutiny of digital platforms catering to minors.

Companies Breathe Easy: FTC Declines to Classify Children's Avatars as Personal Information—For Now

Barry M. Benjamin | Meghan K. Farmer | Tatum Andres | Kilpatrick

The Federal Trade Commission (FTC) has decided not to classify children's avatars as personal information under the Children's Online Privacy Protection Act (COPPA) unless they contain identifiable data. The decision means animated representations of children, such as gaming avatars or AI-generated profiles, will not automatically require parental consent for collection and use. However, if an avatar reveals a child's likeness, voice, or other identifiable traits, it may still fall under COPPA's protections. Privacy advocates argue that this leaves room for potential exploitation, while industry players welcome it as a balanced approach to innovation and compliance. The decision highlights ongoing debates over how emerging technologies, like AI-driven personalization, interact with children's privacy laws.

Federal government using AI to tackle Phoenix backlog as it tests replacement system

The Canadian Press | CTV News

The Canadian federal government is leveraging artificial intelligence (AI) to help process the massive backlog of payroll issues caused by the problematic Phoenix pay system while testing a new replacement system. Public Services and Procurement Canada (PSPC) confirmed that AI is being used to assist with case prioritization and categorization, aiming to speed up resolutions for affected employees. The Phoenix system, which has caused payroll errors for thousands of federal workers since its rollout in 2016, is being phased out in favor of a new HR and pay system currently in development. Officials emphasize that AI is not making final payroll decisions but is being tested to improve efficiency. While AI offers potential benefits, unions and privacy advocates are closely monitoring its implementation to ensure accuracy and fairness.

Canada’s Got Tech Talent: Examining tech jobs in Canada’s federal government

Angus Lockhart | André Côté | DAIS

A new DAIS report examines the state of tech jobs in Canada’s federal government, highlighting recruitment challenges, workforce gaps, and hiring inefficiencies. The report finds that while the government employs thousands of tech professionals, it struggles to compete with private-sector salaries and modernize hiring processes to attract top talent. Outdated job classifications and lengthy hiring timelines have hindered efforts to fill critical technology roles, particularly in cybersecurity, AI, and data management. The report calls for streamlined hiring, better pay structures, and improved training and career development programs to ensure the government can keep up with rapid digital transformation. Experts warn that without these changes, Canada risks falling behind in public-sector innovation and digital service delivery.

Man convicted of drunk-driving a drone in Sweden’s first case of its kind

Miranda Bryant | The Guardian

A Swedish man has been convicted of drunk-driving a drone, marking the country's first case of its kind. The 50-year-old was caught operating the drone while intoxicated, with authorities determining that his blood alcohol level impaired his ability to fly the device safely. Under Swedish law, drones are treated as aircraft, so flying one while intoxicated is prosecuted under the same aviation rules that apply to drunk piloting. The case highlights increasing legal scrutiny of drone usage, particularly regarding public safety and responsible operation. Experts suggest this conviction could set a precedent for stricter drone enforcement in Sweden and beyond.

Canada's privacy commissioner in talks with PowerSchool over data breach

Josh Recamara | Insurance Business Magazine

Canada’s Privacy Commissioner is in discussions with PowerSchool following a data breach that impacted multiple school boards across the country. The breach, which exposed student and staff information, has raised concerns about data security in the education sector. PowerSchool, a widely used school management software, has acknowledged the incident and claims that the compromised data has been deleted by the unauthorized party. However, privacy experts are urging stronger cybersecurity measures to prevent future breaches. The Office of the Privacy Commissioner is monitoring the situation closely and may launch a formal investigation.

New York Legislature Passes Health Information Privacy Bill

David Stauss | Ashton Harris | Husch Blackwell

The New York Legislature has approved a new health information privacy bill, aiming to enhance protections for patient data amid growing concerns over data security in the healthcare sector. The bill introduces stricter regulations on how health data is collected, stored, and shared, particularly by digital health platforms and third-party service providers. It also strengthens patient consent requirements and expands enforcement powers for state regulators. Privacy advocates have welcomed the legislation, citing the increasing risks posed by cyber threats and data breaches in healthcare. If signed into law, the bill would align New York’s health data protections more closely with existing federal HIPAA regulations while adding state-specific safeguards.

'This app became my best friend': Mourning is human. New grief apps want to 'optimise' it for you

Lindsay Lee Wallace | BBC

New AI-driven apps are increasingly being used to process and analyze grief, offering users digital tools to navigate loss. These apps leverage machine learning to track emotional patterns, suggest coping mechanisms, and even generate chatbot conversations simulating interactions with lost loved ones. While some find comfort in these technologies, critics warn that reducing grief to data points risks oversimplifying complex emotions and raises privacy concerns regarding sensitive user data. Experts argue that while AI can provide support, it cannot replace human connection and traditional therapeutic methods. The growing use of these tools reflects a broader trend of digitalizing mental health care, prompting discussions about their long-term ethical implications.

The state of US reproductive privacy in 2025: Trends and operational considerations

Kate Black | IAPP

The legal landscape for reproductive privacy in the U.S. has become increasingly complex following the overturning of Roe v. Wade and the resulting wave of state-level restrictions. Organizations handling health-related data, including fertility tracking apps, telehealth providers, and data brokers, must navigate heightened compliance obligations due to state privacy laws and federal enforcement trends. Data minimization and enhanced security measures are now critical to protect sensitive reproductive health information from misuse or legal exposure. Businesses are advised to reassess data-sharing practices, particularly in states with strict abortion bans, where law enforcement may seek access to digital records. The report emphasizes the importance of privacy-by-design approaches, strong encryption, and clear user consent policies to mitigate legal and ethical risks in this evolving regulatory environment.

Sturgeon Lake Cree unhappy over proposed AI centre

Chris Stewart | APTN News

The Sturgeon Lake Cree Nation in Alberta is considering legal action against a proposed artificial intelligence (AI) data centre near its territory, citing concerns over land rights, environmental impact, and lack of consultation. The project, backed by government and corporate interests, would see large-scale AI computing infrastructure built in the region, but the Nation argues it was not properly consulted despite the potential effects on its sovereignty and traditional lands. Community leaders stress that Indigenous perspectives and consent must be central to any development on their territory. The case highlights broader tensions between Indigenous land rights and AI-driven economic projects, raising questions about ethical AI deployment, data sovereignty, and environmental protection. Legal experts suggest the Nation could challenge the project under constitutional and treaty rights protections.

Union of BC Indian Chiefs calls for ‘concrete’ changes to ATIP laws 

Jeremy Appel | Alberta Native News

The Union of BC Indian Chiefs (UBCIC) is demanding urgent reforms to Canada’s Access to Information and Privacy (ATIP) laws, citing ongoing issues with transparency and accountability in government dealings with Indigenous communities. The organization argues that Indigenous nations face significant barriers when requesting information from federal agencies, particularly regarding land rights, resource management, and historical records. UBCIC leaders stress that current ATIP laws are outdated, slow, and often result in excessive redactions or outright refusals, limiting Indigenous groups’ ability to exercise self-determination. They are calling for legislative amendments to ensure faster response times, greater access to historical records, and stronger enforcement mechanisms to hold government agencies accountable. The push for reform aligns with broader efforts to decolonize information governance and uphold Indigenous rights under Canada’s Truth and Reconciliation commitments.

As gridlock grinds Toronto to a halt, here's what the city could learn from Seattle's traffic cameras

Nicole Brockbank | Angelina King | CBC News

Toronto officials are considering expanding automated enforcement measures to tackle the city's growing traffic congestion, taking cues from Seattle’s successful strategies. Seattle has implemented automated traffic cameras to enforce rules such as blocking intersections, illegal turns, and bus lane violations, which has improved traffic flow and public transit efficiency. In Toronto, where gridlock has worsened, officials are evaluating automated enforcement for similar infractions, especially targeting drivers blocking intersections or misusing bike and bus lanes. Advocates argue that camera enforcement is a cost-effective and consistent way to improve compliance, but critics raise privacy concerns and fairness issues, particularly regarding who receives tickets and how fines are structured. As Toronto explores these options, Seattle’s model offers a roadmap, showing how technology can ease congestion without requiring more police resources.

Inquiry calls for federal watchdog to fight foreign meddling done on social media

Laura Osman | The Logic

The federal inquiry into foreign interference in Canadian elections has recommended creating a dedicated federal watchdog to combat foreign meddling conducted through social media. Commissioner Marie-Josée Hogue's final report examined how disinformation campaigns, platform policies, and digital tools have been weaponized by foreign actors to disrupt Canada's democratic process. It concluded that online misinformation has become a pressing national security concern, as recent elections have seen unverified claims, deepfakes, and coordinated influence operations. The report also assessed platform responses, including whether tech companies have adequately addressed harmful content. Its recommendations could lead to stricter regulations on digital platforms and new measures to safeguard Canadian democracy against foreign manipulation.

Spain's leader wants the EU to ‘make social media great again.’ Here's how

Joseph Wilson | Jamey Keaten | ABC News

Spanish Prime Minister Pedro Sánchez is urging the European Union to take stronger action in regulating social media platforms, arguing that disinformation, online abuse, and extremist content are eroding public trust and social cohesion. Speaking at a conference in Madrid, Sánchez emphasized that the EU must “make social media great again” by enforcing stricter content moderation policies, transparency measures, and accountability for tech companies. His government is particularly concerned about the spread of false information, cyberbullying, and the negative impact of algorithms on democracy. While Sánchez supports digital innovation and free expression, he insists that regulations should ensure a safer and more responsible digital space. The proposal aligns with the EU’s Digital Services Act, which already seeks to curb harmful content, but Sánchez suggests even bolder action may be needed to protect citizens and institutions.

UK ICO Publishes its 2025 Strategy for Online Tracking

Hunton

The UK Information Commissioner’s Office (ICO) has unveiled its 2025 strategy for regulating online tracking technologies, aiming to enhance privacy protections and enforce compliance with data protection laws. The plan prioritizes greater transparency in how user data is collected and processed, particularly in relation to cookies, behavioral advertising, and tracking technologies used by websites and digital platforms. The ICO intends to strengthen enforcement against companies failing to meet regulatory standards while also providing guidance for businesses on lawful data practices. A key focus will be on protecting children’s privacy, aligning with existing rules under the Children’s Code. Additionally, the ICO is exploring alternative models for online advertising, such as privacy-preserving technologies, to balance business innovation with consumer rights. The strategy reflects growing concerns over excessive data collection and tracking, reinforcing the UK's commitment to stronger digital privacy protections.

CJEU Finds Customers’ Title Is Not Necessary Data For The Purchase Of A Train Ticket

Kristof Van Quathem | Alix Bertrand | Covington

The Court of Justice of the European Union (CJEU) has ruled that requiring customers to provide their title (e.g., Mr., Ms., Dr.) when purchasing a train ticket is not necessary under EU data protection law. The case stemmed from a challenge under the General Data Protection Regulation (GDPR), which mandates that personal data collection must be limited to what is strictly necessary for a given purpose. The court found that a customer’s title does not impact the service provided and cannot be deemed essential for completing a ticket purchase. The ruling reinforces the principle of data minimization, requiring companies to justify the collection of any personal information beyond what is functionally required. This decision may influence other sectors that request similar non-essential data, pushing businesses to reassess their data collection practices to remain compliant with GDPR.

Politicization of intel oversight board could threaten key US-EU data transfer agreement

Suzanne Smalley | The Record

The growing political tensions surrounding the Privacy and Civil Liberties Oversight Board (PCLOB) in the U.S. may jeopardize the EU-U.S. Data Privacy Framework (DPF), a crucial agreement enabling transatlantic data flows. The Biden administration’s PCLOB appointees were recently removed by the Trump administration, raising concerns that the board could lose its independence and become less effective in overseeing surveillance reforms required by the EU. The European Commission approved the DPF in 2023 under the condition that U.S. intelligence agencies implement stronger privacy protections and provide EU citizens with recourse against data misuse. If the PCLOB’s credibility weakens, the EU may reassess or even invalidate the DPF, potentially leading to disruptions in transatlantic commerce and legal uncertainty for businesses relying on EU-U.S. data transfers. This situation echoes past cases, such as the invalidation of Privacy Shield in the Schrems II ruling, highlighting ongoing EU skepticism about U.S. privacy safeguards.

The shadow data market: Privacy risks lurking in forgotten information

Jennifer Dickey | IAPP

A "shadow data market" is emerging, where forgotten or unregulated data—such as old customer records, abandoned cloud storage, or leaked datasets—is collected, sold, and exploited without proper oversight. Unlike traditional data breaches, shadow data refers to information that organizations fail to track or delete, making it vulnerable to unauthorized access, resale, or misuse. Third-party data brokers and malicious actors can aggregate these records, potentially leading to identity theft, fraud, and privacy violations. Experts warn that poor data governance, inadequate deletion policies, and a lack of transparency allow this market to thrive. Organizations are urged to implement better data lifecycle management, conduct regular audits, and ensure secure disposal of outdated information to mitigate risks. The report underscores the need for stronger regulations to curb the unregulated exchange of sensitive personal data.

Trump uses mass firing to remove independent inspectors general at a series of agencies

The Associated Press | CTV News

U.S. President Donald Trump has carried out a mass firing of independent inspectors general (IGs) across multiple federal agencies, raising concerns about government oversight and accountability. These IGs serve as watchdogs, investigating waste, fraud, and abuse within federal agencies, and their sudden removal has sparked criticism from lawmakers and ethics experts. The move follows a pattern seen in Trump's first term, when he dismissed several IGs who were involved in high-profile investigations. Critics argue that these firings weaken institutional safeguards and allow political interference in agency oversight. Some Democrats and former officials have called for an inquiry into whether the removals were politically motivated, while Trump allies defend the decision as necessary for ensuring "loyalty" within government agencies.

To foster greater trust in artificial intelligence, we need better regulators

Clifton van der Linden | The Globe and Mail

A recent Globe and Mail commentary highlights the urgent need for stronger governance frameworks to foster public trust in artificial intelligence (AI). The article argues that many Canadians remain skeptical of AI, largely due to concerns about bias, transparency, and ethical oversight. It calls for clearer regulations, independent oversight bodies, and industry-wide accountability measures to ensure AI systems operate fairly and responsibly. The piece also stresses the importance of public engagement and education, as well as greater corporate responsibility in disclosing how AI models function and are trained. Without these safeguards, the article warns that AI adoption could face increasing public and regulatory resistance, ultimately stalling innovation and economic progress.

FBI’s warrantless ‘backdoor’ searches ruled unconstitutional

Emma Roth | The Verge

A U.S. court has ruled that the FBI's warrantless "backdoor" searches of Americans' communications violate the Fourth Amendment, marking a significant pushback against mass surveillance. The decision centers on the FBI's use of Section 702 of the Foreign Intelligence Surveillance Act (FISA), which allows intelligence agencies to collect communications of foreign targets but has also been used to search Americans' data without a warrant. The court deemed these searches unconstitutional, citing privacy violations and overreach by law enforcement. Privacy advocates hail the decision as a major victory, while the government argues that restricting such searches could hinder national security efforts. The ruling is expected to intensify debates over surveillance reform ahead of Section 702's next reauthorization deadline.

Governments call for spyware regulations in UN Security Council meeting

Lorenzo Franceschi-Bicchierai | TechCrunch

At a UN Security Council meeting, multiple governments called for stronger regulations on commercial spyware, highlighting its role in human rights abuses and global surveillance concerns. The discussion follows increasing reports of authoritarian regimes and even democratic governments using spyware to monitor journalists, activists, and political opponents. Nations like France and the United States pushed for greater oversight and international cooperation to prevent misuse, while some countries resisted broad restrictions, citing national security needs. Critics argue that without clear global regulations, spyware will continue to erode privacy and civil liberties. The debate underscores growing tension between security interests and digital rights, with calls for transparency and accountability in surveillance technology.

California’s proposed rule on AI used in employment decisions is a big deal

Robert Wennagel | Constangy

California has proposed new regulations to oversee the use of AI in employment decisions, aiming to ensure fairness and transparency. The rule would require employers and AI vendors to conduct bias audits before using AI-driven hiring tools and provide detailed disclosures about how these technologies impact job applicants. It also introduces record-keeping requirements to track AI-related hiring outcomes and prevent discrimination. Critics argue that compliance could be burdensome for businesses, while supporters say it is necessary to mitigate bias and promote accountability. If implemented, this could set a national precedent, influencing how AI is used in recruitment and workplace decision-making across the U.S.

Happy Privacy Day: Emerging Issues in Privacy, Cybersecurity, and AI in the Workplace

Joseph J. Lazzarotti | Damon W. Silver | Jackson Lewis

On Data Privacy Day 2025, experts highlighted the growing challenges businesses face in privacy, cybersecurity, and AI governance. Key concerns include AI-driven workplace monitoring, the risks of employee data breaches, and the increasing use of biometric authentication for security purposes. Employers are urged to balance innovation with compliance, ensuring that AI tools do not infringe on workers' privacy rights. Additionally, new global privacy regulations require businesses to enhance data security protocols and provide transparency in their handling of employee information. As AI adoption accelerates, organizations must implement responsible AI policies to prevent discrimination, ensure ethical data use, and avoid legal risks.
