Week of 2025-01-17
Ontario government files appeal against order for Ford to release cellphone records
Isaac Callan | Colin D’Mello | Global News
Ontario Premier Doug Ford has appealed a judicial review decision that ruled his office violated the province's Freedom of Information and Protection of Privacy Act (FIPPA). The case revolves around Ford's refusal to disclose his government-issued cellphone number, which had been requested by a member of the public. The Information and Privacy Commissioner (IPC) of Ontario had originally ruled that the cellphone number was subject to disclosure under FIPPA, but Ford's office argued it was exempt due to privacy concerns. The judicial review upheld the IPC's decision, finding that the number was used for government business and should be disclosed. Ford's office has now escalated the case, claiming the ruling could set a dangerous precedent for the privacy of public officials. Critics argue that the ongoing legal battle reflects a lack of transparency in the Ford government, while the IPC has reiterated the importance of access to information for public accountability. The appeal could have significant implications for privacy and transparency laws in Ontario.
Corporate integrity and moral obligation: Examining key Canadian trends in disclosure and privilege
François Fontaine | Frédéric Plamondon | Charles-Antoine Péladeau | Rachelle Powell Bergmann | Lexpert
A recent analysis highlights evolving Canadian trends in corporate disclosure, integrity, and privilege, as businesses face increasing legal and ethical scrutiny. Regulators and courts are emphasizing transparency, particularly in areas like ESG (environmental, social, and governance) reporting and whistleblower protections. Companies are under growing pressure to disclose non-financial risks and demonstrate accountability, particularly as these factors increasingly influence investor and public trust. The article also explores legal privilege, noting that businesses must carefully navigate between transparency and protecting sensitive communications under attorney-client privilege. Recent rulings show courts taking a closer look at privilege claims, balancing them against public interest in corporate accountability. Whistleblower protections are becoming a focal point, with stronger measures being implemented to encourage internal reporting and shield whistleblowers from retaliation. The growing emphasis on corporate integrity suggests companies must proactively establish compliance frameworks that address both legal obligations and broader ethical expectations, ensuring transparency while maintaining necessary confidentiality. These trends underscore the shifting landscape for corporate governance and legal strategy in Canada.
Supreme Court considers Texas age-verification requirement for pornography websites
Melissa Quinn | CBS News
The U.S. Supreme Court is set to review the legality of age-verification requirements for accessing pornographic websites, a move that could have significant implications for online privacy and free speech. The case centres on the Texas statute at issue, one of a wave of state laws mandating that adult websites implement robust age-verification mechanisms to prevent minors from accessing explicit content. Supporters argue these measures are necessary to protect children from harmful material, while critics caution that they could violate constitutional rights and create privacy risks. Opponents of the laws emphasize that requiring users to submit personal information, such as IDs, to access websites raises serious concerns about data security and potential misuse. Advocacy groups also highlight the potential for such laws to infringe on First Amendment protections by restricting access to lawful content. The case will likely examine the balance between safeguarding children online and preserving individuals' rights to privacy and free expression, and the outcome could set a precedent affecting online content regulation nationwide. Legal experts anticipate the Court's decision will address how far governments can go in enforcing age restrictions on digital platforms without encroaching on constitutional protections. This case marks a critical juncture in the ongoing debate over internet regulation and individual rights.
What the Supreme Court hearing about age verification could mean for you
Anna Iovine | Mashable
The Free Speech Coalition v. Paxton case involves a legal challenge to Texas's age verification law for accessing adult content online, raising concerns about free speech, privacy, and government overreach. The law requires adult websites to implement strict age verification measures and display health warnings about the risks of pornography. Opponents, including the Free Speech Coalition, argue that the law infringes on First Amendment rights by restricting access to lawful content and creates significant privacy risks for users who must provide personal information to verify their age. Critics also highlight the law's potential chilling effect on constitutionally protected speech, as both users and website operators may avoid engaging with adult content due to fear of surveillance or exposure. Additionally, they argue the law imposes burdens on smaller adult content providers who may lack the resources to implement such measures. Proponents of the law, however, maintain that it is necessary to protect minors from harmful material and promote public health. The court hearing is expected to explore the constitutional implications of the law, particularly whether it strikes a reasonable balance between protecting minors and upholding the rights of adults to access legal content. The case has broader implications for the regulation of online speech and could set a precedent for similar laws being enacted in other states. The outcome will likely influence ongoing debates about how governments can regulate digital platforms while respecting free speech and privacy.
Which AI Companies Are the Safest—and Least Safe?
Harry Booth | Time
A recent Time article highlights concerns over the lack of transparency in safety practices among leading AI companies like OpenAI, Meta, and Anthropic. Despite calls for accountability, these companies have not provided clear details on how they ensure their AI systems are safe for public use. The companies often cite confidentiality and competition as reasons for withholding information about their safety measures. Critics argue that without greater transparency, it becomes difficult to assess the risks posed by advanced AI technologies, including issues like misinformation, bias, and potential misuse. Experts emphasize the need for independent oversight and clear safety benchmarks to build public trust and prevent harmful consequences. The report underscores the broader issue of balancing innovation with the ethical responsibility to ensure AI systems are safe, fair, and aligned with societal values. As governments and regulators increasingly scrutinize AI, the push for transparency in safety protocols is expected to grow.
AI Agents with More Autonomy Than Chatbots Are Coming. Some Safety Experts Are Worried
Webb Wright | Scientific American
An article in Scientific American explores the rise of AI agents—autonomous systems capable of performing complex tasks with minimal human oversight—and their increasing ubiquity in everyday life. Unlike chatbots, virtual assistants, and recommendation engines, which require user input for each action, agents can make decisions, take initiative, and adapt to new situations, often by collaborating with other agents, allowing them to perform tasks like coding, customer service, and even creative endeavours. The article highlights both the promise and risks of AI agents. On the one hand, they offer the potential to improve productivity, provide personalized services, and solve intricate problems. On the other hand, concerns arise over security vulnerabilities, ethical implications, and unintended consequences, especially as agents grow more autonomous. Experts emphasize the need for robust regulation and transparency to ensure AI agents align with human values and avoid misuse. As these systems become more integrated into industries and daily life, their impact on society is expected to be profound and far-reaching.
Is it still 'social media' if it's overrun by AI?
Jenna Benchetrit | CBC News
A CBC article highlights Meta's introduction of AI-generated characters across its platforms like Facebook and Instagram, a move intended to shape the future of social media interaction. These AI-driven avatars are designed to engage users through customized content creation, real-time conversations, and entertainment, reflecting Meta's push to integrate artificial intelligence more deeply into its ecosystem. The company envisions these characters as tools to enrich user experiences and foster more immersive social media environments. However, the integration of AI-generated personas raises ethical and privacy concerns, particularly regarding data collection and the potential misuse of personal information in interactions with these virtual entities. Critics also question the transparency and societal impact of such AI-driven features, cautioning against their potential to blur lines between authentic and artificial connections online. Meta's move comes as tech companies increasingly explore AI to redefine digital engagement and expand monetization opportunities.
CCIA releases training data transparency template
CCIA
The Computer & Communications Industry Association (CCIA) announced the introduction of a global training data transparency template to address growing concerns about the ethical use of data in artificial intelligence. This template, developed by key players in the digital sector, is designed to standardize the disclosure of AI training datasets, aiming to provide clearer insights into how AI models are trained and the sources of their data. The initiative seeks to promote accountability and trust by offering organizations a structured way to share information about the data's origins, usage permissions, and potential biases, ensuring compliance with global regulatory frameworks. Proponents argue that the template will help address transparency gaps and mitigate risks of data misuse, particularly as governments and industries grapple with regulating AI. Critics, however, question whether voluntary adoption will be enough to ensure widespread compliance and prevent ethical violations. This move reflects the sector's attempt to preempt stricter regulatory oversight while addressing public and governmental demands for transparency in AI development.
Ambient AI technology making its way into the market
Julian Chokkattu | Wired
A new AI wearable called Omi, designed to monitor conversations and provide real-time assistance, is raising concerns about privacy and ethics. Omi functions as an always-listening device intended to help users recall forgotten names, detect conversational tones, and even offer suggestions during discussions, relying on advanced natural language processing to analyze speech and context. While proponents highlight its potential for improving communication and social interactions, critics argue that the continuous recording and processing of conversations pose significant privacy risks. The wearable's constant data collection has sparked debates about surveillance, data security, and the potential misuse of sensitive information. Omi's developers claim the device operates with strict privacy safeguards, but experts caution that the normalization of such devices could erode personal boundaries and introduce new challenges in regulating AI-enabled wearables. The product reflects the growing tension between innovation and ethical considerations in AI technologies.
Back to the future: How AI is shaping the next digital economy
Government of Canada
The Government of Canada’s Horizons team published a report examining how artificial intelligence (AI) is reshaping the digital economy and its implications for society. The report highlights how AI is revolutionizing industries by enabling more efficient decision-making, predictive analytics, and automation, thus driving economic growth. However, it warns of risks such as data privacy concerns, algorithmic bias, and job displacement due to automation. The study also emphasizes the need for robust governance frameworks to address ethical and social concerns, particularly around transparency and accountability in AI systems. It suggests that fostering digital literacy and reskilling programs will be critical to preparing the workforce for AI-driven economic shifts. Additionally, the report underscores the importance of global collaboration to ensure equitable access to AI technologies and prevent a widening of digital divides. The findings urge policymakers to balance innovation with safeguards that prioritize public trust and fairness.
Unlocking data collaboration: A study on data sharing practices and developing standard data licence terms to promote access and social good
Thomas Carey-Wilson | Gefion Thuermer | Elena Simperl | Lee Tiedrich | ODI
The Open Data Institute (ODI) has released a report exploring data-sharing practices and proposing standard data license terms to enhance access and promote social good. The study identifies barriers to effective data collaboration, including legal uncertainties, lack of trust, and inconsistent licensing frameworks, which hinder the potential benefits of shared data. The report highlights the importance of clear, standardized data license terms to simplify collaboration, protect intellectual property, and ensure fair use while fostering innovation. It emphasizes the role of trust-building mechanisms, such as transparent data governance policies and ethical considerations, to encourage responsible sharing. The study concludes by advocating for cross-sector collaboration and international alignment to unlock the potential of data-driven initiatives in addressing societal challenges. It underscores the need for governments, private organizations, and civil society to work together in creating a more open and equitable data-sharing ecosystem.
7% of Kuwait residents miss biometric registration deadline, face service freeze
Masha Borak | Biometric Update
Kuwait has imposed a service freeze on residents who missed the biometric registration deadline, impacting 7% of the population. The government had mandated biometric registration as part of efforts to enhance national security and streamline access to government services. Residents who failed to comply with the registration are now unable to access key services, including renewing civil IDs or obtaining government benefits. Critics have raised concerns over the logistical challenges and lack of awareness surrounding the registration process, which may have contributed to non-compliance. Officials have not announced whether there will be extensions or alternative measures for those affected. The situation underscores the challenges governments face in implementing large-scale biometric initiatives.
Banning children from social media won't solve the problems we're facing
Christopher Dietzel | Kaitlynn Mendes | Hamilton Spectator
The article argues that banning children from social media will not solve the problems young people face online. The authors contend that prohibitions fail to address the root causes of online harms, overlook the benefits young people derive from online connection and community, and raise new privacy concerns tied to the age-verification systems needed to enforce them. Rather than outright bans, they suggest that digital literacy education, safer platform design, and holding companies accountable for harmful features would better protect young people. The piece adds a Canadian perspective to a debate intensified by recent moves abroad to restrict minors' access to social media.
Texas sues Allstate, alleging it violated data privacy rights of 45 million Americans
Suzanne Smalley | The Record
The article highlights Texas Attorney General Ken Paxton's lawsuit against Allstate, accusing the insurance company of mishandling consumer data related to its vehicle-tracking program. The lawsuit alleges that Allstate collected sensitive data from customers without adequate transparency or safeguards, potentially violating Texas consumer protection and privacy laws. Paxton emphasizes that such practices undermine public trust and consumer rights. The case reflects growing concerns about the intersection of data privacy and emerging technologies, particularly in the automotive sector. Texas seeks penalties and reforms to ensure stronger data protection for residents.
Allstate used GasBuddy and other apps to quietly track driving behavior
Kevin Purdy | Ars Technica
The article discusses a lawsuit filed against Allstate by the Texas Attorney General, alleging the company used third-party apps to track drivers' behavior without proper transparency or consent. The lawsuit claims Allstate collected sensitive data, including location and driving habits, through its telematics programs, violating consumer privacy laws. It also alleges that Allstate failed to clearly disclose how this data was being used or stored. Texas Attorney General Ken Paxton argues that such practices infringe on Texans' privacy rights and has called for penalties and stricter safeguards. This case underscores rising concerns about privacy in data-driven insurance models.
B.C. court approves class-action lawsuit about privacy over Home Depot receipts
CBC News
A class-action lawsuit against Home Depot has been allowed to proceed by the British Columbia Supreme Court, alleging the company shared customer data with Meta without proper consent. The case focuses on customers who provided their email addresses for purchase receipts, claiming their data was then shared with Meta for targeted advertising without adequate disclosure or permission. Home Depot denies wrongdoing, arguing it informed customers about data usage in its privacy policy. The court ruled there is a sufficient basis for the lawsuit to move forward, emphasizing privacy and informed consent concerns. This case highlights growing scrutiny of corporate data-sharing practices and customer privacy rights.
OECD identifies five key government innovation trends
Richard Johnstone | Global Government Forum
The OECD has identified five key trends shaping government innovation in its 2023 report. These trends include the adoption of artificial intelligence to enhance public service delivery, particularly in predictive analysis and personalized citizen engagement. The report also emphasizes the rise of co-creation and citizen participation in policymaking, allowing for more inclusive and effective governance. Digital transformation is another significant trend, with governments prioritizing the development of smart cities and open data initiatives to improve transparency and efficiency. Additionally, sustainability and climate resilience are becoming central to innovation strategies as governments work to meet environmental challenges. Lastly, governments are focusing on building trust and integrity in institutions, addressing issues such as misinformation and ethical use of technology. These trends underscore a growing commitment to leveraging innovation for improved governance and societal impact.
‘Mainlined into UK’s veins’: Labour announces huge public rollout of AI
Robert Booth | The Guardian
The UK's Labour government has announced an ambitious public rollout of artificial intelligence (AI) technologies, with Prime Minister Keir Starmer saying AI will be "mainlined into the veins" of the nation. The initiative promises to enhance public services, including healthcare and transportation, while modernizing governmental operations. Ministers highlighted plans to ensure ethical oversight, transparency, and robust data protections to mitigate concerns around privacy and bias. Critics, however, argue that the proposed rollout lacks sufficient safeguards and risks creating a dependency on private sector partnerships. The announcement marks a significant step in the UK's approach to leveraging AI for public benefit, while sparking debate on how best to balance innovation with accountability.
Ottawa’s efforts to create digital ID for citizens stalled: report
Samuel Forster | Canadian Affairs
A recent report highlights delays in Ottawa’s efforts to implement a national digital ID system for Canadian citizens. The initiative, aimed at providing secure and efficient online access to government services, has faced setbacks due to technical challenges, privacy concerns, and a lack of clear coordination across federal and provincial governments. Critics argue the delays risk leaving Canada behind as other nations advance their digital ID systems. Supporters emphasize the potential benefits, including improved service delivery and fraud prevention, but stress the need for robust privacy safeguards. The federal government has yet to confirm a revised timeline for launching the system.
School software hack hits school boards across six Canadian provinces
Dorcas Marfo | CTV News
A cyberattack on software used by school boards across six Canadian provinces has compromised personal information of students, parents, and staff. The hack targeted PowerSchool, a widely used student information platform, exposing sensitive data such as names, email addresses, and potentially more detailed information. Impacted school boards have begun notifying affected individuals, and investigations are underway to assess the extent of the breach. Experts warn this incident underscores the increasing vulnerability of educational institutions to cyberattacks. Authorities, including privacy commissioners, are monitoring the situation as cybersecurity experts work to secure systems and prevent further data misuse.
Public school students’ medical records may have been involved in PowerSchool data breach, spokesperson says
Celeste Percy-Beauregard | Hamilton Spectator
A data breach involving PowerSchool, an educational software platform, may have exposed the medical records of students in the Hamilton public school system. The breach, which occurred in late December, has affected several school boards across Canada and has raised concerns over the extent of the sensitive information accessed. A spokesperson for the Hamilton-Wentworth District School Board indicated that while investigations are ongoing, the breach potentially included data like medical records and personal details of students and staff. PowerSchool claims the data has been deleted and not publicly shared, but cybersecurity experts warn about potential long-term risks. The incident highlights vulnerabilities in educational data systems and the need for stronger safeguards to protect personal information.
What to Know About the HHS HIPAA Security Standards Proposal
Kathryn Rattigan | Conor Duffy | Robinson & Cole
The U.S. Department of Health and Human Services (HHS) has proposed significant updates to the HIPAA Security Rule to address evolving cybersecurity challenges. Key proposed changes include the introduction of more detailed risk assessments, requirements for multifactor authentication (MFA), and stronger encryption standards for protected health information (PHI). The updates also emphasize regular employee cybersecurity training and periodic reviews of security practices to ensure ongoing compliance. These changes aim to address modern threats, such as ransomware and advanced phishing attacks, and ensure the protection of sensitive health data. The proposal reflects a broader push to modernize cybersecurity frameworks within the healthcare industry.
Ontario rolling out police dashboard to monitor people out on bail for firearms charges
Rochelle Raveendran | CBC News
Ontario police forces are adopting a new dashboard system for monitoring compliance of people released on bail for firearms charges, aiming to enhance public safety and streamline how they track high-risk individuals. The dashboard consolidates information from multiple databases, giving officers quick access to details on suspects' bail conditions, past violations, and real-time alerts. While proponents argue that the tool improves accountability and efficiency, critics worry about potential privacy concerns and the accuracy of the data used. Civil liberties advocates also caution against over-surveillance and the risk of disproportionately targeting marginalized communities. The system reflects broader efforts in Canada to balance public safety with rights and privacy in the justice system.
OPP commissioner defends beefed up border security program
Shawn Jeffords | CBC News
The Ontario Provincial Police (OPP) commissioner defended a controversial border security program that has faced scrutiny over its effectiveness and privacy implications. The program, designed to curb human trafficking and smuggling at border crossings, uses surveillance tools and intelligence-sharing with other agencies. Critics argue the initiative lacks transparency and could lead to privacy breaches and overreach. The commissioner emphasized the program's role in combating organized crime and safeguarding communities while maintaining compliance with legal standards. Despite these assurances, calls persist for greater oversight and public accountability regarding the program’s operations and data use.
OPP investigate cyber incident affecting Kingston police IT systems
Aliyah Marko | Toronto Star
The Ontario Provincial Police (OPP) is investigating a cyber incident that affected the Kingston Police’s IT systems. The breach, which was detected recently, has disrupted certain police operations, though the extent of the impact has not yet been fully disclosed. Kingston Police confirmed they are working closely with cybersecurity experts and law enforcement to determine the scope and origin of the incident. While sensitive details about the breach remain confidential, officials assured the public that critical emergency services have not been compromised. The investigation highlights ongoing concerns about cybersecurity vulnerabilities within municipal systems.
Oakville to Launch Automated Speed Enforcement Program by End of January
Shazia Nazir | Hamilton Spectator
The town of Oakville, Ontario, is set to launch its automated speed enforcement (ASE) program by the end of January. This initiative aims to enhance road safety by using cameras to detect and ticket speeding vehicles in designated community safety zones and near schools. The program will initially focus on 17 sites, with plans to rotate the cameras among these locations to maximize coverage. Drivers exceeding the speed limit in these areas will receive fines but not demerit points, as the violations are linked to vehicle ownership rather than individual drivers. Oakville officials hope the program will deter speeding and improve safety for vulnerable road users, particularly children.
Telcos removing Huawei equipment left in the lurch after Trudeau kills cyber bill
David Reevely | The Logic
Canadian telecommunications companies are facing uncertainty after a cybersecurity bill that would have backed the removal of Huawei equipment from their networks died when Prime Minister Justin Trudeau prorogued Parliament. The legislation was tied to Canada's 2022 ban on hardware from the Chinese telecom giant, imposed over national security concerns. Without it, telcos are now left covering the replacement costs themselves, which could reach billions of dollars. Industry executives have expressed frustration, emphasizing the financial and operational strain of meeting government-mandated deadlines for equipment removal without promised support. The bill's death has raised concerns about Canada's cybersecurity preparedness and its broader policy direction on critical infrastructure security.
CIGI Final Submission to the Public Inquiry into Foreign Interference in Federal Electoral Processes and Democratic Institutions in Canada
Wesley Wark | Aaron Shull | CIGI
The Centre for International Governance Innovation (CIGI) submitted its final recommendations to Canada’s public inquiry on foreign interference in federal elections and democratic institutions. The submission highlights vulnerabilities in Canada’s electoral processes, particularly the role of foreign actors in spreading disinformation and manipulating public opinion through digital platforms. CIGI advocates for robust measures to enhance transparency, such as stricter regulations on online political advertising, mandatory disclosures of campaign funding sources, and stronger data privacy protections for voters. The organization also emphasizes the need for international cooperation to address the cross-border nature of interference. Additionally, CIGI calls for public education campaigns to build media literacy and resilience against disinformation among Canadians. These recommendations aim to safeguard the integrity of Canada’s democracy in an increasingly digital and interconnected world.
The U.S. and Canada quietly agreed to share personal data on permanent residents crossing the border
Christopher Nardi | National Post
The U.S. and Canada have entered into a new agreement that enables the sharing of personal data about permanent residents and foreign nationals between the two countries. The deal is intended to enhance border security and prevent fraudulent immigration claims by verifying individuals’ travel and residency history. While Canadian officials assert the agreement will only share "limited biographic information," privacy advocates have raised concerns about transparency and the potential misuse of data. Critics worry that the deal lacks robust oversight and safeguards to ensure the data is used only for its intended purposes. The agreement highlights ongoing tensions between security priorities and personal privacy in cross-border data-sharing arrangements.
The hidden risks of Neuralink
Karina Vold | Jesse Hall | Amelia Kush | IAI
Neuralink, Elon Musk’s brain-machine interface company, raises significant ethical, legal, and safety concerns, as discussed in this article. The technology, which aims to connect human brains to computers, presents risks such as potential exploitation of personal neurological data and inadequate safeguards for privacy. Critics argue that Neuralink's rapid advancement in a regulatory grey zone could lead to insufficient testing and oversight, putting human subjects at risk. There are also concerns about data security, as the sensitive neural data collected by such devices could be hacked or misused. The article highlights the broader implications of commercializing brain-machine interfaces without robust ethical and regulatory frameworks.
The Supreme Court might let the U.S. ban TikTok unless it’s sold. Here’s what to know.
Lindsay Whitehurst | AP News
The Supreme Court is set to rule on TikTok's challenge to a federal law requiring its Chinese parent company, ByteDance, to sell the app or face a nationwide ban. The case revolves around First Amendment concerns, as TikTok argues that the divest-or-ban requirement infringes on its and its users' rights to free expression. The federal government defends the law, citing national security risks tied to TikTok's Chinese ownership and concerns over data privacy. The outcome could establish precedents affecting how far the government can go in restricting tech platforms based on security concerns. If the law is upheld and no sale is completed, TikTok could be forced offline in the U.S.
Google will face mobile phone privacy class action, possible trial
Jonathan Stempel | Global News
A U.S. federal judge has ruled that Google must face a class-action lawsuit accusing the tech giant of privacy violations tied to its mobile devices. The claim alleges that Google collected users' personal data, including sensitive information like app activity, even after users switched off a tracking setting, and used it for advertising purposes. The plaintiffs seek compensation for affected users, and the case could now proceed to trial. Google denies wrongdoing and has said it will defend itself against the claims. This case highlights ongoing concerns about privacy and data use by major tech companies.
Candy Crush, Tinder, MyFitnessPal: See the Thousands of Apps Hijacked to Spy on Your Location
Joseph Cox | 404 Media
A recent investigation has revealed that thousands of popular apps, including Candy Crush, Tinder, and MyFitnessPal, were exploited to track users' locations without their knowledge. Rather than relying on code embedded in the apps themselves, the data was reportedly harvested through the online advertising ecosystem's real-time bidding process, which exposes device location to third parties during ad auctions, often without the app developers' awareness. The apps in question typically did not inform users about the extent to which location data could be collected or shared. Privacy advocates argue that this practice underscores the need for stronger regulations on app transparency and user consent. The findings raise concerns about widespread location data exploitation and the potential misuse of sensitive information.
DOJ Finalizes Rule Establishing New National Security Cross-Border Data Regulatory Regime
Hunton
The U.S. Department of Justice (DOJ) has finalized a rule implementing Executive Order 14117, establishing a regulatory framework for cross-border data transfers involving national security concerns. This rule aims to restrict the transfer of Americans' sensitive personal data to foreign adversaries, including China, Russia, and North Korea, to mitigate national security risks. The rule applies to a broad range of sensitive data, such as health, financial, and location information, and outlines penalties for noncompliance. While exemptions exist for certain industries like financial services, critics argue that the rule may impose significant compliance burdens on businesses handling cross-border data. This marks a pivotal step in the U.S. government's efforts to regulate cross-border data flows and enhance data security.
Tech lobby group launches think tank with $10M from Jim Balsillie
Murad Hemmadi | The Logic
The Canada Shield think tank, launched by a tech lobby group with $10 million in funding from Jim Balsillie, aims to address growing national security threats, focusing on countering foreign interference and enhancing the resilience of Canadian institutions. The organization plans to work with policymakers, academics, and the private sector to develop solutions for cyber threats, disinformation, and other forms of interference. Canada Shield emphasizes the need for a whole-of-society approach, acknowledging that these issues span government, industry, and civil society. Initial reports from the think tank highlight the importance of robust cybersecurity measures and the urgent need for public awareness campaigns to combat misinformation. The think tank aims to fill critical policy gaps and foster collaboration to protect Canada's sovereignty and democratic systems.
Laws to disrupt ransomware payments considered in the UK
Stuart Davey | Pinsent Masons
The UK government is considering new laws aimed at disrupting ransomware payments as part of its broader strategy to combat cybercrime. The proposals, which are still in consultation, may include measures to restrict or criminalize the payment of ransoms to hackers. Proponents argue that such legislation would reduce the financial incentives for ransomware attacks, which have surged in recent years, targeting critical infrastructure and private companies. Critics, however, caution that banning ransom payments could put businesses in a difficult position if they are unable to recover critical data or systems. The government aims to strike a balance between discouraging cybercriminal activity and protecting organizations from irreparable harm.
The EU Fined Itself for Breaking Its Own Data Privacy Law
AJ Dellinger | Gizmodo
In an ironic turn, the EU's General Court has ordered the European Commission to pay damages of €400 to a German citizen after the Commission's own website transferred his personal data, including his IP address, to Meta in the United States without adequate safeguards, violating the bloc's own data protection rules. This unusual case highlights the challenges of ensuring compliance even within the institutions tasked with enforcing data protection law. Critics argue the ruling demonstrates the need for greater accountability and transparency among governing institutions. It also underscores the complexities of adhering to EU privacy standards across all organizations, including governmental entities.
GM banned from selling your driving data for five years
Andrew J. Hawkins | The Verge
General Motors (GM) has been banned by the U.S. Federal Trade Commission (FTC) for five years from selling drivers' precise geolocation and driving behavior data to consumer reporting agencies, which supply that information to insurers. The FTC found that GM had been collecting and selling detailed driving data, including driving habits, speed patterns, and location, without adequately informing customers or securing their consent. Critics argue that this practice raises significant privacy concerns, especially when the data is used to adjust insurance rates. The settlement reflects growing scrutiny of automakers and tech companies over the monetization of user data and sets a precedent for stricter oversight of connected-car data practices.
Prorogation’s Digital Impact: Canada’s Digital Bills Set to Die on the Order Paper
Christopher Ferguson | Dongwoo Kim | Fasken
The article discusses the impact of Parliament's recent prorogation on digital policies and legislation in Canada. With Prime Minister Justin Trudeau's resignation and the resulting suspension of Parliament, key legislative initiatives, including privacy reforms and digital policies like Bill C-27, face significant delays or may fail to proceed. The prorogation halts committee work and debate, putting proposed laws on data protection, artificial intelligence, and the digital charter at risk of dying on the order paper. This disruption creates uncertainty for businesses, particularly as Canada’s adequacy status with the European Union hinges on updating its privacy laws. The article underscores the potential for regulatory stagnation and its implications for Canada's digital economy.
Enforcing the right to disconnect
Geoffrey Lowe | First Reference
The article examines the concept of the "right to disconnect," which refers to employees’ ability to disengage from work-related communications, such as emails or calls, outside of regular working hours. It highlights the increasing adoption of this principle in workplaces to address issues of overwork, stress, and burnout, particularly in a remote or hybrid work environment. The article discusses legislative efforts in various jurisdictions, including Ontario's Working for Workers Act, which requires certain employers to have a written policy on disconnecting from work. Challenges in enforcing this right are also considered, such as balancing organizational needs with employee well-being and defining clear boundaries. Ultimately, the article underscores the importance of fostering workplace cultures that respect work-life balance.
A new year in Canadian workplace law
Curtis Armstrong | Norton Rose Fulbright
The article outlines key developments and anticipated changes in Canadian workplace law for 2025. It highlights the growing influence of artificial intelligence in employment decisions, prompting regulatory scrutiny over its ethical use and potential bias. Employers are also expected to navigate evolving standards around workplace harassment and diversity, equity, and inclusion (DEI) initiatives. Additionally, the article discusses the increasing importance of remote and hybrid work policies, especially in light of legislative trends like the "right to disconnect." Finally, it emphasizes the need for employers to stay proactive in adapting to new rules around wage transparency, privacy, and employee data protection.
Number of digital gig workers in Canada surged by 44% in 2024
Catherine McIntyre | The Logic
The number of digital gig workers in Canada increased by 44% in 2024, reflecting a significant shift in the labor market. This surge highlights the growing reliance on platforms for flexible, task-based employment, driven by demand for delivery services, remote work, and digital skills. However, this rise also underscores challenges, including lack of job security, benefits, and regulatory protections for gig workers. Policymakers are exploring reforms to address these concerns, focusing on fair wages, improved labor conditions, and the classification of gig workers as employees or contractors. The trend suggests an ongoing transformation of the Canadian workforce, influenced by technological and economic changes.