Week of 2024-11-18

Ontario court directs Liquor Control Board to release shoplifting data despite security concerns

Angelica Dino | Canadian Lawyer Magazine

In November 2024, the Ontario Court of Appeal ordered the Liquor Control Board of Ontario (LCBO) to disclose shoplifting data, rejecting the LCBO's claims that disclosure would compromise its economic interests and public safety. The decision stems from a 2019 Freedom of Information request by Toronto Star reporters seeking annual shoplifting statistics from 2008 onward and monthly theft reports for each Toronto LCBO location starting in January 2018. The LCBO had denied the request, citing exemptions under the Freedom of Information and Protection of Privacy Act (FIPPA) and arguing that releasing the information could harm its competitive position and make certain stores more vulnerable by revealing security patterns. The ruling underscores the importance of transparency and public access to information, even when organizations raise concerns about potential security risks.

Canada just launched its own AI safety institute

Murad Hemmadi | The Logic

In November 2024, the federal government launched the Canadian Artificial Intelligence Safety Institute (CAISI), backed by $50 million over five years announced in Budget 2024, to address the safety and ethical challenges associated with artificial intelligence (AI) technologies. Working with AI researchers, ethicists, and policymakers, CAISI aims to develop frameworks and guidelines that ensure AI systems are designed and deployed responsibly. The institute focuses on promoting transparency, accountability, and fairness in AI applications, particularly in sectors such as healthcare, finance, and public services. By fostering collaboration among stakeholders, CAISI seeks to mitigate risks posed by AI, including bias, privacy violations, and unintended consequences, and to help align AI development with societal values and human rights.

UK policing minister kicks off debate on live facial recognition

Masha Borak | Biometric Update

On November 13, 2024, UK Policing Minister Dame Diana Johnson announced a series of discussions on police use of live facial recognition (LFR) technology, inviting regulators and civil society groups to participate. While acknowledging LFR's potential to enhance public safety, Johnson emphasized the importance of addressing concerns about misidentification, misuse, and impacts on human rights and privacy. The debate highlighted the absence of dedicated legislation governing facial recognition in the UK, where current deployments rely on a combination of common law, the Police and Criminal Evidence Act, and the UK General Data Protection Regulation. The initiative reflects the government's effort to balance technological advances in law enforcement with the protection of individual rights.

Facial recognition deployments must factor in risk v. reward

Joel R. McConvey | Biometric Update

A recent article in the National Security Journal by Nicholas Dynon examines public acceptance of facial recognition technology (FRT) across various applications. The study highlights that while FRT is generally accepted in settings like airport customs, its use in retail environments often faces public backlash. Dynon suggests that this disparity indicates a need for security consultants and practitioners to consider public acceptability when advising on FRT deployments. He proposes a framework that evaluates potential deployments based on a 'reward proximity' versus 'perceived risk' trade-off, aiming to guide more informed decisions regarding the appropriate use of FRT. 

Twelve Ontario school boards sue social media giants for addictive app design

David Reevely | The Logic

In March 2024, four major Ontario school boards—the Toronto District School Board, Peel District School Board, Toronto Catholic District School Board, and Ottawa-Carleton District School Board—filed lawsuits against the social media companies Meta (owner of Facebook and Instagram), TikTok, and Snapchat; other Ontario boards have since filed similar claims, bringing the total to twelve. The boards allege that these platforms are designed to be addictive, harming students' mental health and learning, and are seeking more than $4 billion CAD in damages for the resulting educational disruption. The litigation mirrors similar lawsuits in the United States, where numerous school districts have sued social media companies over their impact on youth mental health.

Youth social media: Why proposed Ontario and federal legislation won’t fix harms related to data exploitation

Teresa Scassa | The Conversation

Recent legislative efforts in Ontario and at the federal level aim to address the negative impacts of social media on youth, particularly concerning data exploitation. However, experts argue that these measures may not effectively mitigate the harms associated with data exploitation. The proposed laws focus on content moderation and age verification but often overlook the underlying business models of social media platforms that thrive on extensive data collection and targeted advertising. Critics suggest that without addressing these core issues, such legislation may fall short in protecting young users from data exploitation and its associated risks.

10 privacy violations in the federal government’s proposed changes to the Canada Elections Act

Sara Bannerman | The Conversation

The federal government's proposed amendments to the Canada Elections Act, contained in Bill C-65, have raised significant privacy concerns. The bill would expand the collection and sharing of personal information by political parties, including sensitive data such as ethnicity, religion, and political opinions. Notably, it would permit parties to access and use this information without individuals' explicit consent, potentially enabling intrusive profiling and targeted political messaging. Critics argue that these changes could undermine voter trust and infringe on fundamental privacy rights, and they call for stringent safeguards to protect personal data in the electoral process.

UK ICO Publishes Report on Genomics

Hunton Andrews Kurth

On November 7, 2024, the UK's Information Commissioner's Office (ICO) released a report addressing data privacy concerns in genomic technology. The report emphasizes the necessity of a "privacy-by-design" approach in developing genomic innovations and invites organizations to collaborate through the ICO's Regulatory Sandbox to ensure compliance. It examines challenges such as determining when genomic data is considered personal information, managing third-party data sharing, anonymization difficulties, risks of bias and discrimination, data minimization, purpose limitation, and the integration of artificial intelligence. The ICO encourages ongoing engagement with stakeholders across various sectors to navigate these complexities effectively. 

NHS patients dying because of problems sharing medical records, coroners warn

Chaminda Jayanetti | The Guardian

Coroners in England and Wales have issued 36 warnings this year regarding inadequate sharing of NHS patient information, highlighting cases where patients died because clinicians lacked access to crucial medical details. Issues such as incompatible IT systems and restricted access to records have led to staff being unaware of important patient information. For instance, a three-year-old boy with Down's syndrome died from a streptococcal infection after an NHS 111 adviser, unaware of his condition, failed to recommend immediate hospital care. In another case, an 11-year-old died due to miscommunication during the handover from ambulance to A&E staff, exacerbated by incompatible IT systems. These incidents underscore the urgent need for improved information-sharing protocols within the NHS to prevent further tragedies.

Canadian government creates new review commission for RCMP and CBSA

Jonalyn Cueto | Canadian Lawyer Magazine

In October 2024, the Canadian government enacted legislation establishing the Public Complaints and Review Commission (PCRC), an independent body overseeing the Royal Canadian Mounted Police (RCMP) and the Canada Border Services Agency (CBSA). This initiative aims to enhance accountability and transparency within these agencies. The PCRC replaces the former Civilian Review and Complaints Commission for the RCMP, extending its oversight to include the CBSA for the first time. The government has allocated $112.3 million over six years, followed by $19.4 million annually, to support the commission's operations. The legislation also mandates the collection and analysis of demographic and race-based data on complainants to identify and address systemic issues within law enforcement.

RCMP national security unit monitored 'threats' linked to Wet'suwet'en anti-pipeline activism, records show

Brett Forester | CBC News

Recent reports have revealed that the Royal Canadian Mounted Police (RCMP) national security unit has been monitoring activities related to Wet'suwet'en anti-pipeline protests, categorizing them as potential threats to critical infrastructure. This surveillance has raised concerns among sociologists and civil rights advocates about the broad scope of such monitoring and its implications for civil liberties. The RCMP's actions have been criticized for potentially overstepping boundaries, leading to debates about the balance between national security and the right to protest.

Federal Court of Appeal finds lengthy and complex privacy policies breached meaningful consent

Nadia Effendi | Frédéric Wilson | Laura M. Wagner | Simon Du Perron | Patrick J. Leger | BLG

In a recent decision, the Federal Court of Appeal ruled that Facebook's lengthy and complex privacy policies failed to obtain meaningful consent from users, in violation of the Personal Information Protection and Electronic Documents Act (PIPEDA). The court held that consent must meet an objective standard: the average user should be able to clearly understand the nature, purposes, and consequences of the collection, use, or disclosure of their personal information. The decision underscores the need for companies to draft clear and concise privacy policies to satisfy the legal requirements for informed consent.

U.S. May Support ‘Global Surveillance’ Treaty Hated by Everyone but Authoritarian Governments

Todd Feathers | Gizmodo

The United States has announced its support for a United Nations cybercrime convention, despite concerns from digital rights groups and officials about potential misuse by authoritarian regimes. The Biden administration emphasized the importance of collaborating with allies to positively influence the treaty's development. This decision follows extensive internal deliberations and consultations with foreign partners. The administration plans to implement measures to address risks associated with the treaty, including engaging with nongovernmental organizations to monitor its application. However, the treaty's ratification faces challenges, as it requires approval by a two-thirds majority in the Senate, a difficult threshold in a divided Congress. 

Canadian privacy regulators pass resolution to address privacy-related harms resulting from deceptive design patterns

Office of the Privacy Commissioner of Canada

On November 13, 2024, Canada's federal, provincial, and territorial privacy regulators issued a joint resolution addressing the growing concern over deceptive design patterns that undermine user privacy. These patterns, often found in websites and mobile apps, manipulate users into making choices that may not align with their best interests, particularly affecting children and youth. The resolution urges organizations to:

  • Implement Privacy-by-Design Principles: Integrate privacy considerations into the design framework, especially focusing on the best interests of young users.

  • Limit Data Collection: Collect only personal information necessary for specific, explicit purposes.

  • Enhance Transparency: Use clear and accessible language to inform users about data practices, thereby building trust.

  • Regularly Review Design Elements: Continuously assess and improve website and app designs to reduce deceptive practices and support informed user choices.

  • Adhere to Privacy Principles: Select design elements that respect user autonomy and avoid fostering negative habits or behaviors.

This initiative follows a 2024 sweep by the Global Privacy Enforcement Network (GPEN), which found that 97% of websites and apps reviewed globally employed at least one deceptive design pattern, with 99% of Canadian platforms exhibiting similar issues. Notably, platforms targeting children were more likely to use manipulative tactics, such as emotive language, to influence privacy-related decisions.

Ontario inks nearly $100M deal with Starlink for remote internet access

David Reevely | The Logic

The Ontario government has entered into a nearly $100 million agreement with Starlink, a satellite internet provider, to enhance connectivity in remote and underserved areas of the province. This initiative aims to deliver high-speed internet access to communities that have traditionally faced challenges in obtaining reliable service. By leveraging Starlink's satellite technology, Ontario seeks to bridge the digital divide, ensuring that residents in rural and remote regions can access essential online services, education, and economic opportunities.

International Network for Digital Regulation Cooperation (INDRC) and OECD co-host workshop on digital regulation

Office of the Privacy Commissioner of Canada

On November 8, 2024, the International Network for Digital Regulation Cooperation (INDRC) and the Organisation for Economic Co-operation and Development (OECD) co-hosted a workshop to discuss the interplay between their regulatory domains concerning new technologies and digital innovation. The event aimed to explore how closer cooperation between agencies can deliver more coherent regulatory responses and improve public confidence. Participants shared perspectives on challenges posed by rapid digital transformation, data-driven markets, and emerging technologies, including AI-generated content and Large Language Models (LLMs). The INDRC, established in June 2023, comprises digital regulation coordination bodies from Australia, Canada, Ireland, the Netherlands, and the United Kingdom. Members resolved to continue their dialogue on knowledge sharing and practical cooperation on cross-cutting issues and emerging areas of regulatory concern. 

Canada’s privacy commissioner opens investigation into World Anti-Doping Agency

Jim Bronskill | Toronto Star

Canada's Privacy Commissioner has initiated an investigation into the World Anti-Doping Agency (WADA) following allegations of privacy violations. The probe focuses on WADA's handling of athletes' personal data, particularly concerning consent and data security measures. This action underscores the importance of safeguarding sensitive information within international sports organizations and ensuring compliance with Canadian privacy laws. 

HR's rapid AI adoption outpacing policy development

Dexter Tilo | Human Resources Director

A recent survey by Traliant reveals that while 94% of HR professionals in the United States are utilizing some form of artificial intelligence (AI) in their operations, only 60% have established an AI Acceptable Use Policy. This gap indicates that AI adoption is outpacing the development of corresponding policies, potentially exposing organizations to risks. Notably, 31% of respondents have not communicated any guidelines to employees regarding proper AI usage, and 21% have not provided any training on acceptable AI practices. To address these concerns, the report recommends that organizations:

  • Establish clear AI acceptable use policies: Define the ethical and responsible use of AI within the organization.

  • Provide regular training: Educate employees on proper AI usage to ensure compliance and mitigate risks.

  • Conduct AI inventories: Assess and document all AI tools and applications in use to maintain oversight.

  • Implement transparency policies: Ensure that AI processes and decisions are transparent to stakeholders.

  • Create mechanisms for reporting misuse: Enable employees to report any AI misuse or ethical concerns safely.

By taking these steps, organizations can better align AI adoption with robust policy frameworks, promoting ethical practices and reducing potential risks.

Workplace monitoring is still out of control – but some staff are fighting back

Emma Woollacott | ITPro

Workplace monitoring has become increasingly prevalent, with one in five employees now subject to activity tracking. This trend is particularly notable in the technology sector, where one-third of workers are monitored, followed by 30% in finance and 21% in retail. Despite 80% of managers believing that surveillance enhances productivity, studies indicate that such practices can lead to higher stress levels, diminished job satisfaction, and increased distrust among employees. In response, some workers are employing countermeasures like mouse jigglers to simulate activity and evade detection. This dynamic underscores the need for organizations to balance oversight with respect for employee privacy to maintain a healthy workplace environment.

CRA launched 'witch hunt' against whistleblowers who exposed millions in bogus refunds, sources say

Harvey Cashore | Daniel Leblanc | CBC News

Recent reports indicate that the Canada Revenue Agency (CRA) has initiated internal investigations to identify whistleblowers who disclosed information about fraudulent tax refunds. These disclosures revealed that the CRA had been deceived into issuing millions of dollars in bogus refunds to scammers. Employees have expressed concerns about potential retaliation, describing the agency's actions as a "witch hunt." This situation has raised questions about the CRA's commitment to transparency and its handling of internal fraud detection and reporting mechanisms.
