Week of 2024-09-20

People who request access to information need better privacy protection from the federal government

Matt Malone | Luke Conkin | Policy Options

The article from Policy Options discusses the need for significant reforms to Canada's Access to Information (ATI) system. Current issues include insufficient privacy protections for requesters, especially when their personal information is shared among government departments, which can lead to intimidation or biased treatment. There are also concerns that the system's slow and fragmented processes fail to meet increasing public demand for transparency. The authors argue for merging the offices of the Information Commissioner and Privacy Commissioner, introducing more robust anonymity protections, and improving interdepartmental cooperation in handling requests to ensure timely responses. These reforms aim to strengthen privacy rights and improve the effectiveness of the ATI Act, which has become outdated in dealing with modern information management challenges.

Too much demand and too few staff, N.W.T.'s information and privacy commissioner tells MLAs

Jocelyn Shepel | CBC News

N.W.T. Information and Privacy Commissioner Andrew Fox has called for more staff in the Access and Privacy Office (APO), which handles access-to-information requests. Despite a drop in the number of requests in 2023/2024, the office has struggled to meet deadlines, with 22 cases of delayed responses. Fox praised the staff but noted they are under-resourced and overburdened, working with the same staffing levels since 2021. He argued that simply extending response timelines without additional resources would not solve the problem. The Access to Information and Protection of Privacy Act is up for review within the next 18 months, offering a chance to address these challenges.

Meta to push on with plan to use UK Facebook and Instagram posts to train AI

Matthew Weaver | The Guardian

Meta plans to proceed with its controversial initiative to use public Facebook and Instagram posts from UK users to train its artificial intelligence (AI) models, despite privacy concerns. The training will not draw on private messages or on data from users under 18. The UK's Information Commissioner's Office (ICO) stated it has not given formal approval but will monitor the situation after Meta introduced changes, including a simpler opt-out option for users. Privacy campaigners, such as the Open Rights Group and None of Your Business (NOYB), have criticized Meta's actions, accusing the company of treating users as "involuntary test subjects." While the plan remains paused in the EU amid regulatory pushback, Meta argues that training AI models on user data will help them reflect British culture and benefit UK businesses.

California Legislature Passes Landmark AI Safety Legislation

Matthew Shapanka | August Gweon | Covington Blog

The California Legislature has passed landmark legislation, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), which introduces new regulations for the development and deployment of AI. The legislation focuses on ensuring safety and security in AI technologies, particularly frontier AI models. The bill mandates annual risk assessments, third-party audits, and strong security measures to protect against risks such as misuse or unintended consequences of AI systems, and it now awaits Governor Gavin Newsom's signature or veto. California's efforts aim to lead the way in creating a framework for responsible AI development, setting an important precedent as discussions about AI regulation continue to evolve globally.

This law follows a broader trend of increasing AI regulation, building on concepts from the White House's 2023 AI Executive Order. California's proactive stance positions it at the forefront of AI governance in the U.S. and potentially globally, as other states and countries consider how to regulate AI technologies.

US 11th Circuit judge explores AI's role in interpreting multi-word legal terms

Bernise Carolino | Canadian Lawyer Magazine

In a recent concurring opinion from the U.S. 11th Circuit Court of Appeals, Judge Kevin Newsom explored the potential of using large language models (LLMs), such as ChatGPT, to help interpret multi-word legal terms. The case involved the phrase "physically restrained" in the context of a robbery, and Judge Newsom tested the capabilities of LLMs to see if they could assist in defining this term. While LLMs produced slightly varying responses, the core meaning remained consistent with conventional interpretations. Newsom argued that while these tools should not replace traditional methods like dictionaries, they could serve as supplementary aids in understanding legal text, especially in cases where legal phrases have nuanced meanings.

This exploration signals a broader potential role for AI in the legal field, though Newsom emphasized that AI outputs are not a substitute for human judgment in legal interpretation. The judge's experiment highlights how AI might assist in clarifying the ordinary meaning of terms, but its integration into legal processes would require careful oversight and use in tandem with established legal tools.

Who’s who in the genAI supply chain: ICO publishes draft guidance on controllership

Cindy Knott | Slaughter and May

The UK's Information Commissioner's Office (ICO) has released draft guidance to clarify roles within the generative AI (genAI) supply chain, focusing on determining who acts as the data controller, processor, or joint controller. This guidance is crucial for establishing responsibility, particularly in complex AI ecosystems where multiple parties are involved. The ICO aims to ensure compliance with data protection laws, especially regarding the use of personal data in AI model training. The draft emphasizes that organizations must clearly define their role in the data handling process and implement proper safeguards to protect users' privacy. This effort is part of a broader series of consultations by the ICO to address emerging AI-related issues and to provide clarity on how AI-driven technologies should be regulated.

Six Ways An Organization Can Benefit from an Internal Generative AI Use Policy

Caroline Poirier | Bennett Jones

Bennett Jones outlines six key benefits of implementing an internal generative AI use policy for organizations. These policies help manage risks and ensure compliance with existing legal frameworks, such as data protection laws and intellectual property regulations. The key benefits include:

1. Mitigating AI unreliability

2. Ensuring data privacy

3. Addressing bias

4. Protecting intellectual property

5. Enhancing cybersecurity

6. Aligning with ESG goals

New PIA for US Secret Service’s use of facial recognition raises questions

Anthony Kimery | Biometric Update

The U.S. Department of Homeland Security issued a Privacy Impact Assessment (PIA) for the Secret Service's use of facial recognition technology, raising privacy concerns regarding the collection and use of personal data. While the technology assists in investigating financial and cyber crimes, it remains unclear how it is used to detect threats against high-profile protectees, such as the president. The Secret Service limits the use of facial recognition to authorized investigations and does not use it for real-time public surveillance. However, concerns persist about inadequate integration of facial recognition and social media monitoring in preventing threats. Despite a 2019 pilot program, the Secret Service has paused some uses of facial recognition, citing privacy and operational concerns.

Age estimation tech faces an uphill battle in Australia

Masha Borak | Biometric Update

Australia is conducting an AU$6.5 million Age Assurance Technology Trial to test whether age estimation technologies, including biometric methods, can effectively regulate children's access to age-restricted content. The trial aims to address concerns about the accuracy of determining whether users are 18, 16, or 13 years old. While the UK’s regulator Ofcom expressed doubts about the technology's accuracy for specific age groups, industry leaders like Yoti argue that "broadly effective" age checks are sufficient. Australian regulators remain divided on the appropriate age for restrictions, with proposals ranging from bans on children under 13 to under 16. The trial will also examine public acceptance of these technologies, with results expected in Spring 2025 to inform further regulations under the Online Safety Act.

Instagram rolls out teen accounts with privacy, parental controls

Reuters

Instagram recently rolled out a new account type specifically designed for teenagers, which includes enhanced privacy and parental control features. This move comes as regulatory scrutiny intensifies globally over how tech platforms handle younger users. The new features aim to protect teens by limiting who can message them and reducing visibility of their profiles to strangers. Parental tools will also allow for greater oversight, giving parents the ability to monitor activity and set limits on their teens' use. This is part of a broader push by Instagram to ensure safer online environments for younger users amid mounting concerns over privacy and exposure to harmful content. The rollout highlights ongoing efforts to balance user safety with digital freedom.

TD Bank fined $28 million for sharing inaccurate and negative data on customers

Suzanne Smalley | The Record

TD Bank has been fined $28 million by the Consumer Financial Protection Bureau (CFPB) for sharing inaccurate and negative data about customers with credit reporting agencies. The bank's actions included incorrect reporting of delinquencies and bankruptcies, which impacted customers' access to credit, housing, and employment. Despite knowing about potentially fraudulent accounts, TD Bank continued to provide faulty information without investigating or correcting errors. Nearly $8 million of the fine will go to affected customers, with the rest being a civil penalty.

Mapping the Political Influence Industry in Canada

Cassie Cladis | Influence Industry

The case study on political influence mapping in Canada explores the operations of lobbying firms, consultants, and think tanks that shape policy decisions. It highlights how these entities influence government actions, often through direct contact with officials or media campaigns. The research provides insights into key players in Canada’s influence industry, their strategies, and the regulatory environment governing their activities. This study aims to enhance transparency and accountability in how political influence is wielded in Canada.

23andMe to pay $30 million in genetics data breach settlement

Sergiu Gatlan | Bleeping Computer

23andMe has agreed to pay $30 million in a settlement following a data breach that exposed the genetic data of millions of users. The breach involved hackers accessing sensitive information, including ancestry and health data, which was subsequently put up for sale online. The settlement aims to compensate affected users and improve the company's data security practices. This breach has raised concerns about the security of personal genetic information in the growing direct-to-consumer DNA testing market.

B.C. releases progress report on online safety efforts

Government of British Columbia

The Government of British Columbia has announced funding for a new initiative aimed at improving access to justice. This program will enhance legal services, including increased support for family law, criminal defense, and Indigenous justice initiatives. The government is working to expand legal aid and other resources to ensure that British Columbians, particularly those in rural and underserved communities, can access the legal help they need. This initiative is part of a broader commitment to making the justice system more equitable and efficient.

Out of the Shadows: The CPPA’s Guide to Avoiding Dark Patterns

Andrew Folks | Technology Law

The California Privacy Protection Agency (CPPA) has released guidance to help organizations avoid using "dark patterns"—deceptive design choices that manipulate users into actions they wouldn't normally take. These patterns may mislead users into sharing more personal information or agreeing to unfavorable terms. The CPPA's guide provides strategies to ensure transparency and fairness in digital interactions, promoting user autonomy and better compliance with privacy laws. It highlights the importance of ethical design to build trust and avoid regulatory scrutiny.

UK’s privacy watchdog takes credit for rise of ‘consent or pay’

Natasha Lomas | TechCrunch

The UK's Information Commissioner's Office (ICO) has claimed credit for the rise of "consent or pay" models, under which users must either consent to tracking or pay to access content without it. The ICO argues this approach aligns with GDPR requirements for transparency and user control over how data is used, though critics counter that such models may limit access to free content. The regulator sees the trend as a step toward improved user privacy and consent practices.

LinkedIn trains AI models on user data ahead of changes to privacy policy

Aninda Chakraborty | Tech Monitor

LinkedIn has begun training its AI models on user data, including profile information and activity on the site, ahead of an update to its privacy policy. The changes align with LinkedIn's broader strategy of enhancing its AI-driven tools, such as job recommendations and personalized content. The move has sparked privacy concerns, as users question how their data will be used and shared under the new terms.

Sask. privacy commission investigates cases of info accessed by Saskatoon Police, pharmacy student

Andrew Benson | Global News

Saskatchewan's privacy commissioner is investigating separate cases in which personal information was improperly accessed, one involving the Saskatoon Police Service and another involving a pharmacy student on placement. The commissioner flagged concerns about internal data controls and recommended tighter security measures. The Saskatoon Police Service has accepted responsibility and is working to implement improved safeguards to prevent similar breaches in the future.

Long-overdue Australian privacy law reform is here – and it’s still not fit for the digital era

Katherine Kemp | The Conversation

Australia's long-awaited privacy law reforms have finally arrived but still fall short of the digital era's demands. While the reforms introduce stronger data protection measures and tougher penalties for breaches, critics argue they do not go far enough in regulating how companies collect and use personal data in a rapidly evolving tech landscape. The new laws also fail to adequately address key areas such as artificial intelligence and biometrics, raising doubts about their long-term relevance.

Brazilian Data Protection Authority Regulates International Data Transfers

Hunton Andrews Kurth

Brazil's data protection authority (ANPD) has introduced regulations governing international data transfers under the General Data Protection Law (LGPD). The new rules aim to ensure that data transfers outside Brazil comply with appropriate safeguards, such as standard contractual clauses, global corporate rules, or certifications. This regulation is intended to strengthen Brazil's data protection framework by aligning it with international standards and ensuring that personal data remains protected when transferred abroad.

UK datacentres to be designated critical infrastructure

PA Media | The Guardian

The UK government has designated data centres as critical national infrastructure, citing rising cyber threats and the importance of securing the digital services that underpin sectors such as healthcare and finance. The designation brings stricter regulations and enhanced security requirements for data centres, helping to safeguard the country's digital backbone against potential attacks.

Security measures fail to keep up with rising email attacks

Help Net Security

Email-based cyberattacks have significantly increased in 2024, with threat actors employing more sophisticated tactics to exploit vulnerabilities. Attackers are increasingly using phishing schemes and business email compromise (BEC) to bypass traditional security measures. Many organizations struggle to keep up with evolving threats, particularly as AI-generated content allows for more convincing attacks. Security experts emphasize the need for improved email filtering systems, employee training, and proactive monitoring to mitigate these risks.

PwC plans to track employees' location while at work. Is this practice legal in Canada?

Christl Dabu | CTV News

PwC plans to track the location of its employees while at work, raising legal and privacy concerns in Canada. Under Canadian law, employers can collect personal data like location if there is a valid reason and if employees are informed of the purpose. However, the practice must comply with privacy legislation, such as ensuring transparency, limiting data collection to what's necessary, and allowing employees to opt out where possible. Legal experts emphasize balancing operational needs with employee privacy rights.
