Week of 2024-09-13
Ontario Superior Court makes Metrolinx disclose anonymous complainant’s identity
Bernise Carolino | Canadian Lawyer Magazine
The Ontario Superior Court has ordered Metrolinx to disclose the identity of an anonymous complainant following allegations of fraud and criminal activities against the applicants. The court granted a Norwich order, stating that the need to uncover the truth and pursue defamation and other tort claims outweighed the complainant’s privacy interests. Metrolinx had previously argued that revealing the complainant's identity would undermine public trust in its privacy policies, but the court disagreed, citing public interest and the serious nature of the allegations.
AI worse than humans in every way at summarising information, government trial finds
Cam Wilson | Crikey
The Australian Securities and Investments Commission (ASIC) trialled generative AI for summarizing documents and found that it performed worse than humans in every respect. The trial, conducted with Amazon using Meta's Llama2-70B model, involved summarizing five submissions to a parliamentary inquiry. The AI-generated summaries were compared against those written by ten ASIC staff who were given similar prompts. Reviewers, unaware of whether each summary was AI- or human-generated, found that the human summaries significantly outperformed the AI's on every criterion: the AI struggled with emphasis, nuance, and context, and sometimes introduced irrelevant or incorrect information. While the report acknowledged that newer models may improve over time, it concluded that AI should augment human tasks rather than replace them. Greens Senator David Shoebridge raised concerns about transparency around the use of AI on public submissions and stressed the need for human oversight. The trial reinforces that, at least in tasks like document summarization where accuracy, context, and nuance are key, human critical thinking and analysis remain superior.
Democratizing AI: Principles for Meaningful Public Participation
Michele Gilman | Data & Society
The article "Democratizing AI: Principles for Meaningful Public Participation" highlights the need for public engagement in the design and deployment of AI systems, emphasizing that such participation adds legitimacy and accountability to decision-making processes. It argues that involving the people most affected by AI systems can avert harmful impacts, improve system outcomes, and foster trust. Drawing on lessons from public participation in other fields, the article underscores the importance of carefully structured mechanisms to avoid counterproductive outcomes.
Americans Are Uncomfortable with Automated Decision-Making
Catalina Sanchez | Adam Schwartz | Electronic Frontier Foundation
A national survey by Consumer Reports revealed that Americans are uncomfortable with the growing use of artificial intelligence (AI) and algorithmic decision-making in areas like job applications, loans, and facial recognition. Nearly three-quarters of respondents expressed unease with AI being used to screen job interviews, and a similar majority objected to its use in financial or housing decisions. The survey also found that most people want to know what data AI is using to make decisions about them, and want a way to correct it. The findings highlight concerns about losing control over personal data and emphasize the need for transparency and accountability as AI is adopted in critical decision-making processes. The Electronic Frontier Foundation (EFF) calls for strict legal protections to ensure privacy, fairness, and the right to appeal decisions made by AI.
Australia: Government lays groundwork for online age limit preventing children from accessing social media
Claudia Long | ABC
The Australian federal government is moving to impose a minimum age for social media use, with children under 16 potentially banned from platforms like Facebook and Instagram. Prime Minister Anthony Albanese has expressed a preference for a higher limit in the 14-16 range, citing concerns about social media's impact on children's mental and physical health. This follows similar moves by South Australia to restrict social media access for children under 13. The legislation will be informed by a national trial of age verification technologies. While the push has bipartisan support, critics, including the Greens, raise privacy concerns and suggest education about harms rather than outright bans.
B.C. researcher trying to tap into youth attitudes towards AI and privacy
Jessica Durling | Kelowna Capital News
A researcher from Vancouver Island University is studying how youth perceive AI and privacy issues. The project seeks to understand young people's attitudes toward AI's increasing presence in daily life and how they view privacy in the digital age. The aim is to inform future policy and educational programs to ensure young people's voices are heard in discussions about technology's role in their lives. This initiative is part of broader efforts to examine the intersection of AI, privacy, and society.
Ford seeks patent for tech that listens to driver conversations to serve ads
Suzanne Smalley | The Record
Ford Motor Company has filed a patent application for technology that would monitor in-car conversations and collect data, including vehicle location and driving habits, to serve targeted ads to drivers. This "in-vehicle advertisement presentation" system would use audio signals from conversations, historical data, and route predictions to deliver ads based on a driver’s presumed destination, such as when going shopping. The system aims to optimize ad placement through both audio and visual means via the car's human-machine interface (HMI). However, the patent does not clarify how the collected data would be protected.
Ford emphasized that filing patent applications is part of standard business practice and doesn't necessarily reflect concrete product plans. The company has faced criticism in the past over patents that raised privacy concerns, such as a system for monitoring the speed of nearby cars or technology to repossess vehicles for missed payments.
Google’s AI Will Help Decide Whether Unemployed Workers Get Benefits
Todd Feathers | Gizmodo
Nevada plans to launch a first-of-its-kind system where Google’s generative AI will assist in determining whether unemployed individuals are eligible for benefits. This AI will analyze transcripts from unemployment appeals hearings and provide recommendations to human referees, who will make the final decision. The project aims to reduce a backlog of claims that have built up since the pandemic. While it promises to expedite the process, concerns have been raised about the potential for AI errors, such as generating inaccurate or misleading conclusions, which could impact high-stakes decisions. Ultimately, a human will review each AI recommendation to mitigate such risks.
Data Privacy Advocates Raise Alarm Over NYC’s Free Teen Teletherapy Program
Michael Elsen-Rooney | Yahoo
Data privacy advocates have raised concerns over New York City's new Teenspace teletherapy platform, which provides free mental health services to teenagers. The platform, operated by Talkspace under a $26 million contract with the city's Health Department, has come under scrutiny for potentially violating state and federal privacy laws that protect student data. Critics, including the New York Civil Liberties Union (NYCLU), worry that the system may not have adequate safeguards to prevent misuse or sharing of sensitive data, particularly since it collects information from vulnerable minors. Despite assurances from city officials that user data will be kept confidential and destroyed after 30 days if unused, questions remain about the transparency and accountability of the program. The debate highlights the tension between the need for accessible mental health services and the importance of safeguarding personal data.
RCMP covertly collected electronic evidence in 32 cases over a recent five-year period
Jim Bronskill | CTV News
The RCMP has revealed that it covertly collected electronic evidence in 32 cases over a five-year period from 2017 to 2022. This was part of investigations into national security, organized crime, and other serious criminal matters. The tools used by the RCMP included on-device software that could intercept encrypted communications, log keystrokes, and activate device cameras or microphones. These methods were employed only after obtaining court authorization and when other investigative techniques proved ineffective. The use of such technologies has raised concerns regarding privacy and legal oversight, prompting discussions about ensuring that these powerful tools are applied responsibly and ethically in law enforcement operations.
Facebook did not adequately protect user data in Cambridge Analytica scandal, FCA rules
Jessica Mach | Canadian Lawyer Magazine
The Federal Court of Appeal (FCA) ruled that Facebook violated Canadian privacy law during the Cambridge Analytica scandal by failing to adequately protect user data. The court found that Facebook did not obtain meaningful consent from users whose data was exposed, particularly friends of users who downloaded the personality quiz app "This Is Your Digital Life." This data, including information from over 600,000 Canadians, was later used by Cambridge Analytica for psychographic profiling ahead of the 2016 U.S. presidential election. The FCA's decision overturned a previous federal court ruling, emphasizing that Facebook's privacy practices and user consent mechanisms were inadequate under the Personal Information Protection and Electronic Documents Act (PIPEDA). The ruling underscores the responsibility of large tech platforms to respect Canadian privacy laws, even in complex international data-sharing cases.
Texts, social media a ‘minefield’ for people going through divorce: lawyer
N/A | CHCH
In a recent interview, Toronto family lawyer Sarah Boulby highlighted the increasing role of text messages and social media in divorce and custody cases. Digital communications are often presented as evidence, making them a "minefield" for those going through these legal proceedings. Texts, emails, and social media posts, particularly those made in the heat of the moment, can heavily influence key decisions, such as custody arrangements or financial disputes. Lawyers now advise clients to be cautious with their digital communications, as even seemingly harmless posts can be used to challenge credibility or claims made in court. Courts have imposed limits on how much digital evidence can be introduced to avoid overwhelming the legal process, but these communications continue to play a critical role in modern divorce cases.
Privacy commissioner again calls for more staff and training
Emily Blake | Cabin Radio
The NWT's Information and Privacy Commissioner, Andrew Fox, emphasized the need for more staffing and training within the territorial government to meet legal obligations regarding access to information. In his latest annual report, covering April 2023 to March 2024, Fox highlighted a persistent issue: government bodies failing to respond to access-to-information requests in a timely manner, with some responses months overdue. He criticized the lack of adequate resources, noting that while the government's privacy office was created to streamline information requests, its staffing has not increased despite growing demand. Fox stressed that improving service standards would require both more staff and better training. Additionally, while privacy breaches have decreased, Fox noted the importance of continuous training to prevent mishandling of sensitive data. However, Fox's recommendations are non-binding, leaving his office with limited power to enforce changes.
PwC tells employees it will use location data to police ‘back-to-office’ rule
Lianne Kolirin | CNN
PwC (PricewaterhouseCoopers) has introduced a new policy for its UK employees, taking effect on January 1, 2025, that requires staff to spend at least three days a week in the office or with clients. To enforce the policy, PwC will track employees' working locations and share that location data with staff on a monthly basis. The change aims to ensure clarity and consistency in how the hybrid work model is applied across the business. PwC emphasizes the importance of face-to-face interaction for maintaining client relationships and fostering a productive learning and coaching environment. While the company continues to offer hybrid work flexibility, this policy shift reflects a broader push to bring more employees back into office environments following the remote work trends accelerated by the pandemic.
Professional Regulation and Privacy Rights: A Case Comment on York Region District School Board v. Elementary Teachers’ Federation of Ontario, 2024 SCC 22
Gavin Fior | Weir Foulds
The Supreme Court of Canada's decision in York Region District School Board v. Elementary Teachers’ Federation of Ontario (2024 SCC 22) affirmed that the Canadian Charter of Rights and Freedoms applies to public school boards in Ontario. The ruling highlighted teachers' right to protection against unreasonable search and seizure in the workplace, under Section 8 of the Charter. The court criticized the failure of a lower arbitrator to adequately engage in a full Charter analysis and emphasized that privacy rights in the workplace should be assessed contextually.
‘Working for workers’ means more work for employers
Allan Wells | Steven Dickie | Sam Ip | Léonicka Valcius | Osler
The Working for Workers legislation, particularly the Working for Workers Four Act and the proposed Working for Workers Five Act, introduces significant changes to Ontario's Employment Standards Act (ESA). These new requirements include mandatory disclosures in job postings, such as salary ranges and the use of artificial intelligence (AI) in the hiring process. Employers must also remove any requirement for Canadian work experience and disclose whether the job posting is for an actual vacancy. Additionally, new regulations aim to address transparency for job candidates by obliging employers to inform interviewees about the status of their application within 30 days.
The Ministry of Labour is soliciting feedback on these changes, with a focus on ensuring these new rules strike a balance between protecting job seekers and minimizing administrative burdens for employers. Concerns raised include the potential challenges posed by AI disclosures and whether the proposed salary range caps are practical.
These legislative changes reflect Ontario's ongoing efforts to promote fairer labour practices but may increase compliance complexities for employers. Public consultations are ongoing until September 20, 2024.