Week of 2025-01-10

Airbnb successful on appeal contesting OIPC decision to disclose hosts' personal addresses

Roshni Veerapen | Harper Grey

The British Columbia Court of Appeal has upheld a decision that Airbnb hosts' addresses and license information, shared with the City of Vancouver, are considered personal information protected under the Freedom of Information and Protection of Privacy Act (FIPPA). This ruling came after the Office of the Information and Privacy Commissioner (OIPC) had initially ordered the City to disclose these details, classifying them as "contact information" rather than personal data. The Court found the OIPC's interpretation unreasonable, emphasizing the need for a contextual analysis of privacy implications. Consequently, the matter has been sent back to the OIPC for reconsideration, with instructions to properly assess the privacy concerns associated with disclosing hosts' personal addresses.

AI tools may soon manipulate people’s online decision-making, say researchers

Dan Milmo | The Guardian

Researchers at the University of Cambridge warn that AI tools could soon manipulate online decision-making by predicting and influencing human intentions. This emerging "intention economy" involves AI assistants understanding and forecasting user motivations, with companies potentially bidding for this information to sway consumer behavior or political opinions. The study emphasizes the need for regulation to prevent exploitation of personal motivations, which could undermine free elections, press freedom, and market competition. Dr. Jonnie Penn from Cambridge's Leverhulme Centre for the Future of Intelligence highlights the shift from an attention-based internet economy to one focused on human intentions, raising concerns about the ethical implications of such practices.

Apple urged to withdraw 'out of control' AI news alerts

Zoe Kleinman | Liv McMahon | Natalie Sherman | BBC

Apple’s AI-powered notification summaries, designed to condense breaking news alerts on iPhones, have faced backlash after generating inaccurate and misleading claims. Errors include false information about high-profile events and misrepresented content from trusted news sources like the BBC and New York Times. Critics, including the National Union of Journalists (NUJ) and Reporters Without Borders (RSF), argue the feature poses a significant misinformation risk and have called for its removal. Apple has pledged a software update to clarify when notifications are AI-generated but maintains the feature is optional and in beta testing. This controversy highlights broader challenges in ensuring accuracy and trustworthiness in generative AI applications, which have faced similar issues across tech platforms.

B.C. court upholds ban against U.S. company collecting people's data

Jeremy Hainsworth | Business In Vancouver

The British Columbia Supreme Court has upheld a ruling by the province's privacy commissioner that prohibits U.S.-based Clearview AI from collecting images of British Columbians without consent. Clearview AI had amassed a database exceeding three billion facial images, including those of B.C. residents, by scraping publicly available photos from platforms like Facebook, YouTube, and Instagram. These images were then used to provide facial recognition services to various clients, including law enforcement agencies in B.C. The court found that Clearview's practices violated provincial privacy laws, emphasizing that the collection and use of personal information without individuals' consent is illegal, even if the data is publicly accessible online. 

Apple to pay $95m to settle Siri 'listening' lawsuit

Imran Rahman-Jones | BBC

Apple has agreed to pay $95 million to settle a class-action lawsuit alleging that its virtual assistant Siri recorded users without consent and shared voice recordings with advertisers. Apple denies these claims, stating that Siri data has never been sold or used for marketing purposes and that privacy protections are built into the technology. The settlement addresses accusations that Siri recorded users without being deliberately activated and that those recordings were allegedly used to enable targeted advertising. Claimants, who owned Siri-enabled devices between 2014 and 2019, may receive up to $20 per device, while attorneys could collect approximately $30 million in fees. By settling, Apple avoids litigation and larger financial risks, though it continues to deny any wrongdoing.

Italian digital identity provider suffers data breach, 5.5M customers affected

Lu-Hai Liang | Biometric Update

InfoCert, a major provider of digital identity services in Italy, experienced a data breach exposing the personal data of 5.5 million customers, including names, tax codes, phone numbers, and email addresses. The compromised data was reportedly advertised for sale on the dark web. InfoCert attributes the breach to a third-party supplier’s systems and asserts that its own infrastructure, service credentials, and passwords remain secure. The company has launched an investigation and plans to report the incident to relevant authorities. This breach underscores ongoing cybersecurity challenges in the digital identity sector, particularly as reliance on such systems grows globally.

3 GTA school boards say student info may have been exposed in 'cyber incident'

CBC News

Three school boards in the Greater Toronto Area, including the Toronto District School Board (TDSB), reported a potential data breach involving PowerSchool, a platform storing student and staff information. The breach occurred between December 22 and 28, 2024, exposing unspecified data, although PowerSchool claims the accessed data has been deleted and not shared publicly. The Information and Privacy Commissioner of Ontario has been notified, and affected school boards are assessing the scope of the incident, promising to inform impacted individuals if sensitive information is confirmed compromised. Cybersecurity expert David Shipley highlights potential risks, including exposure of sensitive information like medical records or school bus stops, which could pose physical safety concerns. Shipley notes that PowerSchool’s large customer base makes it an attractive target for organized cybercrime, stressing the need for robust cybersecurity measures despite limited school board IT funding.

Edtech giant PowerSchool says hackers accessed personal data of students and teachers

Carly Page | TechCrunch

PowerSchool, a prominent K-12 educational technology company serving over 60 million students globally, experienced a data breach between December 22 and 28, 2024, affecting schools across the U.S. and Canada. Hackers accessed its internal customer support portal using compromised credentials, potentially exposing sensitive information such as names, addresses, Social Security numbers, medical records, and grades of students and staff. PowerSchool stated that the breach impacted only a subset of schools and has since been contained. The company paid a ransom and received assurances that the stolen data has been deleted without further replication or dissemination. Despite the breach, PowerSchool continues to provide services as normal to its customers. 

Private student information may have been stolen in N.L. school security breach

Abby Cole | CBC News

The Newfoundland and Labrador English School District (NLESD) has reported a cybersecurity breach potentially compromising private student information. The breach, discovered on December 28, 2024, involved unauthorized access to the district's information systems; personal data, including student names, addresses, birthdates, and academic records, may have been accessed or stolen. The NLESD is collaborating with cybersecurity experts and law enforcement to investigate the incident and has implemented additional security measures to prevent future breaches. Parents and guardians are advised to monitor their children's personal information for suspicious activity and to follow the district's guidance on protecting against potential identity theft. The incident is part of a broader trend of cyberattacks targeting educational institutions, underscoring the need for robust cybersecurity measures, regularly updated security protocols, and training for staff and students on recognizing and preventing cyber threats.

The government can’t ensure artificial intelligence is safe. This man says he can.

Ruth Reader | Politico

Dr. Brian Anderson, CEO of the Coalition for Health AI (CHAI), is spearheading efforts to establish private-sector-led assurance labs to vet AI tools for healthcare, aiming to bridge regulatory gaps left by limited government oversight. CHAI, supported by tech giants like Microsoft and Google and leading health systems such as the Mayo Clinic, plans to certify labs in 2025 to evaluate the safety and efficacy of medical AI technologies. While the Biden administration backed the initiative, critics warn that outsourcing oversight to private entities risks conflicts of interest and could disadvantage startups while prioritizing industry giants. With the incoming Trump administration potentially reshaping AI regulation, Anderson faces challenges in securing bipartisan support for his model. Despite concerns, Anderson emphasizes the urgency of creating frameworks to address AI's rapid pace of innovation, with CHAI also providing tools like "model cards" to enhance transparency and trust in AI-driven healthcare solutions.

Preparing RCMP body-cam evidence for court will be monumental task, prosecutor says

Allyson McCormack | CBC News

The RCMP is implementing body-worn cameras for 90% of frontline officers within a year, aiming to enhance transparency, trust, and evidence collection. However, concerns have arisen over the significant workload these cameras will generate for prosecutors and police. Shara Munn, president of the New Brunswick Crown Prosecutors Association, warns of a "huge influx of work," citing the challenges of reviewing and disclosing extensive video evidence. Police officials also highlight the administrative burden, including the time-consuming process of redacting footage to make it court-ready. Experts, while recognizing the potential benefits for accountability, caution that without additional resources for police and prosecutors, the system risks delays, stayed charges, and public safety concerns. The program, supported by $238.5 million in federal funding, underscores the need for strategic planning and adequate staffing to manage the increased data load effectively.

B.C. police street recording not privacy invasion, judge rules

Jeremy Hainsworth | Vancouver Is Awesome

A British Columbia Supreme Court judge has ruled that Vancouver Police Department's (VPD) video surveillance of Karina Papenbrock-Ryan in a public area did not violate her privacy or Charter rights. Justice Bruce Elwood found that the footage, captured by a public safety trailer (PST) equipped with cameras, showed Papenbrock-Ryan in a public setting without revealing biographical information or personal details. The court noted that while Papenbrock-Ryan did not consent to being recorded by the VPD, the footage was not distributed, reviewed, or preserved beyond a single screenshot. Papenbrock-Ryan sought a judicial declaration requiring warrants for mass public surveillance, but the judge found her expectation of privacy in a public place to be unreasonable. The ruling underscores that privacy concerns arise more from the misuse or publication of surveillance data than from its collection in public spaces.

Facebook and Instagram get rid of fact checkers

Liv McMahon | Zoe Kleinman | Courtney Subramanian | BBC

Meta has announced the replacement of its independent fact-checking system on Facebook and Instagram with a community-driven model similar to X's "community notes." CEO Mark Zuckerberg justified the decision as a move away from "political bias" and toward "free expression," aligning with President-elect Donald Trump’s criticism of the company's prior content moderation. While Meta asserts this shift will reduce censorship and errors in content moderation, critics, including campaigners against hate speech, view it as an attempt to align with the incoming administration's political priorities. The change will initially apply only in the U.S., where Meta acknowledges it may "catch less bad stuff" but aims to reduce wrongful content removals. The move reflects a broader industry trend towards deregulation of speech on platforms, a stark contrast to recent regulatory pushes in the UK and EU for greater oversight of online content.

The future of global data flows in an uncertain world

Eduardo Ustaran | IAPP

Ustaran’s article draws parallels between the anti-war message of Nena’s “99 Red Balloons” and the current state of global digital fragmentation, warning of the unintended consequences of restrictive international data transfer policies. It highlights how governments increasingly restrict data flows, citing privacy, economic sovereignty, and national security concerns, with examples such as the U.S. targeting data brokers and China tightening its Personal Information Protection Law. In Europe, regulatory interpretations following the Schrems II decision have led to an absolutist stance on data protection, limiting global collaboration despite changes in U.S. laws to address European concerns. The author argues that isolationism in data policy creates distrust and barriers in a world that depends on cross-border communication. A pragmatic approach, emphasizing proportional safeguards over zero-risk demands, is proposed to ensure global data flows remain secure and beneficial without stifling connectivity.

Bill C-27 awaits fate after Canada's prime minister resigns

Alex LaCasse | IAPP

Canadian Prime Minister Justin Trudeau's resignation and the prorogation of Parliament until March 2025 have jeopardized the future of Bill C-27, a proposed omnibus bill addressing privacy reform and AI regulation. If an election is called due to a nonconfidence vote, the bill could die on the order paper, requiring reintroduction by a future government. Experts note that while C-27 is imperfect, starting over would be a setback, with some advocating for severing the AI-focused AIDA component to expedite privacy reforms. A potential Conservative government, currently favored in polls, might deprioritize privacy and AI legislation in favor of aligning regulatory approaches with the U.S., particularly under President-elect Donald Trump’s laissez-faire stance on digital policy. This could strain Canada’s relationships with allies like the EU, which favors stricter AI governance. Experts warn that failing to modernize privacy laws or introduce AI-specific safeguards could jeopardize Canada’s trade relations and adequacy agreements, highlighting the high stakes for the incoming administration.

Amazon is ending remote work. Its employees hope the company reconsiders

Laura MacNaughton | CBC News

Amazon's recent mandate requiring employees to return to the office full-time has sparked discontent among some workers, who question the data supporting this decision. Employees, including system development engineer CJ Felli, argue that remote work enhances productivity and work-life balance. A letter signed by over 500 Amazon employees criticized the move, citing a lack of data-driven justification, despite Amazon's emphasis on data in its business practices. The shift reflects a broader trend among companies like Dell and AT&T, which are also phasing out hybrid work models. However, flexibility remains a top priority for many workers, with surveys indicating that hybrid arrangements are a key factor in attracting talent. Critics, including Felli, argue that enforcing in-office attendance could lead to dissatisfaction and retention issues, especially in a competitive labor market. Meanwhile, companies like Calgary fintech Gigadat, which transitioned to full-time in-office work early, highlight the challenges of fostering collaboration and engagement in remote settings.
