Week of 2024-10-04

Smart TVs are like “a digital Trojan Horse” in people’s homes

Scharon Harding | Ars Technica

The streaming industry is gaining unprecedented power to surveil and manipulate consumer behavior through vast data collection capabilities. Streaming platforms track users' viewing habits, preferences, and even emotional responses to content, allowing for highly targeted advertisements and personalized recommendations. This has sparked concerns about privacy and the ethical implications of such extensive data collection, as consumers may not fully understand the extent to which their behavior is being monitored and influenced.

Legal action underway to force Canadian Forces to release propaganda documents

David Pugliese | Ottawa Citizen

A legal action is being taken to compel the Canadian Forces to release documents related to propaganda and influence operations. This comes after concerns were raised about the military’s domestic influence activities, prompting demands for greater transparency. Critics argue that withholding these documents undermines public accountability, while the military has cited national security concerns for keeping them classified. The case underscores tensions between transparency and security in Canada’s defense operations.

2023-2024 ATIP Annual Reports tabled in Parliament

Office of the Privacy Commissioner of Canada

The Office of the Privacy Commissioner of Canada has tabled its 2023-2024 annual reports on the administration of the Access to Information Act and the Privacy Act. The reports highlight the office's efforts to enhance transparency, address privacy concerns, and improve access to information, while emphasizing the increasing complexity of privacy issues driven by evolving technologies and the need for stronger protection measures. They also detail the number of access to information requests received and processed, along with the challenges of meeting growing demands for accountability and data protection.

Ruling bars public release of “300 far-right groups” list funded by Liberal gov

Cosmin Dzsurdzsa | True North Nation

The article discusses a confidential list created by academic Barbara Perry and Ontario Tech University, which reportedly identifies over 300 groups in Canada considered part of the "far right." The list was commissioned by the federal government, and a ruling has barred its public release. Critics argue the list lacks transparency and accountability, as groups or individuals included may be unaware of their inclusion and have no recourse to challenge it.

License Plate Readers Are Creating a US-Wide Database of More Than Just Cars

Matt Burgess | Dhruv Mehrotra | Wired

License plate readers (LPRs) are being used to gather more than just vehicle data; they are now capturing political affiliations through bumper stickers, decals, and other signage on cars. This raises concerns about potential surveillance and privacy violations, as individuals could be monitored based on their political or personal beliefs. Critics argue that the technology could be misused, leading to profiling or discrimination. The increased use of LPRs highlights the growing intersection between surveillance technology and personal freedoms.

California Establishes AI Transparency Act

Joseph J. Lazzarotti | Kevin B. Hambly | Workplace Privacy Report

California has enacted the AI Transparency Act, requiring large providers of generative artificial intelligence (AI) systems to disclose when content is AI-generated and to make free AI-detection tools available. The law mandates transparency around AI-generated media, ensuring consumers can tell when the content they encounter was produced by AI. This move aims to bolster consumer protection, addressing growing concerns over provenance and accountability in AI use. The legislation reflects California's leadership in tech regulation and sets a precedent for AI governance.

One year later, AI code signatories happy with decision but want more company

Tara Deschamps | BNN Bloomberg

One year after signing Canada's voluntary AI code of conduct, companies have expressed satisfaction with their decision, though many would like to see more businesses join them as signatories. The code aims to promote responsible AI development, addressing ethical concerns around AI use, data privacy, and consumer protection, but participants believe broader uptake and further government action are necessary to enhance trust and accountability in the growing AI sector.

A courts reporter wrote about a few trials. Then an AI decided he was actually the culprit

Simon Thorne | Nieman Lab

A court reporter who covered multiple trials found himself wrongfully identified as a criminal by an AI system, highlighting serious flaws in AI's decision-making processes. The error raised concerns about the over-reliance on AI technology in criminal justice, particularly when algorithms misinterpret data or make incorrect associations. This case underscores the potential dangers of AI in sensitive areas, where inaccuracies can have damaging consequences for innocent individuals.

France appoints first AI minister

Jack Aldane | Global Government Forum

France has appointed its first Minister for Artificial Intelligence, signaling a significant shift in the country's commitment to AI development. The new role is aimed at overseeing AI policy, regulation, and innovation, ensuring that France remains competitive in the global AI race. The minister's responsibilities will include guiding ethical AI use, data protection, and fostering collaboration between the public and private sectors to advance AI research and implementation.

Civil Rights Commission Releases Report on the Federal Government’s Use of Facial Recognition Technology

Electronic Privacy Information Center

The U.S. Commission on Civil Rights has released a report analyzing the federal government's use of facial recognition technology (FRT). The report raises concerns about potential civil rights violations, especially regarding racial bias, accuracy, and privacy. It calls for stronger oversight and clearer policies to ensure FRT is used responsibly, particularly in law enforcement and public surveillance. The commission urges federal agencies to prioritize transparency and safeguard against the misuse of this powerful technology.

Students adapt Meta's smart glasses to dox strangers in real time

Mickey Carroll | Sky News

A group of students adapted Meta's smart glasses to identify and reveal personal information about strangers in real-time, raising concerns about privacy and the misuse of wearable technology. The project showcased the potential risks of integrating AI with wearable devices, highlighting the dangers of "doxing"—publicly exposing personal data without consent. This incident has sparked debates over the ethical implications of such technology and the need for stronger safeguards to prevent privacy violations.

Wicket follows through with stadium-wide express biometric concessions in Cleveland

Joel R. McConvey | Biometric Update

Wicket has expanded its biometric technology at a Cleveland stadium, enabling express concessions using facial recognition. The system allows fans to purchase food and drinks without needing cash, cards, or mobile devices: the facial recognition tech links to users' payment methods, streamlining transactions for quicker service. This marks a significant step in adopting biometric solutions for large-scale venues, offering convenience while raising questions about privacy and data security.

Opinion: To Help Rebuild Public Trust in Government, Harness AI

Anthony Ilukwe | Center for International Governance Innovation

The article from CIGI argues that governments can rebuild public trust by harnessing artificial intelligence (AI) to improve transparency, efficiency, and decision-making processes. By using AI responsibly, governments can address challenges in service delivery and policy implementation, while also engaging citizens more effectively. The article emphasizes that ethical guidelines, accountability, and privacy protections are essential to ensuring AI's beneficial use in governance.

Half a billion will regularly use digital identity wallets within 2 years: Gartner

Chris Burt | Biometric Update

Gartner projects that more than half a billion people will regularly use digital identity wallets within the next two years. These wallets are becoming a key component in digital transactions and identity verification, providing a secure method to store and present personal identification. The rise in adoption is driven by growing demand for streamlined, privacy-focused solutions across industries such as finance and government services.

Digital ID Isn't for Everybody, and That's Okay

Alexis Hancock | Electronic Frontier Foundation

The EFF article argues that digital IDs should not be universally imposed, emphasizing the importance of individual choice. While digital identification systems offer benefits like convenience and security, they can also raise privacy concerns and may not be suitable for everyone, particularly marginalized communities. The piece stresses the need for equitable solutions that prioritize privacy, transparency, and user consent over a one-size-fits-all approach.

School Monitoring Software Sacrifices Student Privacy for Unproven Promises of Safety

Bill Budington | Electronic Frontier Foundation

The EFF article critiques school monitoring software for compromising student privacy while offering unproven claims of enhancing safety. These tools often track students' online activities in and out of school, raising concerns about data misuse and over-surveillance. Despite promises of increased security, the article argues that there is little evidence to support their effectiveness in preventing harm, and calls for better privacy protections and a more balanced approach to student safety.

CDT Research Reveals Widespread Tech-Powered Sexual Harassment in K-12 Public Schools

Center for Democracy & Technology

The Center for Democracy and Technology (CDT) research reveals that technology-powered sexual harassment is prevalent in U.S. K-12 public schools. The report highlights how digital tools, such as school-issued devices and online platforms, have been misused to target students, exacerbating issues like cyberbullying and inappropriate content sharing. The study calls for schools to implement stronger safeguards, better oversight, and comprehensive policies to protect students from harassment while balancing privacy concerns.

Ex-Colorado county clerk gets 9-year prison sentence for voting data scheme in wake of 2020 election

CBC News

Tina Peters, a former Colorado county clerk, was sentenced to nine years in prison for her role in a breach of election system data. Peters was found guilty on multiple counts, including attempting to influence a public servant, after she allowed unauthorized access to, and copying of, sensitive voting system data. Her actions were part of efforts to promote false claims about the 2020 U.S. presidential election. Peters has become a prominent figure in election conspiracy circles, facing multiple legal challenges related to her actions.

Remember That DNA You Gave 23andMe?

Kristen V. Brown | The Atlantic

The Atlantic article examines what could happen to the DNA data of 23andMe's customers as the company struggles financially, raising the prospect that genetic data could change hands if the business or its assets are sold. Users are questioning whether their sensitive personal information is adequately protected or could be misused by commercial or governmental entities. The situation underscores broader concerns about data privacy in the age of genetic testing, with consumers often unaware of how their information might be shared or sold.

Want your DNA profile from the RCMP's DNA bank? You can't have it, a new ruling finds

Christopher Nardi | Ottawa Citizen

A recent ruling determined that individuals cannot access their personal DNA profiles stored in the RCMP's national DNA databank. The case involved a woman who requested her profile after it was collected as part of a criminal investigation, but the court decided that such information is not subject to access under privacy laws. The decision raises concerns about privacy rights and individuals' control over their genetic information.

Lapsi is rebooting the stethoscope as a health tracking data platform

Natasha Lomas | TechCrunch

Lapsi is reinventing the traditional stethoscope by turning it into a health-tracking data platform. The new device goes beyond listening to heartbeats, integrating AI and data analytics to monitor and track various health metrics over time. This innovation aims to provide both patients and healthcare providers with real-time, actionable insights into cardiovascular health, enhancing diagnostics and preventive care. Lapsi's platform represents a step toward more data-driven, continuous health monitoring in medical practices.

Privacy regulator probing I-MED for handing over private medical data used to train AI

Cam Wilson | Crikey

Australia's privacy regulator, the Office of the Australian Information Commissioner (OAIC), is investigating concerns over I-MED and Harrison.ai's handling of patient medical scan data. The issue involves questions about how patient data is being used for AI training and whether appropriate consent was obtained. The case raises broader concerns about privacy in the health sector and the use of personal data in developing AI technologies. The OAIC's investigation could lead to stricter regulations on how medical data is handled in AI development.

California enacts car data privacy law to curb domestic violence

Dan Levine | Kristina Cooke | Reuters

California has enacted a new car data privacy law aimed at curbing domestic violence. The law restricts the sharing of data from connected vehicles, such as location information, which could be misused by abusive partners. It emphasizes protecting survivors of domestic abuse by ensuring that sensitive data cannot be easily accessed or exploited. This legislation is part of broader efforts to safeguard personal privacy and security in the evolving landscape of connected technology.

Chicago stops using controversial ShotSpotter gunshot detection system

Suzanne Smalley | The Record

Chicago has ended its use of ShotSpotter, a gunshot detection system, following concerns over its effectiveness and potential civil rights issues. Critics have argued that the system led to over-policing in certain neighborhoods and failed to reliably identify gunshots. The decision reflects a growing skepticism about the use of surveillance technology in law enforcement and its impact on communities.

Calgary police release race-based data, spurring calls for policy changes around data collection

Lily Dupuis | CBC News

A new report analyzing race-based data from Calgary police interactions reveals that Black and Indigenous individuals are disproportionately subject to street checks and use-of-force incidents. The findings raise concerns about systemic racism and bias within the police force, prompting calls for reforms and better accountability measures. Calgary police have acknowledged the issue and committed to addressing the disparities. The report is part of broader efforts to improve transparency and reduce racial inequalities in policing.

IPC trilogy considering encryption-based, non-extractive cyber attacks

Jaime Cardy | Dentons Data

Dentons' article discusses a trilogy of Information and Privacy Commissioner (IPC) decisions considering cyberattacks involving encryption-based, non-extractive techniques. Unlike traditional breaches, these attacks don't steal data; they lock systems or encrypt information, making it inaccessible. This poses unique challenges for legal and regulatory frameworks that typically focus on data extraction. The discussion highlights the need for updated policies to address these evolving cyber threats and ensure robust protection against all forms of cyberattacks.

Privacy commissioner finds Saskatoon police members snooped system for 'personal reasons'

Bre McAdam | Saskatoon StarPhoenix

A report by Saskatchewan's privacy commissioner found that Saskatoon police officers improperly accessed private information from a police database for personal reasons. The investigation revealed that officers snooped on individuals, violating privacy protocols. The findings raise concerns about the internal controls and oversight of sensitive data within the police force. The commissioner has recommended stricter measures to prevent future misuse of the system and to safeguard personal information.

BCCA sends notice issue back to BC OIPC

Dan Michaluk | All About Information

The BC Court of Appeal has sent a notice issue back to the Office of the Information and Privacy Commissioner for British Columbia (BC OIPC) for further examination. The case involves disputes about the interpretation and application of privacy laws, highlighting the need for additional review by the OIPC. This decision emphasizes the ongoing complexity of privacy-related cases and the role of the BC OIPC in resolving such issues.

NIST Drops Password Complexity, Mandatory Reset Rules

Dark Reading

NIST has updated its guidelines, dropping requirements for mandatory password complexity rules and frequent resets. The changes reflect a shift towards prioritizing longer, user-friendly passwords over complex, hard-to-remember ones. NIST now emphasizes password length, screening against known-compromised passwords, and security measures such as multi-factor authentication (MFA) for stronger protection. This update aims to reduce user frustration and improve overall cybersecurity practices by focusing on practical and effective security measures.
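The approach NIST describes (favoring length and blocklist screening over composition rules and forced rotation, per SP 800-63B) can be sketched as a simple validator. The function name, minimum length default, and tiny sample blocklist below are illustrative assumptions, not details from the article:

```python
# Illustrative password check in the spirit of NIST SP 800-63B:
# enforce a minimum length, screen against a blocklist of known-bad
# passwords, and impose no composition (character-class) rules.

# Sample blocklist; a real deployment would screen against a large
# corpus of breached/common passwords.
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def is_acceptable(password: str, min_length: int = 8) -> bool:
    """Return True if the password passes length and blocklist checks."""
    if len(password) < min_length:
        return False
    if password.lower() in COMMON_PASSWORDS:
        return False
    # Deliberately no "must contain a digit/symbol/uppercase" rules.
    return True

print(is_acceptable("correct horse battery staple"))  # long passphrase: True
print(is_acceptable("Qwerty"))                        # too short: False
```

Note that a long all-lowercase passphrase passes while a short "complex-looking" password fails, which is exactly the trade-off the updated guidance endorses.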

Canada parliamentary watchdog finds intelligence agency hiring practice violates privacy

Ishrat Chahal | Jurist News

A recent report from a Canadian parliamentary watchdog revealed that the hiring practices of an intelligence agency violated privacy laws. The agency was found to have improperly collected personal information during its recruitment process, raising concerns about the protection of individual privacy rights. The watchdog has recommended changes to ensure that hiring procedures comply with privacy regulations and uphold transparency in how personal data is handled.

'It's insanity': over 80% of employees engage in performative work or 'fauxductivity'

Stacy Thomas | Human Resources Director

A recent survey reveals that over 80% of employees engage in "performative work" or "fauxductivity," where they appear busy but aren't producing meaningful results. This trend has been fueled by remote and hybrid work environments, leading employees to feel pressured to demonstrate productivity through constant online activity, even when unnecessary. The survey highlights the growing disconnect between actual performance and the appearance of busyness in modern workplaces.

Amazon can now deliver packages inside your garage, but Toronto tech expert questions how secure it is 

Janiece Campbell | Toronto Now

Amazon has introduced a new service allowing packages to be delivered directly inside customers' garages. The service aims to enhance convenience and security by preventing packages from being left outside, where they could be stolen or damaged, though a Toronto tech expert has questioned how secure granting delivery drivers access to one's garage really is. Users can track deliveries in real time through the Amazon app, and the service is part of Amazon's broader "Key" delivery offerings, which also include in-home and in-car delivery options.

Previous: Week of 2024-10-11

Next: Week of 2024-09-27