Week of 2024-10-11

Another beluga dies at Marineland as Ontario says little on 4-year probe into park

Liam Casey | CBC News

The death of another beluga whale at Marineland in Ontario has renewed scrutiny of the provincial Animal Welfare Services' four-year investigation into the park, about which the province has said little. Concerns have also been raised by the privacy commissioner regarding the transparency of animal welfare practices and the facility's handling of related data. The commissioner emphasized the need for clearer public communication and access to information, particularly when it concerns the health and treatment of animals in facilities like Marineland, which has faced previous scrutiny for similar incidents.

Risks linked to AI, cybercrimes have intensified, OSFI says

Naimul Karim | Financial Post

The Office of the Superintendent of Financial Institutions (OSFI) in Canada has raised concerns about the growing risks of AI and cybercrimes for the financial sector. As banks increasingly adopt AI-driven technologies, OSFI warns that the threat landscape is expanding, with more sophisticated cyberattacks targeting sensitive data and financial systems. The regulator urges financial institutions to strengthen their defenses and adopt robust cybersecurity measures to mitigate these risks.

Opposition Delays of Canada’s AI Framework ‘Embarrassing,’ Minister Says

Mathieu Dion | BNN Bloomberg

Canada's Industry Minister, François-Philippe Champagne, expressed frustration over opposition delays to the legislation that would establish Canada's AI regulatory framework, calling the holdup "embarrassing." He stressed the urgency for Canada to implement comprehensive AI regulations to keep pace with global competition and ensure responsible innovation. As AI technology advances, Champagne emphasized the need for rules addressing risks related to privacy, ethics, and economic impact, urging swift action to maintain Canada's leadership in the tech sector.

World Economic Forum publishes report on AI

World Economic Forum

The World Economic Forum's publication "Governance in the Age of Generative AI" discusses the challenges and opportunities posed by the rapid development of generative AI technologies. It emphasizes the need for adaptive governance frameworks to ensure ethical, transparent, and accountable AI development. The report outlines key areas for consideration, including privacy, intellectual property, and the societal impacts of AI. It also highlights the importance of global collaboration in shaping AI policies that promote innovation while mitigating risks.

White House issues guidance for agencies on AI procurement

Jack Aldane | Global Government Forum

The White House has released new guidance to federal agencies on AI procurement, aiming to ensure responsible use of AI technologies in government operations. The guidelines focus on promoting transparency, accountability, and ethics when acquiring AI tools. They also emphasize assessing the risks and benefits of AI applications, ensuring data privacy, and safeguarding against biases in AI systems. This initiative is part of a broader effort to regulate AI use within the public sector.

PimEyes says Meta glasses integration could have ‘irreversible consequences’

Masha Borak | Biometric Update

PimEyes has raised concerns about the integration of its facial recognition technology with Meta's smart glasses, warning of potential irreversible consequences for privacy and surveillance. The company highlighted risks related to unauthorized facial tracking and identification in public spaces, which could lead to widespread privacy violations. PimEyes advocates for stricter controls and regulations to prevent misuse of the technology, ensuring individuals’ rights are protected in the evolving landscape of AI-powered wearables.

Canada endorses global effort for age-assurance standards to protect children's privacy

Angelica Dino | Canadian Lawyer Magazine

Canada has endorsed a global initiative to create age-assurance standards aimed at protecting children's privacy online. This effort seeks to establish clear guidelines for verifying the age of users on digital platforms, safeguarding young individuals from privacy violations and harmful content. The move aligns with international goals to provide better online protection for children while balancing privacy concerns with technological advancements.

First UK-US online safety agreement pledges closer co-operation to keep children safe online

UK Government

The UK and US governments have signed their first online safety agreement, committing to closer cooperation to protect children online. This agreement focuses on sharing best practices and developing safety frameworks to ensure digital platforms provide safer environments for young users. Both countries aim to tackle harmful content, improve online age verification, and establish stronger protections for children across digital services.

CDT Comments on NIST Digital Identity Guidelines With a Focus on Equity, Access, Privacy in Public Benefits Administration

Elizabeth Laird | Hannah Quay-de la Vallee | Nick Doty | Center for Democracy & Technology

The Center for Democracy & Technology (CDT) submitted comments on NIST's Digital Identity Guidelines, focusing on equity, access, and privacy in the context of public benefits administration. CDT emphasized the need for digital identity systems to be inclusive, particularly for marginalized populations, and to prioritize data privacy protections. They advocate for ensuring that digital identity verification processes do not create barriers to accessing public services while maintaining strict privacy standards.

Maryland school potentially violates student privacy rights by using AI detector

Chris Papst | Fox Baltimore

A Maryland school in Anne Arundel County is facing scrutiny for potentially violating student privacy rights by using an AI detector to monitor student assignments. The use of the AI tool, GPTZero, raises concerns about compliance with the Family Educational Rights and Privacy Act (FERPA), which protects student information. Parents and experts are questioning whether the use of such tools respects students' privacy, prompting a debate on how schools balance academic integrity and privacy rights.

23andMe is on the brink. What happens to all its DNA data?

Bobby Allyn | NPR

23andMe's financial troubles have raised questions about what will happen to the genetic data of its millions of users if the company is sold or collapses. The uncertainty follows a breach in which hackers accessed and posted sensitive genetic data from its users on the dark web, heightening alarm over how personal DNA information is stored and protected, with potential implications for users' medical privacy and identity security. The situation underscores the growing need for stronger safeguards around genetic data, particularly as DNA testing services continue to gain popularity.

US courts, regulators weigh in on online tracking in health care

Helena Engfeldt | Rachel Ehlers | IAPP

U.S. courts and regulators are increasingly scrutinizing the use of online tracking technologies in health care due to privacy concerns. Legal actions and investigations focus on how patient data is collected and shared through tracking pixels on health websites, potentially violating privacy laws like HIPAA. The growing use of digital tools in health care is prompting calls for clearer regulatory guidance to ensure that tracking practices respect patient confidentiality.

Red light cameras, photo radars now allowed in New Brunswick

Avery MacRae | CTV News

New Brunswick has passed new legislation allowing the use of red-light cameras and photo radar to monitor traffic violations. These automated systems will help enforce traffic laws by detecting speeders and drivers running red lights, with the goal of improving road safety. The introduction of this technology follows similar measures in other Canadian provinces, aiming to reduce accidents and enhance enforcement without requiring a constant police presence.

Canada ponders 'top secret' data cloud as allies push ahead with intelligence-sharing plans

Murray Brewster | CBC News

Canada is weighing whether to build a "top secret" cloud environment for classified data as close allies, including the U.S., U.K., and Australia, push ahead with intelligence-sharing plans built on such infrastructure. Canada is not a formal member of AUKUS, the security partnership among those three countries, but there are discussions about how it could participate in aspects of the arrangement, particularly in areas related to defense and technology collaboration. The issue highlights Canada's evolving relationship with its allies amid growing concerns over cybersecurity and technological advancement in defense.

Neural data privacy an emerging issue as California signs protections into law

Suzanne Smalley | The Record

California has passed a neural data privacy law, making it one of the first U.S. states to regulate the collection and use of brainwave data from neurotechnology. The law is designed to safeguard individuals' neural information from misuse by companies, preventing potential privacy violations as neurotechnology advances. Experts, including Rafael Yuste, have praised the law as an essential step in protecting mental privacy as brain-computer interfaces become more widespread.

Meta must limit data use for targeted advertising, top EU court rules

Foo Yun Chee | Reuters

The European Union's top court has ruled in favor of privacy activist Max Schrems in his dispute with Meta, finding that the company must limit the personal data it uses for targeted advertising. The Court of Justice held that the GDPR's data minimization principle prevents Meta from aggregating and retaining users' personal data indefinitely for advertising purposes. The decision strengthens data protection enforcement in Europe and could impact how U.S. tech giants manage data from European users.

Privacy Commissioner calls for interoperable privacy laws at Alberta committee review

Angelica Dino | Canadian Lawyer Magazine

Canada's Privacy Commissioner urged the adoption of interoperable privacy laws across provinces during Alberta's committee review. The Commissioner highlighted the importance of cohesive legislation to protect Canadians' personal information in an increasingly digital economy, calling for harmonized laws that can adapt to new technological challenges. This approach would enhance privacy protections while reducing the compliance burden on businesses operating in multiple regions.

Privacy Commissioner's Annual Reports on Access to Information and Privacy Acts tabled in Parliament

Angelica Dino | Canadian Lawyer Magazine

The Privacy Commissioner's 2023 annual reports on the Access to Information and Privacy Acts were tabled in Parliament, highlighting challenges related to delays in access to information and gaps in privacy protection in the digital age. The reports urge the government to modernize both laws to better align with the current technological landscape and growing data privacy concerns. The Commissioner also emphasized the need for stronger transparency and accountability mechanisms across federal institutions.

Timmins police warn of fraudsters using AI to mimic voices of grandchildren

Lydia Chubak | CTV News

Timmins Police are warning the public about a new fraud scheme where scammers use AI to mimic the voices of victims' grandchildren in distress calls. The fraudsters, posing as the grandchildren, ask for immediate financial assistance under false pretenses, exploiting the emotional vulnerability of the grandparents. Authorities urge residents to verify such calls by contacting family members directly before taking any action to avoid falling victim to this scam.

Internet History Hacked, Wayback Machine Down—31 Million Passwords Stolen

Davey Winder | Forbes

The Internet Archive, which operates the Wayback Machine, suffered a breach resulting in the theft of records for 31 million user accounts, including passwords, and the Wayback Machine was knocked offline. The incident highlights vulnerabilities in the security of widely used digital tools and platforms and has raised significant concerns about online safety, particularly for users whose data may now be compromised. Security experts are urging individuals to update their passwords and use two-factor authentication to protect their accounts.

Waterloo, Ont. tech company responds to surveillance, spyware allegations

Spencer Turcotte | CTV News

A tech company in Waterloo, Ontario, is responding to allegations that its software was used for surveillance and spyware purposes. The company denies these claims, emphasizing that its tools are designed for legitimate business use, such as monitoring employee productivity, and not for illegal surveillance. The issue has sparked concerns over data privacy and the ethical use of monitoring software, leading to increased scrutiny of how such technologies are deployed in the workplace.

Workplace privacy in US laws and policies

Müge Fazlioglu | IAPP

The article from IAPP outlines the evolving landscape of workplace privacy in the U.S., focusing on laws and policies that govern how employers can monitor employees. It discusses the balance between legitimate business needs, such as productivity tracking, and protecting employees' personal privacy. With growing concerns about surveillance technologies and data protection, it highlights the importance of compliance with state and federal privacy laws, including the need for transparency and consent.
