Week of 2024-08-23

When Ottawa uses the Official Languages Act to deny access to information

Matt Malone | Ashley Desautels | Policy Options

The article examines how the federal government invokes the Official Languages Act when responding to access-to-information requests, arguing that official-languages obligations are sometimes used to delay or deny the release of government records. While linguistic duality is a fundamental part of Canadian identity, the authors contend that it should not come at the expense of transparency, and they call for policies that uphold both bilingualism and Canadians' right of access to information.

Ottawa still mulling over bonus for CEO of CBC, but won’t make decision public

Mickey Djuric | Toronto Star

The Canadian government is still deciding whether to grant a bonus to the CEO of CBC but has stated that any decision made will not be disclosed to the public. This comes amid scrutiny and debates over transparency regarding executive compensation at the public broadcaster.

Alcohol sales policy catching up on age verification in the US, UK

Abigail Opiah | Biometric Update

The U.S. and U.K. are updating alcohol sales policies to better align with modern age verification methods, particularly focusing on biometric and digital ID technologies. This shift is driven by the need to enhance security and compliance in age-restricted sales. As biometric solutions, like facial recognition, become more common, both countries are examining how to integrate these technologies while balancing privacy concerns and regulatory requirements.

A booming industry of AI age scanners, aimed at children’s faces

Drew Harwell | Washington Post

The Washington Post article discusses the growing use of facial recognition technology for age verification online, particularly for children. While intended to protect kids from inappropriate content, the technology raises significant privacy concerns. Critics argue that collecting biometric data from minors could expose them to risks like data breaches and misuse. The debate highlights the tension between safeguarding children online and protecting their privacy.

We Need a Global AI Strategy: What Role for Canada?

Javier Ruiz-Soler | Daniel Araya | Centre for International Governance Innovation

The article emphasizes the need for a global strategy for AI governance, highlighting Canada's potential leadership role. It discusses the challenges posed by differing international regulations and suggests that Canada could bridge gaps between nations by advocating for inclusive, transparent AI policies. The article calls for Canada to leverage its expertise in AI ethics and human rights to shape global AI standards, ensuring they align with democratic values and protect fundamental rights.

American Bar Association Issues Formal Opinion on Use of Generative AI Tools

Linn Foster Freedman | Robinson & Cole

The American Bar Association has issued a formal opinion regarding the use of generative AI tools by lawyers. The opinion emphasizes that lawyers must exercise due diligence when using AI, ensuring they understand the technology, validate its output, and protect client confidentiality. The ABA also highlights the importance of transparency with clients about AI usage and warns against over-reliance on AI-generated content without proper review. This guidance aims to maintain professional responsibility while integrating AI into legal practice.

MIT just launched a new database tracking the biggest AI risks

Nicole Kobie | IT Pro

MIT has launched a new database aimed at tracking significant risks associated with artificial intelligence. The repository catalogues and classifies risks posed by AI systems, drawing on existing research and frameworks, to help researchers, policymakers, and the public better understand the potential dangers and challenges AI poses. The resource is intended to contribute to the development of safer and more responsible AI technologies by offering a shared reference point for identifying and mitigating risks.

Ontario Bill 194: Strengthening Cyber Security and Building Trust in the Public Sector

Daniel G.C. Glover | Marissa Caldwell | Béatrice Allard | Logan Dillon | McCarthy Tétrault

Ontario's Bill 194, titled "Strengthening Cyber Security and Building Trust in the Public Sector Act," aims to enhance cybersecurity and data privacy in Ontario's public sector. The bill introduces new requirements for protecting sensitive data, including mandatory reporting of cybersecurity incidents and the development of cybersecurity plans. It also seeks to establish a framework for managing artificial intelligence systems within the public sector. These measures are intended to bolster public trust in how the government handles cybersecurity and data protection.

Ontario Bill 194: Amendments to Reporting Requirements and Expanding Power for the Privacy Impact Assessment

Daniel Fabiano | Alex Cameron | Christopher Ferguson | Fasken

The article examines proposed amendments to Ontario's Bill 194, including changes to reporting requirements and expanded powers relating to privacy impact assessments in the public sector. It highlights the province's increased focus on strengthening cybersecurity measures, enhancing governance of artificial intelligence (AI), and improving privacy protections, so that public sector organizations in Ontario are better equipped to manage emerging digital threats and protect personal information. These changes reflect a broader trend toward more stringent digital governance in response to evolving technology and security challenges.

General Motors accused of selling data to insurers on 'bad' habits of drivers

Sky News

General Motors (GM) is accused of selling drivers' data to insurers without proper consent. The allegations suggest that GM used OnStar technology to collect detailed information on driving habits and then sold this data to insurance companies, potentially affecting insurance rates for drivers based on their behavior. The situation raises significant concerns about privacy and the ethics of data sharing between automakers and third parties, and GM has faced mounting scrutiny over its data practices.

FTC bans fake online reviews, inflated social media influence; rule takes effect in October

Rebecca Picciotto | CNBC

The FTC has introduced new regulations banning fake reviews and misleading endorsements, including those by social media influencers. These rules target deceptive practices such as paying for fake reviews or manipulating consumer ratings to falsely enhance a product's image. The FTC's move is part of a broader effort to increase transparency in online marketing and protect consumers from being misled by inauthentic content.

British civil service to target cyber specialists with new graduate scheme

Alexander Martin | The Record

The UK's Civil Service has launched the "Cyber Fast Stream" program to attract and train cybersecurity experts within government roles. This initiative aims to bolster the country's defences against growing cyber threats by providing specialized training and fast-tracking careers in the cybersecurity field. The program is part of the broader effort to strengthen national cybersecurity capabilities amid increasing global cyber risks.

The evolution of digital identity: From databases to blockchain

Rohan Pinto | Biometric Update

The article discusses the shift in digital identity management from traditional databases to blockchain technology. It highlights how blockchain offers a more secure and decentralized method for managing digital identities, reducing risks such as data breaches and identity theft. The use of blockchain can provide users with greater control over their personal information, potentially transforming how identities are verified and authenticated in various sectors, including finance, healthcare, and government services.

NIST Releases Second Public Draft of Digital Identity Guidelines for Final Review

NIST

The National Institute of Standards and Technology (NIST) has released the second public draft of its Digital Identity Guidelines for final review. These guidelines offer updated recommendations for digital identity verification and authentication to improve security and privacy. The revisions incorporate feedback from previous drafts and emphasize the importance of usability and security in identity systems. Stakeholders are invited to review and comment on this draft before the final version is published.

California’s Two Biggest School Districts Botched AI deals. Here Are Lessons From Their Mistakes

Khari Johnson | The Markup

California's two largest school districts made critical errors in their AI technology contracts, leading to privacy issues and unfulfilled promises. Lessons from their mistakes highlight the need for clear contracts, privacy protections, and rigorous vetting of AI vendors. School districts should also ensure transparency and have contingency plans for when technology doesn't deliver as expected.

Rollout of Alberta’s school cellphone ban raising concerns among teachers

Lisa Johnson | Toronto Star

The rollout of Alberta's school cellphone ban is causing concerns among teachers, who argue that the policy may be difficult to enforce and could disrupt learning. Teachers are also worried about how the ban might affect students' ability to access educational resources and manage emergencies. The ban, which aims to reduce distractions and promote focus in the classroom, is part of a broader debate on the role of technology in education.

The Googlization of the classroom: Is the UK effective in protecting children's data and rights?

Sonia Livingstone | Kruakae Pothong | Ayça Atabey | Louise Hooper | Emma Day | Computers and Education Open

The article examines the expanding role of Google's education platforms in UK classrooms and asks whether the UK is effective in protecting children's data and rights. The authors assess the legal and regulatory framework governing children's data in education and raise concerns about schools' reliance on commercial platforms and the adequacy of current protections for children's privacy and rights.

Precision nutrition and biometric privacy in health tech

Cheryl Saniuk-Heinig | IAPP

The article on IAPP discusses the intersection of biometric privacy and health technology, focusing on precision nutrition, which uses biometric and other health data to deliver personalized dietary guidance. It highlights the privacy concerns associated with collecting and using sensitive health information and emphasizes the importance of adhering to biometric privacy laws to protect individuals' data. The article suggests that as health tech companies grow, they must carefully navigate privacy regulations to ensure user trust and compliance.

Artificial intelligence helping North London cancer patients book appointments

Haringey Community Express

An initiative in North London is using artificial intelligence to help cancer patients book appointments more efficiently. The AI technology is being integrated into the booking system, making it easier for patients to schedule necessary follow-up appointments. This development aims to reduce the administrative burden on healthcare providers and improve patient care by streamlining the appointment process.

NZ Police finally has facial recognition policy - but is it strict enough?

Phil Pennington | Radio New Zealand

New Zealand Police have introduced a facial recognition policy after years of delay, but concerns remain about its adequacy. Critics argue that the policy may not be strict enough to address privacy and civil liberty issues, with questions raised about oversight and the potential for misuse. The policy's effectiveness in safeguarding citizens' rights while leveraging facial recognition technology is under scrutiny.

Starmer plan to expand facial recognition technology after far-right riots condemned by charities

Holly Bancroft | The Independent

The article discusses criticism of the UK government's plan to expand police use of facial recognition technology in response to the far-right riots. Charities and human rights organizations argue that the technology can exacerbate racial profiling and civil liberties violations, especially when deployed in highly charged environments, and have called for stricter regulation and oversight. The plan has also renewed debate over the technology's effectiveness and the potential for misuse.

Police use of facial recognition technology subject of upcoming public NIST meeting

Anthony Kimery | Biometric Update

The National Institute of Standards and Technology (NIST) will be holding a public meeting to discuss the use of facial recognition technology by police. This meeting is expected to cover the implications of facial recognition in law enforcement, addressing concerns about privacy, accuracy, and potential biases. The session aims to bring together various stakeholders, including government officials, tech experts, and civil rights advocates, to discuss standards and regulations needed to ensure responsible use of this technology.

Edmonton Police Tracked a Critic’s Social Media

Charles Rusnell | The Tyee

The Edmonton Police Service used social media tracking software to monitor the online activities of a critic who had publicly questioned police practices. This has raised concerns about privacy and the potential misuse of surveillance tools to target individuals for expressing dissent. The incident has sparked debate over the ethical implications of police monitoring citizens on social media, especially those who criticize law enforcement.

Federal Appeals Court Finds Geofence Warrants Are “Categorically” Unconstitutional

Andrew Crocker | Electronic Frontier Foundation

A federal appeals court has ruled that geofence warrants, which allow law enforcement to request data from all devices within a specific area, are categorically unconstitutional. The court found that these warrants violate the Fourth Amendment because they are overly broad and lack specificity, potentially exposing the private information of innocent people. This decision marks a significant legal precedent in the debate over privacy and the use of surveillance technology by law enforcement.

Russia launching more sophisticated phishing attacks, new report finds

Stephanie Kirchgaessner | The Guardian

Russian cyber operations are increasingly targeting Western governments, businesses, and other organizations through sophisticated phishing and hacking attacks. These efforts are part of a broader campaign by Russia to destabilize its adversaries and gather intelligence. The report highlights the rising threat of cyberattacks originating from Russia, which often employ advanced techniques to breach security systems and steal sensitive data. The article underscores the need for heightened vigilance and robust cybersecurity measures to counter these threats.

Survey of Online Harms in Canada 2024 

Angus Lockhart | Mahtab Laghaei | Sam Andrey | Toronto Metropolitan University

The "Survey of Online Harms in Canada 2024" by the Digital Asset Institute examines the prevalence and impact of harmful online behaviors, such as misinformation, cyberbullying, and hate speech, on Canadians. The report highlights key trends, including rising concerns about online safety and the need for stronger regulations to protect vulnerable populations. It also explores public attitudes towards government intervention and digital literacy.

Privacy commissioner concerned over GN use of WhatsApp

Kierstin Williams | Nunatsiaq News

The Privacy Commissioner has raised concerns about the Government of Nunavut's use of WhatsApp for official communications. The Commissioner warns that conducting government business on the messaging app creates privacy risks, given WhatsApp's data-sharing practices and the limited control the government has over information sent through a third-party platform. The Commissioner suggests that more secure and appropriate communication tools should be considered for government use to protect sensitive information.

EDPB Statement on the Role of Data Protection Authorities in the AI Act

Doug McMahon | Catherine Walsh | McCann Fitzgerald

The European Data Protection Board (EDPB) issued a statement emphasizing the importance of data protection authorities (DPAs) in enforcing the AI Act. The EDPB highlights that DPAs should have a crucial role in monitoring AI systems, particularly in safeguarding privacy and data protection rights. They advocate for clear delineation of responsibilities among regulatory bodies to ensure that AI technologies are deployed ethically and in compliance with data protection laws.

Privacy regulator drops pursuit of Clearview AI as Greens call for more scrutiny on use of Australians’ images

Josh Taylor | The Guardian

Australia’s privacy regulator has dropped its legal pursuit of Clearview AI, the U.S. facial recognition firm, over its use of Australian citizens' images without consent. The decision follows Clearview AI's commitment to stop providing its services in Australia and to cease collecting or using Australian data. The case was initially pursued due to concerns over the privacy implications of the company's technology, which scrapes images from social media for facial recognition.

Florida company faces multiple lawsuits after massive data breach

Kevin Maimann | CBC News

A Florida-based company is facing multiple class-action lawsuits after a massive data breach exposed the personal information of a large number of people. The article discusses how victims of the breach are seeking legal recourse, the challenges organizations face in protecting sensitive information, and the impact such breaches have on individuals and businesses.

Hamilton cyberattack costs hit $7.4 million

Teviah Moro | The Hamilton Spectator

The cost of the cyberattack on Hamilton's city systems has reached $7.4 million, according to a recent report. This figure includes expenses related to response efforts, system restoration, and ongoing cybersecurity measures. The attack, which occurred earlier this year, severely disrupted municipal operations and has led to significant financial impacts for the city. The city is continuing its efforts to fully recover and strengthen its digital defenses against future threats.

In landmark for post-quantum encryption, NIST releases three algorithms

Alexander Martin | The Record

The National Institute of Standards and Technology (NIST) has released three finalized post-quantum encryption algorithms aimed at protecting data from future quantum computing threats. These algorithms are designed to replace current encryption methods that could be vulnerable to quantum attacks, ensuring long-term data security. The release marks a significant step in cryptography, focusing on safeguarding sensitive information as quantum computing continues to advance.

Calls for privacy law overhaul after increasing number of workers forced to undergo blood tests

Bronwyn Herbert | Heidi Davoren | Australia Broadcast Corporation

Privacy advocates are calling for an overhaul of Australian privacy law after reports that a growing number of workers are being required by employers to undergo blood tests. Critics argue the practice is an invasion of privacy that could lead to discrimination and breaches of workers' rights, and they are calling for stronger safeguards to ensure such practices do not become the norm, emphasizing the need to balance workplace safety with individual privacy rights.

Witness presence at firing an important HR practice: privacy commissioner

Nunatsiaq News

Nunavut's privacy commissioner highlighted the importance of having a witness present during employee terminations as a key human resources practice. This recommendation comes in response to a complaint about a firing that was conducted in a manner that may have violated privacy rights. The presence of a witness can help ensure transparency and fairness, protecting both the employer and employee in sensitive situations.

Illinois Passes Bill to Regulate Use of Artificial Intelligence in Employment Settings

David Stauss | Laura Malugade | Owen Davis | Husch Blackwell

Illinois has passed a new bill regulating the use of artificial intelligence in employment settings. The legislation aims to protect workers from algorithmic bias by prohibiting employers from using AI in ways that discriminate in hiring, promotion, and other employment decisions, and by requiring employers to notify workers and applicants when AI is used in those decisions. This move reflects growing concerns over the ethical implications of AI in the workplace.

Previous: Week of 2024-09-06
Next: Week of 2024-08-16