Week of 2025-01-03

Information commissioners and ombuds call for ‘transparency by default’ in government services

Angelica Dino | Canadian Lawyer Magazine

In a joint resolution, Canada's information commissioners and ombuds have urged federal, provincial, and territorial governments to adopt a "transparency by default" approach in public service design and delivery. This initiative seeks to embed transparency into government systems and processes, ensuring public information is proactively accessible. The resolution emphasizes that government information belongs to the public and that openness is essential to counter misinformation and foster trust. Ontario's Information and Privacy Commissioner, Patricia Kosseim, stated that services designed with transparency enhance the credibility of government decisions. The regulators advocate for integrating transparency early in system development, proactive publication of information, and training for public service staff to cultivate a culture of openness.

A boy created AI-generated porn with the faces of girls he knew. Why Toronto police said he didn’t break the law

Calvi Leon | The Toronto Star

Toronto high school girls were victimized by AI-generated deepfake pornography created by a peer, a case that has sparked debate about the ethical and legal gaps surrounding the technology. The boy superimposed the girls' faces onto explicit images using photos taken from social media and private messages, causing widespread emotional distress. Police declined to press charges, citing insufficient evidence of distribution and gaps in Canadian law on AI-generated explicit content, particularly when it is kept private. The case has highlighted the lack of specific legal protections against deepfakes in Ontario and the territories, where intimate-image laws do not explicitly address manipulated images. Experts are calling for comprehensive legislation and education to address the growing problem of AI-generated abuse, emphasizing the harm and trauma it inflicts on victims.

ICO gen AI consultation response: 5 things you need to know

Bryony Bacon | Rebecca Cousin | The Lens

The UK Information Commissioner's Office (ICO) has released its response to the consultation on data protection and generative AI (genAI), maintaining its positions on purpose limitation, accuracy, and controllership, while revising its stance on lawful bases for web-scraping and individuals' rights. The ICO now emphasizes that developers must provide clear evidence when claiming that alternative data collection methods are unsuitable, particularly concerning the necessity requirement in legitimate interest assessments. Additionally, the ICO highlights the need for developers to adopt a privacy-by-design approach to effectively address individuals' rights, expressing concern over the lack of practical measures in place to facilitate the exercise of these rights. The ICO's final positions will be reflected in an upcoming joint statement with the UK Competition and Markets Authority, focusing on the interplay of data privacy, competition, and consumer law in the AI sector. Developers and deployers of genAI are advised to align their practices with the ICO's guidance to ensure compliance and mitigate potential risks.

The Dark Side Of AI: Tracking The Decline Of Human Cognitive Skills

Chris Westfall | Forbes

The article examines concerns about AI's impact on human cognition. As AI systems take on more tasks, humans risk becoming overly reliant on the technology, potentially eroding critical thinking and problem-solving abilities. It emphasizes the importance of maintaining human oversight and engagement so that AI serves as a tool to augment human capabilities rather than replace them.

World Economic Forum releases white paper on applications for autonomous AI agents

World Economic Forum

The World Economic Forum’s report, "Navigating the AI Frontier," examines the evolution of AI agents from simple rule-based systems to sophisticated entities capable of autonomous decision-making. It highlights advancements in large language and multimodal models that have enhanced AI agents' ability to perform complex tasks across industries like healthcare, education, and finance. The report also underscores the risks, such as ethical concerns and the need for transparency and robust governance frameworks. By addressing these challenges, stakeholders can responsibly harness AI agents to drive innovation and improve quality of life. The primer aims to equip decision-makers with insights into leveraging this rapidly advancing technology.

RCMP asks for help handling troubling number of kids radicalizing online

Catharine Tunney | CBC News

A recent report by the Five Eyes intelligence alliance highlights a concerning rise in youth involvement in terrorism-related activities across member countries, including Canada. Between April 2023 and March 2024, Canadian law enforcement arrested six minors under 18 for such offenses. The report emphasizes the need for a comprehensive societal response to address the factors leading to youth radicalization, particularly the role of online platforms in facilitating extremist recruitment. It calls for collaboration among governments, communities, and tech companies to develop effective prevention and intervention strategies.

Naughty or Nice? Wrapping up the Year with a Look at Children’s Privacy in Canada

Robbie Grant | McMillan

The Office of the Privacy Commissioner of Canada (OPC) has issued guidance under PIPEDA identifying six "No-Go Zones" for data practices deemed inappropriate regardless of consent. These include purposes violating Canadian laws, profiling leading to discrimination, causing harm such as financial loss or humiliation, and unauthorized tracking or surveillance. The guidance emphasizes that even with consent, data practices must align with what a reasonable person would consider appropriate. This framework aims to ensure organizations respect privacy rights while balancing legitimate business needs.

Australia Banning Kids from Social Media Does More Harm Than Good

Paige Collings | Electronic Frontier Foundation

The Electronic Frontier Foundation (EFF) criticizes Australia's Online Safety Amendment (Social Media Minimum Age) Act 2024, which bans children under 16 from using social media platforms. The EFF argues that the law's requirement for platforms to implement age verification measures poses significant privacy and anonymity risks for all users. They express concern over the lack of clarity in the legislation regarding enforcement and the potential for government overreach. Additionally, the EFF highlights the absence of conclusive studies linking social media use to harm in young people, suggesting that such restrictive measures may be unwarranted and could inadvertently harm both minors and adults who rely on online communities.

AI tools and student data: Teachers can endanger kids’ privacy without robust training

Wellington Soares | Chalkbeat

The increasing use of AI tools in classrooms has raised significant concerns about student privacy and data security. Many teachers, lacking formal training on AI applications, may inadvertently expose sensitive student information when utilizing these technologies. Instances such as the Los Angeles Unified School District's deployment of "Ed," an AI-powered assistant that was later discontinued due to data privacy issues, highlight the potential risks involved. Experts emphasize the necessity for comprehensive guidelines and training to ensure that AI integration in education does not compromise student privacy. Implementing robust data protection policies and educating educators on the ethical use of AI are crucial steps toward safeguarding student information.

Does Spying on Laptops Really Prevent High School Suicides?

Emma Camp | Reason

Schools are increasingly using AI-powered surveillance tools on student-issued devices to monitor for signs of self-harm or suicide. While intended to identify at-risk students, these systems often generate false positives, flagging innocuous activities like academic research or creative writing. Such errors can lead to distressing interventions, including unwarranted police involvement, causing emotional trauma for students and families. Critics argue that the efficacy of these monitoring tools in preventing harm is unproven and that they may infringe on student privacy without delivering the intended benefits.
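The flood of false positives the critics describe is largely a base-rate problem: when the condition being screened for is rare, even a fairly accurate classifier produces alerts that are mostly wrong. A quick sketch with invented numbers (none of these figures come from the article) makes the point:

```python
# Hypothetical illustration of the base-rate problem behind surveillance
# false positives. All numbers below are assumptions for illustration only.

students = 10_000           # monitored students (assumed)
at_risk = 10                # truly at-risk students (assumed 0.1% prevalence)
sensitivity = 0.95          # assumed chance the tool flags a truly at-risk student
false_positive_rate = 0.05  # assumed chance of flagging innocuous activity

true_alerts = at_risk * sensitivity
false_alerts = (students - at_risk) * false_positive_rate
precision = true_alerts / (true_alerts + false_alerts)

print(f"true alerts:  {true_alerts:.0f}")    # → 10
print(f"false alerts: {false_alerts:.0f}")   # → 500
print(f"share of alerts that are real: {precision:.1%}")  # → 1.9%
```

Under these assumed numbers, roughly 98% of alerts concern students who were doing nothing harmful, which is why each alert carries real potential for the distressing interventions the article describes.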

DGSI Publishes New Standard for Online Voting: CAN/DGSI 111-1

Digital Governance Standards Institute

The Digital Governance Standards Institute (DGSI) has released CAN/DGSI 111-1:2024, a new standard for online voting in Canadian municipal elections. This standard outlines technical design requirements and best practices to ensure online voting systems are trustworthy, secure, private, and transparent. By establishing these guidelines, DGSI aims to bolster voter confidence in both the electoral process and the supporting technology. As the first national voting technology standard in Canada, it sets a precedent for the governance of online voting at municipal and higher government levels. The standard is available for viewing and download on DGSI's website.

Somebody Spilled the Genes: 23andMe’s Downturn Highlights Insufficient Privacy and Data Security Safeguards for Consumer Genetic Data

Suzanne Bernstein | Abigail Kunkler | Matthew Contursi | EPIC

In October 2023, 23andMe experienced a significant data breach where hackers accessed the personal and ancestry information of approximately 6.9 million users, primarily targeting those who had opted into the DNA Relatives feature. The compromised data included names, birth years, relationship labels, and ancestry reports, though the company stated that raw genetic data was not accessed. This incident has intensified concerns about the privacy and security of consumer genetic data, especially as 23andMe faces financial instability, including a 98% loss in value and significant workforce reductions. The Electronic Privacy Information Center (EPIC) emphasizes the need for robust privacy safeguards and regulatory oversight to protect sensitive genetic information from unauthorized access.

HIPAA to be updated with cybersecurity regulations, White House says

Jonathan Greig | The Record

The U.S. Department of Health and Human Services (HHS) has proposed updates to the HIPAA Security Rule to strengthen cybersecurity within the healthcare sector. Key changes include mandatory encryption of electronic protected health information (ePHI), multifactor authentication, annual penetration testing, and regular internal audits. These measures aim to address the rising frequency and sophistication of cyberattacks targeting healthcare organizations. Estimated implementation costs are projected to be $9 billion in the first year and $6 billion annually afterward. The proposed rules are open for public comment, reflecting efforts to improve the resilience of U.S. healthcare systems against cyber threats.

Pickering city council moving meetings online due to threats, mayor says

Tyler Cheese | CBC News

Pickering City Council has moved its meetings online due to safety concerns following threats from supporters of a councillor. Mayor Kevin Ashe emphasized the decision was made to protect council members and the public. The duration of the virtual format has not been specified, but meetings will remain accessible to residents online. This shift highlights the challenge of ensuring public engagement while addressing security risks in local governance. Other municipalities are also exploring similar measures to balance safety and accessibility.

‘Joint strike force’ with U.S. part of Ottawa’s bid to beef up border

David Reevely | The Logic

The Canada–U.S. border has seen heightened security measures in response to increased illegal crossings and smuggling activities. Joint initiatives like the Smart Border Declaration aim to enhance cooperation, but challenges persist, including the exploitation of remote terrains. Recent reports show a significant rise in illegal crossings, with apprehensions increasing fiftyfold in some sectors. Both nations are deploying additional personnel, enhancing surveillance, and updating infrastructure to improve security while maintaining efficient cross-border travel. Despite these efforts, the border’s vast length and diverse geography continue to pose difficulties for comprehensive enforcement.

“Sometimes I Forget I'm Paralyzed.” How Neuralink’s First Patient Found Freedom by Connecting His Brain to a Computer

Alex Kantrowitz | Big Technology

In early 2024, 30-year-old quadriplegic Noland Arbaugh became Neuralink's first human patient, receiving a brain-computer interface implant that enables him to control a computer using only his thoughts. This technology has allowed Arbaugh to regain a sense of normalcy, as he can now interact online from his bed, stating, "Sometimes I forget that I'm even paralyzed." Neuralink, founded by Elon Musk and eight scientists in 2016, aims to merge human brains with computers to enhance human capabilities and achieve symbiosis with artificial intelligence. Arbaugh's successful use of the device marks a significant milestone in the development of brain-computer interfaces, opening new possibilities for individuals with paralysis.

Developing a Framework for Collective Data Rights

Jeni Tennison | CIGI

The Centre for International Governance Innovation (CIGI) has released a report by Jeni Tennison advocating for collective data rights to address the inadequacies of the individual consent model in big data and AI contexts. The report highlights cases in the UK where individuals impacted by algorithmic decisions lack sufficient protections, emphasizing the need for a collective approach to data governance. Tennison proposes integrating collective rights into legislation to ensure community interests are considered in data practices. The report calls for a shift toward inclusive governance frameworks that balance individual and collective needs in the digital age. This approach aims to address gaps in current data protection laws and foster equitable data management practices.

It’s 2025, where’s my flying car?

Anita Balakrishnan | The Logic

The Logic's 2025 predictions highlight transformative tech trends, including deeper integration of AI across industries and significant advancements in quantum computing. The expansion of 5G networks is expected to boost connectivity and drive innovation in areas like IoT and autonomous vehicles. Cybersecurity will remain a critical focus as reliance on digital infrastructure increases. Additionally, environmental concerns are anticipated to fuel the development of green technologies and sustainable practices in the tech sector. These shifts underscore the pivotal role of technology in shaping society and industry in the coming year.

Exploring the No-Go Zones: Overview of the Guidance Issued by the Canadian Privacy Regulator Relating to Inappropriate Purposes

Amir Kashdaran | Robbie Grant | McMillan

The Office of the Privacy Commissioner of Canada (OPC) has issued guidance on "No-Go Zones," identifying six data practices deemed inappropriate under PIPEDA, regardless of consent. These include activities that violate laws, involve discriminatory profiling, or cause harm such as humiliation or financial loss. Other prohibited practices include unauthorized publication of sensitive data, unjustified surveillance, and unauthorized tracking of individuals’ locations or activities. The guidance underscores the importance of aligning data practices with reasonable societal expectations and safeguarding privacy. This framework aims to balance business needs with ethical data governance and individual rights.

Iranian cyberattackers using detailed fake personas to run long cons: Cybersecurity agency

David Reevely | The Logic

Iranian cyberattackers are using detailed fake personas in prolonged social engineering campaigns targeting military and political figures. Operations such as "Operation Newscaster" involve creating elaborate social media profiles to gain trust and extract sensitive information. Other tactics include establishing fake human resources firms to identify and exploit individuals willing to share national security secrets. These sophisticated approaches underscore the evolving nature of cyber threats and the critical need for heightened awareness and vigilance against social engineering attacks.

Key factors for creating an effective whistleblower policy

Lauren Johnson | Human Resources Director

Creating an effective whistleblower policy requires confidentiality, protection from retaliation, and transparent, prompt action. Ensuring whistleblowers’ identities remain protected encourages reporting, while safeguarding against reprisals fosters trust and openness. Organizations must address reports swiftly and fairly to prevent issues from escalating and demonstrate a commitment to ethical standards. Clear internal reporting mechanisms reduce the likelihood of external escalation, and engaging experienced investigators ensures thorough and impartial handling of complaints. These elements are essential for fostering a culture of accountability and integrity within the workplace.

'The party's over': Toronto school boards cut down on sick leave abuse

Sarah Dobson | Human Resources Director

Toronto school boards are intensifying efforts to curb sick leave abuse by employing private investigators to monitor and identify misuse. This initiative has led to disciplinary actions, including terminations, raising concerns among education unions about privacy and potential intimidation. The Toronto Catholic District School Board (TCDSB) faces a $66-million budget deficit, with $44 million attributed to unfunded sick-leave costs, highlighting the financial strain of absenteeism. Union leaders argue that such measures overlook underlying issues like increased stress and burnout among teachers, advocating for a focus on employee well-being instead. The boards maintain that these actions are necessary to ensure sick leave policies are used appropriately and resources are allocated effectively.

New tax laws require web platforms to report gig workers' income to CRA

Sarah Petz | CBC News

The Canada Revenue Agency (CRA) has introduced new reporting rules for digital platforms starting January 1, 2024, to improve tax compliance among gig workers. Platform operators must collect and report details such as names, addresses, taxpayer IDs, and earnings of sellers using their platforms. Penalties for late filing will be waived until July 31, 2025, to allow operators time to adapt. These measures aim to ensure tax fairness and help gig workers understand their tax obligations in the growing digital economy. The CRA emphasizes the importance of this initiative for maintaining transparency and equity in taxation.

EEOC says wearable devices could lead to workplace discrimination

Daniel Wiessner | Reuters

The U.S. Equal Employment Opportunity Commission (EEOC) has cautioned employers about potential discrimination risks associated with mandating wearable devices like smartwatches and headsets in the workplace. Monitoring employees' biometric data through such devices may be considered a medical examination under the Americans with Disabilities Act (ADA), permissible only when job-related and necessary. Additionally, employment decisions based on data from wearables could lead to discrimination based on disability, pregnancy, sex, race, or other protected characteristics. EEOC Chair Charlotte Burrows emphasized that civil rights laws apply to all workplace technologies, urging employers to ensure their use of wearables does not result in discriminatory practices.
