Week of 2024-12-10

Goodbye INAI: Senate approves elimination of Mexico’s watchdog agencies

Mexico News Daily

The Mexican Senate has approved a constitutional reform to eliminate seven autonomous regulatory and oversight agencies, including the National Institute for Transparency, Access to Information and the Protection of Personal Data (INAI). The responsibilities of these agencies will be transferred to various government ministries, such as the Economy Ministry and the Energy Ministry. Proponents, primarily from the ruling Morena party, argue that this move will reduce corruption and save public funds. However, critics express concerns that dissolving these independent bodies could weaken transparency and accountability, potentially consolidating power within the executive branch. The reform now awaits approval from a majority of Mexico's 32 state legislatures to become law.

Ontario Place redevelopment not 'fair, transparent or accountable,' auditor general finds

CBC News

Ontario Auditor General Shelley Spence's 2024 Annual Report scrutinizes several provincial government initiatives, highlighting significant concerns. The report reveals that the redevelopment costs for Ontario Place have escalated to $2.2 billion, with irregularities in the procurement process, including undue influence from the premier's office. Additionally, the decision to relocate the Ontario Science Centre is projected to cost an extra $400 million, surpassing initial savings estimates. The report also criticizes the government's inconsistent use of Minister's Zoning Orders (MZOs) and identifies conflicts of interest within the Ontario Land Tribunal. Furthermore, it notes that the province's opioid strategy is outdated, and recent closures of supervised-consumption sites lack thorough, evidence-based analysis.

Generative AI Will Increase Misinformation About Disinformation

Elise Thomas | Lawfare

The article "Generative AI Will Increase Misinformation About Disinformation" by Elise Thomas discusses how generative AI technologies can amplify the spread of misinformation and disinformation. Thomas argues that while concerns often focus on state actors using AI for disinformation campaigns, the majority of deceptive AI-generated content is likely to come from individuals and groups seeking profit. She highlights a case where a bot network, probably designed to promote cryptocurrency influencers, inadvertently influenced political discourse in Australia, demonstrating how AI-generated content can have unintended political impacts. The article emphasizes the need to address the broader ecosystem of AI-generated misinformation, not just state-sponsored activities. 

Media outlets, including CBC, sue ChatGPT creator

Anis Heydari | CBC News

A coalition of Canadian news organizations—including Torstar, Postmedia, The Globe and Mail, The Canadian Press, and CBC/Radio-Canada—has filed a lawsuit against OpenAI, alleging that the company used their copyrighted content without authorization to train its AI models, such as ChatGPT. The plaintiffs claim that OpenAI's data scraping practices violate their terms of service and infringe upon their intellectual property rights, seeking damages and an injunction to prevent further unauthorized use. OpenAI maintains that its models are trained on publicly available data in accordance with fair use and related international copyright principles, and asserts that it collaborates with news publishers, offering them options to opt out if desired. This legal action reflects a broader trend of media companies challenging AI firms over the unlicensed use of their content, with similar lawsuits emerging in the United States.

Canada to spend hundreds of millions to build new AI supercomputer

Murad Hemmadi | The Logic

The Canadian government has unveiled the Canadian Sovereign AI Compute Strategy, committing up to $2 billion to enhance the nation's artificial intelligence (AI) infrastructure. This investment aims to bolster Canada's position as a global AI leader by expanding domestic computing capacity. The strategy allocates up to $700 million to support the development of new or expanded data centres, up to $1 billion for public computing infrastructure, and up to $300 million to provide affordable compute power for small and medium-sized enterprises through the AI Compute Access Fund. This initiative is designed to ensure that Canadian businesses, researchers, and innovators have the necessary resources to develop AI solutions domestically, thereby strengthening Canada's AI ecosystem and economic growth.

Artificial Intelligence and Administrative Tribunals

Paul Daly | University of Ottawa

In his paper "Artificial Intelligence and Administrative Tribunals," Paul Daly examines the potential benefits and challenges of integrating AI into administrative tribunals. He suggests that these tribunals, designed for efficient decision-making, could leverage AI to enhance operations in areas like public communication, case assignment, and decision drafting. However, Daly emphasizes the necessity for careful, contextual analysis before implementing AI, considering the significant implications for individual rights and the importance of maintaining transparency and accountability. He advocates for a balanced approach, recognizing both the opportunities and risks associated with AI integration in administrative justice.

No more ID? Air Canada starts rolling out facial recognition technology at the gate

Christopher Reynolds | City News

Air Canada has introduced facial recognition technology for boarding most domestic flights departing from Vancouver International Airport (YVR), starting December 3, 2024. This optional program allows passengers to create a digital profile by uploading a passport photo and a selfie via the Air Canada mobile app, enabling them to board flights without presenting physical identification. The airline plans to expand this technology to additional airports and services in the future. While the initiative aims to streamline the boarding process, it has raised privacy concerns regarding data security and the handling of biometric information.

Australia passes social media ban for children under 16

Byron Kaye and Praveen Menon | Reuters

Australia has enacted the Online Safety Amendment (Social Media Minimum Age) Act 2024, prohibiting individuals under 16 from creating accounts on major social media platforms such as Facebook, Instagram, TikTok, Snapchat, Reddit, and X (formerly Twitter). The legislation mandates that these platforms implement robust age verification measures to prevent underage access, with fines up to AU$50 million for non-compliance. The law aims to protect the mental and physical health of young Australians by limiting their exposure to potential online harms. However, critics express concerns about the practicality of enforcement and potential unintended consequences, such as isolating vulnerable youth who rely on online communities for support. The legislation is set to take effect in late 2025, allowing time for platforms to develop and implement the required age verification systems.

Hand over your ID or your facial data? The would-you-rather buried in the teen social media ban

Ange Lavoipierre | ABC News

Australia's recent legislation banning social media access for individuals under 16 has sparked significant debate, particularly concerning the enforcement mechanisms required to verify users' ages. Implementing such a ban may necessitate the collection of personal data, including government-issued identification or biometric information like facial recognition scans, to confirm users' ages. This approach has raised privacy concerns among experts and advocates, who warn about the potential risks associated with extensive data collection and the security of sensitive personal information. Additionally, there are apprehensions that stringent age verification could inadvertently exclude marginalized groups who may lack access to official identification documents. The government has yet to provide detailed plans on how these verification processes will be implemented, leaving questions about the balance between protecting young users and preserving individual privacy rights.

As Australia bans social media for children, Quebec is paying close attention

Maura Forrest | Castanet

Australia's recent legislation banning social media access for individuals under 16 has drawn international attention, including from Quebec, Canada. In response, Quebec's government is exploring similar measures to protect minors online. However, experts caution that such bans may be difficult to enforce and could inadvertently isolate vulnerable youth who depend on online communities for support. The debate in Quebec reflects a global trend of balancing online safety for children with the challenges of implementation and potential unintended consequences.

Real Life Robotics takes fresh run at putting delivery robots on Canada’s sidewalks

David Reevely | The Logic

In September 2022, Pizza Hut Canada partnered with Serve Robotics to pilot autonomous sidewalk delivery robots in Vancouver, marking a significant step in integrating robotics into Canada's food delivery services. These robots are designed to navigate urban environments, delivering orders directly to customers' doorsteps. However, the deployment of such technology faces challenges, including navigating diverse weather conditions and addressing regulatory concerns. For instance, in Toronto, accessibility advocates have raised issues about sidewalk robots potentially obstructing pathways for individuals with disabilities. Despite these hurdles, the adoption of delivery robots represents a growing trend in Canada, aiming to enhance efficiency and reduce environmental impact in the delivery sector.

Ontario’s public-services digital push lags without a plan: auditor general

David Reevely | The Logic

Ontario's Auditor General, Shelley Spence, has identified significant shortcomings in the province's efforts to digitize public services. Despite investing approximately $100 million over the past decade, many high-demand services, such as those related to driver's licenses and health cards, remain unavailable online. The report criticizes the government's fragmented approach to digitalization and highlights insufficient safeguards to protect personal data from unauthorized access. Additionally, critical IT systems were found to lack strong password protections, underscoring the need for enhanced cybersecurity measures.

Revealed: bias found in AI system used to detect UK benefits fraud

Robert Booth | The Guardian

An internal assessment by the UK's Department for Work and Pensions (DWP) has revealed that an AI system used to detect welfare fraud exhibits biases based on age, disability, marital status, and nationality. The fairness analysis, conducted in February 2024, identified "statistically significant outcome disparity" in the system's recommendations for investigating universal credit claims. Despite these findings, the DWP maintains that human caseworkers make final decisions, asserting that the system's use is "reasonable and proportionate" given the estimated £8 billion annual loss due to fraud and errors. However, critics argue that relying on biased AI systems can lead to unjust scrutiny of marginalized groups, emphasizing the need for transparency and comprehensive fairness evaluations across all protected characteristics.
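The DWP has not published its methodology, but an outcome-disparity check of this kind usually reduces to comparing the rate at which each group's claims are flagged for investigation and testing whether the gap could plausibly be chance. A minimal sketch in Python, using invented figures rather than DWP data and a standard two-proportion z-test:

import math

# Hypothetical counts (illustrative only, not DWP data): universal credit
# claims flagged for fraud investigation, broken down by group.
flagged_a, total_a = 420, 10_000
flagged_b, total_b = 610, 10_000

# Two-proportion z-test: is the gap in flag rates larger than chance?
p_a, p_b = flagged_a / total_a, flagged_b / total_b
pooled = (flagged_a + flagged_b) / (total_a + total_b)
se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
z = (p_a - p_b) / se
# Two-sided p-value from the normal CDF, via the error function.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"flag rates: {p_a:.1%} vs {p_b:.1%}, z = {z:.2f}, p = {p_value:.2g}")

A statistically significant gap flagged this way is evidence of disparate outcomes, not by itself proof of unlawful discrimination, which is one reason critics are pressing for such analyses to be run and published across all protected characteristics.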

Are you tracking your health with a device? Here’s what could happen with the data

Hannah Fry | Los Angeles Times

Smartwatches and fitness trackers collect extensive personal health data, including heart rate, sleep patterns, and activity levels. This information is often stored in the cloud, where it may be accessed by companies and researchers for various purposes. While such data can offer valuable health insights, it raises significant privacy concerns, especially regarding potential misuse or unauthorized access. For instance, there have been instances where companies sold personal health data to advertisers without user consent. Additionally, features like location tracking can inadvertently reveal sensitive information, such as military base locations, as seen in past incidents. Users should be aware of these risks and take steps to safeguard their data, including reviewing privacy policies and managing device settings.

Alberta sees increase in abandoned health records and misuse of health information

Cindry Tran | Edmonton Journal

Alberta's Information and Privacy Commissioner, Diane McLeod, has raised significant concerns regarding two government bills aimed at amending access to information and privacy regulations. She highlights that the proposed legislation contains vague definitions and lacks sufficient safeguards, which could create legislative gaps. One particular issue is a provision allowing the disclosure of minors' personal information without their consent if deemed in their "best interest," without clearly defining who determines this or what criteria are used. Additionally, McLeod is apprehensive about changes that could grant the government increased control over information access, potentially leading to greater gatekeeping. In response, Technology and Innovation Minister Nate Glubish has stated that the government will review the commissioner's concerns and recommendations.

Court application filed against province's change to medical privacy laws

Rachel Morgan | City News

The Justice Centre for Constitutional Freedoms has filed a Notice of Application in the Supreme Court of Nova Scotia, challenging recent amendments to the province's Personal Health Information Act (PHIA). These amendments grant the government authority to compel healthcare providers to disclose patient information without consent for purposes such as healthcare system planning, resource allocation, and developing electronic health records. The applicants, including doctors and the Nova Scotia Civil Liberties Association, argue that this violates Sections 7 and 8 of the Canadian Charter of Rights and Freedoms, which protect individual autonomy and privacy. They contend that anonymized data could suffice for the government's stated objectives, and that patients should have the option to opt out of such disclosures. The case is expected to take several months before proceeding to a hearing.

Rollout of body-worn cameras for Hamilton cops to begin in spring

Sebastian Bron | The Hamilton Spectator

The Hamilton Police Service is set to begin a phased rollout of body-worn cameras for front-line officers in the spring. The move follows years of local debate over the technology's costs and benefits, and brings Hamilton in line with the growing number of Canadian police services that have adopted the cameras with the stated aims of improving transparency, accountability, and evidence-gathering.

Questions arise about effectiveness of body-worn police cameras in Canada

Alex Karpa | CTV News

The effectiveness of body-worn cameras (BWCs) in Canadian policing is under scrutiny, with mixed opinions about their impact. Advocates argue that BWCs enhance transparency, protect officers, and provide crucial evidence, potentially expediting criminal trials. Critics, however, highlight limitations, noting that camera footage only captures part of an incident and may not fully restore public trust. Privacy concerns and challenges in data management further complicate their implementation. As more police services adopt BWCs, their long-term benefits and effectiveness remain to be fully assessed.

Online Safety Act duties cover gen-AI and chatbots, Ofcom confirms

Meghan Higgins | Pinsent Masons

The UK's Online Safety Act encompasses generative AI and chatbot services, as clarified by Ofcom in a recent open letter to online service providers. Platforms that allow users to share AI-generated content or interact with chatbots are classified as "user-to-user services" under the Act, making them subject to its regulations. This includes services where users can create and share their own chatbots, which are then accessible to others. Ofcom's guidance emphasizes the importance of safety measures in these services to protect users from potential harms associated with AI-generated content. 

Emboldened 'manosphere' accelerates threats and demeaning language toward women after U.S. election

Christine Fernando | CTV News

Following the 2024 U.S. presidential election, the online "manosphere" has intensified its threats and demeaning language toward women, capitalizing on political polarization and societal divisions. Experts warn that the growth of these communities, fueled by misogynistic content and rhetoric, is leading to more targeted harassment campaigns and real-world safety concerns for women. Social media platforms are under scrutiny for failing to address this escalation, as algorithms often amplify inflammatory content for engagement. Advocacy groups are calling for stricter regulations and platform accountability to curb the proliferation of harmful narratives that perpetuate gender-based violence and discrimination. This trend highlights the urgent need for a broader societal response to address the toxic influence of online hate groups.

Government Finally Splits the Online Harms Bill: Never Too Late To Do The Right Thing…Or Is It?

Michael Geist

Justice Minister Arif Virani has agreed to split Bill C-63, the Online Harms Act, in response to public pressure. This decision separates the bill's provisions on internet platform responsibilities from those amending the Criminal Code and the Canadian Human Rights Act, which had raised concerns about potential overreach and threats to free expression. The Standing Committee on Justice and Human Rights is set to begin a "pre-study" of the remaining sections of Bill C-63, even though it hasn't passed second reading in the House of Commons. While this move addresses some issues, experts like Michael Geist emphasize the need for thorough examination of the bill's enforcement mechanisms, particularly the extensive powers granted to the proposed Digital Safety Commission.

Cybercrime is a 2024 intelligence priority for Canada, but one in six cybersecurity jobs go unfilled

Erik Henningsmoen | ICTC

In September 2024, the Canadian government released its first publicly available intelligence priorities document, emphasizing the critical importance of cybersecurity in safeguarding the nation's digital economy. This strategic focus aims to protect Canada's public and private digital systems, critical infrastructure, and information environments from a spectrum of cyber threats. However, a 2022 study by the Information and Communications Technology Council (ICTC) revealed a significant talent deficit in the cybersecurity sector, with approximately one in six positions remaining unfilled. Addressing this workforce gap is essential to effectively counter the evolving cyber threat landscape and ensure the resilience of Canada's digital infrastructure.

How Canadian tech companies are stepping up to fight cyber threats

David Israelson | The Globe and Mail

Canadian technology companies are actively enhancing cybersecurity by developing advanced solutions to counter the increasing threat of cyberattacks. Firms like eSentire utilize artificial intelligence to detect and mitigate threats in real time, providing managed detection and response services to a global clientele. Additionally, organizations such as the Canadian Centre for Cyber Security collaborate with industry partners to share intelligence and best practices, aiming to strengthen the nation's overall cyber resilience. Despite these efforts, challenges persist, including a significant talent shortage in the cybersecurity sector, which hampers the ability to effectively address the evolving threat landscape. Ongoing investment in technology and human resources is essential to bolster Canada's defenses against cyber threats.

Toronto-based arts-grant provider says nearly $10M was stolen by ‘cybercriminal intruder’

Alex Arsenych | CP24

The Foundation Assisting Canadian Talent on Recordings (FACTOR), a Toronto-based non-profit that provides grants to musicians, reported that nearly $10 million was stolen from its Scotiabank account by a cybercriminal who converted the funds into cryptocurrency. FACTOR has filed a lawsuit against Scotiabank, arguing that the bank should be responsible for reimbursing the loss. Scotiabank contends that the breach likely resulted from compromised login credentials due to phishing, employee fraud, or inadequate protection of information on FACTOR's part. The court will need to determine how the breach occurred and assign responsibility for the loss.

How the US is handling AI-driven hiring practices

Caitlin Andrews | IAPP

The United States is addressing the complexities of AI-driven hiring through a combination of federal guidance and state-specific legislation. At the federal level, the Equal Employment Opportunity Commission (EEOC) has issued guidelines to ensure that AI tools used in employment decisions comply with existing anti-discrimination laws. Concurrently, states like New York and Illinois have enacted laws mandating transparency and bias audits for AI systems in hiring processes. For instance, New York City's Local Law 144 requires employers to conduct annual bias audits of automated employment decision tools and to notify candidates about their use. These efforts aim to balance the efficiency benefits of AI in recruitment with the imperative to uphold fairness and prevent discrimination in employment practices.
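Audits under Local Law 144 centre on the impact ratio: a category's selection rate divided by the selection rate of the most-selected category, with the EEOC's "four-fifths rule" (a ratio below 0.80) commonly used as a rule of thumb for adverse impact. A minimal Python sketch of the calculation, with invented numbers standing in for a real tool's outcomes:

# Hypothetical outcomes from an automated screening tool (illustrative only).
selections = {
    "category_a": {"selected": 180, "applicants": 400},
    "category_b": {"selected": 120, "applicants": 400},
    "category_c": {"selected": 90, "applicants": 300},
}

rates = {g: d["selected"] / d["applicants"] for g, d in selections.items()}
best_rate = max(rates.values())  # benchmark: the most-selected category

for group, rate in sorted(rates.items()):
    impact_ratio = rate / best_rate
    flag = " <- below the 0.80 four-fifths threshold" if impact_ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f}{flag}")

Under the law, these ratios must be computed by an independent auditor, and a summary of the results must be made publicly available rather than merely kept on file.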

Ghost jobs: The phantom hiring trend with data privacy implications

Kayla Bushey | IAPP

"Ghost jobs" refer to job postings by legitimate companies for positions that do not actually exist, differing from fraudulent listings intended to deceive applicants. Employers may use ghost jobs to build a pool of potential candidates, project company growth, or motivate current employees by implying replaceability. However, this practice raises ethical concerns and significant data privacy issues, particularly regarding transparency and the proper use of collected personal information. In jurisdictions like the EU and California, data protection laws mandate that employers provide clear notice about the purpose of data collection during recruitment. Ghost job postings often fail to disclose their true intent, leading to potential violations of these legal requirements. Employers should be aware of the regulatory and reputational risks associated with this practice and adjust their recruitment strategies accordingly.

Apple accused of silencing workers, spying on personal devices

Daniel Wiessner | Reuters

Apple is facing a lawsuit filed by employee Amar Bhakta, alleging that the company illegally monitors employees' personal devices and iCloud accounts, and enforces policies that prevent discussions about pay and working conditions. Bhakta claims that Apple requires staff to install management software on personal devices used for work, granting the company access to personal data such as emails and photos. He also asserts that employees are prohibited from discussing workplace issues, including with the media, and are discouraged from whistleblowing. Apple has denied these allegations, stating that the claims lack merit and emphasizing that employees are trained annually on their rights to discuss working conditions.
