Week of 2024-10-25
Cabinet of Curiosities: The SCC on Cabinet Secrecy
Alain Zaramian | TheCourt
The Supreme Court of Canada (SCC) recently examined the issue of cabinet secrecy, underscoring the need to protect confidential discussions within the federal Cabinet while balancing that need against the public interest in transparency. The Court addressed the legal boundaries around disclosure of Cabinet documents, emphasizing that while cabinet secrecy is essential for frank policy discussions, certain conditions may warrant limited disclosure. The decision reinforces the complex dynamics between government accountability and the integrity of executive decision-making.
Manitoba premier says he will look at revealing more travel expenses
Steve Lambert | CTV News
Manitoba’s Premier has announced plans to consider increased transparency in reporting government travel expenses. This comes after calls for greater accountability on how public funds are spent on official travel. The proposal aims to enhance public trust by allowing Manitobans to see more detailed information about their leaders' travel costs and activities.
The promise of government-official accountability is vanishing - along with their texts
Matt Malone | The Globe and Mail
Concerns are rising over Canadian federal employees using disappearing messaging apps, like Signal and WhatsApp, on government-issued phones, bypassing transparency and record-keeping laws. Although platforms such as TikTok and WeChat are banned, other encrypted apps are not tracked, enabling potential use of non-secure platforms for official business. This practice challenges transparency efforts and underscores Canada’s need for stronger regulations on government communications, accountability, and data retention, as current oversight may fail to capture critical records of public decisions.
The hidden face of Law 25: Key changes regarding access to information
Antoine Guilmain | Marc-Antoine Bigras | Phillippe Dalmau | Gowling WLG
Quebec's Law 25 introduces significant privacy law changes, emphasizing enhanced consent, data portability rights, and stricter requirements for managing personal data. The legislation enforces accountability for companies, including mandatory privacy impact assessments, breach notifications, and appointing a privacy officer. These changes bring Quebec’s privacy standards closer to the European GDPR, strengthening individuals' control over their data and imposing stricter compliance obligations on organizations.
'Significant risks': Employees outpacing employers in adopting AI
Dexter Tilo | Human Resources Director
A recent article highlights the risks of employees adopting AI tools faster than employers can implement policies to manage them. This rapid, unregulated adoption may lead to security and compliance risks, as organizations may lack sufficient oversight and protocols for AI usage. Employers are advised to establish clear guidelines and training to ensure AI tools are used responsibly and align with company standards.
Huge AI vulnerability could put human life at risk, researchers warn
Andrew Giffin | The Independent
A recent article raises concerns about AI's safety and potential vulnerabilities in critical systems. Experts warn that rapid advancements in AI technologies, such as robotics, increase the risk of exploitation and unintended consequences, especially in systems without adequate oversight. Addressing these risks requires comprehensive regulation, transparency, and ongoing research to prevent AI-driven technology from becoming vulnerable to misuse.
Meta releases AI model that can check other AI models' work
Katie Paul | Reuters
Meta has released an AI model designed to evaluate and verify the work of other AI models, aiming to improve transparency and reliability in AI operations. This technology allows one AI to check for potential issues, such as biases or errors, in the output of another AI, fostering more robust oversight. Meta's approach represents a step toward self-regulating AI systems to ensure safer and more accurate deployments in various applications.
OECD launches G7 toolkit for AI in the public sector
Jack Aldane | Global Government Forum
The OECD has introduced a G7 toolkit to support the safe, secure, and trustworthy use of AI in the public sector. This toolkit provides guidelines for ethical AI adoption, addressing risks around data privacy, bias, transparency, and accountability. Aimed at helping governments navigate AI's complexities, the toolkit promotes responsible deployment while enhancing public trust in AI applications within government operations.
Canada mulls signing up to first global treaty on AI
Murad Hemmadi | The Logic
Canada is engaging in discussions about a potential global AI treaty aimed at establishing international standards for responsible AI use. This initiative focuses on ensuring AI transparency, safety, and ethical deployment, addressing growing concerns about the technology’s impact on society. The treaty would involve collaboration with global leaders to create a shared framework for governing AI, promoting trustworthy and secure AI development.
Europe launches ‘gait recognition’ pilot program to monitor border crossings
Suzanne Smalley | The Record
The European Union is launching a pilot program to study gait recognition technology, which identifies individuals based on their unique walking patterns. This biometric approach aims to explore new ways to enhance security and identification, though it raises privacy and ethical concerns. The study will assess gait recognition's effectiveness and potential applications, while also considering regulatory measures to protect individual rights.
Biometrics take center stage in daily life but privacy concerns loom: report
Abigail Opiah | Biometric Update
A recent report highlights the increasing role of biometrics in daily activities, from unlocking devices to verifying identity in public spaces, with concerns growing about privacy implications. While biometrics enhance convenience and security, critics warn that widespread use may lead to data misuse and inadequate protections for personal information. The report emphasizes the need for stringent privacy measures to balance technological benefits with individual rights.
Meta reboots Facebook face biometrics to combat ‘celeb-bait’ ads
Lu-Hai Liang | Biometric Update
Meta is reintroducing facial biometrics on Facebook to tackle "celeb bait" ads, which often use AI-altered celebrity images to mislead users. By deploying facial recognition technology, Meta aims to identify and remove such ads more effectively, thereby improving platform integrity and user experience. However, this move revives privacy concerns around facial biometrics and how Meta manages user data.
Social media faces big changes under new Ofcom rules
Stuart Clarkson | Ellen Kirwin | Tom Gerken | BBC
The BBC reports that social media platforms face significant changes as Ofcom sets out how it will enforce the UK's Online Safety Act. New codes of practice will require platforms to tackle illegal content and better protect children, with firms that fail to comply facing fines of up to 10% of global revenue. The rules mark a major shift in UK online regulation, pushing companies to redesign systems such as recommendation algorithms and age verification.
UK Considers New Smartphone Bans for Children
Matt Reynolds | Wired
The UK is considering new restrictions on smartphone usage for children, potentially banning their use for younger age groups in an effort to protect mental health and limit screen time. This follows concerns from experts and parents about the effects of excessive smartphone use on child development. The proposed measures would include guidelines for parents and schools to help manage children’s digital engagement more effectively.
New UK bill could force social media firms to make content less addictive for under 16s
Jessica Elgot | The Guardian
A new bill in the UK could require social media platforms to redesign features that make content highly engaging or addictive for children. The legislation aims to protect young users from excessive screen time and mental health impacts by mandating less manipulative design practices. The proposed law would be part of a broader effort to enhance online safety for children and enforce accountability on social media companies.
FTC Announces Final “Click-to-Cancel” Rule to Streamline Subscription Cancellation for Consumers
Hunton Andrews Kurth
The FTC has announced a new "Click to Cancel" rule requiring companies to simplify subscription cancellations, ensuring consumers can easily cancel services online. The rule mandates that cancellation processes be as straightforward as the subscription sign-up, enhancing consumer rights and transparency. Companies must also provide clear renewal reminders and disclosures, addressing complaints about difficult-to-cancel subscriptions.
Algorithms Policed Welfare Systems For Years. Now They're Under Fire for Bias
Morgan Meaker | Wired
Algorithms have been used to manage welfare systems globally, but they are now facing criticism for inherent biases that can affect vulnerable populations. Issues include inaccuracies in data processing, unfairly targeting individuals, and reinforcing socioeconomic disparities. These biases have led to increased scrutiny of AI-driven welfare policies, with calls for transparency and reforms to ensure fairness in automated decision-making.
Canadian minister signals push for cybersecurity and digital credentials
Matt Ross | Global Government Forum
Canada's Digital Government Minister is advocating for strengthened cybersecurity measures and the adoption of digital credentials to improve secure access to government services. This initiative aims to enhance data protection for Canadian citizens and streamline online government interactions. The push aligns with global trends toward digital identification, emphasizing security and privacy in the face of growing cyber threats.
A human rights-centered approach to digital public infrastructure
Access Now
Access Now’s guide to digital public infrastructure (DPI) emphasizes building secure, inclusive, and rights-respecting digital frameworks for public services. The guide explores how DPI can improve access to services like digital ID, payments, and health care, while addressing privacy concerns and digital equity. It advocates for transparency, accountability, and robust data protection in DPI design to empower citizens and protect individual rights.
Elections Ontario report: Maintaining a Level Playing Field: Addressing Misinformation and Disinformation Threats to Electoral Administration in Ontario
Elections Ontario
The Elections Ontario report titled "Maintaining a Level Playing Field" addresses the growing risks posed by misinformation and disinformation to the integrity of electoral administration. It outlines strategies for mitigating these threats, emphasizing the need for robust public education, enhanced monitoring, and collaboration with social media platforms to safeguard democratic processes. The report aims to provide a foundation for preserving fair and transparent elections in Ontario.
The rules for ‘granny cams’ in long-term care homes are unclear, with both families and staff looking for certainty
Ann Hui | The Globe and Mail
In Ontario, the rules governing so-called "granny cams," cameras that families install in long-term care rooms to monitor residents, remain unclear, leaving both families and staff seeking certainty. Advocates argue that such surveillance can help prevent abuse and improve accountability, while critics raise privacy concerns for both residents and staff. The debate reflects the ongoing tension between safety and privacy in vulnerable care settings.
Unveiling Alzheimer’s: How Speech and AI Can Help Detect Disease
Vector Institute
The Vector Institute is exploring how AI and speech analysis can aid in the early detection of Alzheimer’s disease. Researchers are developing models that analyze subtle changes in speech patterns to identify cognitive decline, potentially allowing for quicker and less invasive diagnostic methods. This AI-driven approach could transform Alzheimer’s detection, providing valuable tools for early intervention and improving patient outcomes by detecting symptoms earlier than traditional methods.
UK health secretary unveils plans for ‘patient passports’ to hold all medical records
Pippa Crerar | Denis Campbell | The Guardian
UK Health Secretary Wes Streeting has proposed a "patient passport" system, which would centralize all of an individual’s medical records into a single digital file. This initiative aims to improve healthcare efficiency, making it easier for medical professionals to access critical patient information across different services. However, the proposal has raised privacy concerns, with some questioning how securely sensitive health data can be managed and protected under a centralized system.
Microsoft Threat Intelligence healthcare ransomware report highlights need for collective industry action
Microsoft
Microsoft's latest report on healthcare ransomware highlights the urgent need for collective industry action to address increasing cyber threats targeting healthcare systems. The report emphasizes that ransomware attacks on healthcare facilities have grown in frequency and sophistication, endangering patient data and operational continuity. Microsoft calls for a unified approach involving healthcare providers, technology firms, and regulatory bodies to enhance defenses, share threat intelligence, and implement robust security protocols to protect sensitive patient information.
Google Translate adds Inuktut language to its online service
Kierstin Williams | Nunatsiaq News
Google Translate has added Inuktut, the language spoken by Inuit communities across Canada, to its translation service, marking a significant milestone in preserving and promoting Indigenous languages. This addition aims to support Inuktut speakers in accessing digital tools in their native language and helps increase awareness and understanding of Inuit culture. Advocates see it as a step forward in bridging communication gaps and ensuring language representation for Indigenous communities online.
Police seek public’s help to set up security camera database
Tyler Clarke | Sudbury.com
The Greater Sudbury Police Service is seeking community assistance to establish a security camera database. The initiative encourages residents and businesses with surveillance cameras to voluntarily register their devices with the police. This database will allow police to identify potential sources of footage when investigating crimes, aiming to enhance public safety and streamline investigative processes. Participation is voluntary, and registered cameras remain privately owned and operated.
'Long-awaited': Sudbury police details timelines, costs for bodycams
Rajpreet Sahota | CBC News
Sudbury police are set to begin a body camera pilot program in 2024, equipping some officers with body-worn cameras to enhance accountability and transparency in law enforcement. The pilot is part of an effort to improve community trust and provide an objective record of police interactions. Officials plan to evaluate the impact of body cameras on police operations and public safety, considering both privacy concerns and the potential benefits for documenting incidents accurately.
Even with new powers, CSIS says there are limits on its ability to name names
Catharine Tunney | CBC News
Despite recently expanded powers to disclose information beyond government, the Canadian Security Intelligence Service (CSIS) says legal and operational limits still constrain its ability to publicly name individuals implicated in foreign interference. Revealing names could expose sources and methods, and intelligence does not always meet the evidentiary standard needed to make public accusations. The issue highlights the ongoing tension between demands for transparency about foreign interference in Canadian politics and the practical constraints of intelligence work.
National security cited as B.C. drone engineer's devices seized
Jason Proctor | CBC News
Skycope, a Vancouver-based drone company, is facing scrutiny after national security concerns were raised regarding one of its former employees. Allegations suggest the employee may have had access to sensitive drone technology and potentially shared information that could compromise security. This case underscores broader issues in the tech industry related to safeguarding intellectual property and the risks of insider threats, particularly when national security is involved.
Colorado and California Get Ahead of Neural Data Regulation
Jiwon Kim | Justin T. Yedor | Baker Hostetler
Colorado and California have taken early steps to regulate neural data, focusing on protecting sensitive information derived from brain-computer interfaces and similar technologies. These regulations aim to safeguard individuals' mental privacy, addressing concerns about potential misuse and unauthorized access to neural data. By establishing privacy protections for this emerging category of personal information, both states are setting precedents for managing the ethical and privacy implications of neurotechnology.
Sandvine removed from U.S. blacklist after supplying tech for mass surveillance
Catherine McIntyre | The Logic
Canadian technology firm Sandvine has been removed from the U.S. blacklist after previously facing sanctions for allegedly supplying technology used in mass surveillance. Sandvine had been under scrutiny for providing deep packet inspection tools reportedly used by certain governments to monitor and control internet traffic. The removal from the blacklist follows a review, though concerns remain about the ethical implications of exporting surveillance technology to countries with questionable human rights records.
Perspectives on US Tech Policy After November
Gabby Miller | Justin Hendrix | Ben Lennett | Prithvi Iyer | Tech Policy Press
An article from Tech Policy Press gathers expert perspectives on the direction of U.S. tech policy following the November elections, exploring potential shifts in regulation and governance of technology companies. Key topics include addressing data privacy concerns, strengthening antitrust actions, and enhancing protections against misinformation. Experts emphasize the need for bipartisan support in enacting policies that balance innovation with accountability, reflecting growing public interest in a more secure and transparent digital ecosystem.
Fostering a thriving PETs ecosystem
Claudine Tinsman | Elea Himmelsbach | Calum Inverarity | Jared Robert Keller | Elena Simperl | Neil Majithia | Open Data Institute
The Open Data Institute (ODI) report on fostering a thriving privacy-enhancing technologies (PETs) ecosystem examines how tools such as differential privacy, federated learning, and secure multi-party computation can enable data sharing and analysis while protecting personal information. The report emphasizes the importance of standards, guidance, and cross-sector collaboration to drive responsible adoption of these technologies. By maturing the PETs ecosystem, organizations can unlock the value of sensitive data responsibly, benefiting individuals and businesses alike.
Privacy Commissioner Suggests MUN Board Members Refrain from Using Personal Email Accounts for Work Matters
VOCM
Newfoundland and Labrador's privacy commissioner has recommended that Memorial University (MUN) board members refrain from using personal email accounts for university business, to enhance data security and maintain privacy standards. The recommendation is intended to ensure that sensitive information and university-related correspondence remain within secure, monitored systems subject to access-to-information and records-retention requirements. Using official accounts also supports transparency and accountability, aligning with best practices in institutional data management.
Data Use and Access Bill: A fresh start for UK data policy
Emma Thwaites | Open Data Institute
The Open Data Institute (ODI) discusses the UK’s proposed Data Use and Access Bill, which aims to overhaul national data policies to improve data accessibility, security, and innovation. The bill seeks to establish a more streamlined framework for data sharing across sectors, balancing privacy concerns with the benefits of open data. ODI views the bill as a "fresh start" that could enhance the UK's data infrastructure, enabling responsible data-driven decision-making while protecting individual rights.
No, Chinese scientists didn’t use a quantum computer to break encryption
Murad Hemmadi | The Logic
Recent reports have incorrectly claimed that Chinese scientists used a quantum computer to break modern encryption methods. This misinformation stemmed from a misinterpretation of research into quantum computing capabilities, which have not yet reached the level necessary to compromise current encryption standards like RSA. The Logic clarifies that while quantum computing continues to advance, its practical application in breaking widely used encryption remains theoretical and years away.
Feeling safe with that complicated password? Think again, security experts say – complexity affects memorability and fosters unsafe practices
George Fitzmaurice | IT Pro
Security experts are challenging the traditional advice that complex passwords offer the best protection, arguing instead that password complexity can compromise security by reducing memorability and leading to unsafe practices. Complicated passwords are often hard to remember, causing users to store them insecurely or reuse passwords across platforms. Experts suggest that longer, more memorable passphrases may offer a better balance of security and usability, encouraging safe password management without sacrificing protection.
Fewer businesses hit by cyberattacks in 2023, but recovery costs rose sharply
David Reevely | The Logic
In 2023, fewer businesses reported cyberattacks, but those affected faced significantly higher recovery costs, according to recent data. The increase in expenses is attributed to the complexity of modern attacks, which often require more extensive mitigation efforts and specialized cybersecurity expertise. The findings suggest that while security measures may be reducing attack frequency, the financial impact of successful breaches is growing, emphasizing the need for robust cybersecurity investment.
AI can help streamline the hiring process, but getting rid of our biases is not so easy
Anthony Mancusa | Financial Post
Artificial intelligence is increasingly used to streamline hiring processes, but eliminating bias within these systems remains a significant challenge. AI tools, while efficient, can inadvertently replicate or even exacerbate human biases if trained on flawed or unrepresentative data. This has led to calls for improved oversight, transparency, and ongoing adjustments in AI algorithms to ensure fairer hiring practices and avoid discrimination in recruitment.
The AI Revolution Is Coming for Your Non-Union Job
Molly Kinder | Mark Muro | Xavier de Souza Briggs | Time
An essay in Time discusses the impact of AI on non-union jobs, focusing on how automation and AI-driven systems are reshaping labor practices and potentially displacing workers in non-unionized roles. The lack of union protections makes these employees particularly vulnerable to rapid technological changes, as they may face job insecurity and wage stagnation. The essay calls for proactive measures, including regulatory frameworks and labor protections, to ensure that AI advancements benefit workers rather than undermining job stability.
CFPB Takes Action to Curb Unchecked Worker Surveillance
Consumer Financial Protection Bureau
The Consumer Financial Protection Bureau (CFPB) has issued guidance addressing invasive worker surveillance in the U.S., warning that employers who use third-party reports about workers, including algorithmic scores and surveillance-based data, must comply with the Fair Credit Reporting Act. That law requires consent, transparency about the data used, and the ability to dispute inaccurate information. The CFPB emphasizes that unchecked surveillance can create significant power imbalances in the workplace and seeks to ensure that existing consumer protections extend to workers.