Week of 2025-03-31

Sunshine Fest 2025: FOI experts tackle the challenges of government transparency

Joseph L. Brechner Freedom of Information Project

The inaugural Sunshine Fest, held on March 19-20, 2025, brought together approximately 160 participants from countries including the U.S., Canada, Brazil, and Ghana to address pressing issues in government transparency. Organized by the Joseph L. Brechner Freedom of Information Project at the University of Florida, the event coincided with the 20th anniversary of Sunshine Week, an annual initiative promoting the public's right to know. Discussions covered topics such as the growing backlog of Freedom of Information Act (FOIA) requests, the impact of populism on transparency, the role of artificial intelligence in managing FOIA processes, and balancing privacy with the public's right to information. The event also marked the launch of the Sunshine United Network, aiming to foster ongoing dialogue on these critical issues throughout the year.

People named in JFK assassination documents are not happy their personal information was released

Dave Collins | AP News

The latest release of documents related to the John F. Kennedy assassination has sparked privacy concerns, as several files contain unredacted personal information about individuals still living. The AP investigation revealed that details such as Social Security numbers, birthdates, and home addresses were made public without appropriate redactions. Experts warn that this poses risks of identity theft and violates norms around data protection, even in the context of historical transparency. While the U.S. National Archives has committed to reviewing and possibly redacting the exposed data, privacy advocates argue this oversight underscores the need for stricter review protocols in the declassification process. The incident adds a layer of complexity to the broader debate over public access versus individual privacy in government transparency efforts.

Wired is dropping paywalls for FOIA-based reporting. Others should follow

Freedom of the Press Foundation

Wired magazine has announced it will remove paywalls from articles primarily based on Freedom of Information Act (FOIA) requests, ensuring public access to information derived from public records. This initiative, highlighted by the Freedom of the Press Foundation (FPF), underscores the importance of transparency in journalism, especially amid increasing government secrecy. While investigative reporting can be resource-intensive, Wired's decision balances public interest with business considerations, potentially fostering greater reader engagement and trust. The FPF encourages other media outlets to adopt similar practices, emphasizing that public records should be freely accessible to uphold democratic values.

The Foilies 2025

Dave Maass | Aaron Mackey | Beryl Lipton | Hannah Diaz | Electronic Frontier Foundation

In March 2025, the Electronic Frontier Foundation (EFF), in collaboration with MuckRock and the Association of Alternative Newsmedia (AAN), released the 10th annual Foilies awards during Sunshine Week. These tongue-in-cheek awards spotlight the most egregious instances of government agencies obstructing public access to information through absurd or incompetent responses to Freedom of Information Act (FOIA) and state transparency law requests. The 2025 edition underscores a decade-long pattern of challenges in government transparency, highlighting cases where agencies imposed exorbitant fees, provided heavily redacted documents, or employed delaying tactics to hinder information disclosure. By publicly ridiculing these practices, the Foilies aim to promote greater accountability and openness within government institutions.

ChatGPT hit with privacy complaint over defamatory hallucinations

Natasha Lomas | TechCrunch

A Norwegian man, Arve Hjalmar Holmen, has filed a privacy complaint against OpenAI after ChatGPT falsely identified him as a convicted child murderer. The AI tool mixed accurate personal details with a fabricated criminal narrative, prompting privacy advocacy group Noyb to file a formal complaint with Norway’s data protection authority. The complaint argues that such “hallucinations” violate the GDPR’s accuracy requirements and that OpenAI’s disclaimers aren’t sufficient to excuse the harm caused. Noyb is calling for financial penalties and demanding that OpenAI correct the false data. The case highlights growing concerns over the reputational risks of AI-generated misinformation and the need for stronger safeguards.

Virginia governor vetoes AI bill

Caitlin Andrews | IAPP

Virginia Governor Glenn Youngkin has vetoed the High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094), which aimed to define certain AI applications as "high risk" and impose transparency and reporting requirements on their use. The bill targeted AI systems involved in consequential decisions related to education, employment, financial services, health care, and legal services, mandating developers to disclose risks and implement measures against algorithmic discrimination. Governor Youngkin expressed concern that the bill's regulatory framework could hinder AI innovation, particularly affecting smaller firms and startups lacking extensive compliance resources. He emphasized that existing laws already address issues like discrimination, privacy, and data use, suggesting that HB 2094's rigid structure fails to accommodate the rapidly evolving AI industry.

Teresa Scassa: Routine Retail Facial Recognition Systems an Emerging Privacy No-Go Zone in Canada?

Teresa Scassa

Canadian privacy authorities are increasingly scrutinizing the use of facial recognition technology (FRT) in retail settings. The Commission d’accès à l’information du Québec (CAI) recently halted a pilot project by Métro that aimed to use FRT to identify individuals linked to past security incidents, citing privacy concerns. Similarly, in 2023, the British Columbia Privacy Commissioner found that several Canadian Tire stores had violated privacy laws by deploying FRT without proper customer consent. These actions reflect a growing consensus that routine use of FRT in retail environments poses significant privacy risks and may contravene Canadian privacy legislation.

China says facial recognition should not be forced on individuals

Reuters

China's Cyberspace Administration has introduced new regulations governing the use of facial recognition technology (FRT), emphasizing that individuals should not be compelled to verify their identity using such methods. Effective June 2025, these rules mandate that organizations provide alternative, reasonable, and convenient options for identity verification beyond facial recognition. The regulations also require companies to obtain explicit consent before processing facial data and to implement clear signage where FRT is deployed. These measures aim to address growing public concerns over privacy risks associated with the widespread use of facial recognition in everyday activities, such as hotel check-ins and accessing residential complexes. This initiative aligns with China's broader efforts, including the Personal Information Protection Law enacted in 2021, to regulate data collection and enhance personal privacy protections amid the rapid advancement of AI-driven surveillance technologies.

Biometrics in Québec: Regulator continues to set high bar for use by retailers

Alexandra Quigley | Dentons

The Commission d’accès à l’information du Québec (CAI) has prohibited Metro Inc. from implementing facial recognition technology (FRT) intended to identify individuals involved in shoplifting or fraud within its stores. The CAI determined that Metro's proposed system, which would capture and analyze customers' biometric data without their express consent, violates Québec's privacy laws, specifically the Act to establish a legal framework for information technology and the Act respecting the protection of personal information in the private sector. This decision underscores the CAI's stringent stance on the use of biometric technologies in retail settings, emphasizing the necessity for explicit consent and the protection of individual privacy rights.

(Gen)eration AI: Safeguarding youth privacy in the age of generative artificial intelligence

Tiffany Kwok | Christelle Tessono | The Dais

The Dais at Toronto Metropolitan University has released a report titled "(Gen)eration AI: Safeguarding Youth Privacy in the Age of Generative Artificial Intelligence," highlighting the privacy risks children and teenagers face with the rise of generative AI (genAI) tools. These risks include unconsented data collection, profiling for targeted advertising, exposure to biased or false information, and increased surveillance in educational settings. The report calls for policymakers, educators, and technologists to implement safeguards that protect youth privacy, such as enhancing transparency, obtaining explicit consent, and promoting digital literacy among young users. As genAI becomes more integrated into daily life, the report emphasizes the urgency of addressing these challenges to ensure the safety and privacy of younger generations.

Utah Passes Child Safety Law Requiring Apple to Verify User Age

Juli Clover | MacRumors

Utah has passed a new law requiring age verification for downloading apps from app stores like Apple’s App Store and Google Play, as part of its broader push to protect minors online. The law, set to take effect in March 2026, mandates that users provide age and identity verification before accessing social media or other age-sensitive apps. App developers and digital platforms are now tasked with implementing verification systems that comply with the law while minimizing privacy risks. Critics, including tech companies and digital rights groups, warn the law could lead to over-collection of personal data and limit access to legitimate online content. Utah’s move adds to a growing trend of state-level legislation targeting children’s online safety amid concerns over social media and AI-driven content.

FTC to Hold Workshop on May 28 on The Attention Economy: How Big Tech Firms Exploit Children and Hurt Families

Federal Trade Commission

The Federal Trade Commission (FTC) has announced a workshop titled "The Attention Economy: How Big Tech Firms Exploit Children and Hurt Families," scheduled for May 28, 2025, at its headquarters in Washington, D.C. This event aims to convene parents, child safety experts, and government leaders to discuss how major technology companies design addictive features, undermine parental authority, and inadequately protect children from harmful content. Topics will include potential solutions such as age verification and parental consent requirements. The workshop will be accessible both in-person and online, with registration details to be provided on the FTC's website prior to the event. Individuals interested in participating as panelists or providing expertise are encouraged to contact the FTC by April 30, 2025, via email at AttentionEconomy@ftc.gov.

Google to buy cybersecurity firm Wiz for $32B US in company's biggest ever deal

CBC News

Google's parent company, Alphabet, has announced its largest acquisition to date, agreeing to purchase the cloud security startup Wiz for $32 billion. Founded in 2020, Wiz has rapidly become a leader in cloud security solutions, offering tools that help organizations identify and mitigate risks in cloud environments. This strategic move aims to bolster Google Cloud's security capabilities amid increasing demand for robust cloud and AI services. The acquisition is subject to regulatory approval and reflects Alphabet's commitment to enhancing its position in the competitive cloud computing market.

Canadian citizen allegedly involved in Snowflake attacks consents to extradition to US

Matt Kapko | Cyberscoop

Connor Moucka, a 26-year-old Canadian citizen, has consented to extradition to the United States to face 20 federal charges related to his alleged involvement in cyberattacks targeting customers of Snowflake Inc., a cloud storage company. Arrested on October 30, 2024, in Kitchener, Ontario, Moucka is accused of participating in a hacking campaign that compromised data from approximately 165 companies, including major firms like AT&T and Ticketmaster. Operating under aliases such as "Waifu" and "Judische," he, along with co-conspirator John Binns, allegedly extorted around $2.5 million from victims. Moucka waived the 30-day waiting period for extradition, expediting his transfer to U.S. authorities. This case highlights the growing international collaboration in addressing cybercrime and underscores the vulnerabilities associated with cloud storage services.

A Win for Encryption: France Rejects Backdoor Mandate

Joe Mullin | Electronic Frontier Foundation

The French National Assembly has rejected a proposed law that would have mandated backdoor access to encrypted messaging services like Signal and WhatsApp for law enforcement purposes. This provision, embedded within anti-drug trafficking legislation, aimed to implement the "ghost participant" model, allowing authorities to silently join encrypted conversations without users' knowledge. Critics argued that such measures would introduce systemic vulnerabilities, compromise user privacy, and undermine trust in secure communication platforms. The Assembly's decision is being hailed as a significant victory for digital rights and privacy advocates, emphasizing the importance of encryption in safeguarding personal communications.

EU Data Act Imposes New Data Sharing Obligations

Laura Brodahl | Laura De Boel | Wilson Sonsini

The European Union's Data Act, effective September 12, 2025, introduces significant data-sharing obligations for providers of connected devices and related services within the EU. These entities must grant users access to raw usage data upon request, encompassing information collected from sensors, user interfaces, or associated services. Additionally, any necessary metadata to contextualize this data, such as timestamps, must be provided. However, data inferred or derived through proprietary algorithms is excluded from these requirements. Companies are advised to review their data handling processes to ensure compliance, including facilitating secure, free-of-charge data retrieval for users. Devices introduced to the EU market after September 12, 2026, must be designed to allow users direct access to their usage data.

Messaging apps: a report on Australian Government agency practices and policies

Office of the Australian Information Commissioner

The Office of the Australian Information Commissioner (OAIC) has released a report examining the use of messaging apps by Australian Government agencies. The study reviewed 22 agencies and found that 16 permitted the use of messaging apps for work purposes, yet only 8 of these had policies or procedures to support their use. Among these policies, many lacked considerations for essential archive requirements, Freedom of Information (FOI) search obligations, and the necessity of using official accounts or devices. The OAIC recommends that agencies develop comprehensive policies addressing information management, FOI, privacy, and security considerations related to messaging apps. This initiative aims to enhance transparency, accountability, and adherence to statutory obligations within government communications.

Meta to seek disclosure on political ads that use AI ahead of Canada elections

Reuters

Meta Platforms has announced that, ahead of Canada's upcoming federal elections, advertisers must disclose the use of artificial intelligence (AI) or other digital techniques in creating or altering political or social issue ads. This policy applies to ads featuring photorealistic images, videos, or realistic-sounding audio that have been digitally manipulated to depict real individuals or events inaccurately. The initiative aims to combat misinformation by ensuring transparency in political advertising. Additionally, Meta is collaborating with Elections Canada to provide users with authoritative information about voting procedures and election results. These measures reflect Meta's ongoing efforts to uphold election integrity and address the challenges posed by AI-generated content.

Drive-thru and at home voting to be piloted in next Kitchener election

Justine Fraser | CityNews

The City of Kitchener is set to introduce new voting methods in the 2026 municipal election to enhance accessibility and voter participation. Following a report presented on March 17, 2025, the city council approved two initiatives: a pilot program for drive-thru voting and the implementation of at-home voting for residents unable to leave their homes due to physical injuries or disabilities. While there was interest in internet and online voting, city staff recommended against it, citing increasing cybersecurity threats and concerns over third-party vendor reliance. Additionally, vote-by-mail was not recommended due to potential voter confusion, rising postage costs, and dependencies on external delivery services. These measures aim to provide convenient and secure voting options, ensuring all eligible voters have the opportunity to participate in the electoral process.

Who Owns Your DNA Now?

Ken Macon | Reclaim the Net

The bankruptcy of 23andMe has sparked serious privacy concerns over the fate of genetic data from around 15 million users. While the company claims its data protection policies remain intact during the proceedings, critics warn that a sale of assets could transfer sensitive DNA information to new, potentially less privacy-conscious owners. Several U.S. state attorneys general have advised users to delete their data, though experts caution that deletion may not be immediate or complete. In response, lawmakers in Pennsylvania have proposed the Genetic Materials Privacy and Compensation Act, aiming to ensure individuals retain control over their genetic data and are compensated if it’s used for profit. The case underscores growing demands for federal regulation over consumer genetic data, especially in cases of corporate transitions like bankruptcy.

DNA of 15 Million People for Sale in 23andMe Bankruptcy

Jason Koebler | 404 Media

The bankruptcy of 23andMe has sparked alarm over the potential sale of its most valuable asset—the genetic data of over 15 million customers. While the company claims data protection remains a priority, privacy experts warn that a sale could expose sensitive DNA information to new, less trustworthy entities. In the absence of strong federal regulations on genetic privacy during bankruptcy, several U.S. states, including California and North Carolina, have urged users to delete their data. Meanwhile, Pennsylvania lawmakers have introduced the Genetic Materials Privacy and Compensation Act to ensure individuals retain ownership of their genetic information and receive compensation if it's monetized. The situation underscores the urgent need for clear legal protections in the era of commercial genomics.

Flurry to pay $3.5 million for harvesting sexual and reproductive health data from period app

Suzanne Smalley | The Record

The now-defunct analytics firm Flurry has agreed to a $3.5 million settlement to resolve a class-action lawsuit alleging unauthorized collection of sensitive data from users of the Flo Health period-tracking app. Between November 2016 and February 2019, Flurry allegedly accessed personal information—including addresses, birth dates, and intimate health details—without user consent. The lawsuit also named AppsFlyer, Meta, and Google as entities that obtained data from Flo Health users. This settlement addresses only the claims against Flurry, highlighting ongoing concerns about privacy and data security in health-related applications.

The NYPD is sending more drones to 911 calls, but privacy advocates don’t like the view

Suzanne Smalley | The Record

The NYPD has expanded its use of drones as first responders to 911 calls, citing faster response times and increased officer safety. These drones can fly up to 40 minutes and are equipped with powerful telephoto cameras, thermal imaging, and 3D mapping capabilities. However, civil liberties advocates and a city inspector general report have raised concerns about the lack of transparency in how and when the drones are used, and how the data is stored or shared. Critics fear the initiative could become a form of warrantless mass surveillance, especially when combined with other technologies like facial recognition. Calls for clearer policies and public accountability are growing as the drones increasingly become a staple of NYPD operations.

Swedish government proposes bill to allow police to use AI face-recognition

Reuters

The Swedish government has proposed legislation to permit police use of AI-powered facial recognition technology for combating serious crimes such as human trafficking, kidnapping, and murder. Justice Minister Gunnar Strömmer emphasized that this tool is essential to address persistent gang violence and enhance public safety. Sweden has experienced significant gang-related issues, leading to the highest rate of deadly gun violence per capita in the EU as of 2023. If approved by Parliament, the law would take effect in early 2026 and include safeguards to ensure compliance with personal integrity laws, restricting usage to particularly significant cases.

Toronto police move to upgrade facial recognition technology, raising concerns

Xavier Richer Vis | Ricochet Media

The Toronto Police Service (TPS) is planning to upgrade its facial recognition technology, a move that has ignited privacy concerns among civil liberties advocates. Critics argue that such enhancements could lead to increased surveillance, disproportionately affecting marginalized communities and potentially infringing on individual privacy rights. They emphasize the need for transparency and robust oversight to prevent misuse of the technology. The TPS maintains that the upgrade aims to improve public safety and assist in criminal investigations. However, the debate underscores the delicate balance between leveraging technological advancements for security purposes and safeguarding civil liberties.

Ottawa police detective found guilty of discreditable conduct in unauthorized investigations into child deaths

Ted Raymond | CTV News

Ottawa Police Detective Helen Grus has been found guilty of discreditable conduct for conducting unauthorized investigations into the deaths of children. The disciplinary hearing concluded that Grus accessed police databases without authorization, violating department protocols. Her actions have raised concerns about privacy and the integrity of police procedures. The ruling underscores the importance of adherence to established investigative protocols within law enforcement agencies.

Are Chicago police using CrimeTracer?

Shawn Mulcahy | Chicago Reader

In August 2024, the City of Chicago authorized a $727,361 payment to SoundThinking, the company behind ShotSpotter, for a six-month pilot of their CrimeTracer software—a law enforcement search engine and information platform. Despite this significant expenditure, details about the software's deployment and effectiveness remain undisclosed, as neither the Chicago Police Department (CPD) nor the Mayor's Office have provided information regarding its current use or future plans. This lack of transparency has raised concerns among privacy advocates and the public, highlighting ongoing issues with oversight and accountability in the adoption of surveillance technologies by law enforcement agencies.

Trump officials texted attack plans to a group chat in a secure app that included a journalist

Tara Copp | Aamer Madhani | Eric Tucker | AP News

In March 2025, top U.S. national security officials inadvertently included Jeffrey Goldberg, editor-in-chief of The Atlantic, in a Signal group chat discussing planned military strikes against Yemen's Houthi rebels. The chat, intended for coordination among officials, revealed sensitive details about the operation. Upon realizing the error, officials attempted to remove Goldberg, but the information had already been disclosed. Defense Secretary Pete Hegseth downplayed the incident, stating, "Nobody was texting war plans." However, the leak has sparked significant criticism and calls for investigations into the mishandling of sensitive information.

Canadian governments rely on Starlink for critical services. Some are reconsidering

CTV News

Several Canadian provincial and territorial governments have been utilizing Starlink, the satellite internet service operated by Elon Musk's SpaceX, to provide critical internet and emergency communication services in remote areas. However, recent geopolitical tensions and economic disputes, particularly the imposition of tariffs by the U.S. government, have led some jurisdictions to reconsider their reliance on Starlink. For instance, the province of Ontario announced the cancellation of a $100 million contract with Starlink, citing retaliatory measures against U.S. tariffs. This move reflects broader concerns about the stability and security of depending on foreign-owned infrastructure for essential services. As a result, Canadian authorities are exploring alternative solutions to ensure reliable and sovereign communication capabilities.

Peterborough Council votes to discontinue use of X

Scott Arnold | Peterborough Today

Peterborough City Council has voted to discontinue its use of the social media platform X (formerly Twitter), citing concerns about the platform's failure to effectively manage hate speech and misinformation. The motion, introduced by Councillor Matt Crowley, reflects the city's desire to uphold values of inclusivity and non-discrimination, distancing itself from platforms that may host ideologically conflicting content. While the city continues to use other social media platforms like Facebook and Instagram, the resolution carves out exceptions for Peterborough Transit, Fire Services, and emergency communications, where timely updates are critical. To reduce reliance on social media for service updates, staff are tasked with exploring alternative communication tools, including a Snow Plow Tracker and a dedicated smartphone app for transit alerts, with proposals due by the 2026 Budget deliberations. The decision reflects a growing trend of municipalities reassessing their presence on social platforms amid evolving concerns over digital safety and civic responsibility.

Meta to stop targeting UK citizen with personalised ads after settling privacy case

Dan Milmo | The Guardian

Meta, the parent company of Facebook and Instagram, is contemplating introducing a subscription-based, ad-free service for users in the United Kingdom. This consideration follows a legal settlement with Tanya O’Carroll, a human rights campaigner who sued Meta in 2022 for allegedly breaching UK data laws by refusing to cease the collection and processing of her data for targeted advertising. The UK's Information Commissioner's Office supported O’Carroll's stance, emphasizing individuals' rights to object to their personal information being used for direct marketing. In response, Meta is exploring offering UK users a paid subscription option to access its platforms without advertisements, similar to initiatives previously considered in the European Union.

Porn companies must take strong action to protect privacy and prevent future harms

Elaine Craig | The Conversation

The adult entertainment industry is under increasing pressure to strengthen privacy protections and prevent data misuse, following reports that 93% of porn sites leak user data to third parties. This widespread data sharing raises serious concerns, especially for vulnerable users whose browsing habits could be exposed without consent. In response, some companies have adopted basic safeguards like HTTPS encryption, but security breaches and lax standards persist. Regulatory bodies in countries like France and U.S. states are now mandating age verification laws, aiming to protect minors while also sparking debate about privacy and free speech. The situation underscores an urgent need for the industry to implement transparent, privacy-first policies to rebuild trust and avoid future harms.

“MyTerms” wants to become the new way we dictate our privacy on the web

Kevin Purdy | Ars Technica

Doc Searls, a veteran technology journalist and advocate for digital rights, is spearheading an initiative called "MyTerms", officially known as IEEE P7012. This proposed standard aims to empower individuals by enabling them to set their own privacy terms when interacting with websites and online services. Unlike traditional models where companies unilaterally dictate terms of service, MyTerms seeks to establish a framework where user-defined privacy preferences are communicated and respected in a standardized, machine-readable format. This approach could potentially shift the current dynamics of online privacy, granting users greater control over their personal data. The initiative is still in its early stages, and its success will depend on widespread adoption by both users and service providers.

Removal of FTC commissioners fuels uncertainty

Joe Duball | IAPP

The dismissal of FTC Commissioners Alvaro Bedoya and Rebecca Kelly Slaughter by President Trump has sparked a legal and constitutional battle over the independence of regulatory agencies. Both commissioners, appointed to fixed terms, argue that their removal violates the FTC Act and established Supreme Court precedent protecting the agency’s autonomy. The White House claims the firings were justified, asserting that the FTC exercises executive power and must align with presidential policy. Legal experts expect the case to reach the Supreme Court, with potential implications for the structure and independence of agencies like the FTC, FCC, and SEC. In the meantime, the FTC now holds a Republican majority, raising concerns about future consumer protection and antitrust enforcement.

Unpacking the ICO’s third edition of the Tech Horizons Report

Tayla Byatt | Slaughter and May

The UK’s Information Commissioner’s Office (ICO) has released the third edition of its Tech Horizons Report, highlighting four emerging technologies expected to shape the next 2–7 years: connected transport, quantum sensing in healthcare, digital therapeutics, and synthetic media. Each technology raises specific privacy concerns, such as the collection of novel personal data (e.g., brain activity), challenges around informed consent in shared environments, and accountability in complex tech ecosystems. The report warns that growing data volumes and opaque processing make it harder for individuals to understand or control how their information is used. The ICO urges organizations to embed privacy protections early, using tools like Privacy-Enhancing Technologies (PETs). This report is part of the ICO’s broader effort to help stakeholders proactively address privacy issues as technology evolves.

The extraterritorial reach of B.C.’s privacy laws: Court upholds privacy commissioner’s order against foreign AI company

Claire Feltrin | Ingrid Vanderslicce | Daniel J. Michaluk | Frédéric Wilson | Hélène Deschamps Marquis | Eric S. Charleston | BLG

The British Columbia Supreme Court has upheld a provincial Privacy Commissioner’s order against Clearview AI, affirming that B.C.’s Personal Information Protection Act (PIPA) applies to foreign companies with a real and substantial connection to the province. Clearview AI had collected facial images from B.C. residents and provided services to local entities, including law enforcement, without proper consent. The court ruled that these activities subjected the company to provincial privacy laws, requiring it to cease data collection and delete the personal information of B.C. residents. This decision reinforces the extraterritorial reach of Canadian privacy law and sets a strong precedent for holding international tech companies accountable. It also highlights the growing global scrutiny of AI-driven biometric data practices.

Conclusion of preliminary investigation X, formerly Twitter: use of personal data for training the AI Grok

Swiss Federal Data Protection and Information Commissioner

The Swiss Federal Data Protection and Information Commissioner (FDPIC) has concluded a preliminary investigation into Platform X (formerly Twitter) regarding its use of personal data to train the artificial intelligence model Grok. The FDPIC determined that users have the right to object to their public posts being utilized for AI training purposes. This decision underscores the importance of user consent and data protection in the development of AI technologies. Platform X users concerned about their data being used in this manner are advised to review their privacy settings and exercise their right to object if desired.

More Loblaw employees to wear body cameras across Canada, company says

Aaron D’Andrea | Global News

Loblaw Companies Limited is expanding its pilot program to equip employees with body-worn cameras in response to rising retail crime across Canada. Initially implemented in Abbotsford, Saskatoon, and Calgary, the initiative will now include additional stores in British Columbia, Ontario, and Manitoba, with some Toronto locations confirmed. The cameras are activated only during situations where there is a risk of escalation, aiming to enhance safety for both staff and customers. Participation is voluntary, with designated personnel such as asset protection representatives and store managers trained to use the devices and required to inform individuals when recording begins. Early results suggest that the presence of body cameras may help reduce violent incidents, prompting Loblaw to evaluate their effectiveness across a broader range of stores and banners.
