Week of 2024-09-06

California AI bill passes State Assembly, pushing AI fight to Newsom

Gerrit De Vynck | Cat Zakrzewski | Washington Post

California's recently passed SB 1047, known as the "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act," aims to regulate the development of powerful AI systems. The bill requires developers of advanced AI models, particularly those costing over $100 million to train, to implement rigorous safety measures. These include pre-deployment testing, establishing fail-safe mechanisms, and preventing models from causing or enabling severe harm, such as cyberattacks or biological weapon development. While the legislation has received bipartisan support, critics argue that it could stifle innovation, particularly for smaller AI startups and open-source projects. The bill now awaits Governor Gavin Newsom's signature to become law, potentially setting a precedent for AI regulation across the U.S.

Labor considers an Artificial Intelligence Act to impose ‘mandatory guardrails’ on use of AI

Paul Karp | The Guardian

Australia’s Labor government is considering introducing an Artificial Intelligence (AI) Act that would impose mandatory regulations on the use of AI technologies. The proposal aims to establish guardrails to address potential risks associated with AI, such as privacy violations, misuse, and other harms. The discussion around this legislation is spurred by concerns about the unchecked development and deployment of AI without clear ethical or legal guidelines. The Act would seek to enforce standards on how AI can be responsibly used, potentially affecting industries ranging from tech to government services. However, the specifics of what these guardrails would entail and how they would be enforced remain under debate as the government works toward ensuring public trust in AI technologies.

This initiative aligns with global efforts to regulate AI, as seen in similar frameworks being developed in places like the European Union and the United States. Critics worry about stifling innovation, while proponents emphasize the need for proactive governance to prevent negative consequences as AI becomes more integral to daily life.

Artificial intelligence should not be allowed to adjudicate cases in Canada’s Federal Court

Bryce J. Casavant | Andrea Menard | Siomonn Pulla | The Conversation

The article argues against the use of artificial intelligence (AI) to adjudicate cases in Canada’s Federal Court. The authors express concerns that AI lacks the human judgment and empathy necessary for delivering justice in complex legal situations. They emphasize that while AI may improve efficiency, it could undermine fundamental legal principles such as fairness and accountability. The authors caution that relying on AI in judicial decision-making could erode public trust in the legal system and disproportionately harm marginalized groups. They advocate for a more thoughtful approach to AI integration in the judiciary, ensuring it supports human judges rather than replacing them.

Cyber Security, Artificial Intelligence and Enhanced Privacy Regulation Coming to Ontario Hospitals and Health Agencies

Daniel Fabiano | Alex Cameron | Christopher Ferguson | Fasken

Ontario's proposed Bill 194, known as the "Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024," aims to enhance data privacy and cybersecurity regulations for hospitals and health agencies. The bill would introduce mandatory privacy impact assessments (PIAs) for institutions before collecting personal information, ensuring they detail the purpose, legality, and safeguards for protecting the data. Additionally, the bill would require public disclosure about the use of artificial intelligence (AI) systems in healthcare settings and the implementation of accountability frameworks to manage associated risks.

Further amendments to Ontario's Freedom of Information and Protection of Privacy Act (FIPPA) would increase the powers of the Information and Privacy Commissioner, demanding that health institutions report privacy breaches under stricter conditions. These changes aim to strengthen protections for personal information and hold healthcare institutions accountable for managing privacy risks.

Stadiums Are Embracing Face Recognition. Privacy Advocates Say They Should Stick to Sports

Caroline Haskins | Wired

Protests have emerged in response to the growing use of facial recognition technology at U.S. sports stadiums, including Citi Field and Madison Square Garden. Advocacy groups such as Fight for the Future and the Surveillance Technology Oversight Project (STOP) argue that this technology poses serious privacy risks, including potential misidentifications, racial bias, and the misuse of biometric data by law enforcement. While stadiums promote facial recognition as a tool for security and streamlined fan experiences, critics highlight concerns about data security and surveillance overreach. Privacy advocates have called for legislative action to regulate or ban the use of facial recognition in public and residential spaces. Despite its benefits for ticketing efficiency, opponents argue that the risks to civil liberties outweigh the potential advantages.

US wants EU members to give access to travelers’ biometric data by 2027

Masha Borak | Biometric Update

The U.S. government is pushing for all EU member states to sign the Enhanced Border Security Partnership (EBSP) by 2027, which would allow access to their biometric databases for traveler screening. This initiative would connect EU countries’ databases with the U.S. IDENT/HART systems as part of a broader effort to enhance border security and streamline the identification process for travelers. However, the plan faces legal and privacy challenges within the EU, as the transfer of biometric data is not currently covered by existing EU-U.S. agreements. European lawmakers are also questioning whether this data sharing is compatible with EU legislation, and a new international treaty may be needed to address these concerns.

Transgender kids have a right to privacy from parents, U.S. court rules

Holly Ramer | Global News

The New Hampshire Supreme Court ruled that transgender students have the right to privacy from their parents regarding their gender identity, upholding a school district's policy that prevents school staff from disclosing a student's transgender status without the student's consent. The case stemmed from a mother challenging the policy after discovering her child was being addressed by a different name at school. The court concluded that the policy does not infringe on parental rights, while a dissenting justice argued that it impacts parents' ability to guide their children.

Judge Rules $400 Million Algorithmic System Illegally Denied Thousands of People’s Medicaid Benefits

Todd Feathers | Gizmodo

A U.S. District Court judge ruled that Tennessee's $400 million TennCare Connect system, an algorithmic system used to determine Medicaid eligibility, illegally denied benefits to thousands of residents. The system, designed to automate eligibility processes, was found to have terminated benefits due to programming errors, disproportionately affecting low-income individuals and those with disabilities. The ruling highlights concerns about the reliability and fairness of automated decision-making systems in public welfare programs.

Personhood Credentials: Everything to Know About the Proposed ID for the Internet

Lisa Lacy | CNet

The concept of "personhood credentials" is a proposed digital ID system designed to verify that online users are human, rather than AI bots. This would combat AI-generated content and fraudulent activities by requiring cryptographic proof of a user's humanity. These credentials could come from various institutions, such as governments or tech companies, and aim to preserve privacy while ensuring secure interactions online. However, critics argue that such a system could raise ethical concerns, including overreliance on governments for digital identity validation.

How the Ottawa Catholic School Board is leading the way in using AI in the classroom

Alex Goudge | Ottawa Citizen

The Ottawa Catholic School Board is at the forefront of integrating AI into classrooms, with newly developed AI principles aimed at balancing ethical use with practical applications. Students can use AI for tasks like essay outlines and solving math problems, while teachers benefit from AI tools to enhance lesson planning and accommodate diverse learning needs. However, AI use is regulated, with privacy protections in place and a cap on AI-driven lesson content. The board also emphasizes digital literacy, teaching students about AI risks like misinformation and bias.

Teachers to get more trustworthy AI tech as generative tools learn from new bank of lesson plans and curriculums, helping them mark homework and save time

UK Gov

The UK government has launched a £4 million initiative to enhance the use of AI tools in education, specifically to help teachers with tasks like lesson planning and marking homework. This project will create a content store filled with curriculum guidelines, lesson plans, and anonymized pupil assessments to train AI systems in generating high-quality teaching materials. The aim is to reduce administrative burdens on teachers, allowing them more time for face-to-face interaction with students. The initiative also involves collaboration with tech companies and institutions like the Open University to ensure the AI tools are safe, reliable, and tailored to meet educational needs. There are concerns, however, about over-reliance on technology and the potential reduction of human interaction in education, as expressed by parents and students in recent research.

Make AI tools to reduce teacher workloads, tech companies urged

Richard Adams | The Guardian

The UK government is urging tech companies to develop AI tools to help reduce teachers' workloads by providing access to a new content bank containing official assessments, lesson plans, and curriculum materials. This £3 million initiative aims to improve AI tools for tasks like marking homework and creating teaching materials. A further £1 million will be awarded in a competition to developers creating the most effective AI tools to assist teachers, with an emphasis on improving accuracy and reducing time spent on administrative tasks. The goal is to ease teachers' workloads, allowing more focus on face-to-face teaching while ensuring AI tools are trained on reliable, high-quality educational content.

Make political parties subject to privacy laws

The Brandon Sun

The article emphasizes the privacy risks posed by the lack of regulation on how Canadian political parties handle voter data. OpenMedia's report reveals that at least 91 companies in Canada's political influence industry have access to voter data with minimal oversight. Unlike private or public institutions, political parties in Canada are largely exempt from privacy laws, creating a regulatory void that puts Canadians' personal information at risk of misuse, fraud, and identity theft. Despite widespread public support for regulating political parties' data practices, major federal parties have resisted efforts to bring them under privacy laws, even appealing a ruling that would enforce such measures in British Columbia. The article calls for political parties to recognize their ethical responsibility and prioritize protecting Canadians' privacy.

Manitoba introduces GPS monitoring program for bail supervision

Bernise Carolino | Canadian Lawyer Magazine

Manitoba has introduced a GPS monitoring program for individuals on bail, aimed at improving compliance with release conditions and preventing repeat offenses. The program, funded with $2.9 million over two years, uses ankle monitors equipped with GPS technology to track real-time movements and ensure adherence to court-ordered restrictions. It also includes communication features to alert law enforcement if individuals violate restrictions. The initiative is part of a broader effort to enhance public safety, particularly targeting recidivism among repeat offenders in retail and other crimes.

Teslas Are Being Towed by Police to Collect On-Board Video Footage

Lucas Bell | Road and Track

Police in California, particularly in Oakland, have been using Tesla vehicles as a source of video evidence in criminal investigations. Teslas equipped with "Sentry Mode" automatically record their surroundings when parked, capturing potential criminal activity. If a Tesla is parked near a crime scene, law enforcement may seek permission from the owner to access the footage. However, in cases where the owner cannot be contacted, police have resorted to obtaining search warrants to tow the vehicles and secure the footage for investigation purposes. This practice has raised concerns about privacy and the potential for constitutional challenges, as it involves seizing private property without the owner's consent.

Ottawa police secretly wiretapped 5 Black officers, lawsuit alleges

Shaamini Yogaretnam | CBC News

The article reports on a lawsuit filed by five Somali-Canadian Ottawa police officers against the Ottawa Police Services Board, alleging that they were secretly wiretapped without charges or explanation. The officers claim that their private communications were intercepted as part of an investigation that used racist stereotypes about Black men and Somali families. They argue that their loose connections to relatives involved in criminal activity were the only reasons for the surveillance, which they believe was based on discriminatory assumptions. The lawsuit also alleges the officers were targeted because of their advocacy against racism within the force, leaving them under suspicion by colleagues and damaging their careers. The Ottawa Police Services Board denies the allegations and maintains that the wiretaps were lawfully obtained. The officers continue to fight for the unsealing of the court documents related to the wiretap authorizations.

Drones helped in big Vancouver arrest. It’s time for policy scrutiny, researchers say

Darryl Greer | Toronto Star

Researchers are calling for more scrutiny of police drone use after Vancouver police used drones to assist in a major arrest following two violent attacks. While drones are increasingly employed by police across Canada, experts like Brenda McPhail argue that public transparency and accountability are lacking. The evolving technology, including improved cameras and potential integration with facial recognition, raises privacy concerns. There is a push for updated policies and public discussions to address the balance between safety and privacy in police drone operations.

Toronto police registry for vulnerable persons has failed to do its job: Ombudsman

John Marchesan | CityNews

FBI is losing track of classified and sensitive data, watchdog finds

Adam Mazmanian | NextGov

A recent audit by the U.S. Department of Justice's Office of the Inspector General (OIG) found that the FBI is mishandling sensitive and classified data stored on electronic media devices marked for destruction. The audit uncovered significant weaknesses in how the FBI tracks and manages items such as hard drives and thumb drives, especially after they are removed from computers slated for destruction. These issues include improper labeling, insufficient tracking, and weak physical security measures at facilities where the media is stored prior to destruction. Some devices, including ones containing national security information, were left unguarded for extended periods, heightening the risk of theft or loss.

The OIG recommended that the FBI strengthen its inventory controls, ensure proper labeling of storage media, and improve the physical security of these items. The FBI has acknowledged the findings and is working on new policies to address the concerns raised by the audit.

Ransomware Reckoning – The New Bill Changes the Game

Sinan Pismisoglu | Eric Setterlund | Bradley

The new Intelligence Authorization Act for Fiscal Year 2025 significantly enhances the U.S. government's approach to combating ransomware. The bill elevates ransomware threats to a national intelligence priority, mandates reports on ransomware risks, and introduces processes to designate state sponsors of ransomware. It fosters public-private partnerships and establishes an AI Security Center to develop tools to detect and counteract ransomware attacks. The bill aims to strengthen accountability and transparency by requiring regular progress reports to Congress, highlighting the importance of collective defense against evolving cyber threats.

Clearview AI fined by Dutch agency for facial recognition database

Reuters

Clearview AI has been fined €30.5 million ($33.7 million) by the Dutch Data Protection Authority (DPA) for creating an illegal facial recognition database by scraping billions of images from the internet without user consent. This penalty, issued under Europe's stringent GDPR regulations, highlights the invasive nature of facial recognition technology. The DPA stressed that Clearview's actions violated privacy rights, with further fines possible if the company does not comply with regulatory orders. Despite these penalties, Clearview maintains that it does not fall under the jurisdiction of EU law, arguing that it has no direct business presence in Europe. This fine is part of a broader crackdown on privacy violations in Europe, following similar actions against other tech firms.

Toronto school board confirms students’ info stolen as LockBit claims breach

Jonathan Greig | The Record

The Toronto District School Board (TDSB), Canada's largest school board, experienced a ransomware attack that targeted a test environment used by its technology department. Fortunately, the attack did not impact the board's official networks or operational systems. The TDSB immediately secured its data and is working with third-party cybersecurity experts and law enforcement to investigate the incident. While the attack did not directly affect critical systems, the TDSB notified Ontario’s Information and Privacy Commissioner as a precaution and will inform affected individuals if any personal information was compromised.

This attack is part of a broader trend, as ransomware gangs have targeted several prominent Toronto institutions recently, including the city's library, zoo, and children's hospital.

Irish court decision demonstrates standards expected when handling employee personal data

Pinsent Masons

A recent ruling from the Dublin Circuit Court sets a clear standard for employers regarding the handling of employee personal data. The case involved an employee who was filmed without consent by a manager while on sick leave, leading to claims of unlawful data processing under GDPR. The court ruled that employers must ensure transparency and accountability in data handling and are responsible for actions closely related to their employees' duties. AA Ireland was ordered to account for the video in question or confirm its destruction.

Employer discriminated because of pre-employment google search

Zoe Ingenhaag | Lewis Silkin

The case of Ngole v. Touchstone Leeds highlights the legal risks associated with using Google or social media searches during the recruitment process. Mr. Ngole, a qualified social worker, applied for a mental health support worker role but had his conditional offer withdrawn after the employer discovered online information about his religious views against homosexuality, which had led to previous legal proceedings. The tribunal ruled that withdrawing the offer amounted to direct discrimination based on religious belief, as Touchstone had other, less intrusive options available, such as discussing how he would approach the role.

Key takeaways for employers include being cautious when using online searches, especially where they reveal sensitive or protected characteristics, such as religious views. Employers must ensure that they don't make decisions based on discriminatory factors and should consider proportionate actions when concerns arise. The case also stresses the importance of transparency, data protection, and giving candidates a chance to respond to adverse findings. Touchstone's initial response was deemed excessive, though the refusal to reinstate the offer was ultimately seen as proportionate after further discussions.

Right to disconnect: Ontario's law flawed, say experts

Jim Wilson | HRD

Ontario's "right to disconnect" law, which took effect in 2022, requires employers to establish policies limiting work-related communications after hours. However, experts argue the law is flawed because it doesn't directly address underlying causes of employee burnout, such as excessive workloads and lack of resources. Additionally, there are no penalties for non-compliance, making the law feel superficial. Critics also note that the law may unintentionally limit the flexibility many workers seek in managing their schedules. Employers are encouraged to focus on deeper solutions to workplace stress.
