Biden Administration’s Voluntary AI Safety Agreement

The Biden administration announced that it brokered a voluntary agreement with several of the biggest technology and artificial intelligence (AI) companies. The agreement commits the companies to a number of actions intended to encourage safe, secure, and trustworthy development of AI technologies, particularly generative AI systems. While the commitments are not as extensive as other frameworks, such as the NIST AI Risk Management Framework or the Biden administration's Blueprint for an AI Bill of Rights, they are in some ways more concrete and actionable, and could serve as a model for other companies entering the AI market.

Safety

Signatories to the agreement commit to adversarial testing (red-teaming) to evaluate areas such as misuse, societal risks, and national security concerns. Adversarial testing should be performed both internally and by independent third parties. Testing will cover a number of specific areas:

  • Biological, chemical, and radiological risks, such as the potential of an AI system to lower barriers to entry for weapons design, development, or use;
  • Cyber capabilities, such as the ways in which an AI system can be used for vulnerability discovery or the exploitation or defense of a computer system;
  • The effects of an AI system’s interaction with other systems, particularly the capacity to control physical systems;
  • The capacity for an AI system to self-replicate; and
  • Societal risks of the AI systems, such as bias and discrimination.

Security

Signatories will invest in cybersecurity and insider-threat safeguards for their AI systems. They will also offer incentives, such as bug bounties, contests, or prizes, for third parties to discover and report unsafe behaviors, vulnerabilities, and other issues with the AI system.

Trust

AI systems should use mechanisms that enable users to determine whether audio or visual content is AI-generated, such as watermarking that identifies the service or model. Signatories will also develop tools or APIs to determine whether a particular piece of content was created with their AI system.
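Purely as an illustration of the kind of provenance-check tool the commitments describe, a minimal sketch might look like the following. The agreement does not specify any API, so every name here (`check_provenance`, the `ai_watermark` metadata field, and so on) is hypothetical:

```python
# Hypothetical sketch of a content-provenance check of the kind the
# commitments contemplate. The function and field names are invented
# for illustration and do not come from the agreement or any real API.

def check_provenance(metadata: dict) -> dict:
    """Report whether content metadata carries an AI-generation watermark.

    `metadata` is assumed to be a dict of metadata already extracted
    from the content; a real system would parse the audio or image
    file itself and verify the watermark cryptographically.
    """
    watermark = metadata.get("ai_watermark")
    if watermark is None:
        # No watermark present: the tool cannot attribute the content.
        return {"ai_generated": False, "model": None, "service": None}
    # Watermark present: report the identifying service and model,
    # as the agreement's watermarking commitment describes.
    return {
        "ai_generated": True,
        "model": watermark.get("model"),
        "service": watermark.get("service"),
    }
```

In practice, real provenance schemes embed the signal in the media itself rather than in detachable metadata; this sketch only shows the shape of the lookup the agreement envisions.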

AI companies will publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use. Reports should include information about the safety evaluations conducted (including dangerous capabilities, to the extent it is responsible to publicly disclose this information), significant limitations in performance that affect the domains of appropriate use, discussion of the model's effects on societal risks such as fairness and bias, and the results of adversarial testing conducted to evaluate the model's fitness for deployment.

Signatories should evaluate the societal risks posed by an AI system, including harmful bias, discrimination, and threats to privacy. AI systems should be designed to prevent harmful bias and discrimination from being created or propagated. Companies should also employ trust and safety teams, advance AI safety research, advance privacy protections, protect children, and proactively manage the risks of their AI systems.

Posted in Artificial Intelligence

Artificial Intelligence Systems, Profiling, and the New U.S. State Privacy Laws

The rapid spread of Artificial Intelligence (AI) systems in recent years has overlapped with the enactment of comprehensive privacy laws by multiple U.S. states. Aside from applying to AI systems in the same way they apply to any online service, several of the comprehensive state privacy laws contain provisions that specifically address certain uses of AI systems, in particular their use in profiling. This article surveys those provisions and assumes the reader is already familiar with basic concepts in the comprehensive privacy laws, such as controllership and applicability thresholds.

The new comprehensive privacy laws of Iowa and Utah have no specific provisions on the use of AI for profiling or otherwise. The laws of Connecticut, Delaware, Indiana, Montana, Tennessee, Texas, and Virginia are largely the same as one another. California, Colorado, Florida, and Oregon have some significant differences.

Posted in Artificial Intelligence, Legislation, Privacy, Regulations

Final Interagency Guidance on Managing Risks Associated with Third-Party Relationships

On June 6, 2023, the Board of Governors of the Federal Reserve System, the Office of the Comptroller of the Currency, and the Federal Deposit Insurance Corp. (collectively, the "Agencies") issued final interagency guidance that provides granular recommendations for how banks and other regulated financial institutions should manage risks associated with third-party relationships (the "Guidance"). The Guidance replaces prior guidelines released by the Agencies on July 19, 2021.

Posted in Policies and Procedures, Risk Management, Standards

The Biden Administration’s Blueprint for an AI Bill of Rights

As the use of artificial intelligence (AI) rapidly expands throughout the private sector and government, the Biden administration has published a report titled A Blueprint for an AI Bill of Rights. A summary is available at https://www.whitehouse.gov/ostp/ai-bill-of-rights/ and the full document is available at https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.

The report cites myriad issues with AI systems, including uses in hiring and credit decisions that have been found to reproduce existing inequities or create new harmful bias, uses in patient care that proved to be unsafe or ineffective, and increased collection or use of data that threatens people's opportunities or undermines their privacy. The report argues that these harmful outcomes are not inevitable and that AI tools have the potential to revolutionize many industries and benefit all parts of society.

The report sets out five basic rights that should be respected in connection with AI systems.

Posted in Artificial Intelligence

NIST Issues New Artificial Intelligence Risk Management Framework

The National Institute of Standards and Technology (NIST) recently released version 1.0 of its Artificial Intelligence Risk Management Framework. The framework is available at https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf, and a full set of supporting documents is available at https://www.nist.gov/itl/ai-risk-management-framework.

There is an emerging consensus that AI systems present a significantly different risk profile than conventional information technology systems. While there is currently no legal requirement to use a risk management framework when developing AI systems, a growing number of proposals would require the use of such a framework or offer a safe harbor from certain types of liability if one is used.

The framework identifies six factors for mitigating risk and evaluating the trustworthiness of an artificial intelligence (AI) system.

Posted in Artificial Intelligence

AI: What You Need to Know

Andy Baer is joined by three of his Cozen O’Connor colleagues for a panel discussion exploring the evolving law of artificial intelligence in the U.S. and Europe, including legal risks associated with ChatGPT and other AI tools, the current state of regulation, and how providers and users of AI tools can manage risk going forward.

Posted in Cyber Law Monitor Podcast

You’ve Been Breached – Who Do You Call?

Host Andrew Baer is joined by Matthew Klahre from Cozen O’Connor’s Technology, Privacy, & Data Security practice group for a discussion, with practical tips, on how to manage internal and external communications following a data breach.

Posted in Cyber Law Monitor Podcast

Update on EU-US Personal Data Transfers

Andy Baer is joined by Christopher Dodson of Cozen O’Connor to discuss EU-US personal data transfers after Schrems II, including the latest on the EU-US Data Privacy Framework.

Posted in Cyber Law Monitor Podcast

Incoming State Privacy Laws in 2023

Introducing the Cyber Law Monitor Podcast, a podcast from Cozen O’Connor’s Technology, Privacy & Data Security practice group with discussions and perspectives on emerging trends, developments and best practices. In the inaugural episode, host Andrew Baer is joined by his Cozen O’Connor colleague, Benjamin Mishkin, for a discussion about the new state privacy laws in the United States, which will go into effect in 2023.

Posted in Cyber Law Monitor Podcast

Federal Privacy Law Passage in Doubt?

A few months ago it seemed like the American Data Privacy and Protection Act (ADPPA) was gaining momentum in Congress and represented the best hope in years for passage of a federal data privacy law that would preempt the five overlapping (but not entirely consistent) state comprehensive privacy laws and offer businesses a uniform national framework. However, California Attorney General Rob Bonta and nine other state attorneys general are now opposing the ADPPA in its current form, claiming that there should be no preemption and that any federal privacy law should establish a "floor not a ceiling." Of course, this would be the worst possible outcome for many businesses, which would face an additional compliance regime overlaid on the existing ones as well as a private right of action substantially broader than California's. Please check out the following article published by Meghan Stoppel of Cozen O'Connor's State Attorneys General group, which examines the state AGs' position and evaluates the ADPPA's chances of passage.

Posted in Uncategorized
About Cyber Law Monitor
In the new digital world, individuals and businesses are almost entirely dependent on computer technology and electronic communications to function on a daily basis. Although the power of modern technology is a source of opportunity and inspiration—it also poses huge challenges, from protecting privacy and securing proprietary data to adhering to fast-changing statutory and regulatory requirements. The Cyber Law Monitor blog covers privacy, data security, technology, and cyber space. It tracks major legal and policy developments and provides analysis of current events.