The Colorado AI Act, set to take effect on February 1, 2026, introduces new consumer protection regulations for AI systems used by Colorado residents. It imposes disclosure requirements for public-facing AI systems and stricter obligations for high-risk AI systems, meaning those that significantly influence consequential decisions in areas like education, employment, and health care.
This is part three of our examination of the European Union’s new artificial intelligence law (the “EU AI Act”). In part one, we introduced the scope of the EU AI Act and discussed what types of AI systems are outright banned. In part two, we discussed high-risk AI systems. In this article, we look at the requirements for general-purpose AI models.
On May 9, 2024, Governor Wes Moore signed the Maryland Online Data Privacy Act (MODPA), making Maryland the seventeenth state to enact a comprehensive data privacy law. The law takes effect October 1, 2025, but it does not apply to personal data processing activities occurring before April 1, 2026. The full text can be found here. For more information on MODPA’s applicability thresholds, exemptions, and consumer rights, check out our client alert on the law, which you can view here.
This is part two of our examination of the European Union’s new artificial intelligence law (the “EU AI Act”). In part one, we introduced the scope of the EU AI Act and discussed what types of AI systems are outright banned. In this article, we look at which types of AI systems the EU considers high-risk and what the resulting compliance requirements are.
The European Union recently enacted its new artificial intelligence regulation (the “EU AI Act”). The new law is expected to have a substantial impact on the AI industry, including on companies outside of the EU, much as the GDPR did.
Overall, the EU AI Act follows a risk-based approach and sorts AI systems into several categories. High-risk systems are subject to the most stringent requirements, while AI systems presenting less risk are subject to lighter regulation. But certain uses of AI systems are prohibited entirely. Additionally, special rules apply to general-purpose AI systems and foundation models.
In this article, we begin to examine the key elements of the EU AI Act. Given the law’s size and complexity, we will analyze it over the course of several installments.
The Biden administration announced that it brokered a voluntary agreement with several of the biggest technology and artificial intelligence (AI) companies. Under the agreement, available here, the companies commit to a number of actions intended to encourage the safe, secure, and trustworthy development of AI technologies, particularly generative AI systems. While the commitments are not as extensive as other frameworks, such as the NIST AI Risk Management Framework or the Biden administration’s Blueprint for an AI Bill of Rights, they are in some ways more concrete and actionable, and they could serve as a model for other companies entering the AI market.
Safety
Signatories to the agreement commit to adversarial testing (red-teaming) to evaluate areas such as misuse, societal risks, and national security concerns. Adversarial testing should be performed internally as well as by independent third parties. Testing will cover a number of specific areas (a simplified harness sketch follows the list):
Biological, chemical, and radiological risks, such as the potential of an AI system to lower barriers to entry for weapons design, development, or use;
Cyber capabilities, such as the ways in which an AI system can be used for vulnerability discovery or the exploitation or defense of a computer system;
The effects of an AI system’s interaction with other systems, particularly the capacity to control physical systems;
The capacity for an AI system to self-replicate; and
Societal risks of the AI systems, such as bias and discrimination.
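To make the red-teaming commitment concrete, here is a minimal sketch of how such testing is often automated: categorized attack prompts are run against the model, and completions that are not refused get flagged for human review. Everything here is an assumption for illustration; the risk suites hold placeholder prompts, and the query_model callable stands in for whatever inference API the system under test actually exposes. This does not reflect any signatory’s actual tooling.

# Minimal red-team harness sketch (Python). Hypothetical throughout.
from typing import Callable

# Placeholder prompt suites keyed to the risk areas listed above.
RISK_SUITES = {
    "cbrn": ["<placeholder prompt probing weapons-design uplift>"],
    "cyber": ["<placeholder prompt probing exploit generation>"],
    "societal": ["<placeholder prompt probing biased decision-making>"],
}

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to help")

def looks_like_refusal(response: str) -> bool:
    """Crude heuristic that the model declined the request."""
    return any(marker in response.lower() for marker in REFUSAL_MARKERS)

def run_red_team(query_model: Callable[[str], str]) -> dict:
    """Return, per risk area, the prompts the model failed to refuse."""
    failures = {}
    for area, prompts in RISK_SUITES.items():
        failed = [p for p in prompts if not looks_like_refusal(query_model(p))]
        if failed:
            failures[area] = failed
    return failures

# A stub model that refuses everything, so the harness reports no failures.
print(run_red_team(lambda prompt: "I can't help with that request."))

In practice, automated checks like this are only a first pass; the agreement contemplates pairing them with internal human review and independent third-party testing.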
Security
Signatories will invest in cybersecurity and insider threat safeguards in connection with the AI system. Additionally, they will offer incentives, including bug bounties, contests, or prizes, for third parties to discover and report unsafe behaviors, vulnerabilities, and other issues with the AI system.
Trust
AI systems should use mechanisms that enable users to understand if audio or visual content is AI-generated, including watermarking that identifies the service or model. Signatories will also develop tools or APIs to determine if a particular piece of content was created with their AI system.
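To illustrate the provenance idea, a provider could attach a keyed tag to each piece of generated content and expose a service for verifying it. The sketch below uses a simple HMAC over the content bytes; this is an assumption chosen for illustration only, since real deployments typically embed the watermark in the media itself rather than carrying it as separate metadata, and no signatory’s actual scheme is described here.

# Illustrative provenance tagging, not any vendor's real API. A provider
# signs generated content with a private key; a verification service can
# later check whether a (content, tag) pair came from that provider.
import hmac
import hashlib

PROVIDER_KEY = b"example-secret-key"  # assumption: held privately by the provider

def tag_content(content: bytes) -> str:
    """Produce a provenance tag for content generated by this provider."""
    return hmac.new(PROVIDER_KEY, content, hashlib.sha256).hexdigest()

def was_generated_here(content: bytes, tag: str) -> bool:
    """Check whether the content and tag came from this provider."""
    return hmac.compare_digest(tag_content(content), tag)

image_bytes = b"...rendered image bytes..."
tag = tag_content(image_bytes)
assert was_generated_here(image_bytes, tag)    # authentic pair verifies
assert not was_generated_here(b"edited", tag)  # altered content fails

A keyed hash keeps verification cheap, but it only proves provenance to the key holder; publicly verifiable provenance would require a digital signature scheme instead.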
AI companies will publicly report model or system capabilities, limitations, and domains of appropriate and inappropriate use. Reports should include:
Information about the safety evaluations conducted, including information about dangerous capabilities, to the extent it is responsible to publicly disclose it;
Significant limitations in performance that have implications for the domains of appropriate use;
Discussion of the model’s effects on societal risks, such as fairness and bias; and
The results of adversarial testing conducted to evaluate the model’s fitness for deployment.
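This reporting commitment resembles the “model card” pattern, in which capabilities, limitations, and evaluation results are published in a structured, machine-readable document. The sketch below shows one possible shape for such a report; the field names are assumptions chosen for illustration, not a schema required by the agreement.

# One possible machine-readable shape for the public report described above.
# Field names are illustrative, not a schema mandated by the agreement.
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelReport:
    model_name: str
    capabilities: list
    limitations: list
    appropriate_domains: list
    inappropriate_domains: list
    safety_evaluations: dict = field(default_factory=dict)
    red_team_findings: list = field(default_factory=list)

report = ModelReport(
    model_name="example-model-v1",
    capabilities=["text summarization", "question answering"],
    limitations=["unreliable on events after its training cutoff"],
    appropriate_domains=["drafting assistance"],
    inappropriate_domains=["medical diagnosis", "credit decisions"],
    safety_evaluations={"bias_audit": "completed; see published summary"},
    red_team_findings=["occasional unsafe output under role-play prompts"],
)
print(json.dumps(asdict(report), indent=2))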
Signatories should evaluate the societal risks posed by an AI system, including the potential for harmful bias and discrimination and threats to privacy. AI systems should be designed to avoid creating or propagating harmful bias and discrimination. Companies should also employ trust and safety teams, advance AI safety research, advance privacy, protect children, and proactively manage the risks of an AI system.
The rapid spread of artificial intelligence (AI) systems in recent years has overlapped with the enactment of comprehensive privacy laws by multiple U.S. states. Aside from applying to AI systems in the same way they apply to any online service, several of the comprehensive state privacy laws contain provisions that specifically address certain uses of AI systems, in particular their use in profiling. This article surveys those provisions and assumes the reader is already familiar with basic concepts in the comprehensive privacy laws, such as controllership and applicability thresholds.
The new comprehensive privacy laws of Iowa and Utah have no provisions specific to the use of AI, for profiling or otherwise. The laws of Connecticut, Delaware, Indiana, Montana, Tennessee, Texas, and Virginia treat AI largely the same way as one another. California, Colorado, Florida, and Oregon differ in some significant respects.
On June 6, 2023, the Board of Governors of the Federal Reserve System, the Office of the Comptroller of the Currency, and the Federal Deposit Insurance Corporation (collectively, the “Agencies”) issued final interagency guidance that provides granular recommendations for how banks and other regulated financial institutions should manage risks associated with third-party relationships (the “Guidance”). The Guidance replaces prior guidelines released by the Agencies on July 19, 2021.
The report cites myriad issues with AI systems, including uses in hiring and credit decisions that have been found to reproduce existing inequities or create new harmful bias, uses in patient care that proved to be unsafe or ineffective, and increased collection or use of data that threatens people’s opportunities or undermines their privacy. The report argues that these harmful outcomes are not inevitable and that AI tools have the potential to revolutionize many industries and benefit all parts of society.
The report sets out five basic rights that should be respected in connection with AI systems.
There is an emerging consensus that AI systems present a significantly different risk profile than conventional information technology systems. While there is currently no legal requirement to use a risk management framework when developing AI systems, there are a growing number of proposals that would require the use of a risk management framework or offer a safe harbor from certain types of liability if one is used.
The framework identifies six factors for mitigating risk and evaluating the trustworthiness of an artificial intelligence (AI) system.
In the new digital world, individuals and businesses are almost entirely dependent on computer technology and electronic communications to function on a daily basis. Although the power of modern technology is a source of opportunity and inspiration, it also poses huge challenges, from protecting privacy and securing proprietary data to adhering to fast-changing statutory and regulatory requirements. The Cyber Law Monitor blog covers privacy, data security, technology, and cyberspace. It tracks major legal and policy developments and provides analysis of current events.