As the use of artificial intelligence (AI) rapidly expands throughout the private sector and government, the Biden administration has published a report titled A Blueprint for an AI Bill of Rights. A summary is available at https://www.whitehouse.gov/ostp/ai-bill-of-rights/ and the full document is available at https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf.
The report cites myriad issues with AI systems, including uses in hiring and credit decisions that have been found to reproduce existing inequities or create new harmful bias, uses in patient care that proved unsafe or ineffective, and increased collection or use of data that threatens people’s opportunities or undermines their privacy. The report argues that these harmful outcomes are not inevitable and that AI tools have the potential to revolutionize many industries and benefit all parts of society.
The report sets out five basic rights of people that should be respected in connection with AI systems.
1. Protection from Unsafe or Ineffective Systems
AI systems should be developed and tested with consultation from diverse stakeholders and domain experts to identify concerns, risks, and potential impacts of the systems. They should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring that demonstrate they are safe and effective based on their intended use. AI should be designed to proactively protect people from harms stemming from unintended, but foreseeable, uses or impacts.
2. Protection from Discrimination
Algorithmic discrimination occurs when automated systems contribute to unjustified different treatment or negatively impact people based on their race, color, ethnicity, sex, religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. AI developers should take proactive measures, as part of the design and training of an AI system, to protect individuals from algorithmic discrimination. Independent evaluation and plain language reporting in the form of an algorithmic impact assessment, including disparity testing results and mitigation information, should be performed and made public whenever possible to confirm these protections.
3. Protection of Privacy
People should have agency over how their data is used by AI systems. Designs should ensure that privacy protections are included by default, including ensuring that data collection conforms to reasonable expectations and that only data strictly necessary for the specific context is collected. Designers of AI systems should seek permission from individuals and respect their decisions regarding collection, use, access, transfer, and deletion of personal information. Where this is not possible, alternative privacy by design safeguards should be used. Systems should not employ user experience and design decisions that hide user choices or burden users with defaults that are privacy-invasive. Consent should only be used to justify collection of data in cases where it can be appropriately and meaningfully given.
Enhanced protections for data, including inferences, related to sensitive domains such as health, work, education, criminal justice, and finance, and for data pertaining to children should be used. In sensitive domains, data and related inferences should only be used for necessary functions. Surveillance technologies should be subject to heightened oversight that includes at least pre-deployment assessment of potential harms and scope limits to protect privacy and civil liberties. Continuous surveillance and monitoring should not be used in education, work, housing, or in other contexts where the use of such surveillance technologies is likely to negatively impact rights, opportunities, or access. Whenever possible, people should have access to reporting that confirms that their data decisions have been respected and provides an assessment of the potential impact of surveillance technologies on their rights, opportunities, or access.
4. Right to Notice and Explanation
People should know that an AI system is being used and understand how it impacts them. Designers of an AI system should provide generally accessible plain language documentation with clear descriptions of the overall system and the role that AI plays, the developer responsible for the system, and clear explanations of outcomes. People impacted by the AI system should be notified of significant changes in its uses or functionality. People should know how and why an outcome impacting them was determined by an AI system. AI systems should provide explanations that are technically valid, meaningful and useful to people, and calibrated to the level of risk based on the context. Reporting that includes summary information about AI systems in plain language and assessments of the clarity and quality of the notice and explanations should be made public whenever possible.
5. Right to Human Alternatives, Consideration, and Fallback
People should be able to opt out of AI decision-making in favor of a human alternative, where appropriate. Appropriateness should be determined based on reasonable expectations in a given context and with a focus on protecting the public from especially harmful impacts. People should have access to a person who can quickly consider and remedy problems they encounter, and who can hear their appeals if they wish to contest the AI system's results. Human consideration (often referred to as “human intervention”) and fallback should be accessible, equitable, and effective, and should not impose an unreasonable burden on the public.
It is fair to view the Biden administration’s report as a recommendation to Congress for what the administration would like to see in legislation regulating AI systems, and as a roadmap for developers about what federal regulators may expect of them.