The White House Office of Science and Technology Policy recently released its Blueprint for an AI Bill of Rights, providing recommendations that developers, businesses, users, and lawmakers can follow to reduce AI’s potential harms to individuals and to society at large.
As we wrote in The AI Dilemma book, AI has the potential to help create a Perfect World or a Perfect Storm. Our research continues to find many AI solutions in the market (in recruiting, credit scoring, bank lending, and patient care) that are causing harm, are unsafe, or rely on biased algorithms that reproduce or amplify existing inequities and embed new harmful bias and discrimination.
We have unlocked a universe of unchecked data on the web that aggregates information about us, often without our approval, undermining our privacy and risking our security. More importantly, we face a world where AI agents can create havoc at unprecedented scale, and as leaders we have a responsibility to get our AI legal statutes in order.
These digital realities are harmful, but if we start to lead as a globally unified community where AI is used for good, we can modernize our world and bring more value and benefits. AI is already helping our agriculture industry predict storm paths to minimize risk, identifying health risks in our healthcare systems, guiding traffic to the best routes, improving customer relationship management practices, guiding sales professionals to reduce revenue uncertainty, and predicting margin outcomes a quarter before results are released, giving leaders time to adjust decisions. Many more use cases put AI for Good front and center.
AI is being used in many positive ways. However, until we advance our legal, audit, procurement, and educational systems, we will continue to put our evolution toward a more intelligent world at risk.
Why is this AI Blueprint so important?
First, we currently don’t have legal statutes and regulations that are binding across North America. Technology innovators are advancing globally with algorithmic biases, and many software products surface inaccurate black-box predictions, as many companies neither approve the features going into AI models nor analyze precision and recall patterns to ensure models are optimized. Hence, a prediction in many products appears as if by magic, but few companies peel back the onion to ensure error rates fall within acceptable risk zones.
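The precision and recall analysis described above can be sketched in a few lines. This is a minimal illustration, not any vendor’s actual audit tooling; the loan-approval labels and the risk-acceptance threshold are hypothetical.

```python
# Minimal sketch of a precision/recall audit for a binary classifier.
# All prediction data below is hypothetical illustration data.

def precision_recall(y_true, y_pred):
    """Compute precision and recall for binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical loan-approval outcomes (y_true) vs. model decisions (y_pred).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

p, r = precision_recall(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f}")  # prints precision=0.75 recall=0.75

# A hypothetical "risk acceptance zone": flag the model for review
# if either metric falls below a governance-approved floor.
RISK_FLOOR = 0.80
needs_review = p < RISK_FLOOR or r < RISK_FLOOR
```

Checks like this should be run per demographic segment as well as in aggregate, since an acceptable overall error rate can hide a much higher error rate for one group.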
I have been writing for some time on the business imperative for board directors and C-level leadership teams to advance their digital literacy so they can start to train legal officers and procurement officers in AI risk management. Understanding the principles set out in this AI Bill of Rights Blueprint is an excellent starting point for learning about the risks of AI.
Foundation Principles to Advance AI Governance
I read every word of the USA Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People, and it is an excellent foundation for others to build on and a starting point for aligning legal language globally. Although not yet formal policy, it creates a framework for evolving existing policies, statutes, and regulations.
The first point is that context matters. The automated systems the Blueprint covers vary widely, from robots for surgical procedures to automated cars, school building security systems, and traffic monitoring systems. AI can literally be applied in every industry, hence the stakes of getting this right.
The Blueprint for an AI Bill of Rights recognizes that law enforcement activities require a balancing of equities, for example, between the protection of sensitive law enforcement information and the principle of notice; as such, notice may not be appropriate, or may need to be adjusted to protect sources, methods, and other law enforcement equities. (Reference Executive Order 13960: Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 2020)).
The Five AI Principles
Five AI principles are identified to guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence. The Blueprint for an AI Bill of Rights is a guide for a society that protects all people from these threats and uses technologies in ways that reinforce our highest values. The Blueprint includes a Foreword, the five principles, notes on Applying the Blueprint for an AI Bill of Rights, and From Principles to Practice, which gives concrete steps that many kinds of organizations, from governments at all levels to companies of all sizes, can take to uphold these values.
In the simplest form, the five principles state:
- Automated systems should be effective and safe.
- Users of such systems should be protected against algorithmic discrimination, and the system should be designed and used in an equitable way.
- People should be able to control how their data is used, and they should not be subjected to abusive data practices.
- Users should know why and how an AI system made its determination.
- People should have the choice to opt out of AI decision-making and fall back on a human if the system has an error, fails, or they want to challenge the decision.
Definitions of Terms
The USA AI Blueprint also defines key terms that will shape future policy and legal statutes:
1. Algorithmic Discrimination
2. Automated System
3. Communities
4. Equity
5. Rights, Opportunities, or Access
6. Sensitive Data
7. Sensitive Domains
8. Surveillance Technology
9. Underserved Communities
1. “Algorithmic discrimination” occurs when automated systems contribute to unjustified different treatment or impacts disfavoring people based on their race, color, ethnicity, sex (including pregnancy, childbirth, and related medical conditions, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law. Depending on the specific circumstances, such algorithmic discrimination may violate legal protections.
2. An “automated system” is any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities. Automated systems include, but are not limited to, systems derived from machine learning, statistics, or other data processing or artificial intelligence techniques, and exclude passive computing infrastructure. “Passive computing infrastructure” is any intermediary technology that does not influence or determine the outcome of decision, make or aid in decisions, inform policy implementation, or collect data or observations, including web hosting, domain registration, networking, caching, data storage, or cybersecurity.
3. “Communities” include: neighborhoods; social network connections (both online and offline); families (construed broadly); people connected by affinity, identity, or shared traits; and formal organizational ties. This includes Tribes, Clans, Bands, Rancherias, Villages, and other Indigenous communities. AI and other data-driven automated systems most directly collect data on, make inferences about, and may cause harm to individuals. But the overall magnitude of their impacts may be most readily visible at the level of communities.
4. “Equity” means the consistent and systematic fair, just, and impartial treatment of all individuals. Systemic, fair, and just treatment must take into account the status of individuals who belong to underserved communities that have been denied such treatment, such as Black, Latino, and Indigenous and Native American persons, Asian Americans and Pacific Islanders and other persons of color; members of religious minorities; women, girls, and non-binary people; lesbian, gay, bisexual, transgender, queer, and intersex (LGBTQI+) persons; older adults; persons with disabilities; persons who live in rural areas; and persons otherwise adversely affected by persistent poverty or inequality.
5. “Rights, opportunities, or access” is used to indicate the scoping of this framework. It describes the set of: civil rights, civil liberties, and privacy, including freedom of speech, voting, and protections from discrimination, excessive punishment, unlawful surveillance, and violations of privacy and other freedoms in both public and private sector contexts; equal opportunities, including equitable access to education, housing, credit, employment, and other programs; or, access to critical resources or services, such as healthcare, financial services, safety, social services, non-deceptive information about goods and services, and government benefits.
6. “Sensitive Data”: Data and metadata are sensitive if they pertain to an individual in a sensitive domain (defined below); are generated by technologies used in a sensitive domain; can be used to infer data from a sensitive domain or sensitive data about an individual (such as disability-related data, genomic data, biometric data, behavioral data, geolocation data, data related to interaction with the criminal justice system, relationship history and legal status such as custody and divorce information, and home, work, or school environmental data); or have the reasonable potential to be used in ways that are likely to expose individuals to meaningful harm, such as a loss of privacy or financial harm due to identity theft. Data and metadata generated by or about those who are not yet legal adults is also sensitive, even if not related to a sensitive domain. Such data includes, but is not limited to, numerical, text, image, audio, or video data.
7. “Sensitive domains” are those in which activities being conducted can cause material harms, including significant adverse effects on human rights such as autonomy and dignity, as well as civil liberties and civil rights. Domains that have historically been singled out as deserving of enhanced data protections or where such enhanced protections are reasonably expected by the public include, but are not limited to, health, family planning and care, employment, education, criminal justice, and personal finance. In the context of this framework, such domains are considered sensitive whether or not the specifics of a system context would necessitate coverage under existing law, and domains and data that are considered sensitive are understood to change over time based on societal norms and context.
8. “Surveillance technology” refers to products or services marketed for or that can be lawfully used to detect, monitor, intercept, collect, exploit, preserve, protect, transmit, and/or retain data, identifying information, or communications concerning individuals or groups.
9. The term “underserved communities” refers to communities that have been systematically denied a full opportunity to participate in aspects of economic, social, and civic life.
The release of the USA AI Bill of Rights Blueprint will now advance into public and legal forums to advance AI for Good. This blueprint is a must-read for boards of directors and C-suite leaders, as it builds the knowledge needed to lead and increases digital literacy.
It is my hope that all democratic countries can rally together so that an international legal framework and statutes can be secured. AI moves like greased lightning, and we need to rally our policy and legal stewards to get the job done far more rapidly than what is currently underway.
That being said, the AI Blueprint release is a very positive step in the right direction. However, other countries are further ahead than the USA.
Some highlights include: the EU is ahead of the USA, having released a proposed European law on artificial intelligence (the EU AI Act), the first law on AI by a major regulator anywhere. In addition, in September 2022, Brazil’s Congress passed a bill that also creates a legal framework for AI. The Canadian government also recently introduced Bill C-27, which includes the Artificial Intelligence and Data Act (“AIDA”), an entirely new law that aims to regulate the development and use of AI in Canada. The legal firm McCarthy Tétrault published an excellent summary of the merits and loopholes of this new bill under review.
I expect we will see more AI legal-framework bills come forward in 2023.
In summary, “AI systems should work, should not discriminate, and should not use data indiscriminately,” USA AI Blueprint co-author Suresh Venkatasubramanian wrote in a tweet.
Let’s get this right.
The Organization for Economic Co-operation and Development’s (OECD’s) 2019 Recommendation on Artificial Intelligence includes principles for responsible stewardship of trustworthy AI and can be found here.
A Little About The Office of Science and Technology Policy
The office was established in 1976 to provide the President and others with advice on the scientific, engineering, and technological aspects of the economy, national security, health, foreign relations, the environment, and the technological recovery and use of resources, among other topics.