Managing AI Risk
In 2017, when blockchain was the new shiny thing, a little-known micro-cap stock, Long Island Iced Tea Corp., changed its name to Long Blockchain Corp. That day, its stock price jumped 200% on the news – even though it remained a beverage maker that had merely announced it was exploring opportunities in blockchain technology. Simply attaching the word “blockchain” to its corporate name was enough to create a frenzy in the stock.
Even though artificial intelligence has been part of our lexicon for more than seventy years, it remains the latest bright shiny thing. Businesses large and small feel compelled to work artificial intelligence into their company descriptions, even with a limited understanding of what artificial intelligence is or how it could help their business. While incorporating artificial intelligence into a business model may be a good move, jumping on the AI bandwagon can have unintended consequences.
What is Artificial Intelligence?
Most of us have an imperfect concept of artificial intelligence: we assume the name describes the product. However, artificial intelligence is not necessarily what it sounds like. IBM defines artificial intelligence as “technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.” But what most people think of as artificial intelligence is generative AI – technology that can create original text, images, video, and other content without human intervention.
Underlying this is a hard fact: artificial intelligence is highly technical and exceedingly difficult. As Joseph Greenfield of Maryman and Associates, an expert in the field, told me, “To understand artificial intelligence, you must understand neural networks.” I don’t understand neural networks – do you?
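For readers who want a concrete picture, here is a minimal illustrative sketch – the numbers and names are invented for this article, not drawn from any real system – of the single artificial “neuron” that neural networks are built from: a weighted sum of inputs, plus a bias, run through a simple decision rule.

```python
# Illustrative only: one artificial "neuron," the basic building block
# of a neural network. All inputs, weights, and the bias are invented.

def neuron(inputs, weights, bias):
    """Compute a weighted sum of the inputs, add a bias, and 'fire' if positive."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Three input signals, three learned weights, one bias.
print(neuron([0.5, 0.2, 0.9], [0.4, -0.6, 0.8], bias=-0.5))  # prints 1
```

A modern AI model chains millions – or billions – of these units together and tunes the weights automatically during training, which is a large part of why the resulting systems are so hard to inspect and explain.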
What are the risks of Artificial Intelligence?
Some of the risks of artificial intelligence – or, more accurately, of AI systems and tools – are well publicized. For example, AI “hallucinations,” in which a generative AI tool creates responses to prompts that have little or no basis in fact, have become legendary. Biased or inaccurate responses are a common issue, and certain AI models have design flaws that can magnify them. Additionally, because of their complexity, AI systems cannot be treated simply as another form of software.
An AI system is not like a car, a computer, or the many other things we use without understanding them. Or, more accurately, using one is like driving a car without understanding what the steering wheel, accelerator, and brake do: you are bound to have an accident.
The National Institute of Standards and Technology recently published its AI Risk Management Framework, which identifies several risks that are inherent in AI systems. Among other things:
- Difficulty in Measurement. The risk in using AI systems is difficult to measure, making it challenging to implement a “safe” system.
- Adapting and Evolving. AI systems are, by their nature, continually adapting and evolving, which may make a risk analysis at one stage in the AI lifecycle inapplicable to a later stage.
- Lack of Transparency. AI systems are often opaque, lacking in documentation or explanation.
Moreover, a functioning AI system raises risks of inadequate compliance with laws, inadvertent disclosure of personal and business information, and a variety of ethical dilemmas. The takeaway here is that if you cannot identify or measure the risk, you might be unable to manage it.
Managing the Risk
While eliminating risk might be impossible, it can be managed. Some steps a company can take to control the risk in AI systems include:
- Understand the system and how you plan to use it. Make sure you know what the AI system is designed to do and how it will address your needs.
- Consider compliance. A variety of laws and regulations affect the lawful use of artificial intelligence. Currently, the European Union AI Act, the Utah AI Policy Act, and the Colorado AI Act stand out as laws aimed specifically at artificial intelligence, but by its nature artificial intelligence can trigger virtually every privacy law, as well as scrutiny by the FTC and state attorneys general. And just as legislatures and regulators have focused on privacy rights, they are moving into artificial intelligence regulation as well (even without fully understanding the concepts).
- Hot-Button Issues. Recognize that some applications of artificial intelligence are particularly sensitive, such as:
- Employment decisions;
- Credit scoring;
- Training with protected or unlawfully obtained data; and
- For those in the federal supply chain, compliance with the Biden Administration’s AI Executive Order.
There are also actions you can take to limit your risk exposure:
- Risk Analysis: Despite the challenge, understand how the AI system might create risks for your company. The risks range from violations of specific artificial intelligence and privacy laws to intellectual property infringement, loss of trade secrets, and reputational harm.
- Vendor Assessment: Learn as much as you can about who will provide or develop the AI system – its experience, reputation, past projects, and personnel.
- Training Materials: Find out what data was used to train the AI system and where it came from. Does it include personal information, copyrighted materials, or trade secrets? Did the developer have the right to use the data?
- Review the Agreement Carefully: As noted above, artificial intelligence systems are different from other software. A careful review of the representations and warranties, indemnification provisions, and limitations on liability is essential.
- Don’t Skimp on the Statement of Work: The statement of work (the actual description of what the AI system will do) is key. That is challenging because AI systems are often developed with broad initial goals, making continuing review of system requirements and goals essential.
- Have an AI Governance Committee and Policy: Establish a company group with meaningful authority, and with technical and legal expertise, to oversee the use of AI systems and tools.
Artificial intelligence tools are expected to transform the way we work. They have the potential to automate tasks, improve decision-making, and provide valuable insights into our operations. However, the use of AI tools also presents new challenges in terms of information security and data protection. Adopting AI systems and tools requires preparation and careful thought – don’t just reach for the brightest new penny!
JMBM’s Cybersecurity and Privacy Group counsels clients in a wide variety of industries – including accounting firms, law firms, business management firms and family offices – with a commitment to protecting personal information. The Group advises on artificial intelligence implementation and other new technologies, the development of cybersecurity strategies, the creation of data security and privacy policies and procedures, responses to data breaches and regulatory inquiries and investigations, and crisis management. The Cybersecurity and Privacy Group uses a focused intake methodology that permits clients to get a reliable sense of their cybersecurity readiness and to determine optimal, client-specific approaches to cybersecurity.
Robert E. Braun is the co-chair of the Cybersecurity and Privacy Law Group at Jeffer Mangels Butler & Mitchell LLP. Clients engage Bob to develop and implement privacy and information security policies and data breach response plans, negotiate agreements for technologies and data management services, and comply with legal and regulatory requirements. Bob manages data breach response and responds quickly to clients’ needs when a data breach occurs. Contact Bob at RBraun@jmbm.com or +1 310.785.5331.