In 2017, when blockchain was the new shiny thing, a little-known micro-cap stock, Long Island Iced Tea Corp., changed its name to Long Blockchain Corp. Its stock price jumped 200% that day, even though the company was still a beverage maker that had merely announced it was exploring opportunities in blockchain technology. Simply attaching the word “blockchain” to its corporate name was enough to create a frenzy in the stock.

Even though artificial intelligence has been a part of our lexicon for more than seventy years, it remains the latest bright shiny thing. Businesses large and small feel compelled to incorporate artificial intelligence into their company descriptions even with a limited understanding of what artificial intelligence is or how it could help their business. While incorporating artificial intelligence into a business model may be a good move, jumping on the AI bandwagon can have unintended consequences.

What is Artificial Intelligence?

Most of us have an imperfect concept of artificial intelligence: we assume the name describes the product. However, artificial intelligence is not necessarily what it sounds like. IBM defines artificial intelligence as “technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.” But what most people think of as artificial intelligence is generative AI: technology that can create original text, images, video, and other content without human intervention.

Underlying this is a hard fact: artificial intelligence is highly technical and exceedingly difficult. As Joseph Greenfield of Maryman and Associates, an expert in the field, told me, “To understand artificial intelligence, you understand neural networks.” I don’t understand neural networks – do you?
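For readers who want a peek behind the curtain, here is a toy sketch (in TypeScript, and purely illustrative – the numbers and layer sizes are invented, not drawn from any real model) of the arithmetic at the core of a neural network: layers of weighted sums passed through simple non-linear functions, repeated at enormous scale.

```typescript
// A toy two-layer neural network, reduced to its essentials: each layer
// multiplies its inputs by learned weights, adds a bias, and applies a
// non-linear "activation" function. Real models do this with billions of
// weights; the principle is the same.

type Vector = number[];
type Matrix = number[][];

const relu = (x: number): number => Math.max(0, x);

// One layer: output[i] = activation(sum over j of weights[i][j] * input[j] + bias[i])
function layer(input: Vector, weights: Matrix, bias: Vector): Vector {
  return weights.map((row, i) =>
    relu(row.reduce((sum, w, j) => sum + w * input[j], bias[i]))
  );
}

// Two layers chained together: this composition of simple arithmetic is
// all a neural network "is".
const hidden = layer([0.5, -1.2], [[0.8, 0.1], [-0.3, 0.6]], [0.0, 0.2]);
const output = layer(hidden, [[1.0, -0.5]], [0.1]);
console.log(output); // an array holding one number derived from the inputs
```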

What are the risks of Artificial Intelligence?

Some of the risks in artificial intelligence – or, more accurately, in AI systems and tools – are well publicized. For example, AI “hallucinations” – responses a generative AI tool creates that have little or no basis in fact – have become legendary. Biased or inaccurate responses are a common issue, and certain AI models have design flaws that can magnify those issues. Additionally, because of the complexity of AI systems, they cannot be treated simply as another form of software.

An AI system is not like a car, a computer, or the many other things we use but don’t understand. Or, more accurately, using one is like driving a car without understanding what the steering wheel, accelerator and brake do. You are bound to have an accident.

The National Institute of Standards and Technology recently published an AI “Risk Management Framework” that identifies several risks inherent in AI systems. Among other things:

  • Difficulty in Measurement. The risk in using AI systems is difficult to measure, making it challenging to implement a “safe” system.
  • Adapting and Evolving. AI systems are, by their nature, continually adapting and evolving, which may make a risk analysis at one stage in the AI lifecycle inapplicable to a later stage.
  • Lack of Transparency. AI systems are often opaque, lacking in documentation or explanation.

Moreover, a functioning AI system raises risks of inadequate compliance with laws, inadvertent disclosure of personal and business information, and a variety of ethical dilemmas. The takeaway here is that if you cannot identify or measure the risk, you might be unable to manage it.

Managing the Risk

While risk cannot be eliminated, it can be managed. Some steps a company can take to control the risk in AI systems include:

  • Understand the system and how you plan to use it. Make sure that you understand the purpose of the AI system and how it will address your needs.
  • Consider compliance. There are a variety of laws and regulations that impact the legal uses of artificial intelligence. Currently, the European Union AI Act, Utah AI Policy Act, and Colorado AI Act all stand out as specific laws geared toward artificial intelligence, but the nature of artificial intelligence is that it can trigger virtually all privacy laws as well as scrutiny by the FTC and state attorneys general. And, just as legislatures and regulators are focusing on privacy rights, they are moving into artificial intelligence regulation as well (even without fully understanding the concepts).
  • Hot-Button Issues. Recognize that some applications of artificial intelligence are particularly sensitive, such as:
    • Employment decisions;
    • Credit scoring;
    • Training with protected or unlawfully obtained data; and
    • For those in the federal supply chain, the Biden Administration’s AI Executive Order.

There are also actions you can take to limit your risk exposure:

  • Risk Analysis: Despite the challenge, understand how the AI system might create risks for your company. The risks range from violations of specific artificial intelligence and privacy laws to intellectual property infringement, loss of trade secrets, and reputational harm.
  • Vendor Assessment: Learn as much as you can about who will provide or develop the AI system – its experience, reputation, past projects, and personnel.
  • Training Materials: Find out what data was used to train the AI system and where it came from. Does it include personal information, copyrighted materials, or trade secrets? Did the developer have the right to use the data?
  • Review the Agreement Carefully: As noted above, artificial intelligence systems are different from other software. A careful review of the representations and warranties, indemnification provisions and limitations on liability is essential.
  • Don’t Skimp on the Statement of Work: The statement of work (the actual description of what the AI system will do) is key. That is challenging because AI systems are often developed with broad initial goals, making a continuing review of system requirements and goals essential.
  • Have an AI Governance Committee and Policy: Establish a company group with meaningful authority, and with technical and legal expertise, to oversee the use of AI systems and tools.

Artificial intelligence tools are expected to transform the way we work. They have the potential to automate tasks, improve decision-making, and provide valuable insights into our operations. However, the use of AI tools also presents new challenges in terms of information security and data protection. Adopting AI systems and tools requires preparation and careful thought – don’t just reach for the brightest new penny!


JMBM’s Cybersecurity and Privacy Group counsels clients – with a commitment to protecting personal information – in a wide variety of industries, including accounting firms, law firms, business management firms and family offices, on artificial intelligence implementation and other new technologies, development of cybersecurity strategies, creation of data security and privacy policies and procedures, responses to data breaches and regulatory inquiries and investigations, and crisis management. The Cybersecurity and Privacy Group uses a focused intake methodology that permits clients to get a reliable sense of their cybersecurity readiness and to determine optimal, client-specific approaches to cybersecurity.

Robert E. Braun is the co-chair of the Cybersecurity and Privacy Law Group at Jeffer Mangels Butler & Mitchell LLP. Clients engage Bob to develop and implement privacy and information security policies and data breach response plans, negotiate agreements for technologies and data management services, and comply with legal and regulatory requirements. Bob manages data breach response and responds quickly to clients’ needs when a data breach occurs. Contact Bob at RBraun@jmbm.com or +1 310.785.5331.

Every ransomware attack requires the victim to make a hard decision – whether or not to pay the ransom. The decision is often made on the basis of past mistakes – failure to implement basic security (such as multi-factor authentication), failure to train personnel to recognize phishing, or failure to establish and maintain an effective backup protocol. Lack of backups is often the deciding factor – if a company cannot reinstall systems and recover lost data, it may feel that it has no choice except to pay the ransom.

Why You Shouldn’t Pay. Even if that were the case, paying the ransom may be the wrong decision. Here’s why:

  • Paying the Ransom May Be Illegal. Federal and some state and local governments have rules against paying ransom to bad actors because it funds illegal activities. The U.S. Department of the Treasury’s Office of Foreign Assets Control (OFAC) oversees these incidents. The International Emergency Economic Powers Act and the Trading with the Enemy Act impose strict rules on financial engagement with sanctioned foreign parties, and it is illegal to conduct a transaction with any person on OFAC’s Specially Designated Nationals and Blocked Persons List. As it happens, hackers are often on the list. Violations of the sanctions rules can result in civil penalties and even jail time.

Liability is strict: the government can seek civil penalties for a ransom payment made to an individual or entity on the list even if the victim did not know that the payment was illegal.

Continue reading

In 2024, privacy laws adopted by Montana, Oregon, Texas and Utah will become effective. While the laws have much in common (and are similar to the laws already in effect), they each have special characteristics, and companies will need to evaluate how they impact operations, disclosures and policies.

What do they have in common?

Each of the new laws provides similar rights to consumers:

  • The right to opt out of data collection and processing
  • The right to correct inaccuracies in their personal data
  • The right to access a copy of their data
  • The right to delete their personal data
  • The right to opt in, or opt out, of processing sensitive personal data
  • The right to opt out of the sale of personal data, profiling, and the processing of personal data for targeted advertising

The statutes also impose similar obligations on businesses:

  • Publish a privacy notice describing the business’s data collection and processing practices, including whether data is shared with third parties
  • Recognize opt-out preference signals, which can allow consumers to opt out of data collection and processing without having to verify their identities (see the sketch after this list)
  • Perform and document data protection assessments (DPAs) for high-risk processing activities
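Opt-out preference signals are a concrete technical mechanism, not just a legal abstraction. Below is a minimal sketch of how a website might honor the Global Privacy Control (GPC) signal; the Express/Node.js stack is our assumption for illustration, not anything the statutes require.

```typescript
// A minimal sketch of honoring the Global Privacy Control (GPC) opt-out
// preference signal. The Express/Node.js stack is assumed for illustration.
import express from "express";

const app = express();

// Supporting browsers and extensions send a "Sec-GPC: 1" request header.
app.use((req, res, next) => {
  if (req.header("Sec-GPC") === "1") {
    // Treat the request as an opt-out of sale/sharing and targeted
    // advertising -- e.g., suppress ad-tech and analytics tags. Note that
    // no identity verification is involved; the header alone is the signal.
    res.locals.suppressTracking = true;
  }
  next();
});

app.get("/", (req, res) => {
  res.send(res.locals.suppressTracking ? "Opted out of tracking." : "Welcome.");
});

app.listen(3000);
```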

None of the new state laws provides for a private right of action like California’s (which allows consumers to sue over certain data breaches), but each of them has an enforcement mechanism that includes penalties for noncompliance. Enforcement will generally be carried out by each state’s attorney general. Continue reading

Companies that are subject to the registration and disclosure requirements of the United States Securities Act and Securities Exchange Act face the challenge of complying with a broad variety of detailed regulations addressing their disclosure and reporting obligations. The Securities and Exchange Commission recently adopted regulations that will have an impact on publicly traded companies that suffer a data breach. Because the SEC’s standards for disclosure often set a standard for private companies as well, the regulations are likely to have an impact on other companies.

Breach Notifications for the Past 20 Years.  Ever since California became the first state to require companies to notify their customers of data breaches in 2003, the time between the date a breach was discovered and the time the breach was reported has been an issue of contention. Early reporting gives consumers a leg up in protecting their personal information, and lets investors, vendors and customers of companies know if key business information has been compromised. At the same time, companies want as much time as possible to investigate a breach, understand what happened, and provide accurate information – companies that give early notice often have to give multiple notices as more information becomes available, and may even find that the original notice wasn’t necessary. Regardless, lawsuits against companies that have suffered data breaches almost universally point to the gap in time between the discovery and notification of a breach.

The SEC Acts.  Regulators have stepped in and identified time frames for public notification of a data breach. Most recently, the Securities and Exchange Commission issued a final rule that reduces the time for reporting companies (companies whose securities are registered with the SEC) to disclose cyberattacks publicly. As has been widely reported, with some exceptions, a company that is the victim of a cyberattack now has four business days after determining that the attack is material to disclose its impact publicly. Cyberattacks that involve the theft of intellectual property, a business interruption or reputational damage will likely require disclosure under the regulations. Continue reading

Congress has managed not to adopt a federal privacy law, leaving it to the Securities and Exchange Commission, the Federal Trade Commission, and other regulators to fill the void – something that will take years to implement and will be subject to challenges.

We now have, however, ten state privacy laws – five adopted in just the past two months. While the laws have commonalities, no two of them are entirely consistent with each other; businesses, particularly those with operations in multiple states, will have to consider how to comply in an efficient and effective manner. This will be no easy task, since in addition to the ten existing state laws, there are nine additional states with active bills. When state legislatures return, it is entirely likely that we will need to revisit this issue.

Creating a privacy regime requires an individual analysis of each company, including the data it collects, how it uses it, and who has access to it. Ten separate laws make the job much more difficult, but we start here with three points – who is covered, what rights are granted, and key similarities and differences. Continue reading

Website analytics are a key part of understanding whether a website “works,” and how to improve it; they arose almost at the same time that companies began using websites to transact business. For the most part, and for a long time, website analytics were seen as benign – a way to track information without trampling on an individual’s privacy rights. But the multitude of ways in which companies collect information on websites without a user’s knowledge makes it more and more likely that a website owner can find itself in violation of privacy laws.

More than that, analytics have become a security issue. The tools used to collect visitor data – cookies, pixels, beacons, and other technologies – have created a risk surface that can allow bad actors to identify targets and breach defenses. At the same time, the nature of these tools makes them one of the risks that companies can manage, allowing them to comply with privacy mandates and reduce cyber risk.

In the Beginning . . .

Originally, analytics were limited. Cookies and other devices allowed a website to recognize a user and to smooth the operation of the website. This little piece of data on your computer made it easier to log on to a website, to complete a purchase, and to see the information you look for. Although cookies did allow the website to recognize a user – essentially, to collect personal information – they were generally limited to the website; they were also typically “session cookies” used to facilitate a single user session, or “persistent cookies,” allowing the site to differentiate a new visitor from a prior visitor.
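For the technically curious, the sketch below shows that distinction in practice; the Express and cookie-parser stack is our choice purely for illustration. The only difference between a session cookie and a persistent cookie is whether the server gives it an expiration.

```typescript
// A minimal sketch of the session vs. persistent cookie distinction,
// using Express and cookie-parser purely for illustration.
import express from "express";
import cookieParser from "cookie-parser";

const app = express();
app.use(cookieParser());

app.get("/", (req, res) => {
  // Session cookie: no Expires/Max-Age, so the browser discards it when
  // the session ends. This is what smooths a login or a shopping cart.
  res.cookie("sessionId", "abc123");

  // Persistent cookie: Max-Age (here, one year in milliseconds) makes it
  // survive restarts -- how a site tells a returning visitor from a new one.
  res.cookie("returningVisitor", "true", { maxAge: 365 * 24 * 60 * 60 * 1000 });

  const returning = req.cookies.returningVisitor === "true";
  res.send(returning ? "Welcome back!" : "First visit.");
});

app.listen(3000);
```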

Since then, the tools used to identify website visitors and their actions have exploded in both numbers and potency, creating opportunities and challenges for website owners. Continue reading

On Monday, October 17, 2022, the California Privacy Protection Agency Board issued revised proposed regulations under the California Consumer Privacy Act of 2018 (as amended by the California Privacy Rights Act of 2020). The revised regulations propose dozens of changes intended to address business concerns that some of the requirements were confusing and costly to implement.

While the proposed regulations are still in draft form and are likely to go through additional changes (the proposal itself identifies additional areas for the CPPA Board to consider), there are a few clear takeaways from the most recent draft:

  • Notice at Collection. Businesses will need to review and update notices at collection; a simple statement that personal information is being collected in accordance with a privacy policy will not be adequate. In particular, the proposed regulations emphasize that references to the collection and use of information in a notice at collection must be specific; the link should direct the reader to the specific provision, not just to the first page of the privacy policy.
  • Contract Requirements for Service Providers and Contractors. The proposed regulations carry over and emphasize the contractual requirements for Service Providers and Contractors. Incorporating these provisions into vendor agreements, whether directly into an agreement or through an addendum, is essential, as is implementing the guardrails described in the regulations. The recent settlement between Sephora and the California Attorney General is a direct result of a failure to address this issue.
  • Limits on Selling and Sharing Personal Information. Covered businesses will need to look carefully at how their vendor relationships could be construed as selling or sharing personal information and be ready to include a “Do Not Sell/Share” link, not just where data is collected, but also on the home page of the business’s website.
  • B2B and Employee Data. Most companies should, by now, be aware that personal information gathered from business contacts and employees will be subject to the CCPA beginning January 1, 2023. For companies that have not had to comply with these requirements before, implementing effective procedures and policies addressing these needs will impose a significant burden.
  • Regulators (and Others) are Looking. Finally, companies should be aware that the CPPA and the California Attorney General (along with plaintiffs’ counsel and even some consumers) are watching. Businesses that don’t make a good faith effort to comply can expect to be called out, often in public and expensive ways.

Continue reading

In 2018, the California Legislature adopted the California Consumer Privacy Act (CCPA) and became the first state to enact a comprehensive law designed to protect the privacy of consumers’ personal information. Businesses that are subject to the CCPA are required, among other things, to respond to consumers who wish to view the personal information collected by the business, delete personal information, and opt out of the sale of personal information. The CCPA was amended in 2020 when California voters approved the California Privacy Rights Act of 2020 (CPRA), which added requirements and restrictions regarding the collection, use, sale and sharing of personal information.

Employee and Business Personal Information

While the CCPA is aimed at protecting consumers’ personal information, the terms of the law extend to the personal information of employees and business contacts. The California legislature reacted by exempting employment information and “business-to-business” (B2B) personal information from many of the provisions of the CCPA until January 1, 2021, a date the CPRA extended to January 1, 2023.

The Exemption and its Demise

The broad consensus after the adoption of the CPRA was that the California legislature would extend the exemptions for employee and B2B personal information. While there were a number of attempts to come to an agreement, the California Legislature ultimately adjourned on August 31, 2022 without adopting an extension. As a result, it is a certainty that full consumer rights will apply to personal information obtained from employees or as a result of a B2B relationship beginning January 1, 2023.

Continue reading

Online privacy policies are ubiquitous. Sometimes they are mandated by law – that’s been the case in California for years – and a variety of other states and federal agencies (like the Securities and Exchange Commission) require them as well. As a practical matter, almost every firm that has an online presence has a privacy policy. But it’s not enough to have a privacy policy – the policy has to be “right,” and getting it wrong can open a company to liability. At the same time, done correctly, a privacy policy is an important asset.

Privacy Policies as an Asset – or Liability

An accurate and well-written privacy policy can be an important asset to a company. Consumers increasingly look for transparency in the vendors they patronize. A privacy policy that is readable and organized benefits a company, not just because it better complies with applicable laws, but also because it reflects the firm’s commitment to accuracy and transparency. A confusing, ill-conceived policy, by contrast, opens a company up to liability, both from consumers and from governmental bodies, which regularly examine privacy policies to confirm that they comply with fair trade practices. Moreover, a privacy policy that doesn’t reflect a company’s actual practices can be used after a data breach to cast blame, and impose monetary burdens, on a firm.

Recent Privacy Laws Make Privacy Policies More Challenging

The California Consumer Privacy Act of 2018 (as amended by the California Privacy Rights Act of 2020), along with similar (but not identical) laws adopted in Connecticut, Virginia, Colorado and Utah, adds complexity to the mix. These laws include specific disclosure requirements in connection with the collection of personal information and enumerate rights of consumers, all of which need to be disclosed to the consumer; as a practical matter, a privacy policy is the only effective way of complying. Continue reading

Addressing privacy compliance and cybersecurity is becoming more and more challenging for companies. At least 26 states are considering various kinds of data privacy laws. At the same time, ransomware, wiperware and data breaches have become more frequent, more damaging and more expensive, and there is no indication that the trend will end soon.

Complying with privacy mandates, and preparing for and defending against a data breach, requires knowledge – it requires visibility.

What does that mean? To achieve visibility, an enterprise needs to increase its knowledge of key elements in its infrastructure:

See Your Network

Most C-level executives, other than chief technology officers and chief financial officers, have little knowledge of their network. But understanding what data is stored on the network, how the various parts of the network interact, and who has access to the network (and what kind) is essential to evaluating risks, complying with privacy laws, and preparing for and defending against attacks. This means knowing not only what is supposed to be on the network, but the “silent” nodes as well – things like unused servers and the devices that attach to the network, such as personal laptops, smartphones and tablets.

Knowing your network also means knowing what is happening on it. Companies need to know when there is a threat, where it is, and how to contain it. Simply having firewalls and other endpoint security isn’t enough; it’s too easy for hackers to gain access to the network. Being able to “see” what is happening on the network in real time is what can allow a company to defend itself. When a breach is in process, speed is essential.
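As a purely illustrative sketch of one small ingredient of that visibility, the TypeScript snippet below sweeps a hypothetical subnet for hosts answering on a given port – a crude way to surface “silent” nodes. Real asset discovery relies on dedicated tools, and you should only probe networks you own.

```typescript
// A minimal sketch of TCP-based host discovery. The subnet and port are
// invented for illustration; this is not a substitute for real asset
// discovery tooling.
import { Socket } from "node:net";

function probe(host: string, port: number, timeoutMs = 500): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = new Socket();
    const done = (alive: boolean) => { socket.destroy(); resolve(alive); };
    socket.setTimeout(timeoutMs);
    socket.once("connect", () => done(true));   // something is listening
    socket.once("timeout", () => done(false));  // no answer in time
    socket.once("error", () => done(false));    // refused or unreachable
    socket.connect(port, host);
  });
}

async function sweep() {
  for (let i = 1; i < 255; i++) {
    const host = `192.168.1.${i}`;  // hypothetical subnet
    if (await probe(host, 22)) {    // port 22 (SSH), as an example
      console.log(`Live host found: ${host}`);
    }
  }
}

sweep();
```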

See Your Data

Surprisingly, many companies are not fully aware of the data they collect, save and process – but this awareness is key to complying with data privacy laws. Companies need to know the following – and a data inventory, sketched after this list, is one way to record the answers:

  • What data does the company collect?
  • What data does the company need to collect?
  • How does the company collect data – directly from users, clients, and consumers, or through third parties?
  • Where does the company store its data?
  • How does the company use the data it collects – particularly personal information of individuals, including employees?
  • Who has access to the data?
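One practical way to capture those answers is a data inventory (sometimes called a data map). The sketch below is a minimal TypeScript model of a single inventory entry; the field names are our own invention, not drawn from any statute.

```typescript
// A minimal sketch of one data inventory ("data map") entry recording
// answers to the questions above. Field names are illustrative only.
interface DataInventoryEntry {
  dataCategory: string;             // what data is collected, e.g. "email address"
  necessary: boolean;               // does the company actually need it?
  source: "direct" | "third-party"; // collected from the user, or via others
  storageLocation: string;          // where it lives, e.g. "CRM (US-East)"
  purposes: string[];               // how the data is used
  containsPersonalInfo: boolean;    // flags privacy-law obligations
  accessibleBy: string[];           // who has access to it
}

// Example entry for a single data category:
const entry: DataInventoryEntry = {
  dataCategory: "customer email address",
  necessary: true,
  source: "direct",
  storageLocation: "CRM (US-East)",
  purposes: ["order confirmations", "marketing (opt-in)"],
  containsPersonalInfo: true,
  accessibleBy: ["sales", "support"],
};

console.log(entry);
```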

Continue reading