Micaela McMurrough’s commentary was included in a Cybersecurity Law Report article offering practical guidance on the AI risk assessment process, including whom to involve, when to conduct an assessment, how to identify key risks, and how to use the results to help mitigate those risks.
Micaela told Cybersecurity Law Report that “the adoption rate is unlike anything we’ve seen in the past. It is great to see that companies are leaning into [AI tools] and adapting as the technology evolves.” “The goal is to benefit from the upside of these technologies while minimizing the downside risk,” she added.
“There can be levels to AI risk assessment,” Micaela noted. A broader AI risk assessment can guide the process “across the enterprise, and within that framework, specific use cases may warrant individual AI risk assessments.” For example, “there can be an assessment of risk associated with the specific use of a particular AI tool, or an assessment of an organization’s use of AI more broadly,” she explained.
No matter how the term “AI risk assessment” is defined, it differs from more traditional risk assessments. “Traditional technology, cyber and privacy risk assessments are often performed as gap assessments against well-known standards – organizations are assessing whether there are gaps in programs or processes against known benchmarks and existing laws, regulations and guidelines, such as NIST’s Cybersecurity Framework or the New York Department of Financial Services cybersecurity regulation,” explained Micaela. In contrast, AI risk assessments are currently being conducted in somewhat uncharted territory. “There are fewer firm guidelines or laws regarding what is expected,” Micaela added.
A company’s business model, industry, jurisdiction and risk tolerance all impact how it defines and manages AI risk. As a starting point, companies should “figure out what legal frameworks apply in the jurisdictions where they operate and determine what their own risk tolerance is with respect to AI,” suggested Micaela. “From that starting point, they can design appropriate frameworks and processes to identify and manage risk,” she added.
An organization’s reliance “on inaccurate information generated by AI” can give rise to legal risk, Micaela also cautioned. Moreover, corporate use of AI can lead to “potential issues related to legal privilege, preservation obligations or confidentiality concerns,” she said. “Companies should consider how their management of AI risk fits within their broader enterprise risk management program,” according to Micaela. That includes classifying high-risk versus low-risk use cases and clarifying responsibility for managing AI-related risk, she continued.