Yaron Dori was quoted in a Cybersecurity Law Report article about the AI governance priorities experts recommend companies adopt in 2026 as they work to manage increasingly complex AI-related risks and organizational pressures.
Yaron said, “Companies began to license AI tools in earnest in 2025. They have made these tools available to a wider range of employees.”
To achieve trustworthy AI, companies will need employees to follow policies, which is no easy task. “Monitoring for compliance across an entire employee base – not to mention across key business partners – is difficult,” Yaron observed. Lessons from the past decade of data security have shown how difficult it is to monitor at scale, he said. “Companies that have made meaningful investments in AI tools and are loosening restrictions in their eagerness to demonstrate that those investments are worthwhile” will have a particularly challenging time ensuring employees comply with AI governance policies, he warned.
With some employees wary of AI adoption, companies must engage their workforce directly to foster trustworthy AI. “Collaboration and training will be key,” Yaron predicted. “Sharing stories of successes – and failures – will be important” for strategic reasons, because companies have yet to find many impactful uses of Gen AI, he explained. At the same time, “knowledge sharing and collaboration can be used to emphasize governance, security and trust standards, thereby injecting a training element into everyday uses” of AI, he recommended.