
How to Navigate AI Governance, ISO 42001 & New Regulations

Learn how to navigate AI governance, ISO 42001 compliance, and evolving state and global regulations to mitigate risks and build trust.

Post Summary

Artificial intelligence (AI) is transforming industries at a rapid pace, but with this evolution come complex governance and regulatory challenges. For professionals in healthcare and cybersecurity, the stakes are high - balancing innovation with compliance, risk management, and ethical implementation. In a recent discussion, Walter Haydock, founder of StackAware, shared his expertise on navigating these challenges, focusing on AI governance frameworks like ISO 42001, state and international regulations, and risk management strategies.

This article unpacks the key insights from the conversation and provides actionable recommendations for IT leaders, CISOs, and decision-makers in healthcare and cybersecurity.

Understanding ISO 42001: A Foundation for AI Governance

ISO 42001 is an internationally recognized standard designed to help organizations build robust AI management systems. It establishes a framework of policies, procedures, and technical controls to manage AI risks effectively. According to Haydock, ISO 42001 is divided into two main components:

1. Administrative Controls

These foundational elements focus on governance structures and include:

  • Establishing an AI policy to define risk appetite and ethical guidelines.
  • Conducting analyses of internal and external issues, including regulatory requirements and business incentives.
  • Implementing programs for measurement, monitoring, and audits.
  • Ensuring management review of key AI objectives and challenges.

2. Technical Controls

This second layer addresses the operational aspects of AI systems:

  • Evaluating data provenance and the preparation of datasets for AI models.
  • Setting up feedback mechanisms for whistleblowers and external parties.
  • Clarifying roles and responsibilities within the AI ecosystem, including vendors and stakeholders.

By adhering to ISO 42001, organizations can proactively identify and mitigate risks, showcase compliance through external audits, and build trust with customers.
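To make the two control layers concrete, here is a minimal self-assessment sketch. The control descriptions are paraphrased from the summary above - they are not the standard's official control names or numbering, and any real gap analysis should work from the ISO 42001 text itself.

```python
# Hypothetical ISO 42001-style self-assessment checklist.
# Control wording is paraphrased for illustration only.
ADMINISTRATIVE_CONTROLS = [
    "AI policy defining risk appetite and ethical guidelines",
    "Analysis of internal and external issues",
    "Measurement, monitoring, and audit program",
    "Management review of key AI objectives",
]

TECHNICAL_CONTROLS = [
    "Data provenance and dataset preparation review",
    "Feedback mechanisms for whistleblowers and external parties",
    "Roles and responsibilities across vendors and stakeholders",
]


def coverage(implemented: set[str]) -> dict[str, float]:
    """Return the fraction of each control layer already in place."""
    def fraction(controls: list[str]) -> float:
        return sum(c in implemented for c in controls) / len(controls)

    return {
        "administrative": fraction(ADMINISTRATIVE_CONTROLS),
        "technical": fraction(TECHNICAL_CONTROLS),
    }
```

A checklist like this only tracks readiness; demonstrating conformance still requires the external audit the article mentions.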

The Regulatory Landscape: From Colorado to the EU

Governments worldwide are grappling with how to regulate AI effectively. Haydock highlighted key differences between the approaches taken by the United States and Europe, along with emerging state-level regulations like Colorado's SB 205.

U.S. vs. European Approaches

In the United States, regulation is largely fragmented, with states leading the way in AI-specific legislation. For instance:

  • Colorado SB 205: This regulation mirrors the European Union's AI Act, requiring companies to categorize systems as high-risk or low-risk and implement controls accordingly. Notably, it references ISO 42001 and the NIST AI Risk Management Framework as benchmarks for compliance.
  • Other States: Utah has focused on regulating mental health chatbots, while Texas has opted for a lighter regulatory approach, banning only extreme AI use cases.

In contrast, the European Union's AI Act is more centralized but faces challenges in implementation. Its tiered framework for high-risk systems and harmonized standards introduces complexities, especially as key requirements and standards remain in development.

Haydock predicts the following trends:

  • "High-Watermark" Approach: Organizations operating across jurisdictions may adopt the most stringent regulations (e.g., GDPR) as a baseline to ensure compliance globally.
  • Selective Feature Disabling: Some companies may restrict AI functionalities in specific regions to avoid non-compliance, as seen with AI-powered hiring tools in New York City.
  • Uncertainty in Europe: With incomplete standards and tight deadlines, the EU may face delays or last-minute adjustments to align with established frameworks such as ISO 42001.
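The "high-watermark" and selective-disabling patterns can be sketched as a simple intersection over per-jurisdiction allowlists. The jurisdiction codes and feature names below are purely illustrative assumptions, not a statement of what any regulation actually permits.

```python
# Illustrative feature-gating sketch: each jurisdiction maps to the AI
# features a product may enable there. All names are hypothetical.
PERMITTED_FEATURES = {
    "US-CO": {"chat", "hiring_screen", "summarization"},
    "EU":    {"chat", "summarization"},
    "NYC":   {"chat", "summarization"},  # AI hiring tools restricted
}


def high_watermark(jurisdictions: list[str]) -> set[str]:
    """Enable only features permitted in every target jurisdiction -
    the strictest common denominator across all markets served."""
    return set.intersection(
        *(PERMITTED_FEATURES[j] for j in jurisdictions)
    )
```

Under this model, a product shipped to both Colorado and the EU would disable the hypothetical `hiring_screen` feature everywhere rather than maintain per-region variants.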

Building Resilient AI Governance Programs

For organizations looking to implement or enhance AI governance, Haydock emphasizes three essential steps:

1. Establish an AI Policy

Define the organization’s risk appetite, ethical guidelines, and acceptable use of AI. This policy serves as the foundation for all governance activities.

2. Conduct a Comprehensive Inventory

Identify all AI systems currently in use, including those adopted without formal approval (shadow AI). This inventory provides a clear view of the organization’s AI footprint.

3. Perform Risk Assessments

Evaluate each AI system against governance standards like ISO 42001. Identify risks and specify appropriate mitigation, transfer, or avoidance strategies.
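The inventory and risk-assessment steps above can be sketched as a small data model: record every system, flag shadow AI, and prioritize reviews accordingly. The field names and prioritization rule are assumptions made for illustration, not part of ISO 42001 or Haydock's methodology.

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    """Hypothetical inventory record for one AI system in use."""
    name: str
    vendor: str
    formally_approved: bool  # False marks "shadow AI"
    risk_level: str          # "high" or "low", echoing tiered regulation


def assessment_queue(inventory: list[AISystem]) -> list[str]:
    """Order systems for risk assessment: shadow AI first,
    then high-risk systems ahead of low-risk ones."""
    def priority(system: AISystem) -> tuple[bool, int]:
        return (system.formally_approved,
                0 if system.risk_level == "high" else 1)

    return [s.name for s in sorted(inventory, key=priority)]
```

Each system surfaced by the queue would then be evaluated against the chosen governance standard, with risks mitigated, transferred, or avoided as appropriate.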

Additionally, Haydock underscores the importance of transparency, particularly for new organizations. By sharing governance practices and AI system details, companies can build trust with stakeholders. However, for legacy organizations, a phased approach to transparency may be more practical.

The Role of Employees and Culture in AI Security

Creating a secure and compliant culture is critical for responsible AI adoption. Haydock advises organizations to:

  • Set Clear Expectations: Avoid vague guidelines like "use AI ethically" and instead define specific behaviors and restrictions.
  • Foster Accountability: Ensure employees understand their roles and responsibilities in managing AI risks.
  • Promote Transparency: Encourage open communication about AI systems, risks, and governance strategies.

Organizations that approach AI governance with rigidity - such as banning all AI tools - risk alienating employees and stifling innovation. Conversely, clarity and flexibility can empower teams to use AI responsibly.

Industry Dynamics: Healthcare Leading the Way

Among sectors, healthcare has emerged as a leader in AI governance due to its dual focus on confidentiality and compliance. Haydock highlights:

  • High Reward Potential: AI-driven efficiencies can address systemic issues in healthcare delivery, making responsible adoption particularly valuable.
  • Proactive Governance: Healthcare organizations are implementing governance frameworks to navigate strict U.S. regulations and mitigate risks.

Other sectors, such as manufacturing, are also advancing their AI governance efforts, driven by concerns over intellectual property protection.

The Evolving Role of the CISO in AI Governance

As AI adoption accelerates, the role of the CISO is evolving. While security teams are well-positioned to handle AI governance due to their expertise in risk management, Haydock suggests that AI governance may eventually require specialized roles, such as Chief AI Officers (CAIOs). These leaders would oversee AI strategy, ensuring alignment with policy, security, and business objectives.

In the medium term, CAIOs may become common in multinational and mid-market organizations. However, Haydock predicts that AI governance will eventually be integrated into broader organizational structures, similar to the evolution of the Chief Information Officer (CIO) role.

Key Takeaways

  • ISO 42001: Provides a structured framework for AI governance, covering administrative and technical controls.
  • Fragmented Regulation: U.S. states are leading AI legislation, while the EU faces challenges with its centralized framework.
  • High-Watermark Compliance: Organizations may adopt stringent regulations (e.g., GDPR) as a baseline for global compliance.
  • Inventory and Risk Assessment: Identify all AI systems, including shadow AI, and evaluate them against governance standards.
  • Transparency Matters: Proactively share governance practices to build trust, especially for new organizations.
  • Industry Insights: Healthcare leads AI governance due to high reward potential and strict compliance requirements.
  • CISO Evolution: Security teams should focus on risk management but avoid acting as sole decision-makers for AI deployment.
  • Future of AI Governance: Chief AI Officers may play a key role in the medium term but could eventually integrate into broader operations.

Final Thoughts

AI governance is no longer optional - it is a business imperative. For healthcare and cybersecurity leaders, understanding standards like ISO 42001, navigating regulatory complexities, and fostering a secure culture are critical steps. By taking a proactive approach, organizations can harness the transformative potential of AI while managing risks effectively. As Haydock aptly noted, success lies in balancing innovation with responsibility, ensuring organizations build their digital empires on solid ground, not sand.

Source: "Navigating the Maze of AI Governance: Insights on ISO 42001 and New Regulations with Walter Haydock" - Forcepoint, YouTube, Aug 26, 2025 - https://www.youtube.com/watch?v=As7mOhqO20k

Use: Embedded for reference. Brief quotes used for commentary/review.
