AI Governance in Practice: A New Year's Resolution for Boards on Agentic AI
By: Fred Kneip, CEO
A new year is a useful forcing function for boards: reset expectations, tighten oversight, and turn vague “we’re looking into it” updates into clear accountability. That matters now because AI risk won’t wait for governance to catch up—especially as agentic AI moves from experimentation into business workflows that can take actions, delegate tasks, and trigger downstream systems.
Boards don’t need to become technical operators. But they do set the standard for risk tolerance, control ownership, and what “good governance” looks like in practice. If oversight is treated as optional, material decisions about access, autonomy, and failure handling will be made by default—inside product teams, business units, or vendors—without consistent guardrails.
Agentic AI changes the oversight problem in two important ways
Risk becomes system-level. It’s no longer just “what model did we use?” but “what can this agent reach, what can it do, and what happens when it behaves unexpectedly?”
Controls must be continuous. Point-in-time reviews don’t hold up when models drift, tools change, permissions expand, and new integrations get spun up quickly.
So, here’s a practical board-level resolution for 2026: require AI governance that is operational, measurable, and evidenced in production, not just policy statements or slideware. That resolution becomes real when it is anchored in the three questions below, asked consistently until the organization can answer them with confidence and proof.
Three questions every board member should be asking in 2026
Where is AI running today, and which business processes depend on it?
Boards should expect a complete inventory that includes both customer-facing and internal use, along with a view of critical dependencies.
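To make that expectation concrete, an inventory can be as simple as one record per AI system noting the accountable owner, its level of autonomy, and the business processes and downstream systems that depend on it. The sketch below is illustrative only; the field names and flagging rule are assumptions, not a prescribed standard.

from dataclasses import dataclass, field

# Illustrative inventory record; field names are assumptions, not a standard.
@dataclass
class AISystemRecord:
    name: str
    owner: str                    # accountable business owner
    customer_facing: bool
    autonomy: str                 # e.g. "advisory" or "agentic"
    depends_on: list = field(default_factory=list)        # systems it can reach
    business_processes: list = field(default_factory=list)

def needs_board_attention(rec: AISystemRecord) -> bool:
    """Flag records that typically warrant closer oversight:
    no named owner, or an agentic system in a customer-facing process."""
    return not rec.owner or (rec.autonomy == "agentic" and rec.customer_facing)

inventory = [
    AISystemRecord("support-triage-agent", owner="CX Ops", customer_facing=True,
                   autonomy="agentic", depends_on=["CRM", "ticketing"],
                   business_processes=["customer support"]),
    AISystemRecord("contract-summarizer", owner="", customer_facing=False,
                   autonomy="advisory", business_processes=["legal review"]),
]

for rec in inventory:
    if needs_board_attention(rec):
        print(f"Review: {rec.name} (owner={rec.owner or 'UNASSIGNED'})")

Even a minimal structure like this gives the board something it can ask to see refreshed, rather than a one-time slide.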
What governance exists at the AI interaction layer?
If the organization is deploying agents or using Model Context Protocol (MCP) tools, boards should ask how connections are discovered, monitored, and governed. The question is not only what AI can access, but what it can trigger through workflows.
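At a working level, “governed connections” can start with reconciling the tool connections agents actually use against an approved register. The sketch below assumes a hypothetical JSON export of observed connections; it does not use any real MCP SDK, and the agent and tool names are made up for illustration.

import json

# Hypothetical export of connections observed at the AI interaction layer.
# The format and field names are assumptions for illustration only.
observed = json.loads("""
[
  {"agent": "support-triage-agent", "tool": "crm.update_record", "scope": "write"},
  {"agent": "support-triage-agent", "tool": "payments.refund",   "scope": "write"},
  {"agent": "research-assistant",   "tool": "web.search",        "scope": "read"}
]
""")

# Register of approved (agent, tool, scope) combinations, owned by governance.
approved = {
    ("support-triage-agent", "crm.update_record", "write"),
    ("research-assistant", "web.search", "read"),
}

# Anything observed but not approved is an ungoverned connection worth escalating.
ungoverned = [c for c in observed
              if (c["agent"], c["tool"], c["scope"]) not in approved]

for c in ungoverned:
    print(f"Ungoverned: {c['agent']} -> {c['tool']} ({c['scope']})")

The design point is less the code than the discipline: the organization should be able to show, on demand, which agent-to-tool connections exist and which were never approved.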
How do we validate that controls work in production?
Boards should expect evidence of continuous monitoring, policy enforcement, and testing that is designed for real-world failure modes such as drift, manipulation, and unintended actions.
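As one concrete example of evidence in production, a drift check can compare a live metric against an agreed baseline and escalate when it moves beyond tolerance. The metric and thresholds below are placeholders; in practice this would be wired into the organization’s existing observability and incident processes.

# Minimal sketch of a production control check: alert when a monitored metric
# (here, the rate of agent actions overridden by humans) drifts beyond an
# agreed tolerance from its baseline. All numbers are placeholders.

BASELINE_OVERRIDE_RATE = 0.02   # rate agreed during validation
TOLERANCE = 0.03                # drift beyond this triggers escalation

def check_drift(overridden: int, total_actions: int) -> bool:
    """Return True if the observed override rate drifts past tolerance."""
    if total_actions == 0:
        return False
    observed = overridden / total_actions
    return abs(observed - BASELINE_OVERRIDE_RATE) > TOLERANCE

# Example: last 24 hours of agent activity (illustrative figures).
if check_drift(overridden=41, total_actions=500):
    print("ALERT: override rate drifted beyond tolerance; trigger review.")

What the board reviews is not the check itself but its output over time: alerts raised, how they were handled, and whether the baseline still reflects acceptable risk.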
The bottom line
The board does not manage AI risk directly, but it owns the consequences of weak AI governance. In 2026, oversight will be measured by what the board required, what it reviewed, and what it could prove.