Who is Responsible for AI Governance in Organisations?
The use of AI in organisations is growing rapidly, supporting everything from customer service and data analysis to process automation and decision-making. However, many organisations are getting swept up in the hype, adopting AI technologies without a clear strategy, defined ownership, or proper governance framework in place.
Jon Eller from Concentric AI spoke at InfoSecurity Europe last week. In his talk, ‘Modern Data Security in the Generative AI Era’, he noted that AI is often rushed into organisational processes simply because the board has decided it must be adopted. However, this frequently happens without a clear understanding of why it’s being implemented or what specific purpose it is intended to serve.
When organisations rush to implement AI without a clear strategy or defined roadmap, it becomes difficult to establish the necessary structures, responsibilities, and governance around it. This lack of clarity often leads to confusion over accountability, security gaps, and missed opportunities. When the purpose of adopting AI is well understood and a strategic plan is in place, it becomes much easier to assign ownership and clarify who is responsible for each aspect, from technical implementation to ethical oversight and regulatory compliance.
Without a clear strategy, one critical gap is often overlooked: who is responsible for governing AI?
In his talk ‘Securing the Future: AI Security, Governance, and the Rapidly Evolving Landscape,’ Lee Munson, Principal Research Analyst at the Information Security Forum, challenged the audience to consider whether they had dedicated AI policies in place, comparable to standard business policies such as those for wellbeing or anti-bribery. About half the audience raised their hands. He then asked how many of those policies were written specifically for AI, rather than adapted from existing documents. Only two hands went up.
Munson emphasised that the first step in establishing effective AI governance is to develop dedicated AI policies tailored to the unique challenges and risks AI presents. These policies should go beyond generic language and clearly outline acceptable use, data handling, accountability, and review processes.
As AI continues to transform the way organisations operate, it’s essential that its adoption is guided by purpose, not pressure. Jumping on the AI bandwagon without a clear strategy, ownership structure, or governance framework risks creating more problems than it solves. Leaders must take a step back and ask the right questions: Why are we implementing AI? What do we want it to achieve? Who is accountable for its impact?
Creating dedicated AI policies, crafted with intention rather than copied from existing frameworks, is a crucial first step toward responsible, ethical, and sustainable use of AI. Only then can organisations harness its full potential while effectively managing the risks it brings.