21 Oct 2025
by Paul Latham, Gallagher

The results from the 2025 Gallagher global benchmarking study, Attitudes to AI Adoption and Risk, highlighted that one of the key organisational risks hampering AI adoption is the ethical impact of AI usage. With Gen AI becoming more than a buzzword, that concern is also felt by public sector leaders.

A recent survey of UK civil servants working in finance, transformation and digital roles found that a vast majority were using tools like ChatGPT and Gemini in their day-to-day work.

This appetite, while encouraging, comes with a catch: nearly 4 in 10 have encountered AI-driven errors, and about a third feel their organisations do not give them clear guidance on how to use these tools.

Evolving risk perceptions of AI

The ways public sector organisations use AI often reflect their unique workplace realities. The potential is huge: 70% of UK public servants endorse Gen AI’s transformative capability in improving policy development, and a majority of organisations view AI as a tool to infuse the human touch into public administration.

Research also suggests that up to 41% of non-frontline routine tasks could be automated, opening the door to faster, more efficient public services. 

With regulations such as the EU AI Act setting the pace for responsible adoption (and likely influencing frameworks in the UK), the ethical stakes for public sector bodies are growing.

AI ethics in the public sector

When it comes to AI adoption in the public sector, the top challenges that limit oversight, accountability and trust are:

  • A lack of algorithmic transparency in decision-making.
  • A persistent shortage of digital and data skills.
  • AI bias, a phenomenon that arises when discriminatory data is baked into AI algorithms and then amplified at scale.

Failing to build control systems early in design and implementation processes could leave your organisation with technical debt: prioritising speed over quality could mean much higher rectification costs down the road. Its counterpart is ethical debt, where ethical considerations that took a back seat to quicker implementation come back to haunt you.

Angela Isom, global chief privacy officer at Gallagher, recognises the challenges of managing AI compliance in such a black-box environment, where system decisions are often opaque and difficult to interpret. To start with, she says: “a structured AI governance framework is necessary, one that remains agile and responsive to the fast-evolving technological landscape”.

AI governance framework checklist

  • Are your standard operating procedures updated to reflect AI integration?
  • Is there governance for AI-driven decision-making?
  • Have your employees been trained in AI processes, use cases, and limitations, and is there evidence to support it?
  • Do you have contingency plans if the AI system goes offline?
  • Is there a maintained inventory of AI use cases with ongoing testing and validation controls?

However, if organisations want to oversee this area effectively, they must collaborate with business leaders. Senior executives, such as dedicated AI ethics officers, must be held accountable for identifying and escalating AI-related risks, and for shaping the policies that govern AI’s use.

When establishing an AI risk oversight role, embedding it within existing governance structures helps integrate risk management into daily operations. As Isom points out: “where standalone AI committees have been established, they are typically led by a chief data officer skilled in managing data across the organisation.”

AI adoption — ethical considerations for leaders

When bringing AI into the public sector, organisations face a deeper layer of ethical challenges. These go beyond technology, touching on trust, fairness and the public good.

Key AI governance framework considerations to keep AI adoption ethically aligned include:

  • Ethical accountability and responsible use: to serve the public good, public sector AI must be tightly governed to stay within its intended scope and avoid mission creep, the gradual expansion into unrelated or intrusive areas.
  • Data privacy, security and lawful access: use of sensitive personal data for AI must be justifiable, with appropriate safeguards to comply with data protection laws. When publicly sourced data is used, explicit consent and anonymisation may be mandatory.
  • Transparency and usage: to build trust, citizens need transparency about when and how AI is used in decisions that affect them.
  • Impact on jobs and the workforce: AI adoption may displace or change certain roles. Employees must be reassured of upskilling and reskilling pathways and opportunities for internal mobility.
  • Bias, fairness and model drift: AI models must be continuously monitored for biases, and public agencies must guard against model drift, where AI behaviour shifts over time. Ongoing data revalidation is needed.

Building trust across the enterprise

As AI risks gain visibility, it's vital to address ethical concerns in ways that forge trust with both stakeholders and employees.

Yet, when asked if senior leaders fully understand AI, most UK public sector respondents signalled a lack of confidence. Compounding the uncertainty, most respondents lacked clarity on their organisation’s process for evaluating and procuring AI tools. In fact, only a small fraction agreed one existed in the first place.

“Part of the AI ethics officer's role is to ensure that AI initiatives align with the company's ethical standards and societal values, fostering a culture of ethical awareness throughout the company,” notes Isom. 

Just as crucially, this clear ethical leadership shows a commitment to responsible innovation, which makes an organisation more attractive and trustworthy to talent.

Work with AI for ethical accountability

AI holds great promise, including more efficient services, smarter policies and better use of public resources. But these benefits require strong ethical guardrails and shared responsibility to prevent misuse.

A dedicated AI ethics officer, working alongside leaders, risk managers and citizens, can help balance innovation with accountability. AI-driven digital tools should also challenge assumptions, guide users constructively and reinforce, rather than replace, human judgement.
