Member News

Why Proactively Managing AI Risk and Building Trust Is A C-Suite Issue


09 July 2025

ServiceNow’s Anna Mazzone on Why Proactively Managing AI Risk and Building Trust Is A C-Suite Issue

 

When it comes to risk, the stakes are strikingly higher than just a few years ago. As AI becomes a core part of business operations, leaders are under pressure to move fast whilst remaining compliant, secure, and in control. Regulations like the EU’s Digital Operational Resilience Act (DORA) are already changing the rules. At the same time, organisations are shifting to fully digital models and facing a surge in cyber-attacks. With generative and agentic AI now in use, managing risk and building trust has never been more urgent. 

Put simply, AI risk has outgrown the CIO’s remit; it is now a boardroom issue.

With AI embedded across a business, risks around data privacy, trust, security, and compliance touch every corner of the organisation. According to Forrester, AI risk and data privacy now rank as the second-highest enterprise risk. Yet managing them is far from straightforward: 29% of employees cite a lack of trust in AI systems as the biggest barrier to adoption. Therefore, the C-suite must lead from the front—building trust, engaging teams, and tackling resistance head-on. 

 

Navigating AI regulations and governance  

As AI regulations emerge across regions, organisations must not only comply but also turn to AI tools themselves to help manage this evolving governance landscape. It's a circular advantage: the right technology can help businesses stay ahead of the very systems that govern it. AI governance is becoming increasingly critical, meaning effective change management will be essential to help employees embrace the technology.  

Business leaders must also not lose sight of third-party risks, which are often more complex when AI is involved. Just as importantly, they need to ensure AI use is aligned with the organisation’s values, ethics, and strategic objectives. 

A clear governance structure is key. There should be a well-defined owner of AI governance within the organisation. This role can sit with legal, compliance, the chief data officer, or a Chief AI Officer. As third-party AI tools are introduced, this team takes responsibility for implementing consistent frameworks to assess and manage the associated risks and regulatory compliance.

 

Trust starts with transparency and data 

Trust is fundamental to successful AI adoption. Clear communication with both customers and consumers about how AI is being used helps drive acceptance and confidence in the technology. Transparency should be at the heart of every AI initiative. 

A cautious, phased approach is wise, starting with low-risk use cases and expanding as the organisation builds internal expertise and stakeholder trust. Regulatory compliance, such as with the EU’s AI Act, should be seen not just as an obligation, but as a trust-building opportunity. 

Crucially, building trust starts with the data itself. Leaders should seek to address the challenges of disparate systems and prioritise establishing a unified data taxonomy. Strong data quality, visibility and sound practices are the foundation of reliable, ethical, and explainable AI. 

 

An enterprise view 

Centralised software platforms are becoming increasingly important for gathering a real-time enterprise view of these interrelated risks. Unified software platforms offer an enterprise-level view of operational risk postures across the key assets needed to run an organisation, namely people, technology, facilities, third parties, and data. Real-time software platforms also offer the capacity to manage risks in an unobtrusive way. 

Controls can be embedded in workflows, so that employees mitigate risk without even realising it. To the employees concerned, they are simply changing a password or completing a training module, all in response to controls within the platform. Strong, continuous employee training and AI literacy programmes will be fundamental to implementing AI safely and legally.

End-to-end software platforms can also help to manage AI models, particularly in regulated industries. For example, in financial services, there is a significant amount of regulation around AI models, with models requiring regulatory sign-off before they can be put into production. With an end-to-end software platform, models can be managed within the platform, ensuring they align with policies, remain within defined risk tolerance levels, and meet required regulatory standards. 

 

A proactive approach 

In the past, the C-suite may have focused on operational resilience, reacting and responding to adverse conditions. Given today’s changing demands, however, there is a need to shift towards a new kind of resilience: proactive resilience. This involves a predictive management environment in which organisations aim to ‘see around corners’ to anticipate and mitigate risks, including those related to AI. This is why integrating governance tools and data into workflows is becoming increasingly important.

Threats can take many forms. Some are straightforward, like expiring software licences, which can halt a critical service. Others are far less predictable, such as the CrowdStrike incident, where a third-party software update caused widespread disruption globally. In the past, predicting such threats was challenging due to siloed systems and the difficulty of gaining an organisation-wide view. Transitioning to integrated software platforms allows operational and technology leadership to understand the full picture and take this proactive approach; for instance, it becomes clear which team member is responsible for maintaining and repairing specific systems. This visibility is critical for effective risk management and ensuring continuous delivery of services.

 

Mastering risk 

For the C-suite, mastering AI risk is imperative, as is driving the adoption of AI to improve productivity in serving clients. AI is already embedded across many of today’s operations, and business leaders must now prioritise building trust in the technology, ensuring its use aligns with organisational values, and proactively managing emerging risks.

Complying with regulations in advance presents a valuable opportunity. It allows businesses to stay ahead of legal requirements, reinforce stakeholder trust, strengthen governance, and future-proof operations. Integrated software platforms are essential enablers, providing a comprehensive, real-time view of risk and resilience across the enterprise. 

Ultimately, the C-suite must lead the way by embedding AI governance, championing transparency, and investing in the tools and processes needed to support a safe, scalable, and trustworthy AI future. 

Anna Mazzone is Vice President - EMEA - Operational Resilience, Risk and Security Business Strategy Leader at ServiceNow. 

 

About Anna Mazzone 

Anna Mazzone is responsible for the business growth strategy of operational resilience, risk, security and AI Governance businesses at ServiceNow, a global Silicon Valley technology company. Anna joined ServiceNow in 2020 to launch the company's Risk business which has grown to be #2 in EMEA, with over 500 clients and a team of 60+. 

About ServiceNow  

ServiceNow (NYSE: NOW) is putting AI to work for people. We move with the pace of innovation to help customers transform organizations across every industry while upholding a trustworthy, human centered approach to deploying our products and services at scale. Our AI platform for business transformation connects people, processes, data, and devices to increase productivity and maximize business outcomes. For more information, visit: www.servicenow.com.    
