Much of the current focus on AI safety has centred on models – how they are trained and monitored. But as systems become more autonomous, attention is shifting toward the data those systems depend on. If the data feeding an AI system is fragmented, outdated, or lacks oversight, the system's behaviour can become more unpredictable.
Data governance is becoming a core part of how autonomous systems are controlled. Denodo is one of the companies working in this area, focusing on how organisations access and manage data across different sources.
Autonomous AI systems carry out tasks with limited supervision, retrieving information, making decisions based on that information, and triggering actions in business workflows. The challenge is that these systems depend on a steady flow of reliable data. In regulated industries, unpredictable results can create compliance risks. In customer-facing systems, they can lead to poor decisions or incorrect responses.
How data alters AI behaviour
Data is often spread across multiple systems. Large organisations store information in cloud platforms, internal databases, and third-party services. This creates silos, where different parts of the business operate on different versions of the same data.
Denodo addresses this problem by providing a way to access data without moving it into a single repository. Its platform creates a unified view of data from different sources for applications, including AI systems.
It allows organisations to apply consistent policies across all data sources. Access rules, compliance requirements, and use limits can be defined in one place. It also supports approaches that allow AI systems to query enterprise data using defined structures and policies.
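To make the idea concrete, here is a minimal sketch of a governed access layer: one policy definition enforced uniformly over several underlying sources, rather than per-source rules. All names (`Policy`, `GovernedLayer`, the source registrations) are illustrative assumptions, not Denodo's actual API.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    allowed_roles: set   # who may query at all
    masked_fields: set   # fields hidden from every caller

class GovernedLayer:
    """A single choke point through which all sources are queried."""

    def __init__(self, policy):
        self.policy = policy
        self.sources = {}  # source name -> callable returning rows

    def register(self, name, fetch):
        self.sources[name] = fetch

    def query(self, source, role):
        # The same access rule applies no matter which source is asked.
        if role not in self.policy.allowed_roles:
            raise PermissionError(f"role '{role}' may not query {source}")
        rows = self.sources[source]()
        # The same masking rule applies regardless of where the data lives.
        return [
            {k: ("***" if k in self.policy.masked_fields else v)
             for k, v in row.items()}
            for row in rows
        ]

policy = Policy(allowed_roles={"analyst"}, masked_fields={"email"})
layer = GovernedLayer(policy)
layer.register("crm", lambda: [{"name": "Ada", "email": "ada@example.com"}])
layer.register("billing", lambda: [{"name": "Ada", "email": "ada@example.com"}])

print(layer.query("crm", "analyst"))
```

Because policy lives in one object rather than in each source, changing a rule changes it everywhere at once, which is the core of the "defined in one place" claim above.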
The platform logs how data is queried and what is returned, creating an audit trail. This can help organisations understand how an AI system reached a decision and support compliance requirements. It can also help teams monitor data use in real time and identify unusual activity.
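An audit trail of this kind can be sketched in a few lines: every query is recorded with who asked, what was asked, and what came back, so a decision can later be traced to the data behind it. This is a generic illustration of the pattern, not the platform's own logging format.

```python
import datetime

class AuditedStore:
    """A data store that records every read in an append-only log."""

    def __init__(self, data):
        self.data = data      # key -> value
        self.audit_log = []   # append-only record of access

    def query(self, caller, key):
        value = self.data.get(key)
        self.audit_log.append({
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "caller": caller,
            "key": key,
            "returned": value,
        })
        return value

store = AuditedStore({"credit_limit:ada": 5000})
store.query("loan-agent-7", "credit_limit:ada")

# The log now shows which system read the figure and what it saw.
for entry in store.audit_log:
    print(entry["caller"], entry["key"], entry["returned"])
```

Scanning such a log for callers or keys that deviate from normal patterns is one simple way teams can monitor data use and flag unusual activity.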
If multiple AI systems rely on the same governed data layer, they are more likely to produce aligned results, which can help reduce the risk of conflicting outputs across different parts of the business.
Governance in the stack
As autonomous AI systems become more common, governance is being applied at several levels. Data governance, which sits underneath models and applications, helps ensure that the inputs to those systems are reliable. Even a well-governed model can produce poor results if it relies on flawed data. Strong data governance can support better outcomes even when systems operate with some degree of independence.
This is why data-focused companies are becoming part of the broader AI governance conversation. By controlling how data is accessed and used, they help shape how autonomous systems behave in practice.

