Data Governance Goes Agentic: Insights from DGIQ 2026

Articles
May 12, 2026
Alon Nafta

Last week at DGIQ, I gave a presentation titled “Data Governance Goes Agentic: Bringing Agents and Context into Data Governance”.

A lot of the conversations at the conference centered around data governance (obviously), AI readiness, agents, automation, and how quickly AI is changing the enterprise landscape.

But I kept going back to what I think is a key point: What happens when we apply agents to data governance itself? Not as a chatbot sitting on top of metadata. Not as another assistant generating summaries. I mean agents that can actually understand how enterprise data systems work.

Because governance teams are sitting on one of the hardest problems in AI right now: enterprise context. And realistically, most organizations still do not have enough of it.

AI is raising the bar for governance

For years, governance programs focused on documentation, metadata, ownership, observability, and data quality.

Those are still important. But AI (and for some industries, recent regulation) changes the expectations completely.

Now teams need to answer questions like:

  • Where did this data originate?
  • Which systems transformed it, and how?
  • Which models use it?
  • What breaks if this changes?
  • Is sensitive data flowing somewhere it should not?
  • Can we trust the output?

And they need those answers immediately. The problem is that enterprise data environments are deeply fragmented.

Business logic lives in source code. Transformations happen across multiple platforms. Critical dependencies exist in Python, Java, SQL, CRMs, BI tools, and legacy code.

Most governance systems only see a small slice of that reality. That becomes a serious issue when the organization lacks that visibility, and when AI starts making decisions based on incomplete context.
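Several of the questions above, particularly “what breaks if this changes?”, reduce to a downstream walk over a lineage graph. Here is a minimal sketch; every node name is hypothetical, and real lineage graphs are orders of magnitude larger:

```python
from collections import deque

# Hypothetical column-level lineage graph; node names are illustrative,
# not taken from any real system. Edges point downstream.
LINEAGE = {
    "crm.contacts.email": ["warehouse.dim_customer.email"],
    "warehouse.dim_customer.email": [
        "bi.churn_dashboard.email",
        "ml.churn_model.features",
    ],
}

def downstream_impact(column: str) -> set[str]:
    """Answer 'what breaks if this changes?' with a breadth-first walk."""
    seen, queue = set(), deque([column])
    while queue:
        for nxt in LINEAGE.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(downstream_impact("crm.contacts.email")))
# → ['bi.churn_dashboard.email', 'ml.churn_model.features', 'warehouse.dim_customer.email']
```

The hard part in practice is not the traversal, it is building a graph complete enough to trust, which is exactly the fragmentation problem described above.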

Metadata alone is not enough

One thing I discussed during the session is that governance agents need much more than metadata to operate effectively.

There are really two layers of context. The first layer is the traditional governance layer:

  • Metadata
  • Documentation
  • Tags
  • Schemas

The second layer is where things get interesting:

  • Source code
  • Column-level lineage
  • Query logs
  • Semantic models
  • Usage patterns
  • Business workflows

That second layer is what gives AI systems actual understanding. Without it, agents can sound intelligent while still missing critical dependencies, transformations, or downstream impact.
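One way to picture the two layers is as a single context object handed to an agent. This is purely an illustrative sketch; every field name here is hypothetical, not a real schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch only: all field names are hypothetical, meant to
# show the two layers of governance context side by side.
@dataclass
class AgentContext:
    # Layer 1: traditional governance
    metadata: dict = field(default_factory=dict)       # schema, tags, owner
    documentation: str = ""
    # Layer 2: operational and semantic context
    lineage_edges: list = field(default_factory=list)  # (upstream, downstream)
    query_log: list = field(default_factory=list)
    usage_patterns: dict = field(default_factory=dict)

    def has_ground_truth(self) -> bool:
        # With only layer 1, an agent can describe a table;
        # with layer 2, it can reason about dependencies and impact.
        return bool(self.lineage_edges) or bool(self.query_log)

ctx = AgentContext(metadata={"owner": "finance", "tags": ["pii"]})
print(ctx.has_ground_truth())  # → False: metadata alone is not enough
```

An agent handed only the first layer will answer confidently and still miss the dependencies that matter.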

This is why so many enterprise AI projects struggle with trust. The models are not necessarily the problem. The missing context is.

Why we focus so much on source code

We made an early decision that governance needed to start closer to where data is actually created and transformed.

That is why we analyze source code. Not because source code analysis is interesting on its own, but because some systems can only be understood by reading their code, because the way data flows between systems is typically defined in code, and because change management matters far more now than it used to. It is time to bring proper SDLC practices into data governance.

When you combine source code analysis with lineage and metadata, you get a much more accurate picture of how data flows across the enterprise. That changes what governance teams can do.
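As a toy illustration of what “lineage from source code” means (deliberately simplified, using only the standard library; real analysis parses far more than embedded SQL strings), consider scanning a Python ETL function for the tables it reads and writes:

```python
import ast
import re

# Toy example of code-derived lineage: scan Python source for embedded
# SQL and extract source/target tables. The ETL snippet is made up.
ETL_SOURCE = '''
def load_revenue(conn):
    rows = conn.execute("SELECT id, amount FROM raw.payments")
    conn.execute("INSERT INTO analytics.revenue VALUES (?, ?)", rows)
'''

def extract_edges(source: str) -> list[tuple[str, str]]:
    """Return (upstream, downstream) table pairs found in string constants."""
    reads, writes = [], []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Constant) and isinstance(node.value, str):
            reads += re.findall(r"FROM\s+([\w.]+)", node.value, re.I)
            writes += re.findall(r"INSERT INTO\s+([\w.]+)", node.value, re.I)
    return [(r, w) for r in reads for w in writes]

print(extract_edges(ETL_SOURCE))
# → [('raw.payments', 'analytics.revenue')]
```

A warehouse query log would never surface this edge if the pipeline runs outside the warehouse; the code is the only place it exists.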

You can:

  • Understand downstream impact before changes merge
  • Detect hidden dependencies
  • Improve trust in AI systems
  • Reduce production surprises
  • Govern across legacy and modern environments together
  • Give agents real organizational context

This becomes especially important for regulated industries, AI governance initiatives, and large enterprises operating across dozens or hundreds of systems.

Enterprise agents need ground truth

One of the ideas I kept emphasizing during the presentation is that agents are only as useful as the context they receive. If the context is partial, stale, or disconnected from reality, the outputs will be too.

This is where lineage becomes incredibly important. Not just surface-level lineage generated from warehouse logs, but deterministic lineage grounded in source code and operational systems.

That level of context allows agents to reason about:

  • Data movement
  • Policy enforcement
  • Change management
  • AI dependencies
  • Privacy risks
  • Business meaning

And importantly, it allows governance to move at engineering speed.

Governance at AI speed

Another theme that came up repeatedly at DGIQ was speed. AI is accelerating how quickly organizations build, change, and deploy systems. Governance cannot operate as a slow, after-the-fact process anymore. It has to integrate directly into engineering workflows, CI/CD, pull requests, and operational systems.
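To make “governance in the pull request” concrete, here is a hypothetical PR gate sketched in a few lines. The consumer map, the diff “parser”, and every name in it are made up; a real gate would sit on top of actual lineage and source code analysis:

```python
# Hypothetical pull-request gate, for illustration only: the consumer
# map, the diff "parser", and all names here are invented.
AI_CONSUMERS = {
    "ml.churn_model": ["warehouse.dim_customer.email"],
}

def columns_touched(diff_text: str) -> list[str]:
    # Stand-in for real source-code analysis of the diff.
    return ["warehouse.dim_customer.email"] if "dim_customer" in diff_text else []

def pr_gate(diff_text: str) -> str:
    """Block a merge when the change reaches a downstream AI consumer."""
    touched = set(columns_touched(diff_text))
    impacted = [model for model, cols in AI_CONSUMERS.items()
                if touched & set(cols)]
    return f"BLOCK: impacts {impacted}" if impacted else "PASS"

print(pr_gate("ALTER TABLE dim_customer DROP COLUMN email"))
# → BLOCK: impacts ['ml.churn_model']
print(pr_gate("ALTER TABLE orders ADD COLUMN note"))  # → PASS
```

The point is where the check runs: before the merge, inside the engineering workflow, not in a quarterly review after the fact.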

That is ultimately what I mean when I talk about agentic governance. Not replacing governance teams with AI. Giving governance teams systems that can reason, automate, validate, and operate using the full context of the enterprise.

Because the organizations that succeed with AI will not necessarily be the ones with the biggest models. They will be the ones with the clearest understanding of their own data.

