AI Reveals Why BI Still Matters (Hint: It’s Not Dashboards)

Ask a BI engineer what they actually spend their time on: it's not building dashboards. More often, it's fixing the join that broke in the overnight pipeline, untangling the metric definition that means three different things to three different teams, or getting last week's numbers into an Excel file by Monday morning. The dashboard was always the easy part.
This article looks at how BI evolved, how dashboards are actually used today, and what survives when AI enters the picture — starting with the foundation that was never really about dashboards in the first place, and ending with the problem nobody in the AI hype cycle wants to talk about: who maintains it all.
The verdict of people in the field: BI is dead (again)
We've heard it all before: business intelligence (BI), and especially dashboards, are dead. Yet every time, we rediscover their power whenever grounded data analysis is needed, in enterprises and startups alike. It's the same way Excel never dies, and Excel is arguably still the most-used BI tool.
If we look at what others from the data world say, it does sound similar. Hex says that dashboards were never the destination and specifies:
Static reporting surfaces were always a workaround for messy data and limited tooling — agentic analytics finally closes the gap between visibility and genuine insight.
In other words, dashboards were a workaround created because data was messy, tooling was restrictive, and asking open-ended questions of the warehouse wasn’t possible without coding. Hex continues that dashboards create more questions than they answer.
But they also say: respect dashboards, but stop treating them as the goal:
Dashboards still matter. They’re excellent for reporting KPIs, surfacing operational signals, and aligning leaders around shared metrics. They allow data teams to be creative in how they display these metrics and will remain a primary surface for need-to-know numbers.
Dashboards don't reason or explain why something happens, as the Hex article says. Hex argues that reliable, timely, context-aware answers are the destination.
Mike, CEO of Rill, says:
Agents are blind—they can’t see dashboards. But they do need access to the primitives behind them. AI, and agentic working won’t kill BI, but it will make dimensional modeling, metrics, OLAP cubes, query performance, and governance more important than pretty charts. Agents will reveal what many of us have known for a long time: BI was never about dashboards.
Benn Stancil was already declaring BI dead back in 2021, drawing parallels to Salesforce's "End of Software" declaration in 2000. This is striking, as it's almost the same statement we hear today about AI: no more software developers needed. Benn argued that the original BI stack was a single tool spanning all layers, but that by 2021, tools had unbundled into dedicated, specialized tools for each layer.
The proposed future of BI should focus only on data consumption for humans, integrating both self-serve applications and deep ad-hoc analysis. This new BI would be "legless" (the opposite of headless, he argued back then), relying on global governance layers rather than proprietary semantic layers, and fostering cross-functional collaboration.
The aim is a universal tool for all data consumption, moving beyond its current diminished state of "visualization and reporting". Again, very similar to today's discussion, where everything is about semantics and context. Benn concluded that "BI is dead, long live BI".
Take any modern data and analytics discipline, and you’ll probably find it has its roots in the work that has been historically carried out by Business Intelligence developers, the OG jack-of-all-trades of the data industry.
Find more Opinions and Articles
Other opinions around the web on whether BI is dead: Long live business intelligence., a Reddit discussion asking Will Business Intelligence skills (BI) be irrelevant in like 3-4 years? : r/analytics, or Will AI kill BI?.
BI was never just about dashboards
With respect to Benn's article in 2021, have we come full circle five years later, with the end of software engineering again, and everyone demanding semantics and metrics layers?
At least on the surface, it seems we are at the same point, but today we're going back to one "Original BI" stack as drawn in the image in the article, to a fully encompassed data platform. Maybe it does not need to be one single platform, but at least to the user it needs to be a single chat or AI interface, that goes end-to-end through all the layers of discovery, visualization, transformation, storage, and ingestion—or in other words, the full data engineering lifecycle.
So what about dashboards, then? People declare the death of many things, and dashboards are a popular target too. That's even more true now that AI, with its generative capabilities, can one-shot your whole dashboard or create a full-blown web app or custom HTML page. And dashboards might die, as many people don't actually want the dashboard itself, but the extracts from your large SAP system, linked to the right customers from the CRM, enhancing the decision-making process even more.
They want the insights from the combination of all source data and business insights that your company has over its competitor, for example. It's the primitives behind the dashboards that matter more.
When you still need a dashboard (AI chat is not enough)
Even though a chat interface or an agent can provide you with dashboard information tailored to your question and in an explanatory written form, there's still a need for dashboards in certain situations.
The obvious one is the well-crafted operational dashboard, where you can see your whole company performing in a split second by looking at a single, highly dense dashboard with individual charts and visualizations tailored to convey information about each sub-area in the best possible way. It's the same way a map is still needed in self-driving cars: for quick verification, to get an overview, or in case the car gets lost.

The other obvious benefit of the operational dashboard, or any dashboard, is that people can agree on numbers, because they're looking at the same agreed-upon dashboard. The calculations are the same, unlike individual Excel sheets where everyone computes their own.
The easy creation of multiple dashboards on the fly or chatting with AI to get insights resembles the old way of using local Excel files. Everyone is doing their own thing, with no alignment, governance, or broader verification.
Maps are a different type of dashboard that can't be replaced: geospatial data shown on a map still beats text. The same goes for a bar chart, where you immediately scan the proportions of stock inventory across regions, instead of first parsing all the numbers in a text-only chat reply.
Dashboards as a sanity check
There are also less obvious reasons: the serendipitous discovery of anomalies and outliers that accidentally pop up in dashboards, hard to see in chats but easy to spot visually. The same goes for ad-hoc BI when drilling down into finer grain and more dimensions for self-service users. A pivot table is a REPL for BI; that's not possible with chats.
Dashboards are also a lifeline to quickly check whether the AI is hallucinating. What about determinism? Chat responses are not deterministic: you might get different responses, hopefully with the same correct answer, but visualized differently because the model made different decisions the second time, or for a different user. AI agents are non-deterministic statistical models and probably always will be, so we need to bring more context and definitions to make outputs more consistent. One way is a spec-driven development (SDD) approach that helps define the expected output more precisely each time.
The same applies to stewardship, reviewing and checking outcomes. Consider self-driving cars such as Waymo's: you don't need a map while the car drives itself, but if it's wrong or stuck, a map is the first thing you'd reach for.
But what does that mean for the future of BI, and what keeps BI afloat?
The primitives of a dashboard (and BI)
BI was and is never (only) about dashboards. BI serves a combination of purposes. One of the most important is the metrics themselves, the business KPIs: defining your profit, what has to be deducted, how much goes to taxes and salaries. All of it is ingrained in a single metric, or better, in the full hierarchy, the whole tree of metrics, as it's never just one metric.
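As a concrete illustration, such a metric tree can be sketched as a small recursive structure. This is a toy model with made-up names and numbers, not any real metrics-layer schema:

```python
from __future__ import annotations

from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Metric:
    """A node in a metric tree: either a leaf fact or a rollup over children."""
    name: str
    value: Optional[float] = None          # leaf metrics carry a number
    children: List[Metric] = field(default_factory=list)
    combine: str = "sum"                   # how children roll up: "sum" or "diff"

    def evaluate(self) -> float:
        if not self.children:
            return self.value or 0.0
        vals = [c.evaluate() for c in self.children]
        if self.combine == "diff":         # first child minus all the others
            return vals[0] - sum(vals[1:])
        return sum(vals)


# profit = revenue - (taxes + salaries), each piece defined once, reused everywhere
profit = Metric("profit", combine="diff", children=[
    Metric("revenue", 120_000.0),
    Metric("taxes", 25_000.0),
    Metric("salaries", 60_000.0),
])
print(profit.evaluate())  # 35000.0
```

The point of the tree shape is that "profit" is never redefined in a dashboard or a chat prompt; it is derived from the same governed children every time.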
But the metrics tree with all its hierarchy is not everything. We also need speed to crunch the data extracted from source systems like SAP and the CRM, cleansed, joined, and aggregated to the exact grain of the metric. No one has a single place to view their data end-to-end, and that's what makes BI work so necessary: integrating all sources into one API or database/warehouse for the business (or now, an AI chat) to pull from. Again, queries over CSV are easy. Fast queries over 1 TB of Parquet are hard.
This is where we need optimal data modeling and compute power to do it in seconds, best case sub-second. And when we have this, we need a good data architecture with the right tool to compress and compute that data. That's where we need some kind of OLAP or analytical database, optimized for analytical queries and doing aggregation on the fly based on different dimensions and grains.
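A minimal sketch of what "aggregation on the fly based on different dimensions and grains" means in practice, using Python's built-in sqlite3 as a toy stand-in for a real OLAP engine; an analytical database (DuckDB, ClickHouse, and the like) would run the same kind of query over columnar storage like Parquet, which is what makes it fast at scale. Table and column names here are invented:

```python
import sqlite3

# Toy stand-in for an OLAP engine: group-by aggregation to a metric's grain.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("EU", "A", 10.0), ("EU", "B", 5.0), ("US", "A", 7.5), ("US", "A", 2.5)],
)

# Aggregate on the fly to the (region) grain: the dashboard, the chat agent,
# and the Excel export can all pull from this one query instead of each
# recomputing the number their own way.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('EU', 15.0), ('US', 10.0)]
```

Swap the GROUP BY columns and you get a different grain from the same primitives, which is exactly the drill-down behavior a pivot table or a self-service dashboard exposes.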
All of these primitives are still needed, maybe even more so in times of generative AI, even if dashboards were to go away.
Lastly, all of this sits under the umbrella of context and semantics: encoding business processes into data artefacts to create reusable, governed definitions of metrics like revenue, MAU (Monthly Active Users), ROAS (Return on Ad Spend), and so on. The right medium for that is a metrics layer (or semantic layer).
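A hypothetical sketch of what such a governed metric definition could look like. The schema, SQL, and names are illustrative and do not follow any particular semantic-layer product:

```python
# A minimal, hypothetical metrics-layer registry: the business definition lives
# in one governed place instead of being re-derived in every dashboard or chat.
METRICS = {
    "mau": {
        "description": "Monthly Active Users: distinct users with >=1 event in the month",
        "sql": "SELECT COUNT(DISTINCT user_id) FROM events WHERE event_month = :month",
        "owner": "analytics-team",
    },
    "roas": {
        "description": "Return on Ad Spend: attributed revenue / ad spend",
        "sql": "SELECT SUM(attributed_revenue) / SUM(ad_spend) FROM campaigns",
        "owner": "marketing-analytics",
    },
}


def describe(metric_name: str) -> str:
    """What an AI agent (or a new analyst) reads before touching a number."""
    m = METRICS[metric_name]
    return f"{metric_name}: {m['description']} (owner: {m['owner']})"


print(describe("mau"))
```

Because the definition is data rather than a chart config, the same entry can feed a dashboard tile, an Excel export, and an agent's context window.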
Everyone wants to build dashboards. Nobody wants to maintain them.
The elephant in the room that nobody talks about is that everyone wants to build. Nobody wants to maintain. This is the current AI phase we're in. Even more, nobody even wants to review tons of AI-generated code.
According to the story in the book Maintenance of Everything by Stewart Brand, what makes people do maintenance is when they find joy in it and care for it.
This is greatly illustrated in the opening with the 1968 Golden Globe solo sailboat race, a dramatic contest of maintenance styles under life-or-death conditions, contrasting Robin Knox-Johnston's meticulous upkeep with Donald Crowhurst's neglect. Knox-Johnston, who truly enjoyed his boat Suhaili, reported mid-race that despite the brutal Southern Ocean ordeal, he was "thoroughly enjoying himself". Decades later, he personally restored her, replacing every one of her 1,400 fastenings.
Stewart Brand also argues that you need to care about your product, and that's the key to good maintenance — a lesson drawn from Zen and the Art of Motorcycle Maintenance by Robert Pirsig (1974), a philosophical classic celebrated as bending the culture of the day toward honoring maintenance.
And one way to ease maintenance is to design maintenance-friendly. Like the Rolls-Royce Silver Ghost, which was engineered from the ground up for reliability and ease of upkeep.
So how do we apply these principles to BI? We need to create dashboards that are easy to maintain and have clear ownership, someone who cares. Otherwise, they lose long-term value, and we invest precious time in something short-term.
You just burned $1k in tokens to rebuild the SaaS you refused to pay $29/month for. Great for the ego. Now maintain it for 3 years to break even. Yup, maintenance is a luxury these days. (Mehdi)
Generating hundreds of cool but unused dashboards with AI clearly works in the opposite direction. We might save a couple of bucks and no longer need to maintain each custom-created dashboard by hand, but indeed, maintenance is a luxury these days.
Self-maintaining agents?
The question is: do we need agents for maintenance, then? We humans would create new, innovative solutions, do the creative work, and keep the genuinely hard parts of BI, such as gathering requirements and verifying with the business, while agents handle the maintenance. But what does maintenance even mean? What is the BI or data-pipeline equivalent of changing the oil and checking the brakes?
It's not troubleshooting in case of an error; agents already do that and help a lot, ideally fixing the errors themselves in a self-healing process. Maintenance is different: keeping up with software updates and security patches, keeping the integration into your data platform current with upgraded, better-performing glue code, and preventing code from turning into legacy code.
All of that is maintenance, and it far outweighs the work of initially creating the data pipeline or BI dashboard, probably somewhere on the order of 8:2.
As Mike puts it well when we were discussing:
Few can build a maintainable, scalable data infrastructure for surfacing trusted metrics. E.g. a digital platform like Coinbase isn't going to YOLO its internal reporting over billions of transactions. Even Claude has a usage-based billing portal, consumption metrics need to be precise, deterministic, and fast.
Basecamp's co-founder made a similar point: AI pushes back too little. Maybe better models will solve this in the future, but we need to live in the present, and in that present:
Agents don't finish beautiful, ergonomic, desirable software. They just don't. That human finishing at the end is not just necessary, it's essential.
So the future is soon, but not yet. Back to dashboards and their maintenance: the hard part is not generating visualizations, but having metrics and a strong BI backend, almost a unified data interface where an agent has access to source, ETL, and dashboard.
BI-as-Code: One solution to maintenance?
Is the solution to maintenance-friendly design maybe BI-as-Code?
BI-as-Code comes into play because declarative configs can be versioned and maintained, thereby avoiding the limitations of BI-as-clicks. Sure, it will not solve all problems, but having that descriptive state of A. your data infrastructure and B. your data pipelines and BI dashboards helps tremendously. In the event of an error or incorrect state, we can just roll back to the last versioned dashboard or infrastructure.
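A toy sketch of that rollback idea, assuming the dashboard spec is a plain declarative structure. In practice the version history is simply Git and the spec is a file; every name below is illustrative:

```python
import copy
import json

# A toy version history for a declarative dashboard spec. In real BI-as-Code
# this is Git: the spec is a file, and rollback is a revert of a commit.
history: list = []


def save(spec: dict) -> None:
    """Record a snapshot of the spec (deep-copied, so later edits don't leak in)."""
    history.append(copy.deepcopy(spec))


def rollback() -> dict:
    """Drop the broken latest version and return the last known-good spec."""
    history.pop()
    return copy.deepcopy(history[-1])


dashboard = {"title": "Revenue", "charts": [{"metric": "revenue", "type": "line"}]}
save(dashboard)  # v1, known good

dashboard["charts"].append({"metric": "roas", "type": "bar"})
save(dashboard)  # v2 turns out to be broken

good = rollback()
print(json.dumps(good))  # back to the single-chart v1
```

The same pattern applies one level down: the data-infrastructure spec and the pipeline definitions get the identical save/rollback treatment, which is exactly what click-configured BI tools cannot offer.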
The only thing that's hard to make reversible is the actual data, unless you use some kind of Git-for-Data workflow with lakeFS, Nessie, or others, or use open table formats with their time-travel capability.
BI-as-Code isn't the whole answer, but it's the right direction: making dashboards owned, versioned, and recoverable. Code can build the right level of abstractions for ETL, metrics queries (metrics SQL), and visualizations, where raw Python for ETL or D3 is too verbose and too brittle.
With agents, these abstractions come in handy once more, as agents work best with a clear interface like a CLI or API; the abstraction helps build just that, letting them tune things themselves through MCP or direct access to the declarative configurations. That is much of what GenBI is about. The question is what comes next: can agents take over the analyst role entirely, or how do we marry the two?
Building BI for agents, not humans
BI-as-Code allows agents to drive BI, or as Mike said: "AI drives compression of the data stack". Meaning observability, cybersecurity, product analytics, and BI are converging. A CEO recently asked him why he couldn't "kill Tableau, Looker, DataDog, Grafana, and QuickSight" in favor of a single system. In my opinion, it doesn't need to be a single tool, but it should feel like a single interface.
Most common today: a chat prompt that autonomously spawns ingestion, transforms data into marts, and surfaces a dashboard or web app, running end-to-end analytics without the user ever thinking about the layers underneath.
But faster building with AI alone won't get us there. Amdahl's Law still applies, as Jeff Dean rightly noted in his talk with NVIDIA's Bill Dally:
An AI agent can run 50x faster, but if the tools it depends on were designed for human speed — slow query APIs, brittle CLIs, unversioned metrics — the overall gain collapses to 2-3x.
That's why the primitives behind BI get more important as agents get faster, not less. OLAP needs to be faster, metrics need a reliable API, ETL needs to be composable. The bottleneck shifts from the model to the infrastructure it runs on.
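The arithmetic behind the quote is a few lines of Amdahl's Law: if only a fraction of the workflow actually gets faster, the overall gain is capped by everything that doesn't. The 60% figure below is an illustrative assumption, not a measurement:

```python
def amdahl(accelerated_fraction: float, speedup: float) -> float:
    """Overall speedup when only part of the work gets faster (Amdahl's Law)."""
    return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / speedup)


# A 50x-faster agent helps little if, say, only 60% of the workflow is
# agent-bound and the rest waits on human-speed tooling (slow query APIs,
# brittle CLIs, unversioned metrics):
print(round(amdahl(0.6, 50.0), 2))  # 2.43
```

Pushing the accelerated fraction up (faster OLAP, a reliable metrics API, composable ETL) moves the bound far more than making the agent itself faster, which is the article's point about infrastructure.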
And when agents do take over the analyst workflow, spawning parallel queries, discarding dead ends, surfacing the interesting slices, they'll expose something BI practitioners have known for years: the hard part was never the visualization.
It was always the semantics beneath — the governed metrics, the trusted definitions, and configs that were verified by an actual human being. Agents will just make that gap impossible to ignore.
BI primitives are Infrastructure for AI
Wrapping up, BI was never about dashboards. It was about making sense of a company's data, connecting the source data into something a human can understand, efficiently reusing existing metrics, and governing definitions.
The dashboard was just the visible surface. What survives the hype cycle, from the unbundling of the modern data stack to the rise of AI agents, are the primitives: metrics, semantics, ownership, trust.
The AI era doesn't kill that need. Agents hallucinate without a strong foundational semantic layer or verified human constraints. Non-deterministic chat interfaces collapse without business-wide, agreed-upon definitions. The maintenance problem doesn't disappear either when you generate faster. It compounds the problems and bottlenecks for senior engineers at a company.
BI-as-Code, versioned dashboards, and a governed interface aren't opposite to the AI future, but a necessary foundation that makes working with it easier, not only for AI systems but also for humans in the loop.

