Built for the Wrong Audience: What Line-of-Business Leaders Actually Need from AI

April 2026

Picture a regional VP of operations at a property management firm.

Her company just invested six months and a meaningful budget in a new analytics platform.

Her data team is proud of it.

They walk her through the filters, the drill-downs, the time-range selectors. She nods. She thanks them. She goes back to her desk and opens a spreadsheet.


This happens everywhere. Not because the VP is unsophisticated. Not because the platform is bad. Because the platform was built for someone else — and deployed to her anyway.

Understanding why this mismatch is so persistent requires going one level deeper than "the dashboard is hard to use." Engineers and line-of-business leaders have fundamentally different relationships with information. Most enterprise AI and analytics tooling is designed for one audience and then handed to both.

Two Different Cognitive Interfaces

Engineers and IT professionals are trained to work with information in its raw form. They query databases directly. They read schema definitions. They understand what a metric is measuring because they know how it was calculated. When an engineer opens a dashboard, they are comfortable navigating filters, switching time ranges, and cross-referencing multiple panels — because they built those dashboards and understand their internal logic.

The raw data is not an obstacle.

It is the medium.

Line-of-business knowledge workers operate differently.

An underwriting manager, a regional VP of operations, or a commercial lending officer has deep expertise in their domain — but that expertise is organized around business judgment, not data infrastructure. They think in terms of risk, performance, and decisions. They do not know the exact name of the metric, which table it comes from, or what the filter toggle labeled "adjusted" actually adjusts for.

Nor should they.

That is not the work they were hired to do.

This is not a skills gap. It is a fundamentally different relationship to information.

In an earlier article in this series, I defined knowledge work as the transformation of information into more valued information — where the output is a decision, a plan, a narrative, not a physical product. Both engineers and LOB leaders are knowledge workers. But they process information through entirely different cognitive interfaces:

one built around systems and structure, the other built around judgment and narrative.

The Dashboard Was Designed for One Audience and Deployed to Another

Most enterprise dashboards are built by engineers to satisfy a data access requirement. The design assumptions are technical:

  • structured navigation
  • filter-based exploration
  • multiple panels the user mentally combines to form a view

These assumptions are reasonable for an audience with data fluency.

When the same interface is handed to a line-of-business leader, the mismatch becomes the user's problem. The executive must learn the dashboard's language, translate their actual question into the available filters, and then perform their own synthesis across the panels they've assembled. The cognitive labor of interpretation falls entirely on the person whose time is supposed to be most protected.

Engineers are used to this translation work. LOB leaders are not — and forcing it on them is waste, not engagement.

In the article on cognitive labor, I described the core operation of knowledge work as mental synthesis: assembling incomplete, noisy inputs into a coherent model that supports a defensible narrative, plausible causes, and actionable next steps.

For an LOB leader, that synthesis should be happening at the level of business judgment — not at the level of figuring out which filter to click. When the interface forces them to do the latter, the former suffers.

Natural Language Is Not a Workaround — It Is the Native Interface

When a line-of-business leader needs to understand what happened in their business last quarter, they do not write a query. They ask a question: "Why did margin compress in the Midwest, and is it a pricing issue or a cost issue?" That phrasing is precise — it reflects exactly what they need to know — but it is expressed in their language, not the data layer's language.

This is the native information-processing mode for knowledge workers whose expertise lives in business judgment rather than data infrastructure. They reason in narrative, communicate in narrative, and make decisions from narrative. The tooling that serves them well is the tooling that speaks back in that mode.

This is not a concession to non-technical users. It is the correct match between interface and audience.

Think about how businesses actually use technology.

In an earlier piece in this series, I described three patterns for how technology enters business operations — and in each one, the critical failure mode is the same: requiring the human to adapt to the machine's interface rather than the machine adapting to the human's work mode. Every time you hand a filter-heavy dashboard to an LOB leader, you are making that mistake.

The technology is asking her to translate herself into its language, rather than translating data into hers.

Conversational Decision Intelligence interfaces are not a simplified version of the dashboard. They are a different type of interface for a different type of user — one where the agent handles the data translation and returns synthesized narrative, and the LOB leader does what they are actually equipped to do: ask the right questions and make the decision.

The Correct Allocation of Cognitive Labor

Designing Decision Intelligence output for LOB leaders means accepting that the interface must meet them where they are.

The agent absorbs the execution-layer work of data retrieval, aggregation, and interpretation. The leader receives a narrative answer — structured, consistent, and expressed in the terms of the business domain they operate in.

This is the right allocation of cognitive labor: machine does the data work, human does the judgment work.

There is a broader principle at work here. In an earlier piece I described AI as a productivity technology that most people don't understand yet: AI doesn't replace knowledge work — it changes which parts of knowledge work require a human.

The parts that require business judgment, contextual reading, and accountability still belong to the LOB leader.

The parts that require data retrieval, metric aggregation, and narrative formatting do not. The question is whether your tooling reflects that distinction — or whether it quietly asks the business leader to do both.

Most dashboards, even well-built ones, ask the leader to do both. The LOB leader is supposed to bring the judgment — but first they have to do the data assembly. That is the work the agent should be absorbing.

What This Means for Decision Intelligence Deployments

When we think about deploying AI natural-language interfaces ("Decision Intelligence") for line-of-business users, the goal is not to build a more user-friendly dashboard.

The goal is to eliminate the dashboard as the primary interface altogether — replacing it with something that speaks the business leader's language from the first interaction.

That means the agent configuration needs to know the business domain well enough to produce answers at the level of the LOB leader's questions, not at the level of the underlying data schema.

The agent should know that "margin compression in the Midwest" is a question about the relationship between contract rate trends and cost structure trends across a specific geographic region — and it should be able to say so, clearly, without requiring the leader to first understand how the data is organized.
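To make that concrete, here is a minimal, hypothetical sketch of the kind of semantic mapping such an agent configuration might encode. Every name here (`BusinessConcept`, `plan_answer`, the metric and dimension names) is illustrative, not taken from any real product; the point is only the shape of the translation layer: business-language concept in, data-layer retrieval plan out.

```python
# Hypothetical sketch: a semantic layer that maps a business-language
# concept to the underlying metrics and dimensions the agent should use.
# All names are illustrative assumptions, not a real product API.

from dataclasses import dataclass, field


@dataclass
class BusinessConcept:
    """One business-domain question pattern the agent understands."""
    name: str
    # Metrics the agent must retrieve and compare to answer the question.
    metrics: list = field(default_factory=list)
    # Dimensions the leader's phrasing may constrain (e.g. a region).
    dimensions: list = field(default_factory=list)
    # How the agent should frame its answer, in business terms.
    narrative_frame: str = ""


# "Margin compression in the Midwest" resolves to a comparison of
# contract-rate trends against cost-structure trends for one region.
MARGIN_COMPRESSION = BusinessConcept(
    name="margin_compression",
    metrics=["avg_contract_rate", "unit_cost"],
    dimensions=["region", "quarter"],
    narrative_frame=(
        "Compare rate trend vs. cost trend for the region; attribute "
        "compression to pricing, costs, or both."
    ),
)


def plan_answer(concept: BusinessConcept, **filters):
    """Return the retrieval plan the agent would execute on the
    leader's behalf, keeping data-layer details out of the chat."""
    return {
        "metrics": concept.metrics,
        "filters": {k: v for k, v in filters.items() if k in concept.dimensions},
        "frame": concept.narrative_frame,
    }


plan = plan_answer(MARGIN_COMPRESSION, region="Midwest", quarter="2026-Q1")
```

The design choice this sketch illustrates: the leader's question never has to name a table or a metric. The mapping from business vocabulary to data schema lives in configuration the data team owns, so the translation work happens once, on the engineering side, instead of on every query the leader asks.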

(Note: we're not getting away from the work of capturing subtle business-logic nuance anytime soon. Every business has details that are specific to it, and LLMs aren't close to automatically capturing those nuances.)

The right question for every Decision Intelligence deployment is a simple one: whose cognitive labor is being protected? If the answer is "the data team," the tool has been optimized for the wrong audience. The LOB leader's time is where business judgment lives. The agent should be serving that judgment — not asking the leader to serve the interface first.

The evolution of knowledge work has always followed this pattern:

each new information infrastructure tool eventually gets rebuilt around the cognitive interface of the people who actually need to use it, not the interface of the people who built it.

We are at that inflection point again.

The LOB leader's cognitive interface is narrative.

The tools that win will be the ones that figure that out first.

Continue Reading

Cognitive Labor: The Mental Work Behind Knowledge Work

Defining cognitive labor, mental synthesis, and the defensible-but-not-differentiating framework — the foundation for understanding what AI actually changes about knowledge work.
