Decision Intelligence
There is a version of the applied-LLM conversation that goes like this:
"We'll give everyone a chatbot, and they can ask it anything about the business."
It sounds compelling.
It also rarely ships.
And when it does, it rarely gets used.
The failure mode is not the technology. The failure mode is that the problem was never precisely defined. A chatbot that can answer any possible question is not a product — it's a science fair project.
What line-of-business workers actually need is narrower and more concrete: they need help completing specific analytical tasks that take a lot of time and effort to do by hand.
That is a solvable problem.
The term we use for the category of solutions is Decision Intelligence.
The Challenge: Information Systems Built for the Wrong People
To understand why Decision Intelligence matters, you have to understand how most enterprise information systems are built and who they are actually built for.
Engineers build dashboards.
Engineers design filters, define metrics, and decide which panels go next to which other panels. The system is correct from an engineering standpoint. The problem is that it is handed to people who think very differently about information.
A regional operations manager does not reason in metric definitions or data schemas. She reasons in terms of performance, risk, and decisions:
Why did throughput drop last Tuesday?
Is this a staffing issue or a maintenance issue?
Do I need to escalate before the leadership meeting, or is this already recovering?
These are judgment questions. They have narrative shape. The answers should come in narrative form.
What most dashboards deliver instead is a set of panels the manager must mentally assemble into a narrative herself. She becomes the analyst. The execution-layer work — the data retrieval, correlation, and synthesis that should be automated — falls to the person whose time should be most carefully protected.
This is the fundamental misallocation at the heart of most enterprise analytics. The tool is optimized for the data team that built it, not for the business leader who uses it.
The Three Layers, and Where Things Break Down
In the knowledge work layer model, cognitive labor distributes across three tiers:
- Execution layer: Routine, rule-bound tasks. Query the data. Apply thresholds. Rank signals. Produce the report.
- Judgment layer: Non-routine analysis. Diagnose root causes. Weigh trade-offs. Interpret ambiguous signals.
- Strategy layer: High-stakes decisions that shape organizational direction.
The execution layer is expensive, not because it is difficult, but because it is time-consuming and because it crowds out everything above it.
A line-of-business manager spending forty-five minutes pulling data from three systems before she can begin diagnosing a problem is not an analytics failure — she is simply doing execution-layer work that a machine should be doing for her.
Decision Intelligence is the application of LLMs to automate the execution layer, and to support — not replace — the judgment layer.
It is not a bot that knows the answer to every question.
It is a system designed around a specific set of tasks for a specific role, delivering the answer you need as you need it.
It's software (natural language or not) that knows what the hell it's doing, and you don't have to wonder whether it does.
What This Looks Like in Practice
Consider a food manufacturing operations manager who notices a line speed deviation on a production run. Without Decision Intelligence, her workflow looks like this:
- query the sensor log
- cross-reference the digital twin outputs
- identify the top contributing signals
- apply threshold logic
- assemble a root cause narrative in time for shift handover
Call it two to three hours per week.
The execution layer in that workflow — the querying, correlating, ranking, and drafting — represents roughly half the time.
The actual judgment work, the part where her operational expertise matters, is the second half:
deciding whether the pattern is systemic, whether maintenance needs to be escalated, whether corrective action affects the next shift.
A Decision Intelligence workflow runs the execution layer automatically. The system detects the deviation, ranks contributing signals, and delivers a consistent narrative the manager can review, interrogate, and act on. Her time begins at the judgment step, not the assembly step. Her output — the decision — is the same. The cost to get there is a fraction of what it was.
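That automated execution layer can be sketched as a small, deterministic pipeline: detect which signals deviate beyond their thresholds, rank them, and draft a consistent narrative for review. A minimal sketch, where every signal name, reading, and threshold is an invented illustration rather than a real system:

```python
# Hypothetical sensor readings: signal -> (observed, expected, deviation threshold).
# All names and numbers are illustrative assumptions.
READINGS = {
    "conveyor_speed": (82.0, 95.0, 5.0),
    "motor_temp": (71.5, 70.0, 10.0),
    "feed_rate": (88.0, 92.0, 3.0),
}

def rank_contributing_signals(readings):
    """Execution-layer work: score each signal by how far it
    deviates beyond its threshold, largest deviation first."""
    scored = [
        (name, abs(observed - expected))
        for name, (observed, expected, threshold) in readings.items()
        if abs(observed - expected) > threshold
    ]
    return sorted(scored, key=lambda s: s[1], reverse=True)

def draft_narrative(ranked):
    """Assemble a consistent, reviewable summary for shift handover."""
    if not ranked:
        return "No signals exceeded their deviation thresholds."
    lines = [f"- {name}: deviated by {dev:.1f}" for name, dev in ranked]
    return "Top contributing signals:\n" + "\n".join(lines)

print(draft_narrative(rank_contributing_signals(READINGS)))
```

The shape matters more than the details: the manager's time starts at reviewing and interrogating the ranked output, not at producing it.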
Another example from retail logistics: when SLA compliance drops, a distribution center manager must synthesize data from order management, labor scheduling, IT incident logs, and demand planning — typically residing in four separate systems — before she can produce the explanation her leadership needs or draft the customer communication that should already have gone out.
Decision Intelligence produces these things automatically, from a single query run, so the manager's first act is review and judgment rather than data assembly.
What Decision Intelligence Is Not
Decision Intelligence is not automation in the process-workflow sense.
It does not replace the manager. It does not run a workflow to fix things, adjust staffing levels, or decide which customers to contact first. We still need people making those decisions.
The reason is not merely philosophical. Regulators in insurance, banking, and healthcare explicitly require documented human accountability for decisions affecting others. An underwriter must be able to explain and defend her recommendation. A lending officer must own his risk assessment.
The judgment layer is not automatable because institutional legitimacy requires a person who can be held responsible for being wrong.
This is the insight behind three structural arguments that run through this series:
- The Agency Trap (AI capability is not AI agency)
- The Automation Closure (execution-layer automation becomes table stakes, not differentiation)
- The Minimum Viable Person (what remains after maximum automation is irreducibly human: accountability, judgment in novel situations, institutional legitimacy)
Decision Intelligence is built around this reality. The analyst reviews the agent's output and takes ownership of the conclusion. Taking ownership of understanding what happened is not just an "extra thing" — it is the work.
What Decision Intelligence Requires
People love Excel because they can wire up any crazy set of computations visually, across a grid that only really makes sense to them, but... it "works". The thing has to work, and it has to capture the nuance of a specific business to be worth anything to the business.
Here are the three things a Decision Intelligence workflow has to do to earn the trust of business users:
Consistency. The same question should produce the same analytical process every time. The results can't vary simply because a tool-and-loop agent isn't deterministic.
Auditability. The manager reviewing a synthesized "root cause workflow" needs to trace the output back to source data. Not because she will always inspect it, but because the option to inspect is what makes the output trustworthy.
Encoding of business logic. Capturing your business's specific rules (e.g., "we compute margin per quarter this specific way") is what makes the tool worth anything to your business in particular.
If we can do those three things with any information process, we create trust with the user, which is basically the only reason they would continue to base judgment work on the tool. Trust is what earns the tool a place in their daily knowledge work.
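A toy sketch of what those three properties look like together in code — the rule, field names, and figures are all invented for illustration. The business rule is encoded explicitly, the same inputs always run the same process (and a process hash proves it), and every answer carries an audit record pointing back to its inputs:

```python
import hashlib
import json

# Hypothetical encoded business rule: "we compute quarterly margin this
# specific way" -- freight counts against margin at this (made-up) firm.
def quarterly_margin(revenue, cogs, freight):
    return (revenue - cogs - freight) / revenue

def run_audited(question, rule, inputs):
    """Run an encoded rule and return the result with an audit trail:
    the exact inputs, the rule that ran, and a deterministic process id
    so the same question provably follows the same process every time."""
    record = {
        "question": question,
        "rule": rule.__name__,
        "inputs": inputs,
        "result": rule(**inputs),
    }
    record["process_id"] = hashlib.sha256(
        json.dumps({"rule": rule.__name__, "inputs": inputs}, sort_keys=True).encode()
    ).hexdigest()[:12]
    return record

record = run_audited(
    "What was Q3 margin?",
    quarterly_margin,
    {"revenue": 1_200_000, "cogs": 840_000, "freight": 36_000},
)
print(record["result"], record["process_id"])
```

The manager rarely inspects the `inputs` field, but its presence is what makes the `result` field trustworthy.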
It's All Still Just Software
Calling this an "AI story" would make me the best marketer in the room — so I'm not doing that.
Once this wave settles, we'll be left with a category of productivity that needs a name. I'm using "Decision Intelligence" for this one.
The LLM is one component inside a larger system — what we call the agentic harness. Someone has to design and build that system: which signals to monitor, which thresholds matter, how to pull data from different sources, what the output should look like for the person receiving it. None of that falls out of the model. You have to put it in.
The business logic encoding is the actual engineering work; the LLM is the reasoning layer inside it.
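Because none of that design falls out of the model, it has to be written down somewhere explicit. A minimal sketch of such a harness specification, reusing the retail-logistics example from earlier — every field name and value here is an invented illustration:

```python
from dataclasses import dataclass

# Hypothetical harness specification: everything in here is engineering
# work that someone has to put in -- the model supplies none of it.
@dataclass
class HarnessSpec:
    role: str            # who the output is for
    signals: list        # which signals to monitor
    thresholds: dict     # which deviations matter
    sources: dict        # where to pull data from
    output_format: str   # what the recipient should see

spec = HarnessSpec(
    role="distribution center manager",
    signals=["sla_compliance", "labor_coverage", "order_backlog"],
    thresholds={"sla_compliance": 0.95},
    sources={
        "orders": "order_management_api",
        "labor": "scheduling_system",
        "incidents": "it_incident_log",
        "demand": "demand_planning_feed",
    },
    output_format="root-cause narrative with ranked contributing signals",
)

# The LLM is the reasoning layer that runs *inside* this spec;
# the spec itself is where the business logic lives.
print(spec.role, len(spec.sources))
```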
This is how every prior generation of information tooling has worked (spreadsheets, RDBMS, and so on).
They became the basis for new categories of work because they let people encode their own logic and operate on their own terms. Decision Intelligence is the same pattern — except the interface is now language instead of formulas or query syntax.
That's also why calling this an "AI investment" misses the point.
AI is just productivity you don't understand yet.
The value isn't the model — it's what the system built around it lets the business do differently. Decision Intelligence usually enters as cost reduction (less manual execution-layer work) and quickly reveals itself as capability extension: analysis that was previously too slow or too expensive to run at all.
Natural language is the interface that makes it accessible to the people who actually need it — not because it's technically interesting, but because it's how line-of-business workers think, communicate, and make decisions. Getting that right, reliably enough that they trust what they get, is the whole problem.
That is what Decision Intelligence is designed to solve.
Continue Reading
Built for the Wrong Audience: What Line-of-Business Leaders Actually Need from AI
Engineers and line-of-business leaders have fundamentally different relationships with information. Understanding that difference changes how you deploy Decision Intelligence.