
Explain Dining Satisfaction Drops

When guest dining satisfaction drops, the Food & Beverage Manager has to manually pull survey and online review feedback, theme and sentiment patterns, and POS or service operations metrics such as ticket-to-table time, voids, and comp amounts to explain what changed at the outlet. In the example analysis for HTL_0421 / OUTLET_BRASSERIE, this means comparing 2026-03-01 to 2026-03-31 against 2026-02-01 to 2026-02-28, then assembling a defensible explanation for the weekly Ops/QA review. That work happens repeatedly as the primary explanation artifact for weekly reviews, and it is largely data assembly and correlation work rather than management judgment.

The Guest Dissatisfaction Driver Attribution Brief is a Decision Intelligence workflow designed for the Food & Beverage Manager. It automates execution-layer work such as cross-domain querying, theme tallying, compliance-style threshold checks, driver scoring, evidence assembly, and briefing formatting; based on the candidate value model, about 65% of the execution layer is automated, equivalent to 32.5% of total role time. The manager gets back time for validating likely causes, choosing the next test, and aligning actions with Ops and QA leadership.

Interactive Demo

See the Decision Intelligence Workflow in Action

This workflow handles a compound cognitive task: detect whether satisfaction materially declined, localize where the operational break is occurring, rank the most likely drivers, and generate a brief that can be used in the next Ops review. It joins qualitative guest feedback with service metrics so the explanation is based on corroborated signals rather than whichever complaint theme is most visible that week.

Designed for line-of-business leaders — specifically the Food & Beverage Manager who has to explain outlet performance shifts to Operations and QA using evidence from both guest feedback and service operations. This workflow is for the recurring reporting and triage burden that sits between frontline service issues and management action. Why this matters: What line-of-business leaders actually need from decision workflows →
Decision Intelligence Playground · Dining Satisfaction Driver Brief
KWF Runtime
Start with an example
See the driver brief built from your outlet data.
In a working session, we map your guest feedback and service operations sources, define the baseline and alert thresholds your team already uses, and generate a sample driver attribution brief for a recent outlet issue. The session shows exactly which execution-layer steps can be automated and where manager judgment remains essential.
Schedule a Working Session
How It Works

What the Decision Intelligence Workflow Does


Click any step below to see the business logic, data query, and sample output for that step of the workflow.

Pre-specified logic, not runtime guessing — Most AI agent frameworks work by figuring things out on the fly. These Decision Intelligence workflows work differently. The Knowledge Work Foundry analyzes the cognitive labor pattern before deployment and encodes the decision logic directly into the configuration — which tables to query, which thresholds define a breach, how signals are ranked, and what the output artifact should contain. That analysis happens once. By the time the workflow runs, there is nothing left to figure out.
1
Detect Drop
Compares current-period guest satisfaction against baseline and localizes the decline to the meal period where operational strain appears.
2
Rank Drivers
Measures which complaint themes increased most and ranks the likely drivers using both guest feedback and operational corroboration.
3
Draft Brief
Assembles the findings into a one-page driver attribution brief for Ops and QA review.
4
Test Actions
Generates measurable next-step tests tied to the highest-ranked drivers and their success metrics.
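The four steps above can be sketched as a minimal pipeline. This is an illustrative outline only; the function names, thresholds, and theme labels are hypothetical and not the production workflow's actual API.

```python
# Hypothetical sketch of steps 1-2 (detect drop, rank drivers).
# Steps 3-4 format the ranked drivers into the brief and attach next-step tests.

def detect_drop(analysis_avg, baseline_avg, threshold=0.3):
    """Step 1: flag a material decline when the rating delta breaches a threshold."""
    delta = analysis_avg - baseline_avg
    return delta <= -threshold, delta

def rank_drivers(theme_counts_now, theme_counts_base):
    """Step 2: rank complaint themes by their increase over baseline."""
    increases = {
        theme: theme_counts_now.get(theme, 0) - theme_counts_base.get(theme, 0)
        for theme in set(theme_counts_now) | set(theme_counts_base)
    }
    return sorted(increases.items(), key=lambda kv: kv[1], reverse=True)

dropped, delta = detect_drop(3.9, 4.3)
drivers = rank_drivers(
    {"service_speed": 18, "food_quality": 7},
    {"service_speed": 6, "food_quality": 8},
)
```

In this sketch a 0.4-point rating drop breaches the (assumed) 0.3 threshold, and "service_speed" ranks first because its complaint count rose most over baseline.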

The output is a 1-page Driver Attribution Brief delivered in PDF or HTML and posted before the weekly Ops/QA review. It includes the overall direction of change, ranked drivers, evidence for each driver, confidence tags, and next-action tests tied to measurable success criteria.
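The brief's contents listed above can be pictured as a simple record. The field names below are illustrative assumptions; the delivered artifact is a formatted PDF or HTML page, not a data structure.

```python
from dataclasses import dataclass, field

@dataclass
class DriverAttributionBrief:
    # Field names are hypothetical; they mirror the brief's described sections.
    hotel_id: str
    outlet_id: str
    direction_of_change: str   # e.g. "avg rating down 0.4 vs baseline"
    ranked_drivers: list       # (theme, evidence, confidence_tag) tuples
    next_action_tests: list = field(default_factory=list)

brief = DriverAttributionBrief(
    hotel_id="HTL_0421",
    outlet_id="OUTLET_BRASSERIE",
    direction_of_change="avg rating down 0.4 vs baseline",
    ranked_drivers=[("service_speed", "ticket-to-table up at dinner", "high")],
)
brief.next_action_tests.append(
    "add dinner expo coverage; success metric: ticket-to-table back to baseline"
)
```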

What this workflow does NOT do: It does not decide whether to escalate a satisfaction issue to a vendor or brand-standard review, adjust staffing schedules or labor budgets across the property, determine whether menu or training changes should become permanent policy, choose which guest complaints require personal recovery outreach, or replace the manager's judgment on whether the ranked drivers are plausible after spot-checking service conditions on the floor.
Under the Hood

Data Warehouse Integration

The workflow reads directly from warehoused operational and feedback data already used by hotel teams: guest surveys and online reviews on one side, and outlet service operations on the other. Its job is not to create new source systems, but to assemble the evidence already spread across them into one repeatable analysis path.

That cross-domain join is the core of the problem: sentiment and complaint themes live in feedback data, while ticket-to-table time, covers, voids, and comp dollars live in POS and service operations data, and the manager has to reconcile them in one explanation.

Source system: Guest survey and review feedback systems  ·  Domain: Guest Experience
Role in this workflow: This table provides the qualitative side of the analysis: how guests rated the dining experience, how negative or positive the comments were, and which themes appeared most often. It is the primary signal for detecting that satisfaction changed and for identifying whether service speed, order accuracy, food quality, cleanliness, or value became more prominent.
CREATE TABLE pura_vida_foods_dev.guest_ops.fnb_guest_feedback (
  hotel_id STRING,
  outlet_id STRING,
  feedback_id STRING,
  feedback_ts TIMESTAMP,
  channel STRING,
  rating INT,
  sentiment_score DOUBLE,
  theme STRING,
  comment STRING
);

SELECT
  CASE
    WHEN feedback_ts >= '{start_date}' AND feedback_ts < '{end_date}' THEN 'analysis'
    ELSE 'baseline'
  END AS period,
  COUNT(*) AS feedback_cnt,
  AVG(rating) AS avg_rating,
  AVG(sentiment_score) AS avg_sentiment
FROM pura_vida_foods_dev.guest_ops.fnb_guest_feedback
WHERE hotel_id = '{hotel_id}'
  AND outlet_id = '{outlet_id}'
  AND (
    (feedback_ts >= '{start_date}' AND feedback_ts < '{end_date}')
    OR (feedback_ts >= '{baseline_start_date}' AND feedback_ts < '{baseline_end_date}')
  )
GROUP BY 1;
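The query above returns one row per period. A minimal downstream check, assuming hypothetical threshold values and illustrative numbers, could look like this:

```python
# Rows shaped like the baseline-vs-analysis query output (values illustrative).
rows = [
    {"period": "baseline", "feedback_cnt": 210, "avg_rating": 4.3, "avg_sentiment": 0.42},
    {"period": "analysis", "feedback_cnt": 185, "avg_rating": 3.9, "avg_sentiment": 0.18},
]

by_period = {r["period"]: r for r in rows}
rating_delta = by_period["analysis"]["avg_rating"] - by_period["baseline"]["avg_rating"]
sentiment_delta = by_period["analysis"]["avg_sentiment"] - by_period["baseline"]["avg_sentiment"]

# Hypothetical alert threshold; real values come from the team's existing standards.
RATING_BREACH = -0.25
material_decline = rating_delta <= RATING_BREACH
```

Here a 0.4-point rating drop and a 0.24-point sentiment drop together mark the period as a material decline worth localizing further.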
Source system: POS and service operations reporting  ·  Domain: Restaurant Operations
Role in this workflow: This table provides the operating context needed to test whether guest complaints line up with what actually happened on the floor. It contributes meal-period-level evidence such as covers, ticket-to-table time, void activity, and comp dollars, which are used to corroborate or weaken suspected dissatisfaction drivers.
CREATE TABLE pura_vida_foods_dev.guest_ops.fnb_service_ops (
  hotel_id STRING,
  outlet_id STRING,
  service_date DATE,
  meal_period STRING,
  covers INT,
  avg_ticket_to_table_min DOUBLE,
  void_count INT,
  comp_amount_usd DOUBLE
);

SELECT
  CASE
    WHEN service_date >= '{start_date}' AND service_date < '{end_date}' THEN 'analysis'
    ELSE 'baseline'
  END AS period,
  meal_period,
  SUM(covers) AS covers,
  AVG(avg_ticket_to_table_min) AS avg_ticket_to_table_min,
  SUM(comp_amount_usd) AS comp_amount_usd
FROM pura_vida_foods_dev.guest_ops.fnb_service_ops
WHERE hotel_id = '{hotel_id}'
  AND outlet_id = '{outlet_id}'
  AND (
    (service_date >= '{start_date}' AND service_date < '{end_date}')
    OR (service_date >= '{baseline_start_date}' AND service_date < '{baseline_end_date}')
  )
GROUP BY 1,2
ORDER BY meal_period, period;
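The corroboration step compares each meal period's operational metrics against baseline. This sketch uses hypothetical minimum-delta thresholds and illustrative numbers to show how a complaint theme gets backed or weakened by the ops data:

```python
# Meal-period rows shaped like the service-ops query output (values illustrative).
rows = [
    {"period": "baseline", "meal_period": "dinner",
     "avg_ticket_to_table_min": 14.0, "comp_amount_usd": 320.0},
    {"period": "analysis", "meal_period": "dinner",
     "avg_ticket_to_table_min": 20.5, "comp_amount_usd": 910.0},
]

ops = {(r["meal_period"], r["period"]): r for r in rows}

def corroborates(meal_period, metric, min_delta):
    """True when an operational metric worsened enough to back a complaint theme."""
    delta = ops[(meal_period, "analysis")][metric] - ops[(meal_period, "baseline")][metric]
    return delta >= min_delta, delta

# Hypothetical thresholds: +3 min ticket-to-table, +$250 comps.
speed_ok, speed_delta = corroborates("dinner", "avg_ticket_to_table_min", 3.0)
comp_ok, comp_delta = corroborates("dinner", "comp_amount_usd", 250.0)
```

In this example, a 6.5-minute slowdown and a $590 comp increase at dinner both corroborate a "service speed" driver, which raises its rank and confidence tag in the brief.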
Cognitive Labor Analysis

Where the Work Sits in the Labor Stack

Not all cognitive labor is equally automatable. The KWF analysis breaks the workflow into three layers — execution, judgment, and strategic — and maps each step to the layer it belongs to. Execution-layer work is automatable. Judgment and strategic work stays with the manager.

Execution 50% · Judgment 35% · Strategic 15%
Execution Layer
50%
The execution layer retrieves feedback and service data, calculates baseline deltas and threshold breaches, scores likely drivers, corroborates them with operational signals, and formats the brief.
Judgment Layer
35%
The manager still interprets whether the ranked drivers fit outlet context, validates edge cases, and decides which immediate corrective actions to test.
Strategic Layer
15%
Leadership still owns broader decisions such as staffing model changes, training investment, menu changes, and brand or policy adjustments across outlets.
Value Model

The Business Case for Automation

Time Recovered
32.5% of role time
Based on the candidate value model, automating 65% of the 50% execution layer frees about 32.5% of total Food & Beverage Manager time for judgment and leadership work.
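The 32.5% figure follows directly from the candidate value model's two inputs:

```python
execution_share = 0.50      # execution layer's share of total role time
automated_fraction = 0.65   # portion of the execution layer the workflow automates

time_recovered = execution_share * automated_fraction  # share of total role time
```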
Annual Savings
Comp leakage flagged earlier
The workflow surfaces when comp amounts rise alongside declining sentiment, helping teams intervene sooner on service issues that are already creating recovery cost.
Strategic Upside
Faster Ops reviews
Using one ranked driver brief as the primary explanation artifact reduces back-and-forth in weekly Ops/QA reviews and supports same-day corrective action planning.
Kill Question: Without this workflow, the Food & Beverage Manager explains a satisfaction drop by manually pulling review and survey comments, summarizing themes, checking baseline rating and sentiment changes, comparing those patterns with meal-period ticket times and comp trends, and then making a subjective call about the most likely cause before the Ops review.

Primary Valuation Metric: Percent of weekly Ops/QA reviews where the driver brief is used as the primary explanation artifact, with a target of at least 80%.