



Designing Trust into an AI Agent
ROLE
Lead Product Designer
DURATION
8 Weeks
SCOPE
Manager App Redesign · AI Agent · Review Management
TL;DR
I designed an AI system for restaurant managers—not to replace decisions, but to prepare them before the day begins.
The goal wasn't a smarter dashboard. It was an invisible AI staff—earning trust, building habits, and knowing when to stay quiet.
the wrong fix.
When Manager App 1.0 launched, adoption was low. Managers opened it once, then rarely returned.
User feedback was consistent: "I see the numbers, but I don't know what to do with them."
The proposed solution followed the industry trend: a conversational chatbot that could analyze data and answer questions on demand.
But before committing to that direction, I paused to ask a more fundamental question:
Would chat-based AI actually fit how our users work?
reality: attention is the constraint.
86% of our users are independent restaurant owners and managers.
I observed them during service and found something consistent:
6-9am Morning prep — planning mode, quiet focus
11am-2pm Lunch rush — execution mode, on the floor
2-5pm Afternoon lull — low energy, admin tasks
5-9pm Dinner rush — execution mode, on the floor
9-11pm Night close — exhausted
Managers don't check tools during the rush—they're managing people.
They check tools before the rush—in a 5-10 minute window before the morning team meeting. That's when decisions happen.


The issue wasn't that chat was "bad."
Chat-based AI assumes managers have time to explore, frame questions, and interpret responses.
Restaurant operations have decision windows—not conversation windows.






strategy: start small, earn trust.
Chat-based AI wasn't wrong for everyone—just wrong for our users at this stage.
So instead of asking "what should AI do," I first asked: How much responsibility are users willing to give AI right now?
From research, a clear pattern emerged: Users trust AI when they can verify its work. This led to a phased approach.


I deliberately ruled out high-risk entry points like shift and inventory optimization—they had slow feedback loops and irreversible consequences.
Instead, I proposed review management: fast feedback, low risk, immediate value.


phase 1: review management
Review management became the starting point because it combines:
Real business risk — unaddressed reviews damage reputation and loyalty
Fast, visible feedback — managers instantly know if a response is helpful
Rich customer language — perfect for training the agent.
It was the safest place for AI to be useful—and for trust to grow.
To keep the trust cost low in Phase 1, we needed a domain where AI value would be immediate and easy to verify.
Reviews fit: they are frequent, time-sensitive, and written in natural language, so managers can quickly tell whether an AI-generated response is helpful.
That made reviews the safest place to introduce a bounded, workflow-embedded AI model—one that prepares context in advance while keeping human judgment clearly in control.
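To make that division of responsibility concrete, here is a minimal sketch of the "prepare in advance, human decides" model. The names (Review, prepare_review, approve) and the tone logic are illustrative assumptions, not the production code: the agent drafts a reply for every review in the background, but the only step that can publish anything is triggered by the manager.

```python
# Minimal sketch of the "prepare in advance, human decides" model.
# All names (Review, prepare_review, approve) are illustrative, not a real API.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Review:
    author: str
    rating: int               # 1-5 stars
    text: str
    draft_reply: str = ""
    status: str = "pending"   # pending -> drafted -> approved / edited


def prepare_review(review: Review) -> Review:
    """Background step: attach a draft so the manager only has to verify."""
    tone = "apologetic" if review.rating <= 3 else "appreciative"
    review.draft_reply = f"[{tone} draft] Thank you for your feedback, {review.author}."
    review.status = "drafted"
    return review


def approve(review: Review, edited_text: Optional[str] = None) -> str:
    """The only step that publishes anything, and it is always human-triggered."""
    final = edited_text if edited_text else review.draft_reply
    review.status = "edited" if edited_text else "approved"
    return final  # handed off to whatever actually posts the reply
```

The constraint is visible in the structure: prepare_review can run all day, but approve is the only path to the outside world.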

“If this agent could really help me handle reviews, that would be huge. A lot of reviews sit untouched for too long. I really want to try this.”
solution: a 5-minute decision window
One early assumption was that every negative review should trigger an immediate alert.
Inside the company, this made sense—negative reviews carry real business risk.
In practice, it backfired.
Managers received alerts during service hours, when they couldn’t act.
Instead of speeding up responses, notifications increased stress and were often ignored.
This forced a critical rethink: timing matters as much as accuracy.
Not every problem needs immediate interruption—some need to wait for the right decision window.
So Genie was designed as an automated workflow that runs continuously in the background and only surfaces at the right time:
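In pseudocode terms, the timing model looks roughly like the sketch below. The window bounds, function names, and print-based "briefing" are illustrative assumptions rather than the shipped scheduler: reviews are collected and prepared whenever they arrive, but nothing is surfaced until the pre-meeting decision window.

```python
# Rough sketch of the timing model: Genie prepares work all day,
# but only surfaces it inside the manager's decision window.
# Window bounds and function names are illustrative assumptions.

import datetime as dt

DECISION_WINDOW = (dt.time(7, 30), dt.time(8, 0))  # e.g. before the morning team meeting

pending_digest: list[str] = []  # items Genie has prepared but not yet shown


def on_new_review(summary: str) -> None:
    """Runs whenever a review arrives: prepare it quietly, never interrupt service."""
    pending_digest.append(summary)  # drafting and prioritization would happen here


def in_decision_window(now: dt.datetime) -> bool:
    start, end = DECISION_WINDOW
    return start <= now.time() <= end


def morning_tick(now: dt.datetime) -> None:
    """Called by a scheduler; delivers one batched briefing instead of all-day alerts."""
    if in_decision_window(now) and pending_digest:
        print(f"Morning briefing: {len(pending_digest)} reviews prepared and prioritized.")
        pending_digest.clear()
```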









AI works all day.
Managers decide in five minutes.
validation & impact.
Efficiency
5 mins
Review management time reduced from hours to ~5 minutes/day.
Reliability
~87%
of AI-generated drafts approved with minimal edits.
Users
20+
high-engagement merchants opted in and participated in early validation.
ending.
Peppr Genie doesn't change how managers work during the rush.
It changes what happens before.
A few quiet minutes. Clear priorities. Fewer surprises.
By the time the team meeting starts, the hardest thinking is done.
That's how trust is built—not by intelligence alone, but by reliability.
One morning at a time.






