
OVERVIEW
From siloed modules to a cross-retailer review cockpit
Command Center (OCC) turned cross-retailer business reviews into a single workflow across Sales, Retail Media, and Market Share.
Before OCC, teams stitched the review narrative together across PRA, RMM, and Market Share. Command Center connected those signals so reviews could move from plan → gap → why → next action in one place.
Accounts using Command Center
80 Accounts
Retailer coverage
3 Retailers
Impact on business
15–20% ARR contribution
Next
Why the review story still felt incomplete

CONTEXT
Before Command Center, reviews were a coordination problem
Sales, retail media, and digital shelf signals lived across different modules and retailer views. So weekly and monthly reviews started with alignment work: pick the scope, match filters, reconcile plan context, then build a story.
Command Center was created to turn that prep into a repeatable review workflow across retailers: plan → gap → why → next action.
What wasn't working
01
The review story lived in decks
Teams rebuilt the narrative every week across exports and screenshots.
Manual prep, Repeated work
02
Cross-retailer rollups didn’t live in one pane
Leaders couldn’t quickly compare health across subscribed retailers without hopping views.
Scope drift, Slow comparisons
03
Plan and goals were messy inputs
Plans were often missing or partial, and varied by level (brand/category/SKU), pushing teams back to templates.
Plan gaps, Template workflow
04
Exceptions didn’t route to diagnosis
Even when something was off, getting to the why meant tab hopping across drivers like availability, content, promo, price, and media.
Driver hunting, No clear next step
“It felt like we had data, but getting aligned took longer than deciding what to do.”
What each team wanted from Command Center
“I need one cross-retailer scorecard to run reviews against plan.”
- VP / Director of eCommerce
“I need pacing and spend levers tied to outcomes, not scattered across tools.”
- Marketing Manager
“I need shelf and availability drivers connected to business impact so I can explain the why.”
- Category Manager
Next
What success needed to look like for a cross-retailer cockpit

GOALS
What Command Center needed to make possible
Command Center wasn’t about adding new metrics. It was about making cross-retailer reviews run smoothly with consistent scope, clear pacing, and a faster path to decisions.
What success looked like
ALIGNMENT
Hold one review scope
Scope drift
Teams kept reapplying filters and reconciling rollups mid-review.
Stable cockpit
One scope that holds across retailers and dimensions.
PACING
Make vs plan dependable
Plan gaps
Plans were missing, partial, or set at different levels, so vs plan broke quickly.
Plan-aware pacing
A plan-first view that stays readable even with uneven plan inputs (a sketch of this logic follows these goals).
EXPLANATION
Make the why easy to reach
Tab hopping
Drivers lived across surfaces, so “why” took tool switching.
One drill path
Gap → drivers → diagnosis in one flow.
DECISIONING
End with clear next steps
No closure
Actions were tracked elsewhere, so follow-ups stayed manual.
Action-ready output
Next steps tied to the driver, ready for the next check-in.
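To ground the pacing goal, here is a minimal sketch of what plan-aware pacing can look like. The shapes and the ±5% thresholds are hypothetical illustrations, not the actual Command Center implementation:

// Hypothetical shapes for illustration only; not the real Command Center model.
type PlanLevel = "brand" | "category" | "sku";

interface PlanEntry {
  level: PlanLevel;  // plans can be set at different levels
  target: number;    // planned value for the review period
  actual: number;    // observed value for the same period
}

interface PacingResult {
  status: "ahead" | "on-plan" | "behind" | "no-plan";
  gapPct?: number;   // omitted when there is no plan to pace against
}

// Plan-aware pacing: a missing or unusable plan returns a readable
// "no-plan" state instead of a broken ratio, so the row still renders.
function pace(entry?: PlanEntry): PacingResult {
  if (!entry || entry.target <= 0) return { status: "no-plan" };
  const gapPct = ((entry.actual - entry.target) / entry.target) * 100;
  if (gapPct < -5) return { status: "behind", gapPct };  // illustrative threshold
  if (gapPct > 5) return { status: "ahead", gapPct };    // illustrative threshold
  return { status: "on-plan", gapPct };
}

The point is the explicit fallback: a missing or partial plan becomes a readable state in the cockpit rather than a broken comparison that derails the review.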
“Once plan and scope were clear, the rest of the review became about actions, not alignment.”
Next
How the Command Center workflow came together

APPROACH
How I approached the design
Command Center needed to make cross-retailer reviews feel effortless. I focused on stabilizing scope first, then building a repeatable flow from plan to gap to diagnosis to next steps, while aligning patterns across parallel design streams.
How I designed this
Start with a stable review frame
Lock retailer coverage, rollups, and filters so every widget reads the same context (a sketch follows this list).
Make plan and pacing the default
Design for uneven plan inputs, but keep vs plan comparisons usable.
Connect the why to the gap
Create a single drill path from gap to drivers to diagnosis without tab hopping.
Ship as one system across teams
Align RCA, reporting, and taxonomy patterns so the cockpit feels cohesive.
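As noted in the first step, the review frame amounts to one scope object that every widget reads instead of holding its own filter state. A minimal sketch, assuming hypothetical names (ReviewScope and widgetQuery are illustrative, not the product's API):

// Hypothetical shape of the shared review frame; names are illustrative.
interface ReviewScope {
  retailers: string[];                        // subscribed retailer coverage
  rollup: "brand" | "category" | "sku";       // active rollup level
  dateRange: { start: string; end: string };  // ISO dates for the period
  filters: Record<string, string>;            // persistent filters, e.g. { region: "US" }
}

// Widgets take the scope as read-only input and never mutate it
// mid-review, so every widget on screen answers for the same cut.
function widgetQuery(scope: Readonly<ReviewScope>, metric: string): string {
  return [
    `metric=${metric}`,
    `retailers=${scope.retailers.join(",")}`,
    `rollup=${scope.rollup}`,
    `from=${scope.dateRange.start}`,
    `to=${scope.dateRange.end}`,
  ].join("&");
}

The design choice is inheritance over configuration: a widget can never drift from the room's scope because it never owns one.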
Next
Inside the Command Center system

SYSTEM
Inside the Command Center system
Command Center was designed as a connected set of surfaces that make cross-retailer reviews run smoothly. The system keeps scope stable, anchors the story on plan and pacing, and gives a short path from “what changed” to “why” to “what next”.
How we built it in layers
LAYER 01
Review frame
The shared context that keeps the room aligned: retailer coverage, rollups, date range, and persistent filters. Everything else inherits this scope.


LAYER 02
Pacing and gap
The anchor layer that makes performance reviewable: spotlight cues, performance vs plan, and the cuts that explain where the gap is coming from.
LAYER 03
Diagnosis and next steps
The closure layer that prevents follow-up churn: key drivers → RCA drill path → recommendations, tied back to the same scope and metric.
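Layer 03 can be pictured as an ordered walk that carries the same scope and metric from gap to action. A hedged sketch with invented example labels:

// Hypothetical model of the drill path; stages and labels are invented.
type Stage = "gap" | "driver" | "diagnosis" | "action";

interface DrillStep {
  stage: Stage;
  label: string;
}

// Example walk for one missed KPI: each stage is the next click in the
// same flow, carrying the same scope and metric, not a separate tool.
const drillPath: DrillStep[] = [
  { stage: "gap",       label: "Sales pacing behind plan at one retailer" },
  { stage: "driver",    label: "Availability dipped on top SKUs" },
  { stage: "diagnosis", label: "Out-of-stocks concentrated in one region" },
  { stage: "action",    label: "Rebalance media spend until stock recovers" },
];

for (const step of drillPath) {
  console.log(`${step.stage.toUpperCase()}: ${step.label}`);
}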

Next
What I owned vs what I guided across V1 and V2

I led the Command Center cockpit experience, shaping the end-to-end review flow across Sales, Retail Media, and Market Share with a cross-retailer lens. I also supported parallel workstreams to keep shared patterns consistent as teams built RCA, reporting surfaces, and taxonomy-driven breakdowns alongside the cockpit.
My contribution across Command Center
Owned
Command Center cockpit (overall flow + hierarchy)
Cross-retailer scope (filters, rollups, consistency)
Performance vs plan framing (pacing + gap visibility)
Drill path from gaps to drivers
Recommendations framing (next steps tied to drivers)
Guided
RCA workstream (alignment across modules)
Report Builder / resizable widgets (pattern alignment)
Taxonomy + goal-based optimization (shared structure)
Next
Outcomes and adoption after launch

OUTCOMES
What changed after we shipped Command Center
Command Center made cross-retailer business reviews run in one place across Sales, Retail Media, and Market Share, with a repeatable path from goal to gap to next step.
Why Command Center mattered for teams and brands
Review prep time
2–3 hrs → minutes
Teams spent less time stitching the story across separate modules before weekly and monthly reviews.
Cross-retailer alignment
1 stable cockpit
The same scope held across retailers and rollups, so meetings didn’t restart when filters changed mid-way.
Plan pacing
Plan-aware pacing
Even when plans were missing, partial, or defined at different levels, “vs plan” stayed reviewable.
Faster diagnosis
One drill path
Teams could move from a missed KPI to likely drivers and RCA without jumping tools.
Clear next steps
Action-ready output
Recommendations and follow-ups were tied back to the driver, making it easier to assign owners and close the loop.
"Same scope, clear gaps, faster why, better next steps."
What people noticed
Next
How Command Center changed how I design for weekly decision-making

REFLECTION
How Command Center changed how I design for weekly decision-making
Command Center taught me that “unification” isn’t a layout problem — it’s a review problem. If the scope stays stable, pacing is clear, and the path to “why” is short, teams can move from discussion to decisions in one sitting.
Hold the review frame
Lock the scope early: same filters, same rollups, same retailer logic — so reviews don’t start with recalibration.
Design for exec scan-first behavior: make the headline state obvious before inviting deeper drilldowns.
Connect signals into one story
Put the workflow on rails: spotlight → gap → driver → evidence, so users don’t have to assemble context across tabs.
Keep “why” reachable: drivers should feel like the next step, not a separate investigation experience.
End with a decision, not a dead end
Make outputs actionable: tie recommendations and next steps back to the driver, with clear ownership cues.
Treat follow-through as part of the cockpit: reduce manual note-taking and “we’ll track it elsewhere” moments.
THANK YOU