ZS Associates

Making Product Quality Visible

An example of democratizing evaluation by replacing UX gatekeeping with structured XFN accountability


Context

ZS Associates builds client-facing software for the pharmaceutical industry. Internal cross-functional teams, including product managers, developers, and business analysts, were responsible for the quality of that software but had no systematic way to evaluate it. Feedback arrived reactively, through client complaints or anecdotal impressions, and teams often couldn’t tell the difference between a real quality problem and noise. Product confidence was high, but issue clarity wasn’t.


How do we shift product quality from a feeling into a number teams can act on?

Without a shared standard for evaluation, quality decisions were driven by whoever had the loudest opinion or the most recent client call. The risks were real: subpar experiences eroding client trust, and no shared way to prioritize what needed fixing. What was missing wasn’t effort or intention, but a common understanding of quality.


Leadership

Our team was running a two-front measurement program: my colleague focused on client-facing satisfaction metrics, and I focused on structured internal product evaluation. I was already examining internal product quality as a research problem when my director greenlit the effort and pointed me toward Jeff Sauro’s work as a starting place. From there, I analyzed what would actually work at ZS given our culture, team composition, and reporting needs, and adapted rather than adopted the approach, blending task analysis, heuristic evaluation drawn from Nielsen Norman, and cognitive walkthroughs into a single methodology.

It was important that XFN be involved and serve as evaluators, so I designed and ran training sessions with key cross-functional partners over several months, including traveling to India to train teams there.


Findings

The research surfaced problems that had been hiding in plain sight, each more specific and actionable than stakeholders had anticipated.

  • XFN knew more than they thought
    When given clear criteria and a specific user task to evaluate, cross-functional teams engaged seriously. They scored, they debated, and they defended their reasoning. The structure didn’t constrain them, it gave them something to push against.
  • A single score can change the conversation
    The number didn’t make the decision, but it made the right decision harder to avoid. When a product area scored a zero, XFN agreed that there was no room for interpretation.

Impact

The methodology held up across different products, teams, and cultural contexts. Cross-functional teams across the US and India were trained and participated in sessions alongside UXR, who coordinated and scored to keep the process honest.

The program piloted with one product and scaled to two others at ZS. Alongside the client-facing satisfaction track my colleague led, it was presented as a full measurement program at an industry conference.

What started at ZS didn't stay there: the task-based benchmarking approach became the foundation for the DPP Usability Program at Meta.
