Selected Work

A selection of research work focused on framing ambiguity, defining quality, and informing product decisions in complex, system-driven environments.

These examples highlight how I approach research problems, prioritizing decision risk and impact over process walkthroughs or artifacts.


Building a Shared Approach to Usability

Establishing consistent evaluation standards for developer-facing tools across teams

A program-level example showing how I created a repeatable approach to usability evaluation, uncovered systemic gaps, and improved decision quality across complex developer workflows.

Defining Reliability in Simulation Tools

Clarifying ambiguous quality signals to guide product direction

A concept-definition example focused on how developers judge reliability, and how a shared understanding of trust, consistency, and transparency shaped roadmap and prioritization decisions.

Evaluating Cross-Surface Builder Journeys

Identifying friction and quality gaps across multi-tool developer workflows

A systems-focused example of objectively evaluating end-to-end journeys that span tools and lifecycle phases, enabling clearer prioritization and alignment across teams.

Making Product Quality Visible

From subjective opinions to shared quality benchmarks

An example of democratizing evaluation by replacing UX gatekeeping with structured cross-functional (XFN) accountability.

Building Trust in Machine-Assisted Review

Improving interpretability and confidence in analytical systems

An enterprise example of making machine-assisted outputs understandable and trustworthy enough to support high-stakes decision-making.

Distinguishing Signal from Noise

Using research to set product direction without defined success criteria

An example of using research to set product direction when organizational objectives and success criteria are incomplete.