Responsible AI, Trust & Governance

Building AI systems that are fair, transparent, and accountable from day one.

My Approach

I have spent years building responsible AI from the inside, not as an afterthought or a compliance checkbox, but as a core engineering discipline. At Axon, I built the responsible innovation platform that every AI use case had to pass through before reaching production. At ServiceNow, I led the Trust & Governance Lab where we worked on making enterprise AI systems robust, transparent, and privacy-preserving.

The common thread: responsible AI works when it is embedded in the development process, not bolted on at the end.

What This Looks Like in Practice

Evaluation-First Development

Building automated evaluation pipelines that test for fairness, bias, and ethical performance before models ship. At Axon, this meant evaluating ASR (automatic speech recognition) models across four new locales for ethical performance, and building privacy-preserving ALPR (automated license plate recognition) evaluation workflows that operated in eyes-off, GDPR-compliant environments.
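The core idea of an evaluation-first pipeline is simple: a model's release is gated on a disaggregated metric, not just an aggregate one. A minimal sketch, with made-up numbers and a hypothetical threshold (not Axon's actual criteria), might gate an ASR model on the spread of word error rate (WER) across locales:

```python
# Hypothetical pre-ship evaluation gate: compare a model's word error
# rate (WER) across locales and block release if the gap between the
# best- and worst-served group exceeds a threshold.

def evaluation_gate(per_group_wer: dict[str, float], max_gap: float = 0.05) -> bool:
    """Return True (ship) if the WER spread across groups is within max_gap."""
    spread = max(per_group_wer.values()) - min(per_group_wer.values())
    return spread <= max_gap

# Illustrative numbers only -- not real evaluation results.
results = {"locale_a": 0.12, "locale_b": 0.14, "locale_c": 0.19}
print("ship" if evaluation_gate(results) else "block")  # spread 0.07 > 0.05 -> "block"
```

The point of the gate is that it runs automatically in CI, so no model reaches production without the disaggregated check having passed.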

Governance That Scales

Designing review processes and tooling that accelerate delivery rather than slow it down. The responsible innovation platform at Axon was built to make doing the right thing the path of least resistance for engineering teams.
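One way "path of least resistance" shows up in tooling is a machine-checkable intake gate: a use case is cleared only when every required review area has a recorded approval. This is an illustrative sketch, not Axon's actual process; the review areas and function names are assumptions:

```python
# Hypothetical intake check for an AI use-case review platform: a
# use case is cleared for production only when each required review
# area has an approval recorded. Area names are illustrative.

REQUIRED_REVIEWS = ("privacy", "fairness", "security")

def cleared_for_production(approvals: dict[str, bool]) -> bool:
    """True only if every required review area has been approved."""
    return all(approvals.get(area, False) for area in REQUIRED_REVIEWS)

print(cleared_for_production({"privacy": True, "fairness": True, "security": True}))  # True
print(cleared_for_production({"privacy": True, "fairness": False}))                   # False
```

Because the check is explicit and automated, engineering teams get a clear, fast answer about what is missing instead of an opaque review queue.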

Trust Through Transparency

Making AI systems explainable and auditable. At ServiceNow, this included research into robustness, transparency, and data governance for enterprise language models.
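Auditability in practice often starts with something unglamorous: recording every model decision alongside its inputs and model version so it can be reviewed later. A minimal sketch, with illustrative field names (an assumption, not a description of ServiceNow's systems):

```python
# Minimal audit-trail sketch: each prediction is recorded with its
# inputs, model version, and a UTC timestamp for later review.
# Field names and the example record are illustrative assumptions.
import datetime
import json

def log_prediction(record_store: list, model_version: str, inputs: dict, output) -> None:
    """Append an auditable record of one model decision."""
    record_store.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    })

audit_log: list = []
log_prediction(audit_log, "v1.2.0", {"ticket_id": 42, "text": "reset my password"}, "route:IT")
print(json.dumps(audit_log[0], indent=2))
```

With decisions captured this way, questions like "which model version produced this outcome, and from what input?" have a concrete answer, which is the foundation the explainability work builds on.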