Privacy-Preserving ML

Federated learning, differential privacy, and adversarial robustness for enterprise data.

My Approach

Privacy-preserving ML is about building systems that learn from sensitive data without exposing it. I led this work at ServiceNow Research, where I took federated learning from concept to the company's first production model. I also developed adversarial attacks to stress-test the privacy of enterprise language models, because you cannot claim privacy without trying to break it.

What This Looks Like in Practice

Federated Learning

Training models across distributed data sources without centralizing sensitive information. At ServiceNow, I worked with stakeholders to identify the best use cases, trained the first federated model, and designed experiments to evaluate federated enterprise language models.
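At the core of most federated setups is federated averaging: each client trains locally, and only parameter updates are combined centrally, weighted by how much data each client holds. The raw data never moves. A minimal sketch of that aggregation step (the function and toy values are illustrative, not ServiceNow's implementation):

```python
import numpy as np

def fed_avg(client_params, client_sizes):
    """FedAvg aggregation: average client parameters, weighted by
    each client's local dataset size. Only parameters are shared;
    the underlying records stay on the client."""
    coeffs = np.array(client_sizes) / sum(client_sizes)
    stacked = np.stack(client_params)
    return (coeffs[:, None] * stacked).sum(axis=0)

# Three clients with unequal amounts of local data (toy values).
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 20, 70]
global_params = fed_avg(clients, sizes)  # dominated by the 70-example client
```

In practice this step sits inside a loop: broadcast the global parameters, let clients train locally for a few epochs, then re-aggregate.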

Adversarial Privacy Evaluation

Developing adversarial attacks to evaluate whether language models leak private training data. This is the only honest way to assess privacy: assume the worst and test for it.
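One of the simplest attacks in this family is loss-based membership inference: because models tend to fit their training data more closely than unseen data, a low per-example loss is evidence that an example was in the training set. A toy sketch of the idea, with hypothetical loss values (not an attack I ran at ServiceNow):

```python
import numpy as np

def loss_threshold_attack(losses, threshold):
    """Membership inference via loss thresholding: predict 'member' (1)
    when a sample's loss falls below the threshold, since training
    examples typically incur lower loss than unseen ones."""
    return (np.asarray(losses) < threshold).astype(int)

# Hypothetical per-example losses for illustration.
member_losses = np.array([0.1, 0.3, 0.2])      # seen during training
nonmember_losses = np.array([1.2, 0.9, 1.5])   # held out
all_losses = np.concatenate([member_losses, nonmember_losses])
preds = loss_threshold_attack(all_losses, threshold=0.5)
labels = np.array([1, 1, 1, 0, 0, 0])
attack_accuracy = (preds == labels).mean()
```

If an attacker can beat chance accuracy, the model is leaking information about its training set; that measured leakage is what the privacy claim has to survive.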

Privacy-Preserving Evaluation Pipelines

At Axon, I built ALPR evaluation workflows that operated in eyes-off, GDPR-compliant environments, demonstrating that model performance can be evaluated rigorously without any engineer ever viewing the underlying data.
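The core design pattern behind eyes-off evaluation is that metrics are computed inside the restricted environment and only aggregates cross the boundary, never individual records. A minimal sketch of that contract (function name, minimum-batch rule, and values are illustrative assumptions, not Axon's pipeline):

```python
def eyes_off_accuracy(predictions, ground_truth, min_batch=100):
    """Compute accuracy inside the eyes-off boundary and release only
    the aggregate. Individual records never leave this function, and
    small batches are refused so the aggregate can't pinpoint any
    single record."""
    if len(predictions) != len(ground_truth):
        raise ValueError("prediction/label length mismatch")
    if len(predictions) < min_batch:
        raise ValueError("batch too small for aggregate-only release")
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return round(correct / len(predictions), 3)

# Toy example: 100 plate reads, 90 correct.
preds = ["ABC123"] * 90 + ["XXX000"] * 10
labels = ["ABC123"] * 100
score = eyes_off_accuracy(preds, labels)
```

Everything the human evaluator sees is the returned scalar; the predictions and labels themselves stay inside the compliant environment.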