Privacy-Preserving ML
Federated learning, differential privacy, and adversarial robustness for enterprise data.
My Approach
Privacy-preserving ML is about building systems that learn from sensitive data without exposing it. I led this work at ServiceNow Research, where I took federated learning from concept to the company's first production model. I also developed adversarial attacks to stress-test the privacy of enterprise language models, because you cannot claim privacy without trying to break it.
What This Looks Like in Practice
Federated Learning
Training models across distributed data sources without centralizing sensitive information. At ServiceNow, I worked with stakeholders to identify the use cases where federation adds real value, trained the first federated model, and designed experiments to evaluate federated enterprise language models.
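The core idea is small enough to sketch. Below is a minimal federated averaging (FedAvg-style) loop on a toy linear model, with hypothetical names and synthetic data; it is an illustration of the pattern, not the production system. Each client takes a local gradient step on its own data, and the server averages only the resulting weights, so raw records never leave a client.

```python
import numpy as np

def local_step(weights, X, y, lr=0.1):
    """One gradient step of linear least squares on a client's private data."""
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def fed_avg(client_data, rounds=100, dim=3):
    """Federated averaging: clients train locally, server averages weights."""
    weights = np.zeros(dim)
    for _ in range(rounds):
        # Each client updates a copy of the current global model locally...
        updates = [local_step(weights, X, y) for X, y in client_data]
        # ...and the server averages the weights; raw data is never shared.
        weights = np.mean(updates, axis=0)
    return weights

# Two clients holding private datasets drawn from the same true model.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(2):
    X = rng.normal(size=(100, 3))
    clients.append((X, X @ true_w))

w = fed_avg(clients)  # converges toward true_w without pooling any data
```

In practice the averaging is weighted by client dataset size and clients run several local epochs per round, but the privacy property is the same: only model updates cross the network.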
Adversarial Privacy Evaluation
Developing adversarial attacks to evaluate whether language models leak private training data. This is the only honest way to assess privacy: assume the worst and test for it.
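As a flavor of what such an attack looks like, here is a classic loss-threshold membership inference sketch on a deliberately overfit toy model (illustrative only, not the attack used at ServiceNow): examples the model fits unusually well are flagged as likely training members, and the gap between the true and false positive rates measures leakage.

```python
import numpy as np

def model_loss(weights, X, y):
    """Per-example squared error of a linear model."""
    return (X @ weights - y) ** 2

rng = np.random.default_rng(1)
true_w = rng.normal(size=15)

# Members (training set) and non-members drawn from the same distribution.
# Few examples relative to features forces the model to memorize noise.
X_train = rng.normal(size=(20, 15))
y_train = X_train @ true_w + rng.normal(scale=1.0, size=20)
X_out = rng.normal(size=(200, 15))
y_out = X_out @ true_w + rng.normal(scale=1.0, size=200)

# Fit by least squares; the overfit model has suspiciously low member loss.
w = np.linalg.lstsq(X_train, y_train, rcond=None)[0]

# Attack: guess "member" whenever the loss falls below a threshold.
all_losses = np.concatenate([model_loss(w, X_train, y_train),
                             model_loss(w, X_out, y_out)])
threshold = np.median(all_losses)
tpr = np.mean(model_loss(w, X_train, y_train) < threshold)
fpr = np.mean(model_loss(w, X_out, y_out) < threshold)
advantage = tpr - fpr  # > 0 means the model leaks membership information
```

A model that leaked nothing would give the attacker zero advantage; a positive gap is concrete, measurable evidence of memorization.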
Privacy-Preserving Evaluation Pipelines
At Axon, I built automated license plate recognition (ALPR) evaluation workflows that operated in eyes-off, GDPR-compliant environments, demonstrating that you can rigorously evaluate model performance without ever seeing the underlying data.
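The structural idea behind an eyes-off pipeline can be sketched in a few lines (hypothetical function names and synthetic stand-in data, not the Axon implementation): predictions and ground truth stay inside the restricted environment, and only aggregate metrics cross the boundary, with a minimum batch size so no single record can be inferred from a released statistic.

```python
def eyes_off_accuracy(predictions, ground_truth, min_batch=100):
    """Return only an aggregate exact-match rate, refusing batches so small
    that an individual record could be inferred from the statistic."""
    if len(ground_truth) < min_batch:
        raise ValueError("batch too small to release an aggregate safely")
    matches = sum(p == g for p, g in zip(predictions, ground_truth))
    return matches / len(ground_truth)

# Inside the eyes-off boundary: synthetic stand-ins for real plate reads.
gt = ["ABC123"] * 90 + ["XYZ789"] * 60
preds = ["ABC123"] * 85 + ["AB123"] * 5 + ["XYZ789"] * 60

rate = eyes_off_accuracy(preds, gt)  # only this number leaves the enclave
```

Real deployments layer on access controls, audit logging, and often noise on the released aggregates, but the invariant is the same: analysts see metrics, never records.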
Where I've Done This
ServiceNow
Jan 2021 - Jul 2022
Led applied research in federated learning. Trained ServiceNow's first federated model. Developed adversarial attacks to evaluate the privacy of enterprise language models.
Axon
Jul 2022 - Sep 2024
Built privacy-preserving ALPR evaluation workflows for eyes-off, GDPR-compliant environments.