ServiceNow


Jan 2021 - Jul 2022

Led applied research in federated learning and headed the AI Trust & Governance Lab. Joined via the Element AI acquisition.

At a Glance

Domain

Enterprise AI / IT Service Management

Team

5-8 researchers and engineers

Reports to

Director of Research

Location

Montreal, Canada

Role Progression

Lead, AI Trust Lab

May 2022 - Jul 2022

Staff Applied Research Scientist

Jan 2021 - May 2022

The Context

I joined Element AI's Decision Support team in December 2020. A month later, ServiceNow acquired Element AI, and I continued my journey as part of ServiceNow Research. The team's mandate was clear: make enterprise AI trustworthy. That meant privacy, robustness, transparency, and data governance, not as buzzwords, but as engineering disciplines.

I led the applied research effort in Privacy-Preserving Machine Learning, with a special focus on federated learning.

What I Built

ServiceNow's First Federated Learning Model

Federated learning was a new concept at ServiceNow. Enterprise customers had sensitive data they could not share, but they still wanted the benefits of ML models trained on diverse datasets. I worked with stakeholders across the company to identify the best use cases, designed the training infrastructure, and trained ServiceNow's first federated learning model.

First

Trained the first Federated Learning model at ServiceNow, proving the viability of privacy-preserving ML for enterprise customers.

I also designed experiments to evaluate the performance of federated enterprise language models, measuring how well models trained across distributed data sources compared to centrally trained baselines.
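The core training loop behind this kind of setup can be sketched with federated averaging (FedAvg). This is a minimal, illustrative example, not ServiceNow's actual infrastructure: each simulated client trains locally on data it never shares, and the server only averages the returned weights.

```python
import numpy as np

# Minimal FedAvg sketch (illustrative only, not the production pipeline).
# Each client holds private data; only model weights leave the client.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # hypothetical ground-truth linear model

def make_client_data(n):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client_data(50) for _ in range(4)]

def local_train(w, X, y, lr=0.1, epochs=5):
    # Full-batch gradient descent on squared error, run locally.
    w = w.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Server loop: broadcast weights, collect local updates, average them
# weighted by client dataset size.
w = np.zeros(2)
for _ in range(20):
    updates = [local_train(w, X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    w = np.average(updates, axis=0, weights=sizes)

print(np.round(w, 2))  # should approach true_w
```

The centrally trained baseline mentioned above would simply pool all client data and run the same gradient descent; comparing the two models on a held-out set is the experiment in miniature.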

Adversarial Privacy Evaluation

You cannot claim your models are private without trying to break them. I developed adversarial attacks specifically designed to test whether enterprise language models leaked private training data. This was not a theoretical exercise. We ran these attacks against real models to understand their vulnerability.
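One family of such attacks is membership inference. The sketch below is illustrative (the actual attacks used are not described here): a model that overfits its training data assigns lower loss to training examples than to unseen ones, so an attacker can threshold per-example loss to guess who was in the training set.

```python
import numpy as np

# Illustrative membership-inference sketch (not the actual attacks used).
rng = np.random.default_rng(1)

d, n = 20, 15  # more parameters than samples -> the model memorizes

def fit_overfit_model(X, y):
    # Near-interpolating ridge regression (tiny regularization).
    lam = 1e-6
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

X_train = rng.normal(size=(n, d))
y_train = rng.normal(size=n)
w = fit_overfit_model(X_train, y_train)

X_out = rng.normal(size=(n, d))  # examples the model never saw
y_out = rng.normal(size=n)

loss_in = (X_train @ w - y_train) ** 2
loss_out = (X_out @ w - y_out) ** 2

# Attack: guess "member" whenever the loss falls below a threshold.
threshold = np.median(np.concatenate([loss_in, loss_out]))
tpr = np.mean(loss_in < threshold)   # members correctly flagged
fpr = np.mean(loss_out < threshold)  # non-members wrongly flagged
print(f"TPR={tpr:.2f} FPR={fpr:.2f}")
```

A large gap between the true-positive and false-positive rates is the leakage signal: the model's loss alone reveals who was in the training set.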

The only honest way to assess privacy is to assume the worst and test for it.

This work informed how ServiceNow thought about privacy guarantees for its AI products and contributed to the broader trustworthy AI strategy.

The Trust & Governance Lab

I was promoted to Lead of the AI Trust Lab, a team of research scientists and engineers working on a broad range of topics: Privacy-Preserving Machine Learning, Robustness, Transparency, and Data Governance. The lab's mission was to establish ServiceNow as a leader in Trustworthy Enterprise AI.

What I Learned

The Element AI acquisition was my first experience of a company being absorbed into a larger organization. Navigating that transition, keeping the team focused, and finding our place within ServiceNow's research org taught me a lot about organizational dynamics and how to protect a team's mission through uncertainty.

On the technical side, federated learning taught me that the gap between research and production is enormous. The algorithms work in papers. Making them work across real enterprise environments with heterogeneous data, unreliable networks, and strict compliance requirements is a different problem entirely.

Tech & Tools

AI/ML

Federated Learning, NLP, Privacy-Preserving ML, Language Models

Languages

Python, PyTorch

Infrastructure

AWS, Kubernetes

Practices

Trustworthy AI, Data Governance, Adversarial Attacks

Project Deep Dive

Deep dives for this experience coming soon.