Data and AI Security
As organizations move to the hybrid cloud, they must protect sensitive data and comply with regulations while taking advantage of AI. We’re designing systems that monitor and protect data, and building trust in AI through robust evaluation, certification, and hardening against attacks.
Our work
Tools + code
Diffprivlib: The IBM Differential Privacy Library
A general-purpose library for experimenting with, investigating, and developing applications in differential privacy.
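The core primitive such a library builds on is noise calibrated to a query's sensitivity. Below is a minimal, stdlib-only sketch of a Laplace-mechanism private mean — not diffprivlib's actual API; the clipping bounds and dataset are invented for illustration:

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values, epsilon, lower, upper):
    """Differentially private mean of values clipped to [lower, upper].

    The sensitivity of the mean of n bounded values is (upper - lower) / n,
    so Laplace noise with scale sensitivity / epsilon gives epsilon-DP.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / n
    sensitivity = (upper - lower) / n
    return true_mean + laplace_noise(sensitivity / epsilon)

ages = [23, 35, 41, 29, 52, 38, 44, 31]
print(dp_mean(ages, epsilon=1.0, lower=0, upper=100))
```

Smaller epsilon means more noise and stronger privacy; diffprivlib packages the same trade-off behind scikit-learn-style estimators.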
AI Privacy Toolkit
A toolkit providing tools and techniques for the privacy and compliance of AI models.
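One staple technique in this space is anonymizing training data by generalizing quasi-identifiers until every combination appears at least k times (k-anonymity). A small stdlib sketch — the record fields and binning rules are invented for illustration, not the toolkit's API:

```python
from collections import Counter

def generalize(record):
    """Generalize quasi-identifiers: bin age to a decade, truncate the ZIP code."""
    age, zip_code = record
    decade = (age // 10) * 10
    return (f"{decade}-{decade + 9}", zip_code[:3] + "**")

def is_k_anonymous(records, k):
    """True if every generalized quasi-identifier group contains >= k records."""
    counts = Counter(generalize(r) for r in records)
    return all(c >= k for c in counts.values())

records = [(23, "10001"), (27, "10002"), (25, "10003"),
           (41, "94105"), (44, "94107"), (48, "94110")]
print(is_k_anonymous(records, k=3))  # every (decade, ZIP-prefix) group has 3 records
```

If a group falls below k, the data would need coarser generalization or suppression before release.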
IBM Fully Homomorphic Encryption Toolkits
Toolkits for macOS, iOS, Android, and Linux, based on IBM’s HElib, that make computation on fully encrypted data possible. Each toolkit includes sample programs and IDE integration, making it easier to write FHE-based code.
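Real FHE schemes like those in HElib are far more involved, but the core idea — computing on data without decrypting it — can be shown with a toy Paillier-style scheme, which is additively (not fully) homomorphic: multiplying ciphertexts adds the plaintexts. The tiny hard-coded primes make this insecure and purely illustrative:

```python
import math
import random

def lcm(a, b):
    return a * b // math.gcd(a, b)

# Toy Paillier keypair with tiny primes (NOT secure; illustration only).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # requires Python 3.8+ for modular inverse

def encrypt(m):
    """Enc(m) = g^m * r^n mod n^2 for random r coprime to n."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Dec(c) = L(c^lam mod n^2) * mu mod n."""
    return (L(pow(c, lam, n2)) * mu) % n

ca, cb = encrypt(17), encrypt(25)
# Multiplying ciphertexts adds the underlying plaintexts: Dec(ca * cb) == 17 + 25.
print(decrypt((ca * cb) % n2))
```

Fully homomorphic schemes additionally support multiplication of plaintexts under encryption, which is what HElib provides.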
ART: Adversarial Robustness Toolbox
A Python library for machine learning security that enables developers and researchers to defend and evaluate machine learning models and applications against the adversarial threats of evasion, poisoning, extraction, and inference.
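Evasion, the first of those threats, is well illustrated by the fast gradient sign method (FGSM), one of the classic attacks ART implements. A stdlib sketch against a hand-built logistic regression model — the weights and input are invented for illustration, and this is not ART's API:

```python
import math

# Hand-set logistic regression: p(y=1 | x) = sigmoid(w . x + b) (illustrative weights).
w, b = [2.0, -1.5], 0.0

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_proba(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, y, eps):
    """Fast gradient sign method: x' = x + eps * sign(d loss / d x).

    For the logistic loss, d loss / d x_i = (p - y) * w_i.
    """
    p = predict_proba(x)
    return [xi + eps * math.copysign(1.0, (p - y) * wi) for xi, wi in zip(x, w)]

x, y = [1.0, 1.0], 1           # clean input, true label 1
x_adv = fgsm(x, y, eps=0.2)    # small perturbation crafted to raise the loss
print(predict_proba(x))        # > 0.5: classified as 1
print(predict_proba(x_adv))    # < 0.5: the attack flips the prediction
```

A perturbation of only 0.2 per feature is enough to flip this model's decision, which is why robustness evaluation tools like ART matter.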
IBM Federated Learning
Community edition of IBM Federated Learning, a Python framework for federated learning in an enterprise environment.
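The aggregation step at the heart of federated learning is federated averaging (FedAvg): the server combines locally trained models weighted by each client's dataset size, so raw data never leaves the clients. A minimal stdlib sketch — the parameter vectors and sizes are invented, and this is not the framework's API:

```python
def fed_avg(client_weights, client_sizes):
    """Federated averaging: weight each client's parameters by its sample count."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(wts[i] * size for wts, size in zip(client_weights, client_sizes)) / total
        for i in range(n_params)
    ]

# Three clients with locally trained parameter vectors and local dataset sizes.
clients = [[0.2, 1.0], [0.4, 0.8], [0.1, 1.2]]
sizes = [100, 300, 100]
print(fed_avg(clients, sizes))  # the size-weighted average of the client models
```

In a real deployment this round repeats: the averaged model is sent back to clients, retrained locally, and aggregated again.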
Publications
Omid Aramoon, Gang Qu, et al. 2021. ICLR 2021.
Shiqi Wang, Kevin Eykholt, et al. 2021. AAAI 2021.
Syed Zawad, Ahsan Ali, et al. 2021. AAAI 2021.
Annie Abay, Ebube Chuba, et al. 2021. AAAI 2021.
Runhua Xu, James Joshi, et al. 2021. IEEE TDSC.
Xing Gao, Benjamin Steenkamer, et al. 2021. IEEE TDSC.