AI Testing
We’re designing tools to help ensure that AI systems are trustworthy, reliable, and able to optimize business processes. We create tests that simulate real-life scenarios and localize faults in AI systems, and we’re working on automating the testing, debugging, and repair of AI models across a wide range of scenarios.
Our work
What is red teaming for generative AI?
Explainer by Kim Martineau
- Adversarial Robustness and Privacy
- AI
- AI Testing
- Fairness, Accountability, Transparency
- Foundation Models
- Natural Language Processing
- Security
- Trustworthy AI
An open-source toolkit for debugging AI models of all data types
Technical note by Kevin Eykholt and Taesung Lee
- Adversarial Robustness and Privacy
- AI Testing
- Data and AI Security
AI diffusion models can be tricked into generating manipulated images
News by Kim Martineau
- AI
- AI Testing
- Data and AI Security
- Foundation Models
- Generative AI
- Security
DOFramework: A testing framework for decision optimization model learners
Technical note by Orit Davidovich
- AI
- AI Testing
- Mathematical Sciences
Managing the risk in AI: Spotting the “unknown unknowns”
Research by Orna Raz, Sam Ackerman, and Marcel Zalmanovici (5 minute read)
- AI
- AI Testing
IBM researchers check AI bias with counterfactual text
Research by Inkit Padhi, Nishtha Madaan, Naveen Panwar, and Diptikalyan Saha (5 minute read)
- AI Testing
- Fairness, Accountability, Transparency
Publications
- Yu-Lin Tsai, Chia-yi Hsu, et al. (2024). ICLR 2024.
- Turgay Caglar, Sirine Belhaj, et al. (2024). AAAI 2024.
- David Mayo, Jesse Cummings, et al. (2023). NeurIPS 2023.
- Kevin Eykholt, Taesung Lee, et al. (2023). USENIX Security 2023.
- Nishtha Madaan, Adithya Manjunatha, et al. (2023). IAAI 2023.
- Frank R. Libsch, Hiroyuki Mori (2023). ECTC 2023.