Publication
ESEC/FSE 2021
Conference paper

Probing Model Signal-Awareness via Prediction-Preserving Input Minimization

Abstract

This work explores the signal awareness of AI models for source code understanding. Using a software vulnerability detection use case, we evaluate the models’ ability to capture the correct vulnerability signals to produce their predictions. Our prediction-preserving input minimization (P2IM) approach systematically reduces the original source code to the minimal snippet a model needs to maintain its prediction. The model’s reliance on incorrect signals is uncovered when the vulnerability in the original code is missing from the minimal snippet, yet the model predicts both as vulnerable. We measure the signal awareness of models using a new metric we propose: Signal-aware Recall (SAR). SAR’s purpose is to capture how well a model learns the real signals, not to suggest a shortcoming in the Recall metric. The expectation, in fact, is for SAR to match Recall in the ideal scenario where the model truly captures task-specific signals. We apply P2IM to three different neural network architectures across multiple datasets. The results show a sharp drop in the models’ Recall from the high 90s to sub-60s under the new metric, highlighting that the models are presumably picking up a lot of noise or dataset nuances while learning their vulnerability detection logic.
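The core of P2IM, as the abstract describes it, is to repeatedly shrink a code sample while the model's prediction stays the same. The following is a minimal sketch of that idea, not the paper's implementation: it uses a greedy line-removal loop instead of the paper's actual reduction strategy, and `model_predict` is a hypothetical black-box classifier supplied by the caller.

```python
def minimize(lines, model_predict):
    """Greedily drop lines while the model's prediction is unchanged.

    lines: list of source-code lines (the sample under test)
    model_predict: hypothetical black-box function mapping a list of
        lines to a predicted label (e.g. "vulnerable" / "safe")

    Returns a (locally) minimal snippet yielding the same prediction.
    """
    target = model_predict(lines)
    changed = True
    while changed:
        changed = False
        for i in range(len(lines)):
            candidate = lines[:i] + lines[i + 1:]
            # Keep the removal only if the snippet stays non-empty
            # and the model still predicts the original label.
            if candidate and model_predict(candidate) == target:
                lines = candidate
                changed = True
                break
    return lines
```

If the minimal snippet no longer contains the actual vulnerability, the model's original prediction evidently rested on spurious signals; SAR counts a true positive as signal-aware only when the real signal survives minimization.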
