Konstantinos Mavrogiorgos, Shlomit Gur, et al.
DCOSS-IoT 2025
Model extraction is one of the most prominent adversarial attacks against machine learning models, alongside membership inference and model inversion attacks. Explainable Artificial Intelligence (XAI), in turn, is a set of techniques and procedures for explaining the decision-making process behind AI. XAI is a valuable tool for understanding the reasoning of AI models, but the information disclosed for that purpose introduces security and privacy vulnerabilities. In this poster, we propose AUTOLYCUS, a model extraction attack that exploits the explanations provided by LIME to infer the decision boundaries of decision tree models and to build extracted surrogate models that behave similarly to the target model.
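As a rough illustration of the idea in this abstract, the sketch below (a simplification under assumed details, not the authors' implementation) queries a black-box decision tree, uses LIME explanations to find the locally most influential feature, perturbs that feature to probe near the inferred decision boundary, and fits a surrogate tree on the collected query/label pairs. It assumes scikit-learn and the `lime` package; the dataset, perturbation step, and query budget are hypothetical choices for demonstration only.

```python
# Illustrative sketch of explanation-guided model extraction (not the AUTOLYCUS code).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=500, n_features=5, random_state=0)
target = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)  # black-box target

explainer = LimeTabularExplainer(X, discretize_continuous=False)
queries, labels = [], []
for x in X[:50]:  # attacker's limited seed queries (hypothetical budget)
    exp = explainer.explain_instance(x, target.predict_proba, num_features=5)
    top_feature, _ = max(exp.as_map()[1], key=lambda fw: abs(fw[1]))  # most influential feature
    for delta in (-0.5, 0.5):  # perturb it to probe near the local decision boundary
        x_new = x.copy()
        x_new[top_feature] += delta
        queries.append(x_new)
        labels.append(target.predict(x_new.reshape(1, -1))[0])

surrogate = DecisionTreeClassifier(max_depth=4, random_state=0).fit(np.array(queries), labels)
fidelity = (surrogate.predict(X) == target.predict(X)).mean()  # agreement with the target
print(f"surrogate fidelity: {fidelity:.2%}")
```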
Sahil Suneja, Yufan Zhuang, et al.
EuroS&P 2023
Kohei Miyaguchi, Masao Joko, et al.
ASMC 2025
Mateo Espinosa Zarlenga, Gabriele Dominici, et al.
ICML 2025