
Lantern: An Interactive Adversarial AI Workbench

by Steven R Young, Joel R Brogan, Colin A Smith, Edmon Begoli, Amir Sadovnik
Publication Type
Journal
Journal Name
Cybersecurity and Information Systems Information Analysis Center Journal
Publication Date
Page Numbers
47–54
Volume
9
Issue
1

This article presents Lantern, an interactive adversarial artificial intelligence (AI) workbench. Lantern allows users to rapidly and interactively evaluate adversarial attacks against AI systems in order to better protect them. While there has been extensive work in assessing the robustness of AI models against attacks at large scale using dataset-level performance evaluations, there is a lack of tools that let nonexpert users design one-off attacks for real-world scenarios. Existing tools help compare algorithms at a general level, but they are not optimized for one-off attacks, which can be tailored to specific scenarios and may therefore be far more effective. The framework presented here is designed to be compatible with a wide variety of models and attacks, leveraging prior work that enables modularity between models and attacks across machine-learning (ML) frameworks. This tool will allow the adversarial AI assurance community to evaluate its efforts beyond automated generic attacks and to better understand the threat posed by real-world attacks that can be individually crafted in a rapid, interactive manner.