Date of Award

Spring 1-1-2022

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Cyber Operations (PhDCO)

Department

Computer Science

First Advisor

Shengjie Xu

Second Advisor

Austin O'Brien

Third Advisor

Josh Stroschein

Abstract

Machine learning is used in myriad domains, both in academic research and in everyday life, including safety-critical applications such as robotics, cybersecurity products, and medical testing and diagnosis, where a false positive or negative could have catastrophic results. Despite the increasing prevalence of machine learning applications and their role in critical systems we rely on daily, research into the security and robustness of machine learning models is still a relatively young field with many open questions, particularly on the defensive side of adversarial machine learning. Chief among these open questions is how best to quantify a model’s attack surface against adversarial examples. Knowing how a model will behave under attack is essential information for personnel charged with securing critical machine learning applications, and yet research toward such an attack surface metric remains sparse. This dissertation addressed this problem by drawing on prior insights into adversarial example attacks against machine learning models, as well as the properties and shortcomings of various defensive techniques, to formulate a basic definition of a model’s attack surface, one that allows its behavior under adversarial example attack to be generally predicted. The proposed metric was then subjected to a limited validation using six models, three neural networks and three Support Vector Machines (SVMs), and three datasets consisting of random clusters of points in an x,y-coordinate plane. Each model was trained against each dataset to produce versions of the same architecture with different attack surfaces, and these versions were then attacked with adversarial examples generated by Projected Gradient Descent with Line Search (PGDLS), with varying perturbation budgets used to control attack strength. Model performance at each perturbation budget was recorded and analyzed, providing a limited validation of the metric as a predictor of how a given model will behave against adversarial example attacks.
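
The sketch below is not the dissertation's code; it is a minimal illustration, under assumed settings, of the evaluation loop the abstract describes: train a small neural network on random two-dimensional clusters, attack it with projected gradient descent at several perturbation budgets, and record accuracy at each budget. The dataset sizes, network architecture, hyperparameters, and the use of plain fixed-step L-infinity PGD (rather than PGD with Line Search) are all simplifying assumptions for illustration.

import torch
import torch.nn as nn
from sklearn.datasets import make_blobs

# Random clusters of points in the x,y-plane, as described in the abstract
# (sizes and number of clusters are illustrative assumptions).
X, y = make_blobs(n_samples=600, centers=3, n_features=2, random_state=0)
X = torch.tensor(X, dtype=torch.float32)
y = torch.tensor(y, dtype=torch.long)

# Small feed-forward classifier, standing in for one of the model families studied.
model = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 3))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):  # brief full-batch training loop
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

def pgd_attack(x, labels, eps, steps=40):
    """L-infinity PGD with a fixed step size (a simplification of PGDLS)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), labels)
        grad, = torch.autograd.grad(loss, x_adv)
        # Ascend the loss, then project back into the perturbation budget.
        x_adv = x_adv.detach() + (2.5 * eps / steps) * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
    return x_adv.detach()

# Sweep the perturbation budget and record accuracy under attack.
for eps in [0.0, 0.1, 0.25, 0.5, 1.0]:
    x_adv = pgd_attack(X, y, eps) if eps > 0 else X
    acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()
    print(f"eps={eps:.2f}  adversarial accuracy={acc:.3f}")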
