Computational methods for optimal adversarial learning

Faculty Mentor: Ryan Murray

Prerequisites: Probability, basic programming, differential equations.


Outline: Recent advances in machine learning algorithms have been driven by the use of adversarial training. In this context, a learning algorithm, such as a classifier generated using neural networks, is pitted against a hypothetical adversary that is allowed to alter the data. Training against such an adversary has led to significant improvements in generalizability. However, including the adversary in the training procedure is computationally expensive. Furthermore, it is not clear how to select the adversarial strength in order to improve generalization without losing too much accuracy.
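To make the training loop concrete, here is a minimal sketch (in Python, with hypothetical toy data and parameter choices of our own, not the project's finished method) of adversarial training for a linear logistic classifier against an l-infinity adversary of strength eps. For a linear model the adversary's worst-case perturbation is available in closed form, so no inner optimization loop is needed.

    # Hypothetical sketch: adversarial training of a linear logistic
    # classifier. The adversary may move each input within an l-infinity
    # ball of radius eps; for a linear model the worst-case perturbation
    # is -eps * sign(s * w) per sample (an FGSM-type closed form).
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))                  # toy features
    y = (X[:, 0] + X[:, 1] > 0).astype(float)      # toy labels in {0, 1}
    s = 2 * y - 1                                  # signed labels in {-1, +1}

    w, b, eps, lr = np.zeros(2), 0.0, 0.1, 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(500):
        # Each coordinate moves by eps against the true-class margin.
        X_adv = X - eps * np.sign(np.outer(s, w))
        p = sigmoid(X_adv @ w + b)
        w -= lr * X_adv.T @ (p - y) / len(y)       # logistic-loss gradient step
        b -= lr * np.mean(p - y)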


Research Objectives: This project will study computational methods for optimal adversarial classification. In particular, we will investigate the use of (i) differential equations methods and (ii) convex optimization methods to construct families of adversarially optimal classifiers. We will also consider theoretical questions about the global optimality of the solutions we find, as well as their extension to discrete data sets.
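As one illustration of the convex optimization route, the sketch below (a simplified setting of our own choosing) trains a linear classifier that is exactly robust to an l-infinity adversary of strength eps. In this setting the inner maximization has a closed form and the resulting problem is convex in (w, b), so it can be handed to an off-the-shelf solver such as cvxpy; calling robust_classifier over a grid of eps values then produces a family of adversarially optimal linear classifiers.

    # Hypothetical sketch: exact adversarial training of a linear classifier
    # via convex optimization. With labels y in {-1, +1} and an l-infinity
    # adversary of strength eps, the worst-case logistic loss reduces to
    # log(1 + exp(-(y * (w @ x + b) - eps * ||w||_1))), convex in (w, b).
    import cvxpy as cp
    import numpy as np

    def robust_classifier(X, y, eps):
        w = cp.Variable(X.shape[1])
        b = cp.Variable()
        worst_case_margins = cp.multiply(y, X @ w + b) - eps * cp.norm(w, 1)
        loss = cp.sum(cp.logistic(-worst_case_margins))
        cp.Problem(cp.Minimize(loss)).solve()
        return w.value, b.value

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)  # labels in {-1, +1}
    w, b = robust_classifier(X, y, eps=0.1)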


Outcomes: We’ll develop shareable code that constructs minimizers for optimal adversarial learning problems across a range of adversarial strengths. We’ll use this code to explore the tradeoff between adversarial robustness and accuracy, as well as theoretical questions about global optimality and empirical versions of our methods.
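As a hypothetical illustration of that tradeoff exploration: for a linear classifier, robust accuracy under an l-infinity attack of size eps can be evaluated exactly, since a point survives every such attack precisely when its margin exceeds eps times the l1 norm of w. The weights below are stand-ins; in practice one would retrain (e.g., with a routine like robust_classifier above) at each eps and tabulate both accuracies.

    # Illustrative sketch of the eps sweep. For a linear classifier, a point
    # is robust to every l-infinity perturbation of size eps exactly when
    # y * (w @ x + b) > eps * ||w||_1, so robust accuracy needs no attack.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2))
    y = np.sign(X[:, 0] + X[:, 1] + 0.3 * rng.normal(size=500))

    w, b = np.array([1.0, 1.0]), 0.0               # stand-in trained model
    margins = y * (X @ w + b)

    for eps in [0.0, 0.05, 0.1, 0.2, 0.4]:
        clean = np.mean(margins > 0)
        robust = np.mean(margins > eps * np.abs(w).sum())
        print(f"eps={eps:.2f}  clean acc={clean:.3f}  robust acc={robust:.3f}")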

