Neural Network Verification

Tutorial Speaker

M. Pawan Kumar is an Associate Professor in the Department of Engineering Science at the University of Oxford, a Tutorial Fellow at Lady Margaret Hall, and a Fellow of The Alan Turing Institute. He is also a principal researcher in the OVAL group, where he works on optimization algorithms for problems that arise in computer vision and machine learning.

Abstract

Since 2012, deep neural networks have been successfully employed to improve the performance of several tasks in computer vision, natural language processing and other related areas of machine learning. This has resulted in the launch of several ambitious projects where humans will be replaced by neural networks. Such projects include safety-critical applications such as autonomous navigation and personalised medicine. Given the high risk of a wrong decision in such applications, a key step in the deployment of neural networks is their formal verification: proving that a neural network satisfies a desirable property, or generating a counter-example to show that it does not. This tutorial will summarise the progress made in neural network verification thus far.
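To make the notion of a "desirable property" concrete, the sketch below checks a standard robustness property with interval bound propagation, one common incomplete verification technique: an input box around a nominal point is pushed through a small ReLU network, and the property is certified if the lower bound on the true logit dominates the upper bounds on all others. The toy network, epsilon ball and weights are illustrative assumptions, not material from the tutorial.

```python
import numpy as np

def ibp_bounds(lower, upper, weights, biases):
    """Propagate elementwise lower/upper input bounds through affine + ReLU layers."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        center = (lower + upper) / 2.0
        radius = (upper - lower) / 2.0
        # An affine map sends a box to a box: |W| bounds the radius growth.
        new_center = W @ center + b
        new_radius = np.abs(W) @ radius
        lower, upper = new_center - new_radius, new_center + new_radius
        if i < len(weights) - 1:  # ReLU on hidden layers only
            lower, upper = np.maximum(lower, 0.0), np.maximum(upper, 0.0)
    return lower, upper

# Toy two-layer network: 2 inputs, 4 hidden units, 2 output logits.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 2)), rng.standard_normal((2, 4))]
biases = [np.zeros(4), np.zeros(2)]

x0, eps, true_class = np.array([0.5, -0.3]), 0.05, 0
lo, up = ibp_bounds(x0 - eps, x0 + eps, weights, biases)

# The property "class 0 is predicted for every input in the box" is
# certified if logit 0's lower bound exceeds every other logit's upper bound.
verified = all(lo[true_class] > up[j] for j in range(2) if j != true_class)
print("verified:", verified)
```

If the check fails, the property may still hold: the bounds are sound but loose, which is exactly the gap between the approximate and exact techniques the tutorial covers.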

The contents of the tutorial are divided into three parts. In the first part, we will formally describe a neural network and take a brief look at how its parameters are estimated using a training data set. This will allow us to establish the computational difficulty of neural network verification, as well as its practical importance. In the second part, we will look at approximate verification techniques that can be used in conjunction with the parameter estimation algorithms to improve the robustness of neural networks. While such techniques cannot be guaranteed to formally prove a property, in practice they have been shown to result in neural networks that admit an efficient formal verification algorithm. In the third part, we will describe a unified framework for all the exact techniques for formal verification that have been proposed in the literature. The framework will allow us to identify the strengths and weaknesses of each approach, which in turn will highlight interesting directions for future research. Time permitting, we will also look at some recent work on statistical verification, that is, computing the probability of a property being satisfied given a distribution on the input of the neural network.
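The statistical-verification idea mentioned above can be illustrated with a plain Monte Carlo estimate: draw inputs from the assumed distribution, evaluate the network, and report the fraction of samples on which the property holds. The Gaussian input distribution, the property "class 0 is predicted" and the tiny network below are illustrative assumptions for the sketch, not methods prescribed by the tutorial.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer ReLU network: 2 inputs, 4 hidden units, 2 output logits.
W1, b1 = rng.standard_normal((4, 2)), np.zeros(4)
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)

# Assumed input distribution: Gaussian noise around a nominal point x0.
x0, sigma, n_samples = np.array([0.5, -0.3]), 0.1, 10_000
samples = x0 + sigma * rng.standard_normal((n_samples, 2))

# Vectorised forward pass over all samples.
hidden = np.maximum(samples @ W1.T + b1, 0.0)   # (n_samples, 4)
logits = hidden @ W2.T + b2                      # (n_samples, 2)

# Property: the network predicts class 0; estimate its probability.
p_hat = np.mean(np.argmax(logits, axis=1) == 0)
print(f"estimated P(property): {p_hat:.3f}")
```

Unlike exact verification, this yields only a probabilistic estimate whose accuracy improves with the sample count; concentration bounds (e.g. Hoeffding's inequality) can turn it into a high-confidence statement.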