Neurosymbolic approaches to safe machine learning

Date

2023-04-20

Authors

Anderson, Greg (Ph.D., Computer Science)

Abstract

Neural networks have shown immense promise in solving a variety of challenging problems, including computer vision, security, and robotic control. However, these applications often come with substantial risk, and in order to deploy machine learning systems in the real world, we need tools to analyze the behavior of these systems. This presents a problem because neural networks are generally resistant to traditional approaches to program analysis. From a formal analysis perspective, networks are high-dimensional, and existing tools simply cannot scale to handle them. From a testing perspective, networks are known to be subject to "adversarial examples": specific, sparse inputs that trigger unsafe behavior yet are unlikely to be found by sampling-based testing. In this work, we explore two different approaches to analyzing systems with neural network components.

First, we consider the problem of analyzing neural networks directly. In this portion of the work, we develop an efficient approach to verifying the robustness of neural networks. To do this, we use machine learning techniques to develop heuristics that drastically improve the efficiency of existing program analysis approaches to robustness analysis. We show that this synergistic combination of machine learning and symbolic analysis outperforms existing approaches to robustness verification across a large suite of benchmarks.
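To make the flavor of this combination concrete, the sketch below shows a generic, sound-but-incomplete robustness check based on interval bound propagation for a small fully connected ReLU network. This is an illustrative assumption, not the verifier developed in the dissertation; the names (ibp_bounds, certify_robust) and the network representation are hypothetical, and the learned heuristics described above would sit on top of an analysis like this, for example by deciding where to refine the bounds.

    # Hypothetical sketch: interval bound propagation for L-infinity robustness
    # of a small fully connected ReLU network. Sound but incomplete; not the
    # dissertation's method.
    import numpy as np

    def ibp_bounds(weights, biases, x, eps):
        # Propagate the box [x - eps, x + eps] through the network layer by layer.
        lo, hi = x - eps, x + eps
        for k, (W, b) in enumerate(zip(weights, biases)):
            W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
            new_lo = W_pos @ lo + W_neg @ hi + b
            new_hi = W_pos @ hi + W_neg @ lo + b
            if k < len(weights) - 1:  # ReLU on every hidden layer
                new_lo, new_hi = np.maximum(new_lo, 0.0), np.maximum(new_hi, 0.0)
            lo, hi = new_lo, new_hi
        return lo, hi

    def certify_robust(weights, biases, x, eps, label):
        # Robust if the lower bound of the correct logit beats the upper bound
        # of every other logit over the entire eps-ball around x.
        lo, hi = ibp_bounds(weights, biases, x, eps)
        return all(lo[label] > hi[j] for j in range(len(hi)) if j != label)

A verifier like this answers "robust" only when the property definitely holds; when it cannot decide, a complete method splits the input region and recurses, and that branching choice is exactly the kind of decision a learned heuristic can accelerate.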

Second, we develop techniques for bypassing the analysis of neural networks entirely, instead relying on external structures to enforce safety. The core idea is to develop the network together with a shield: a traditional program that attempts to achieve the same goal as the network. The shield is unlikely to reach the same level of performance as a neural network, but it is far more amenable to verification. By carefully combining the network and the shield, we maintain the safety of the shield while incorporating the performance of the neural network. We explore variations on this idea in different contexts and show that we can obtain safe policies while retaining most of the performance benefits of neural networks.
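The control-flow skeleton of a shield can be summarized in a few lines. The sketch below is a minimal, hypothetical instantiation for a toy one-dimensional system with dynamics x' = x + a and safe set |x| <= 1; the policy, shield, and safety check are illustrative stand-ins rather than the constructions studied in the dissertation.

    # Hypothetical sketch of the shielding idea: the learned policy proposes an
    # action, and the verified shield takes over whenever a conservative safety
    # check cannot certify the proposal.
    def shielded_action(state, neural_policy, shield_policy, is_provably_safe):
        # Prefer the high-performance neural action, but only when it can be
        # certified safe; otherwise fall back to the verified shield.
        proposed = neural_policy(state)
        if is_provably_safe(state, proposed):
            return proposed
        return shield_policy(state)

    # Toy instantiation (assumed): dynamics x' = x + a, safe set |x| <= 1.
    def is_provably_safe(x, a):
        return abs(x + a) <= 1.0   # the next state stays inside the safe region

    def shield_policy(x):
        return -x                  # verified fallback: always returns to the origin

    def neural_policy(x):
        return 0.5 - x             # stand-in for a learned, higher-reward controller

Because every action that reaches the environment is either certified by the check or produced by the verified shield, safety is preserved no matter how the neural policy behaves, which is the sense in which the analysis of the network itself is bypassed.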
