Estimating the minimum bit-width precision for stable deep neural networks utilizing numerical linear algebra

Date

2019-06-20

Authors

Maheshwari, Naman

Abstract

Understanding the required bit-width precision is critical for the compact representation of a Deep Neural Network (DNN) model with minimal degradation in inference accuracy. While DNNs are resilient to small errors and noise, as pointed out in many prior works, there is a need for a generic mathematical framework for evaluating a given DNN's sensitivity to input bit-width precision. In this work, we derive a bit-width precision estimator that incorporates the sensitivity of DNN inference accuracy to round-off errors, noise, and other perturbations in the inputs. We use the tools of numerical linear algebra, particularly stability analysis, to establish general bounds that can be imposed on the precision. Random perturbations and 'worst-case' perturbations, obtained via adversarial attacks, are applied to determine the tightness of the proposed estimator. The experimental results on AlexNet and VGG-19 show that a minimum of 11 bits of input bit-width precision is required for these networks to remain stable. The proposed bit-width precision estimator can enable compact yet highly accurate DNN implementations.
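
The perturbation experiments described above suggest a simple empirical companion check. The following is a minimal sketch, not the thesis's analytical estimator: it quantizes the inputs of a pretrained network to successively larger bit-widths and reports the smallest width whose top-1 accuracy stays within a chosen tolerance of the full-precision baseline. The model, data loader, input range, and the 1% tolerance are all illustrative assumptions.

    import torch

    def quantize_inputs(x, bits, x_min=0.0, x_max=1.0):
        # Uniformly quantize inputs to `bits` bits over an assumed [x_min, x_max]
        # range, emulating a reduced-precision input representation.
        levels = 2 ** bits - 1
        scaled = (torch.clamp(x, x_min, x_max) - x_min) / (x_max - x_min)
        return torch.round(scaled * levels) / levels * (x_max - x_min) + x_min

    @torch.no_grad()
    def top1_accuracy(model, loader, device, bits=None):
        # Top-1 accuracy, optionally with inputs quantized to `bits` bits.
        model.eval()
        correct = total = 0
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            if bits is not None:
                images = quantize_inputs(images, bits)
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
        return correct / total

    def minimum_stable_bitwidth(model, loader, device, tolerance=0.01, max_bits=16):
        # Smallest input bit-width whose accuracy drop from the full-precision
        # baseline stays within `tolerance` (a hypothetical stability criterion).
        baseline = top1_accuracy(model, loader, device)
        for bits in range(1, max_bits + 1):
            acc = top1_accuracy(model, loader, device, bits=bits)
            if baseline - acc <= tolerance:
                return bits, baseline, acc
        return None, baseline, None

Under these assumptions, calling minimum_stable_bitwidth with a pretrained AlexNet or VGG-19 and a validation loader would yield an empirical counterpart to the analytically derived bound; the quantization range and tolerance would need to match the preprocessing and accuracy criterion used in the thesis.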
