Efficient signal acquisition and deep learning model compression

Date

2022-11-17

Authors

Sakthi, Madhumitha


Abstract

The ubiquitous use of deep learning models for signal processing has led to increasing computational and storage costs, especially in edge applications. Although deep learning has delivered improved accuracy on many tasks, it has also increased the need for efficiency in signal processing and acquisition to save computation, storage, and overall power consumption. The main motivation of this thesis is to present algorithms for efficient signal acquisition, to compress deep learning models across various applications, and to use deep learning models to further increase processing efficiency. Typically, to make a decision on a given signal, it is vital to acquire robust signal information and to use a processing chain that can decide on the input with minimal resource utilization and present the output to the end user. To make this process efficient, we present our solutions in three parts. First, for autonomous driving applications, where robust signal acquisition is essential for safety during deployment, we design a novel radar sub-sampling algorithm that picks regions of interest requiring more accurate reconstruction, thereby providing optimal performance at a considerably low sampling rate. We developed this method using images alone, images and radar together, and radar alone, for good and extreme weather conditions respectively. The algorithm is further improved to use the predicted motion of objects for region determination, and we analyzed a hardware-efficient Binary Permuted Diagonal measurement matrix for compressed sensing, showing performance competitive with a Gaussian measurement matrix. Finally, we trained a novel transformer-based object detection system on image and radar data, and a state-of-the-art object detection system on radar data alone.
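To make the measurement-matrix comparison concrete, here is a minimal sketch of compressed-sensing measurement with a binary permuted-diagonal matrix. The construction below (stacking column-permuted identity blocks) is an assumption for illustration and is not taken from the thesis; its appeal for hardware is that every entry is 0 or 1, so measurement reduces to additions rather than multiplications.

```python
import numpy as np

def binary_permuted_diagonal(m, n, seed=None):
    """Hypothetical sketch of a Binary Permuted Diagonal measurement
    matrix: concatenate m x m identity blocks whose columns are randomly
    permuted, so every entry is 0 or 1 and each column contains a single 1.
    This construction is an illustrative assumption, not the thesis's exact design."""
    rng = np.random.default_rng(seed)
    assert n % m == 0, "this sketch assumes n is a multiple of m"
    blocks = [np.eye(m)[:, rng.permutation(m)] for _ in range(n // m)]
    return np.hstack(blocks)  # shape (m, n), entries in {0, 1}

# Compressed measurement y = Phi @ x at sampling rate m/n
n, m = 64, 16
Phi = binary_permuted_diagonal(m, n, seed=0)
x = np.random.default_rng(1).normal(size=n)   # stand-in for a radar signal
y = Phi @ x                                   # m measurements of an n-sample signal
```

A dense Gaussian matrix would require m*n real multiplications per measurement vector; the binary matrix above needs only n additions, which is the hardware-efficiency argument in miniature.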
Second, for signal processing efficiency, we present novel deep learning model compression algorithms that compress a model either for storage alone or for both storage and computational efficiency. The algorithms were designed for two settings: with and without retraining. Our Bin & Quant algorithm is designed to compress float models without retraining, at a negligible loss in accuracy. Bin & Quant is also applied directly to integer-quantized models, achieving storage compression on models that are already compressed and computationally efficient. This method is specifically intended for compressing off-the-shelf models without knowledge of hyperparameters or access to the original training data. Finally, for maximum compression, we present our Gradient Weighted K-means algorithm, designed to train computationally efficient integer-quantized models in which each weight requires less than one bit of storage. Using these compression methods, we compressed speech recognition and vision models.

Third, we developed lightweight deep learning models for various BCI applications. Our main focus was to develop small networks that meet both computational and storage efficiency needs while still providing robust results. Notably, our keyword spotter is designed to invoke a larger EEG-based recognition system only when the user needs it, switching the larger system on as required, and a native-language classification model can route EEG data to separate networks for native and non-native EEG-based speech recognition, thereby decreasing the complexity of the larger network. Similarly, the audio vs. audio-visual stimuli classification based on EEG is intended to direct EEG data to separate neural networks designed solely for either audio-stimuli or audio-visual-stimuli EEG.
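The retraining-free compression idea behind Bin & Quant can be sketched as weight sharing: group float weights into bins and store only the bin centroids plus a small index per weight. The quantile-based binning below is an illustrative assumption; the thesis's actual binning criterion is not reproduced here.

```python
import numpy as np

def bin_and_quant(weights, n_bins=16):
    """Hedged sketch of post-training weight sharing in the spirit of
    Bin & Quant: assign each float weight to one of n_bins bins and
    replace it by the bin centroid. Storage drops to n_bins floats
    plus log2(n_bins) bits per weight, with no retraining.
    The binning rule (quantiles here) is an assumption for illustration."""
    edges = np.quantile(weights, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, weights, side="right") - 1, 0, n_bins - 1)
    centroids = np.array([weights[idx == b].mean() if np.any(idx == b) else 0.0
                          for b in range(n_bins)])
    return centroids[idx], idx.astype(np.uint8), centroids

# Usage on a stand-in weight tensor: 16 bins -> 4-bit indices
w = np.random.default_rng(0).normal(size=1000).astype(np.float32)
q, idx, c = bin_and_quant(w)
```

With 16 bins, each 32-bit weight is replaced by a 4-bit index, an 8x storage reduction before entropy coding, at the cost of a small reconstruction error.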
Through these three contributions, we aim to improve signal acquisition and processing efficiency using deep learning models, and we also propose novel deep learning model compression algorithms.
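The Gradient Weighted K-means idea from the second contribution can be sketched, under assumptions, as k-means clustering of weights in which each weight's pull on its centroid is scaled by its gradient magnitude, so weights the loss is sensitive to are approximated more faithfully. The training loop, integer constraints, and sub-1-bit codebook sharing of the actual algorithm are not reproduced here.

```python
import numpy as np

def gradient_weighted_kmeans(weights, grads, k=4, iters=20):
    """Hedged sketch: 1-D k-means over weights, with the centroid update
    computed as a gradient-magnitude-weighted average. This illustrates
    the gradient-weighting idea only; it is not the thesis's full method."""
    w = np.asarray(weights, dtype=float)
    g = np.abs(np.asarray(grads, dtype=float)) + 1e-12  # importance per weight
    centroids = np.quantile(w, np.linspace(0.0, 1.0, k))  # spread initial centroids
    for _ in range(iters):
        assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        for c in range(k):
            mask = assign == c
            if mask.any():
                centroids[c] = np.average(w[mask], weights=g[mask])
    return centroids[assign], assign

# Usage on stand-in weights and gradients: k=4 -> 2-bit assignments
rng = np.random.default_rng(0)
w = rng.normal(size=512)
g = rng.uniform(size=512)
q, assign = gradient_weighted_kmeans(w, g, k=4)
```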
