An energy-efficient compute-in-memory SRAM for low-power CNN-based machine learning applications

Li, Wei, M.S. in Engineering

With the rise of computational parallelism and low-power integrated circuit (IC) design, neuromorphic technologies and machine learning algorithms have returned to the spotlight as practical solutions for complex classification problems. Although GPUs have significantly advanced the capability of modern machines by introducing computational parallelism, they do not overcome the memory-bus bottleneck intrinsic to the von Neumann computer architecture. Compute-in-memory (CiM) is an emerging approach that circumvents this bottleneck while simultaneously providing the parallelism needed for data-centric neuromorphic and machine learning workloads. CiM technologies focus mainly on reducing data movement by integrating computational elements near or within the memory blocks. Although various studies have provided solutions to particular problems, the growing demand for low-power, high-throughput systems makes it necessary to revisit state-of-the-art CiM design.