Main-memory near-data acceleration with concurrent host access

dc.contributor.advisor: Erez, Mattan
dc.contributor.committeeMember: Orshansky, Michael
dc.contributor.committeeMember: Gerstlauer, Andreas
dc.contributor.committeeMember: Caramanis, Constantine
dc.contributor.committeeMember: Beard, Jonathan
dc.creator: Cho, Benjamin Youngjae
dc.date.accessioned: 2021-07-16T21:28:47Z
dc.date.available: 2021-07-16T21:28:47Z
dc.date.created: 2021-05
dc.date.issued: 2021-04-13
dc.date.submitted: May 2021
dc.date.updated: 2021-07-16T21:28:47Z
dc.description.abstract: Processing-in-memory (PIM) is attractive for applications that exhibit low temporal locality and low arithmetic intensity. By bringing computation close to data, PIMs exploit proximity to overcome the bandwidth bottleneck of the main-memory bus. Unlike discrete accelerators such as GPUs, PIMs can accelerate computation directly within main memory, avoiding the overhead of loading data from main memory into processor or accelerator memories. Realizing processing in the main memory of conventional CPUs raises a set of challenges, including: (1) mitigating contention and interference between the CPU and the PIM as both access the same shared memory devices, and (2) sharing the same address space between the CPU and the PIM for efficient in-place acceleration. In this dissertation, I present solutions to these challenges that achieve high PIM performance without significantly affecting CPU performance (up to 2.4% degradation). Another major contribution is identifying killer applications that cannot be effectively accelerated with discrete accelerators. I introduce two compelling use cases in the AI domain for main-memory accelerators, where the unique advantage of a PIM over other acceleration schemes can be leveraged.
dc.description.department: Electrical and Computer Engineering
dc.format.mimetype: application/pdf
dc.identifier.uri: https://hdl.handle.net/2152/86866
dc.identifier.uri: http://dx.doi.org/10.26153/tsw/13817
dc.language.iso: en
dc.subject: Processing in memory
dc.subject: Near-data processing
dc.subject: Machine learning
dc.subject: Deep learning
dc.subject: Main memory
dc.subject: Main-memory acceleration
dc.title: Main-memory near-data acceleration with concurrent host access
dc.type: Thesis
dc.type.material: text
thesis.degree.department: Electrical and Computer Engineering
thesis.degree.discipline: Electrical and Computer Engineering
thesis.degree.grantor: The University of Texas at Austin
thesis.degree.level: Doctoral
thesis.degree.name: Doctor of Philosophy
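
To make the abstract's point about low arithmetic intensity concrete, the sketch below is an illustration only: the kernel, sizes, and names are assumptions and are not taken from the dissertation. It shows a gather-and-accumulate loop in C that performs roughly one floating-point add per word fetched from a large table with little reuse, so it saturates the main-memory bus long before it saturates the CPU's arithmetic units, which is the regime in which a main-memory PIM accelerator can help.

/*
 * Illustrative sketch only (not code from the dissertation): a gather-and-
 * accumulate kernel with low arithmetic intensity -- roughly one floating-
 * point add per 4 bytes loaded, and essentially no data reuse -- so its
 * speed is limited by main-memory bandwidth rather than by compute.
 */
#include <stdio.h>
#include <stdlib.h>

#define ROWS    (1u << 20)   /* 1 Mi rows in the table (256 MiB of floats) */
#define DIM     64           /* floats per row                             */
#define LOOKUPS (1u << 16)   /* number of rows gathered                    */

int main(void)
{
    float *table = malloc((size_t)ROWS * DIM * sizeof *table);
    float acc[DIM] = {0.0f};
    if (!table)
        return 1;

    /* Initialize the table with arbitrary data. */
    for (size_t i = 0; i < (size_t)ROWS * DIM; i++)
        table[i] = (float)(i % 7);

    /* Gather pseudo-randomly chosen rows and sum them element-wise.
     * Each loaded value participates in a single add, so caches and wide
     * SIMD units help little; the main-memory bus sets the speed limit. */
    for (unsigned l = 0; l < LOOKUPS; l++) {
        size_t row = (size_t)(l * 2654435761u) % ROWS;  /* cheap hash: little locality */
        const float *src = table + row * DIM;
        for (int d = 0; d < DIM; d++)
            acc[d] += src[d];
    }

    printf("acc[0] = %f\n", acc[0]);
    free(table);
    return 0;
}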

Access full-text files

Original bundle

Name: CHO-DISSERTATION-2021.pdf
Size: 3.03 MB
Format: Adobe Portable Document Format

License bundle

Name: PROQUEST_LICENSE.txt
Size: 4.45 KB
Format: Plain Text

Name: LICENSE.txt
Size: 1.84 KB
Format: Plain Text