DRAM-aware prefetching and cache management

dc.contributor.advisor Patt, Yale N.
dc.creator Lee, Chang Joo, 1975-
dc.date.accessioned 2011-02-11T17:57:57Z
dc.date.available 2011-02-11T17:57:57Z
dc.date.created 2010-12
dc.date.issued 2011-02-11
dc.date.submitted December 2010
dc.identifier.uri http://hdl.handle.net/2152/ETD-UT-2010-12-2492
dc.description.abstract Main memory system performance is crucial for high-performance microprocessors. Even though the peak bandwidth of main memory systems has increased through improvements in the microarchitecture of Dynamic Random Access Memory (DRAM) chips, conventional on-chip memory systems of microprocessors do not fully take advantage of it. This results in underutilization of the DRAM system, that is, many idle cycles on the DRAM data bus. The main reason is that conventional on-chip memory system designs do not fully take important DRAM characteristics into account. As a result, the high bandwidth of DRAM-based main memory systems cannot be realized and exploited by the processor. This dissertation identifies three major characteristics that can significantly affect DRAM performance and makes a case for DRAM characteristic-aware on-chip memory system design. We show that on-chip memory resource management policies (such as prefetching, buffer, and cache policies) that are aware of these DRAM characteristics can significantly enhance overall system performance. The key idea of the proposed mechanisms is to send the DRAM system useful memory requests that can be serviced with low latency or in parallel with other requests, rather than requests that must be serviced with high latency or serially. Our evaluations demonstrate that each of the proposed DRAM-aware mechanisms significantly improves performance by increasing DRAM utilization for useful data. We also show that, when employed together, the performance benefit of each mechanism is achieved additively: the mechanisms work synergistically and significantly improve the overall system performance of both single-core and Chip Multiprocessor (CMP) systems.
dc.format.mimetype application/pdf
dc.language.iso eng
dc.subject Microprocessor
dc.subject Memory system
dc.subject DRAM
dc.subject Dynamic Random Access Memory chips
dc.subject On-chip memory system
dc.subject Prefetching
dc.subject Cache management
dc.subject Buffer
dc.title DRAM-aware prefetching and cache management
dc.date.updated 2011-02-11T17:57:57Z
dc.contributor.committeeMember Touba, Nur A.
dc.contributor.committeeMember Chiou, Derek
dc.contributor.committeeMember Namazi, Hossein
dc.contributor.committeeMember Mutlu, Onur
dc.description.department Electrical and Computer Engineering
dc.type.genre thesis
dc.type.material text
thesis.degree.department Electrical and Computer Engineering
thesis.degree.discipline Electrical and Computer Engineering
thesis.degree.grantor University of Texas at Austin
thesis.degree.level Doctoral
thesis.degree.name Doctor of Philosophy
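
The key idea stated in the abstract, sending the DRAM system requests that can be serviced with low latency or in parallel rather than slowly or serially, can be illustrated with a minimal sketch. The simplified DRAM model below (per-bank open rows, a busy flag, and a scoring heuristic) and all names in it are illustrative assumptions made for this record; they are not the actual mechanisms proposed in the dissertation.

// Minimal sketch: prefer memory requests that DRAM can service with low
// latency (row-buffer hits) or in parallel (requests to idle banks) over
// requests that would be serviced slowly or serially (row conflicts on busy
// banks). The model and names are illustrative assumptions only.
#include <cstddef>
#include <cstdint>
#include <optional>
#include <vector>

struct Request {
    std::uint32_t bank;   // DRAM bank the request maps to
    std::uint32_t row;    // DRAM row within that bank
    bool          useful; // e.g., a demand miss or an accurate prefetch
};

struct BankState {
    std::optional<std::uint32_t> open_row; // row currently in the row buffer
    bool busy = false;                     // bank is servicing another request
};

// Score a request: higher is better. Row-buffer hits are cheapest, requests
// to idle banks can proceed in parallel with other banks, and row conflicts
// on busy banks get the lowest score.
static int Score(const Request& r, const std::vector<BankState>& banks) {
    const BankState& b = banks[r.bank];
    int score = r.useful ? 4 : 0;                        // prefer useful requests
    if (b.open_row && *b.open_row == r.row) score += 2;  // row-buffer hit: low latency
    if (!b.busy) score += 1;                             // idle bank: bank-level parallelism
    return score;
}

// Pick the next request to send to the DRAM controller from a request buffer.
static std::optional<std::size_t> PickNext(const std::vector<Request>& queue,
                                           const std::vector<BankState>& banks) {
    std::optional<std::size_t> best;
    int best_score = -1;
    for (std::size_t i = 0; i < queue.size(); ++i) {
        int s = Score(queue[i], banks);
        if (s > best_score) { best_score = s; best = i; }
    }
    return best;
}

In this sketch, row-buffer hits are favored because they avoid precharge and activate latency, and requests to idle banks are favored because they can be serviced in parallel with requests already in progress at other banks.
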

Files in this work

File: LEE-DISSERTATION.pdf
Size: 1.798 MB
Format: application/pdf
