Prefetch mechanisms by application memory access pattern
dc.contributor.advisor | Keckler, Stephen W. | en |
dc.creator | Agaram, Kartik Kandadai | en |
dc.date.accessioned | 2011-08-16T21:48:58Z | en |
dc.date.available | 2011-08-16T21:48:58Z | en |
dc.date.issued | 2007-05 | en |
dc.description | text | en |
dc.description.abstract | Modern computer systems spend a substantial fraction of their running time waiting for data from memory. While prefetching has been a promising avenue of research for reducing and tolerating latencies to memory, it has also been a challenge to implement. This challenge exists largely because of the growing complexity of memory hierarchies and the wide variety of application behaviors. In this dissertation we propose a new methodology that emphasizes decomposing complex behavior at the application level into regular components that are intelligible at a high level to the architect. This dissertation is divided into three stages. In the first, we build tools to help decompose application behavior by data structure and phase, and use these tools to create a richer picture of application behavior than with conventional simulation tools, yielding compressed summaries of dominant access patterns. The variety of access patterns drives the next stage: design of a prefetch system that improves on the state of the art. Every prefetching system must make low-overhead decisions on what to prefetch, when to prefetch it, and where to store prefetched data. Visualizing application access patterns allows us to articulate the subtleties in making these decisions and the many ways that a mechanism that improves one decision for one set of applications may degrade the quality of another decision for a different set. Our insights lead us to a new system called TwoStep with a small set of independent but synergistic mechanisms. In the third stage we perform a detailed evaluation of TwoStep. We find that while it outperforms past approaches for the most irregular applications in our benchmark suite, it is unable to improve on the speedups for more regular applications. Understanding why leads to an improved understanding of two general categories of prefetch techniques. Prefetching can either look back at past history or look forward by precomputing an application's future requirements. Applications with a low compute-access ratio can benefit from history-based prefetching if their access pattern is not too irregular. Applications with irregular access patterns may benefit from precomputation-based prefetching, as long as their compute-access ratio is not too low. | |
dc.description.department | Computer Science | |
dc.format.medium | electronic | en |
dc.identifier.uri | http://hdl.handle.net/2152/13140 | en |
dc.language.iso | eng | en |
dc.rights | Copyright is held by the author. Presentation of this material on the Libraries' web site by University Libraries, The University of Texas at Austin was made possible under a limited license grant from the author who has retained all copyrights in the works. | en |
dc.subject | Cache memory | en |
dc.subject | Memory management (Computer science) | en |
dc.title | Prefetch mechanisms by application memory access pattern | en |
thesis.degree.department | Computer Sciences | en |
thesis.degree.discipline | Computer Sciences | en |
thesis.degree.grantor | The University of Texas at Austin | en |
thesis.degree.level | Doctoral | en |
thesis.degree.name | Doctor of Philosophy | en |