
    A Framework for Data Prefetching using Off-line Training of Markovian Predictors

    GIT-CC-02-16.pdf (719.2Kb)
    Date
    2002
    Author
    Kim, Jinwoo
    Wong, Weng Fai
    Palem, Krishna V.
    Abstract
    Data prefetching is an important technique for alleviating the memory bottleneck. Proposed solutions range from purely software approaches, which insert prefetch instructions through program analysis, to purely hardware mechanisms, and the success of each depends on the nature of the application. The need for innovative approaches is growing rapidly with the introduction of applications, such as object-oriented programs, that exhibit dynamically changing memory access behavior. In this paper, we propose a novel framework for data prefetchers that are trained off-line. In particular, we propose two techniques for building small prediction tables off-line, along with the hardware support needed to deploy them at runtime. Our first technique is an adaptation of the Hidden Markov Model, which has been used successfully to find hidden patterns in many diverse areas, including molecular biology, speech, fingerprint analysis, and a wide range of other recognition problems. Our second technique, the Window Markov Predictor, seeks to identify relationships between miss addresses that occur within a fixed window of one another. Sample traces of applications are fed into these off-line learning schemes to uncover hidden memory access patterns, from which prediction models are constructed. Once built, the predictor models are loaded into a data prefetching unit in the CPU at the appropriate point during runtime to drive prefetching. We propose a general architecture for this process and report the results of experiments comparing it against other hardware prefetching schemes. On average, using a prediction table of about 8KB, our proposed method achieved a prediction accuracy of about 68% and boosted performance by about 37% on the benchmarks we tested. Furthermore, we believe our framework is amenable to other predictors and could be implemented as a phase of a profiling-optimizing compiler.
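    The report itself gives no code; as a rough illustration of the Window Markov idea described above (an off-line pass over a miss-address trace counts, for each miss address, which addresses follow it within a fixed window, and the resulting table drives prediction at runtime), here is a minimal Python sketch. The trace, window size, and table format are assumptions for illustration, not the authors' actual design.

    ```python
    from collections import defaultdict, Counter

    def train_window_markov(trace, window=4):
        """Off-line training (assumed sketch): for each miss address in the
        trace, count the addresses that follow it within the next `window`
        misses, then keep the most frequent successor as its prediction."""
        counts = defaultdict(Counter)
        for i, addr in enumerate(trace):
            for nxt in trace[i + 1 : i + 1 + window]:
                counts[addr][nxt] += 1
        # Collapse counts into a small prediction table:
        # miss address -> single most likely next miss address.
        return {a: c.most_common(1)[0][0] for a, c in counts.items()}

    def predict(table, miss_addr):
        """Runtime lookup: on a cache miss, consult the table and return the
        address to prefetch, or None if the address was never seen."""
        return table.get(miss_addr)

    # Toy miss-address trace with a recurring pattern.
    trace = [0x100, 0x200, 0x300, 0x100, 0x200, 0x300, 0x100, 0x200]
    table = train_window_markov(trace, window=2)
    print(hex(predict(table, 0x100)))
    ```

    A real implementation would bound the table size (the paper reports tables of roughly 8KB) and load it into the hardware prefetching unit; this sketch only shows the training/lookup split between off-line and runtime.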
    URI
    http://hdl.handle.net/1853/6530
    Collections
    • College of Computing Technical Reports [506]
