
dc.contributor.author    Mahajan, Divya
dc.contributor.author    Park, Jongse
dc.contributor.author    Amaro, Emmanuel
dc.contributor.author    Sharma, Hardik
dc.contributor.author    Yazdanbakhsh, Amir
dc.contributor.author    Kim, Joon
dc.contributor.author    Esmaeilzadeh, Hadi
dc.date.accessioned    2015-09-22T17:52:56Z
dc.date.available    2015-09-22T17:52:56Z
dc.date.issued    2015
dc.identifier.uri    http://hdl.handle.net/1853/54043
dc.description    Research areas: Heterogeneous Computing, Statistical Machine Learning, Accelerator Design    en_US
dc.description.abstract    A growing number of commercial and enterprise systems increasingly rely on compute-intensive machine learning algorithms. While the demand for these compute-intensive applications is growing, the performance benefits from general-purpose platforms are diminishing. To accommodate the needs of machine learning algorithms, Field Programmable Gate Arrays (FPGAs) provide a promising path forward and represent an intermediate point between the efficiency of ASICs and the programmability of general-purpose processors. However, acceleration with FPGAs still requires long design cycles and extensive expertise in hardware design. To tackle this challenge, instead of designing an accelerator for machine learning algorithms, we develop TABLA, a framework that generates accelerators for a class of machine learning algorithms. The key is to identify the commonalities across a wide range of machine learning algorithms and utilize this commonality to provide a high-level abstraction for programmers. TABLA leverages the insight that many learning algorithms can be expressed as stochastic optimization problems. Therefore, a learning task becomes solving an optimization problem using stochastic gradient descent that minimizes an objective function. The gradient solver is fixed while the objective function changes for different learning algorithms. TABLA provides a template-based framework for accelerating this class of learning algorithms. With TABLA, the developer uses a high-level language to only specify the learning model as the gradient of the objective function. TABLA then automatically generates the synthesizable implementation of the accelerator for FPGA realization. We use TABLA to generate accelerators for ten different learning tasks that are implemented on a Xilinx Zynq FPGA platform. We rigorously compare the benefits of the FPGA acceleration to both multicore CPUs (ARM Cortex A15 and Xeon E3) and to many-core GPUs (Tegra K1, GTX 650 Ti, and Tesla K40) using real hardware measurements. TABLA-generated accelerators provide 15.0x and 2.9x average speedup over the ARM and the Xeon processors, respectively. These accelerators provide 22.7x, 53.7x, and 30.6x higher performance-per-Watt compared to Tegra, GTX 650 Ti, and Tesla, respectively. These benefits are achieved while the programmers write less than 50 lines of code.    en_US
dc.language.iso    en_US    en_US
dc.publisher    Georgia Institute of Technology    en_US
dc.relation.ispartofseries    SCS Technical Report ; GT-CS-15-07    en_US
dc.subject    Data flow graphs    en_US
dc.subject    FPGA accelerators    en_US
dc.subject    Stochastic gradient descent    en_US
dc.subject    Template-based design    en_US
dc.title    TABLA: A Unified Template-based Framework for Accelerating Statistical Machine Learning    en_US
dc.type    Technical Report    en_US
dc.contributor.corporatename    Georgia Institute of Technology. College of Computing    en_US
dc.contributor.corporatename    Georgia Institute of Technology. School of Computer Science    en_US
dc.embargo.terms    null    en_US
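
The abstract's template insight is that the stochastic-gradient-descent solver stays fixed while only the gradient of the objective function changes across learning algorithms. The Python sketch below illustrates that separation under stated assumptions: the function names and the logistic-regression gradient are illustrative only, not TABLA's high-level language or its generated FPGA accelerator.

```python
# Illustrative sketch of the "fixed solver, swappable gradient" idea.
# sgd_solver and logistic_gradient are hypothetical names, not TABLA's interface.
import numpy as np

def sgd_solver(gradient_fn, w0, data, labels, lr=0.01, epochs=10):
    """Fixed SGD solver: the same loop serves every learning algorithm;
    only gradient_fn (the objective-specific part) changes."""
    w = w0.copy()
    for _ in range(epochs):
        for x, y in zip(data, labels):
            w -= lr * gradient_fn(w, x, y)  # one stochastic update per sample
    return w

def logistic_gradient(w, x, y):
    """Objective-specific part: gradient of the logistic-regression loss."""
    return (1.0 / (1.0 + np.exp(-w @ x)) - y) * x

# Toy usage: train a logistic-regression model with the fixed solver.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0]) > 0).astype(float)
w = sgd_solver(logistic_gradient, np.zeros(4), X, y)
```

In TABLA, by the abstract's description, the developer supplies only the equivalent of the objective-specific gradient in a high-level language; the fixed solver portion is baked into the accelerator template that the framework synthesizes for the FPGA.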

