Benchmark Framework for 2-D/3-D Integrated Compute-in-Memory Based Machine Learning Accelerator
This thesis presents a series of end-to-end benchmark frameworks for evaluating state-of-the-art compute-in-memory (CIM) accelerators, considering both hardware performance, as shaped by device options, circuit topologies, architecture hierarchy, and data flow, and software accuracy under non-ideal hardware properties. DNN+NeuroSim V1.0 is an end-to-end benchmark framework for CIM inference engines, with hierarchical design options spanning the device, circuit, and algorithm levels. DNN+NeuroSim V2.0 is proposed to evaluate CIM on-chip training accelerators; it adds behavioral models of non-linearity and asymmetry, as well as device-to-device and cycle-to-cycle variations during weight update, together with the peripheral modules needed to support on-chip feed-forward and back-propagation. The work is further extended to 3D integration: the 3D+NeuroSim framework supports electrical-thermal co-simulation of 3D-integrated CIM accelerators, for both monolithic and heterogeneous 3D integration. The proposed NeuroSim family is publicly available at https://github.com/neurosim for the research community. It is a series of simulators that lets users explore hardware specifications across engineering levels (including technology, memory device, circuit, architecture, 3D packaging, and algorithm) and identify optimal design options for early-stage research on various CIM accelerators. Owing to its comprehensive capabilities, the NeuroSim family has attracted hundreds of users, including industry researchers from Intel, TSMC, IBM, Samsung, and SK Hynix, as well as academic researchers from universities worldwide.
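To make the weight-update non-idealities concrete, the sketch below models an analog synapse whose conductance saturates nonlinearly with programming pulses, perturbed by device-to-device (D2D) and cycle-to-cycle (C2C) variation. This is an illustrative behavioral model in the spirit of the effects described above, not the simulator's actual implementation; the parameter names (`p_max`, `nl`, `d2d_sigma`, `c2c_sigma`) are assumptions for this example.

```python
import numpy as np

def pulse_to_weight(p, p_max, nl):
    """Map a programming-pulse count p in [0, p_max] to a normalized
    conductance in [0, 1] via an exponential saturation curve.
    Larger nl -> stronger nonlinearity/asymmetry (nl must be > 0);
    as nl -> 0 the update approaches an ideal linear ramp."""
    a = p_max / nl
    b = 1.0 / (1.0 - np.exp(-p_max / a))
    return b * (1.0 - np.exp(-p / a))

def noisy_update(p, p_max, nl, d2d_sigma=0.1, c2c_sigma=0.01, rng=None):
    """One weight-update step with device-to-device variation (a fixed
    per-device offset on the nonlinearity parameter) and cycle-to-cycle
    variation (fresh Gaussian noise on each programming event)."""
    rng = np.random.default_rng() if rng is None else rng
    # D2D: each physical device draws its own nonlinearity once
    nl_dev = max(nl + rng.normal(0.0, d2d_sigma), 1e-3)
    w = pulse_to_weight(p, p_max, nl_dev)
    # C2C: independent noise on every write, clipped to the valid range
    return np.clip(w + rng.normal(0.0, c2c_sigma), 0.0, 1.0)
```

Such a behavioral model lets a training simulation quantify how accuracy degrades as the nonlinearity and variation parameters grow, which is the kind of device-to-algorithm trade-off study the framework targets.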