A general yet accurate approach for energy-efficient processing-in-memory architecture computations
Resistive random-access memory (ReRAM) is promising to break the memory wall due to its processing-in-memory capability and is widely studied to accelerate various applications. The energy consumption of ReRAM-based accelerators stems mainly from ADC/DACs and computational operations on ReRAM crossbars. The former has been studied extensively in recent years, so the bottleneck of energy consumption has shifted to ReRAM operations. In this paper, we observe an asymmetry in the energy consumption of ReRAM operations: the energy of operating on a low resistance state (LRS) ReRAM cell can be several orders of magnitude higher than that of operating on a high resistance state (HRS) ReRAM cell. This opens an opportunity to save computational energy by reducing the number of LRS cells. To this end, we propose a general energy-efficient ReRAM-based computation scheme that can be seamlessly integrated into any existing ReRAM-based accelerator without affecting its computation results. The key insight is to reduce the number of LRS cells by converting them into HRS cells. Our scheme implements the LRS-to-HRS encoding through a subtraction-based encoder, formulating the encoding problem as a graph traversal problem to obtain optimized results. It is also equipped with a lightweight hardware-based decoder to restore the encoded computation results. We have evaluated our approach on graph processing and neural network workloads on ReRAM-based accelerators, and the results show that it achieves up to 31% and 56.0% energy savings, respectively.
processing in memory, memristor, accelerator, energy efficiency, machine learning, graph processing
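To make the idea in the abstract concrete, the following is a minimal, hypothetical sketch (Python/NumPy), not the paper's actual encoder: each crossbar row is stored as its difference from a reference row chosen over a spanning tree of a row graph (one way to cast the encoding as a graph traversal problem), which reduces the number of programmed cells, and a decoder restores the original dot products exactly. The Prim-style tree construction, the cost model (one unit per non-zero cell), and the assumption that negative differences can be mapped onto the accelerator are illustrative assumptions only.

```python
# Sketch of subtraction-based differential encoding that reduces the number of
# programmed (non-HRS) cells in a crossbar matrix W while keeping dot products
# recoverable. Assumptions: every non-zero stored value costs one "expensive"
# cell, and negative differences can be realized by the accelerator
# (e.g., via a separate crossbar); sign handling is omitted for brevity.
import numpy as np

def encode(W):
    """Store each row as its difference from a reference row (or the all-zero
    row), choosing references over a Prim-style spanning tree of a row graph
    so that the total number of non-zero (programmed) cells shrinks."""
    n, _ = W.shape
    rows = np.vstack([W, np.zeros((1, W.shape[1]), dtype=W.dtype)])  # node n = virtual zero row
    in_tree = {n}
    parent = {}
    cost = {i: int(np.count_nonzero(rows[i])) for i in range(n)}  # cost of storing row i unencoded
    best_ref = {i: n for i in range(n)}
    while len(in_tree) < n + 1:
        i = min((k for k in range(n) if k not in in_tree), key=cost.get)
        in_tree.add(i)
        parent[i] = best_ref[i]
        for j in range(n):                      # relax: can row i serve as a cheaper reference?
            if j not in in_tree:
                c = int(np.count_nonzero(rows[j] - rows[i]))
                if c < cost[j]:
                    cost[j], best_ref[j] = c, i
    stored = {i: rows[i] - rows[parent[i]] for i in range(n)}
    return stored, parent

def decode(stored, parent, x):
    """Restore the original dot products y_i = W[i] @ x, since
    (W[i] - W[ref]) @ x + y_ref == W[i] @ x; the virtual zero row contributes 0,
    so every chain of references terminates correctly."""
    n = len(stored)
    y = {n: 0.0}
    def resolve(i):
        if i not in y:
            y[i] = float(stored[i] @ x) + resolve(parent[i])
        return y[i]
    return np.array([resolve(i) for i in range(n)])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.integers(0, 2, size=(8, 16))        # binary weights: 1 = LRS, 0 = HRS
    x = rng.integers(0, 4, size=16)
    stored, parent = encode(W)
    before = int(np.count_nonzero(W))
    after = sum(int(np.count_nonzero(d)) for d in stored.values())
    assert np.allclose(decode(stored, parent, x), W @ x)
    print(f"programmed cells: {before} -> {after} (results unchanged)")
```

The sketch only illustrates the general principle the abstract describes: fewer LRS cells are programmed, while a cheap decode step reconstructs exactly the results the unencoded crossbar would have produced.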