SEVDA: Singular Value Decomposition Based Parallel Write Scheme for Memristive CNN Accelerators
  • Ali Al-Shaarawy,
  • Roman Genov,
  • Amirali Amirsoleimani
Department of Electrical Engineering and Computer Science, York University

Corresponding Author: [email protected]



Deep neural network accelerators built on the von Neumann architecture are fundamentally bottlenecked by the need to transfer data from memory to compute units. Memristor crossbar-based accelerators overcome this by leveraging Kirchhoff's law to perform matrix-vector multiplication (MVM) in-memory. They are, however, still relatively inefficient in their device programming schemes, requiring individual devices to be written sequentially or row-by-row. Parallel writing schemes have recently emerged that program entire crossbars simultaneously through the outer product of bit-line and word-line voltages and pulse widths, respectively. We propose a scheme that leverages singular value decomposition and low-rank approximation to generate all word-line and bit-line vectors needed to program a convolutional neural network (CNN) onto a memristive crossbar-based accelerator. Our scheme reduces programming latency by 90% relative to row-by-row programming schemes, while maintaining high test accuracy on state-of-the-art image classification models.
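The core idea described in the abstract can be sketched numerically: a truncated SVD decomposes a weight matrix into a small number of rank-1 outer products, and each rank-1 term corresponds to one parallel crossbar write (one word-line/bit-line vector pair). The sketch below is only an illustration of the low-rank decomposition step; the matrix shape, rank budget `k`, and the mapping of vectors to pulse widths versus voltages are assumptions, not details taken from the paper.

```python
import numpy as np

# Hypothetical flattened CNN weight matrix (shape chosen for illustration).
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 128))

# Truncated SVD: W ≈ sum_i s_i * u_i @ v_i^T for the top-k singular triplets.
U, s, Vt = np.linalg.svd(W, full_matrices=False)

k = 8  # rank budget: one parallel outer-product write per rank-1 term
# Assumed encoding (for illustration only):
#   word-line vector for write i -> U[:, i] * s[i]  (e.g., as pulse widths)
#   bit-line vector for write i  -> Vt[i, :]        (e.g., as voltages)
W_approx = sum(np.outer(U[:, i] * s[i], Vt[i, :]) for i in range(k))

# Reconstruction quality degrades gracefully as k shrinks, trading
# programming latency (number of parallel writes) against accuracy.
rel_err = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
print(f"rank-{k} relative Frobenius error: {rel_err:.3f}")
```

With k parallel outer-product writes instead of one write per row, the number of programming steps drops from the row count of the crossbar to the chosen rank, which is the source of the latency reduction the abstract reports.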