---
layout: figure
figureUrl: dnn.svg
figureCaption: A fully connected DNN layer
figureFootnoteNumber: 1
---

## Processing-in-Memory
### Applicable Workloads
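The fully connected layer in the figure boils down to a matrix-vector multiply plus a bias: one multiply-accumulate per weight, with each weight read from memory exactly once. A minimal NumPy sketch (function name and shapes are illustrative assumptions, not from the cited work) of the operation a PIM accelerator like Newton targets:

```python
import numpy as np

def fully_connected(W, x, b):
    # y = W @ x + b: each weight W[i, j] is fetched once and used once,
    # so the layer is dominated by memory traffic, not arithmetic --
    # exactly the profile that processing-in-memory accelerates.
    return W @ x + b

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))   # weights: 4 outputs x 8 inputs (assumed sizes)
x = rng.standard_normal(8)        # input activations
b = np.zeros(4)                   # bias
y = fully_connected(W, x, b)      # output activations, shape (4,)
```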
He et al. "Newton: A DRAM-maker's Accelerator-in-Memory (AiM) Architecture for Machine Learning", 2020.

---

## Processing-in-Memory
### Architectures


Possible placements of compute logic<sup>1</sup>:
- Inside the memory subarray
- In the primary sense amplifier (PSA) region near a subarray
- Outside the bank, in its peripheral region
- In the I/O region of the memory
The closer the compute logic sits to the memory array, the higher the achievable bandwidth!
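A back-of-the-envelope sketch makes the claim concrete. All numbers below are illustrative assumptions for a generic DRAM device, not measurements from the cited survey: logic at the subarray level sees the full row-buffer width on every activation, while logic in the I/O region is limited to the narrow external data bus.

```python
# Internal bandwidth: subarray-level logic reads an entire row buffer
# per activation, across many subarrays in parallel.
row_width_bits = 8192        # bits latched into the PSAs per activation (assumed)
subarrays_active = 16        # subarrays activated in parallel (assumed)
t_activate_ns = 35           # duration of one activation cycle (assumed)

# External bandwidth: logic in the I/O region only sees the data bus.
io_width_bits = 64           # external data-bus width (assumed)
io_rate_gbps_per_pin = 3.2   # per-pin transfer rate in Gbit/s (assumed)

internal_gbps = row_width_bits * subarrays_active / t_activate_ns  # Gbit/s
external_gbps = io_width_bits * io_rate_gbps_per_pin               # Gbit/s
print(f"internal/external bandwidth ratio: {internal_gbps / external_gbps:.1f}x")
```

With these assumed parameters the subarray-level bandwidth exceeds the I/O bandwidth by more than an order of magnitude, which is why placement of the compute logic matters so much.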
Sudarshan et al. "A Critical Assessment of DRAM-PIM Architectures - Trends, Challenges and Solutions", 2022.