<!-- master-thesis-presentation/slides/pim.md, last modified 2024-04-03 -->

---
layout: figure
figureUrl: dnn.svg
figureCaption: A fully connected DNN layer
figureFootnoteNumber: 1
---

# Processing-in-Memory

## Applicable Workloads


He et al., "Newton: A DRAM-maker's Accelerator-in-Memory (AiM) Architecture for Machine Learning", 2020.
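The fully connected layer in the figure reduces to a matrix-vector product, whose arithmetic intensity is roughly one FLOP per byte of weight traffic; this is why such layers are bandwidth-bound and a natural PIM target. A minimal back-of-the-envelope sketch (the layer dimensions and fp16 storage are illustrative assumptions, not values from the thesis):

```python
# Sketch: arithmetic intensity of a fully connected (GEMV) layer.
# Layer size and data type are assumptions chosen for illustration.
M, N = 4096, 4096           # output and input dimensions (assumed)
bytes_per_elem = 2          # fp16 weights/activations (assumed)

flops = 2 * M * N                           # one multiply + one add per weight
traffic = (M * N + N + M) * bytes_per_elem  # weights + input vector + output vector

intensity = flops / traffic
print(f"arithmetic intensity: {intensity:.2f} FLOP/byte")
```

At ~1 FLOP/byte, performance is dictated almost entirely by how fast the weight matrix can be streamed out of memory, not by compute throughput.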

---

# Processing-in-Memory

## Architectures




Possible placements of compute logic¹:

- Inside the memory subarray
- In the primary sense amplifier (PSA) region near a subarray
- Outside the bank, in its peripheral region
- In the I/O region of the memory

The closer the compute logic sits to the memory array, the higher the bandwidth it can exploit!
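The bandwidth gap between these placement levels can be sketched with a rough first-order model: a row activation exposes an entire DRAM row to the sense amplifiers, while logic placed further out only sees the (much narrower) internal bank bus or external I/O interface. All device parameters below are illustrative assumptions, not datasheet values:

```python
# Rough model of achievable bandwidth vs. compute placement.
# Every parameter here is an assumed, illustrative value.
row_size_bytes = 8 * 1024   # one DRAM row per activation (assumed 8 KB)
row_cycle_ns = 50           # row activate + restore time, ~tRC (assumed)
bank_io_bits = 256          # internal bank-to-periphery bus width (assumed)
chip_io_bits = 64           # external I/O interface width (assumed)
io_freq_ghz = 1.6           # I/O clock rate (assumed)

# Near-subarray (PSA region): the full row is available every row cycle.
bw_subarray = row_size_bytes / (row_cycle_ns * 1e-9) / 1e9  # GB/s

# Bank periphery: limited by the internal bank bus width.
bw_bank = bank_io_bits / 8 * io_freq_ghz                    # GB/s

# I/O region: limited by the external interface width.
bw_io = chip_io_bits / 8 * io_freq_ghz                      # GB/s

for name, bw in [("near-subarray", bw_subarray),
                 ("bank periphery", bw_bank),
                 ("I/O region", bw_io)]:
    print(f"{name:>14}: ~{bw:.0f} GB/s")
```

Even with these crude numbers, each step away from the array costs a multiple of bandwidth, which is the trade-off the placement list above captures.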
Sudarshan et al., "A Critical Assessment of DRAM-PIM Architectures - Trends, Challenges and Solutions", 2022.