Algorithm and architecture of fully-parallel associative memories based on sparse clustered networks

Hooman Jarollahi, Naoya Onizawa, Vincent Gripon, Warren J. Gross

Research output: Contribution to journal › Article › peer-review

6 Citations (Scopus)


Associative memories retrieve stored information given partial or erroneous input patterns. A new family of associative memories based on Sparse Clustered Networks (SCNs) has recently been introduced that can store many more messages than classical Hopfield Neural Networks (HNNs). In this paper, we propose fully-parallel hardware architectures of such memories for partial or erroneous inputs. The proposed architectures eliminate winner-take-all modules, reducing hardware complexity by consuming 65% fewer FPGA lookup tables and increasing the operating frequency by approximately 1.9 times compared with previous work. Furthermore, the scaling behaviour of the implemented architectures is investigated for various design choices. We explore the effect of design variables such as the number of clusters, network nodes, and erased symbols on the error performance and the hardware resources.
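The retrieval principle behind such SCN associative memories can be illustrated with a minimal sketch, assuming the basic Gripon-Berrou formulation: a message of c symbols is stored as a clique among one node per cluster, and an erased symbol is recovered by scoring each candidate node against the known nodes. The parameter names and helper functions below are illustrative, not taken from the paper's architecture.

```python
import numpy as np

# Illustrative sketch (not the paper's hardware architecture):
# messages have c symbols, each symbol in [0, l); one cluster per position.
c, l = 4, 16                               # clusters, nodes per cluster
W = np.zeros((c * l, c * l), dtype=bool)   # binary connection matrix

def node(cluster, symbol):
    """Index of the node representing `symbol` in `cluster`."""
    return cluster * l + symbol

def store(msg):
    """Store a message as a clique among its c nodes."""
    nodes = [node(i, s) for i, s in enumerate(msg)]
    for a in nodes:
        for b in nodes:
            if a != b:
                W[a, b] = True

def retrieve(partial):
    """Recover erased symbols (None entries): each candidate node is
    scored by counting connections from the known nodes, and the
    highest-scoring node per erased cluster gives the decoded symbol."""
    known = [node(i, s) for i, s in enumerate(partial) if s is not None]
    result = list(partial)
    for i, s in enumerate(partial):
        if s is None:
            scores = [sum(W[node(i, v), k] for k in known) for v in range(l)]
            result[i] = int(np.argmax(scores))
    return result

store([1, 2, 3, 4])
print(retrieve([1, None, 3, 4]))   # -> [1, 2, 3, 4]
```

The per-cluster `argmax` plays the role of the winner-take-all step; the paper's contribution is a fully-parallel hardware architecture that eliminates dedicated winner-take-all modules.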

Original language: English
Pages (from-to): 235-247
Number of pages: 13
Journal: Journal of Signal Processing Systems
Issue number: 3
Publication status: Published - September 2014


Keywords

  • Associative memories
  • FPGA-based VLSI implementation
  • Hopfield neural networks
  • Sparse clustered networks

ASJC Scopus subject areas

  • Control and Systems Engineering
  • Theoretical Computer Science
  • Signal Processing
  • Information Systems
  • Modelling and Simulation
  • Hardware and Architecture


