Brain-inspired computing exploiting carbon nanotube FETs and resistive RAM: Hyperdimensional computing case study

Bibliographic Details
Published in: Digest of Technical Papers - IEEE International Solid-State Circuits Conference, pp. 492 - 494
Main Authors: Wu, Tony F.; Li, Haitong; Huang, Ping-Chen; Rahimi, Abbas; Rabaey, Jan M.; Wong, H.-S. Philip; Shulaker, Max M.; Mitra, Subhasish
Format: Conference Proceeding
Language: English
Published: IEEE, 01.02.2018

Summary: We demonstrate an end-to-end brain-inspired hyperdimensional (HD) computing nanosystem, effective for cognitive tasks such as language recognition, using heterogeneous integration of multiple emerging nanotechnologies. It uses monolithic 3D integration of carbon nanotube field-effect transistors (CNFETs, an emerging logic technology with significant energy-delay product (EDP) benefit vs. silicon CMOS [1]) and Resistive RAM (RRAM, an emerging memory that promises dense non-volatile and analog storage [2]). Owing to their low fabrication temperature (<250 °C), CNFETs and RRAM naturally enable monolithic 3D integration with fine-grained, dense vertical connections between computation and storage layers using back-end-of-line inter-layer vias, exceeding the connection density of various chip-stacking and packaging approaches [3]. We exploit RRAM and CNFETs to create area- and energy-efficient circuits for HD computing: approximate accumulation circuits using the gradual RRAM reset operation (in addition to RRAM single-bit storage) and random projection circuits that embrace inherent variations in RRAM and CNFETs. Our results demonstrate: (1) pairwise classification of 21 European languages with measured accuracy of up to 98% on >20,000 sentences (6.4 million characters) per language pair; (2) one-shot learning (i.e., learning from few examples) using one text sample (~100,000 characters) per language; and (3) resilient operation (98% accuracy) despite 78% hardware errors (circuit outputs stuck at 0 or 1). Our HD nanosystem consists of 1,952 CNFETs integrated with 224 RRAM cells.
ISSN:2376-8606
DOI:10.1109/ISSCC.2018.8310399
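
For readers unfamiliar with HD computing, the following is a minimal software sketch of the classification scheme the summary describes: random character hypervectors (random projection), n-gram binding, approximate accumulation into a language prototype, and nearest-prototype matching for one-shot learning. The dimensionality, n-gram size, alphabet, and function names below are illustrative assumptions, not details from the paper; the nanosystem realizes the projection and accumulation steps with CNFET/RRAM circuits rather than explicit vectors.

import numpy as np

D = 10000          # hypervector dimensionality (assumed, typical for HD computing)
N = 3              # character n-gram size (assumed trigrams)
rng = np.random.default_rng(0)

# Item memory: one random bipolar hypervector per character (the random projection).
alphabet = "abcdefghijklmnopqrstuvwxyz "
item_memory = {c: rng.choice([-1, 1], size=D) for c in alphabet}

def encode(text):
    """Encode text into one hypervector: bind rotated character hypervectors
    into n-grams, then accumulate (bundle) all n-grams and threshold."""
    acc = np.zeros(D)
    chars = [c for c in text.lower() if c in item_memory]
    for i in range(len(chars) - N + 1):
        # Bind: elementwise product of position-rotated character hypervectors.
        ngram = np.ones(D)
        for j in range(N):
            ngram *= np.roll(item_memory[chars[i + j]], j)
        acc += ngram              # approximate accumulation
    return np.sign(acc)           # threshold back to a bipolar hypervector

def classify(query, prototypes):
    """Return the language whose prototype hypervector is most similar (dot product)."""
    qv = encode(query)
    return max(prototypes, key=lambda lang: np.dot(qv, prototypes[lang]))

# One-shot learning: a single training text per language forms its prototype.
prototypes = {
    "english": encode("the quick brown fox jumps over the lazy dog " * 20),
    "german":  encode("der schnelle braune fuchs springt ueber den faulen hund " * 20),
}
print(classify("the dog jumps over the fox", prototypes))  # likely "english", given shared trigrams

Because class information is spread across thousands of near-orthogonal dimensions, flipping a large fraction of individual components changes similarities only slightly, which is the software analogue of the error resilience reported in the summary.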