Mitigating the impact of faults in unreliable memories for error-resilient applications

Bibliographic Details
Published in: 2015 52nd ACM/EDAC/IEEE Design Automation Conference (DAC), pp. 1 - 6
Main Authors: Ganapathy, Shrikanth; Karakonstantis, Georgios; Teman, Adam; Burg, Andreas
Format: Conference Proceeding
Language: English
Published: IEEE, 07.06.2015

Summary: Inherently error-resilient applications in areas such as signal processing, machine learning and data analytics provide opportunities for relaxing reliability requirements, and thereby reducing the overhead incurred by conventional error correction schemes. In this paper, we exploit the tolerable imprecision of such applications by designing an energy-efficient fault-mitigation scheme for unreliable data memories to meet target yield. The proposed approach uses a bit-shuffling mechanism to isolate faults into bit locations with lower significance. This skews the bit-error distribution towards the low-order bits, substantially limiting the output error magnitude. By controlling the granularity of the shuffling, the proposed technique enables trading off quality for power, area, and timing overhead. Compared to error-correction codes, this can reduce the overhead by as much as 83% in read power, 77% in read access time, and 89% in area, when applied to various data mining applications in 28nm process technology.
ISSN:0738-100X
DOI:10.1145/2744769.2744871
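The core idea in the abstract — permuting logical bits so that faulty memory cells end up holding the least significant bits — can be illustrated with a small sketch. This is not the authors' implementation; the word width, fault model (a single stuck-at cell), and permutation choice are illustrative assumptions only.

```python
# Hedged sketch: confine a stuck-at fault to a low-order bit via shuffling.
# Assumptions (not from the paper): 16-bit words, one known faulty physical
# cell, and a permutation that maps the logical LSB onto that cell.

WORD_BITS = 16

def write_read(value, faulty_cell, stuck_at, perm):
    """Store `value` with logical bit i in physical cell perm[i],
    inject a stuck-at fault at `faulty_cell`, then read back."""
    cells = [0] * WORD_BITS
    for i in range(WORD_BITS):
        cells[perm[i]] = (value >> i) & 1
    cells[faulty_cell] = stuck_at           # fault in the physical array
    out = 0
    for i in range(WORD_BITS):
        out |= cells[perm[i]] << i
    return out

value = 0b1010_0110_0101_1100
faulty_cell = 14                            # high-order cell, stuck at 1

identity = list(range(WORD_BITS))           # no shuffling
shuffled = identity.copy()
shuffled[0], shuffled[14] = shuffled[14], shuffled[0]  # LSB -> faulty cell

err_plain = abs(value - write_read(value, faulty_cell, 1, identity))
err_shuf = abs(value - write_read(value, faulty_cell, 1, shuffled))
print(err_plain, err_shuf)  # 16384 1
```

Without shuffling the fault corrupts bit 14, producing an error of 2^14; with the permutation the same physical fault can only perturb the LSB, bounding the error at 1. This mirrors the abstract's claim that skewing the bit-error distribution toward low-order bits limits output error magnitude.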