Estimating Entropy Rates with Bayesian Confidence Intervals

Bibliographic Details
Published in: Neural Computation, Vol. 17, no. 7, pp. 1531-1576
Main Authors: Kennel, Matthew B., Shlens, Jonathon, Abarbanel, Henry D. I., Chichilnisky, E. J.
Format: Journal Article
Language: English
Published: MIT Press, One Rogers Street, Cambridge, MA 02142-1209, USA, 01.07.2005
Summary: The entropy rate quantifies the amount of uncertainty or disorder produced by any dynamical system. In a spiking neuron, this uncertainty translates into the amount of information potentially encoded and is thus the subject of intense theoretical and experimental investigation. Estimating this quantity in observed, experimental data is difficult and requires a judicious selection of probabilistic models, balancing between two opposing biases. We use a model weighting principle originally developed for lossless data compression, following the minimum description length principle. This weighting yields a direct estimator of the entropy rate, which, compared to existing methods, exhibits significantly less bias and converges faster in simulation. With Monte Carlo techniques, we estimate a Bayesian confidence interval for the entropy rate. In related work, we apply these ideas to estimate the information rates between sensory stimuli and neural responses in experimental data (Shlens, Kennel, Abarbanel, & Chichilnisky, in preparation).
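
Illustration (not from the article): to give a concrete feel for the Monte Carlo confidence-interval step mentioned in the summary, the Python sketch below estimates the entropy rate of a binary spike train under a fixed-order Markov model with a Krichevsky-Trofimov (Dirichlet, alpha = 1/2) prior and draws posterior samples to form a credible interval. This is a simplified assumption-laden sketch, not the authors' estimator: the paper weights across model orders in the spirit of context-tree weighting, whereas this sketch fixes a single order, and the function name, model order, and prior are choices made here only for demonstration.

import numpy as np

def entropy_rate_ci(spikes, order=3, n_mc=2000, alpha=0.5, seed=0):
    """Monte Carlo credible interval for the entropy rate (bits/symbol) of a
    binary sequence, using a fixed-order Markov model with a Dirichlet
    (Krichevsky-Trofimov, alpha = 1/2) prior on each context's conditionals.
    Illustrative sketch only; not the article's order-weighted estimator."""
    rng = np.random.default_rng(seed)
    spikes = np.asarray(spikes, dtype=int)

    # Count next-symbol occurrences for every length-`order` context.
    counts = {}
    for t in range(order, len(spikes)):
        ctx = tuple(spikes[t - order:t])
        c = counts.setdefault(ctx, np.zeros(2))
        c[spikes[t]] += 1

    # Empirical context frequencies act as weights in the entropy rate.
    ctx_counts = np.array([c.sum() for c in counts.values()])
    ctx_weights = ctx_counts / ctx_counts.sum()

    samples = np.empty(n_mc)
    for i in range(n_mc):
        h = 0.0
        for w, c in zip(ctx_weights, counts.values()):
            # Draw P(next symbol | context) from its Dirichlet posterior.
            p = rng.dirichlet(c + alpha)
            h += w * -np.sum(p * np.log2(p + 1e-300))
        samples[i] = h

    lo, med, hi = np.percentile(samples, [2.5, 50.0, 97.5])
    return med, (lo, hi)

# Usage: a synthetic Bernoulli(0.2) spike train, true entropy rate ~0.72 bits/symbol.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    train = (rng.random(20000) < 0.2).astype(int)
    est, (lo, hi) = entropy_rate_ci(train)
    print(f"entropy rate ~ {est:.3f} bits/symbol, 95% CI [{lo:.3f}, {hi:.3f}]")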
Bibliography: July, 2005
ISSN: 0899-7667, 1530-888X
DOI: 10.1162/0899766053723050