Bayesian entropy estimators for spike trains

dc.creator: Park, Il Memming
dc.creator: Archer, Evan
dc.creator: Pillow, Jonathan
dc.date.accessioned: 2014-12-15T17:10:57Z
dc.date.available: 2014-12-15T17:10:57Z
dc.date.issued: 2013-07-08
dc.description: Il Memming Park and Jonathan Pillow are with the Institute for Neuroscience and Department of Psychology, The University of Texas at Austin, Austin, TX 78712, USA; Evan Archer is with the Institute for Computational Engineering and Sciences, The University of Texas at Austin, Austin, TX 78712, USA; Jonathan Pillow is with the Division of Statistics and Scientific Computation, The University of Texas at Austin, Austin, TX 78712, USA
dc.description.abstract: Poster presentation: Information-theoretic quantities have played a central role in neuroscience for quantifying neural codes [1]. Entropy and mutual information can be used to measure the maximum encoding capacity of a neuron, quantify noise, spatial and temporal functional dependence, and learning processes, and provide a fundamental limit for neural coding. Unfortunately, estimating entropy or mutual information is notoriously difficult, especially when the number of observations N is less than the number of possible symbols K [2]. For neural spike trains this is often the case, due to the combinatorial nature of the symbols: for n simultaneously recorded neurons on m time bins, the number of possible binary words is K = 2^(nm). The question is therefore how to extrapolate from a severely under-sampled distribution. Here we describe two recent advances in Bayesian entropy estimation for spike trains. Our approach follows that of Nemenman et al. [2], who formulated a Bayesian entropy estimator using a mixture-of-Dirichlet prior over the space of discrete distributions on K bins. We extend this approach to formulate two Bayesian estimators with different strategies for handling severe under-sampling. For the first estimator, we design a novel mixture prior over countable distributions using the Pitman-Yor (PY) process [3]. The PY process is useful when the number of parameters is unknown a priori, and as a result finds many applications in Bayesian nonparametrics; it can model the heavy, power-law-distributed tails which often occur in neural data. To reduce the bias of the estimator, we analytically derive a set of mixing weights so that the resulting improper prior over entropy is approximately flat. We consider the posterior over entropy given a dataset (which contains some observed number of words but an unknown number of unobserved words), and show that the posterior mean can be computed efficiently via a simple numerical integral. The second estimator incorporates prior knowledge about spike trains: we use a simple Bernoulli process as a parametric model of the spike trains, and a Dirichlet process to allow arbitrary deviation from it. Under this model, very sparse spike trains are a priori orders of magnitude more likely than those with many spikes. Both estimators are computationally efficient and statistically consistent. We applied these estimators to spike trains from the early visual system to quantify neural coding characteristics.
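
Editor's note on the method: the common building block of Bayesian estimators in this family is the closed-form posterior mean of entropy under a symmetric Dirichlet prior. Nemenman et al. [2] obtain their estimator by numerically averaging this quantity over the Dirichlet concentration, and the two estimators described in the abstract do the analogous averaging over Pitman-Yor and Dirichlet-process hyperparameters. The Python sketch below implements only that fixed-concentration building block, using the Wolpert-Wolf closed form E[H | n] = psi(A+1) - sum_i (a_i/A) psi(a_i+1) with a_i = n_i + alpha and A = N + K*alpha; the function name and the example counts are illustrative assumptions, not the authors' code.

    import numpy as np
    from scipy.special import digamma

    def dirichlet_entropy_posterior_mean(counts, K, alpha):
        """Posterior mean entropy (in nats) of a K-symbol distribution under a
        symmetric Dirichlet(alpha) prior, given counts of the observed symbols.
        Hypothetical helper; symbols never observed are handled in bulk."""
        counts = np.asarray(counts, dtype=float)
        N = counts.sum()
        A = N + K * alpha            # total posterior Dirichlet mass
        a = counts + alpha           # posterior pseudo-counts, observed symbols
        h = digamma(A + 1.0) - np.sum((a / A) * digamma(a + 1.0))
        # the K - len(counts) unseen symbols all share pseudo-count alpha,
        # so their contribution can be added in one term
        h -= (K - counts.size) * (alpha / A) * digamma(alpha + 1.0)
        return h

    # Example: 3 neurons x 2 time bins gives K = 2**(3*2) = 64 possible words;
    # suppose only 5 distinct words were observed across 27 samples.
    print(dirichlet_entropy_posterior_mean([17, 5, 3, 1, 1], K=64, alpha=0.5))

A full estimator in the spirit of the abstract would place a hyperprior over alpha (or over the two Pitman-Yor parameters) chosen so that the induced prior on entropy is approximately flat, then compute the posterior mean of entropy by a one-dimensional numerical integral over that hyperparameter.
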
dc.description.catalogingnote: memming@austin.utexas.edu
dc.description.department: Psychology
dc.identifier.filename: 1471-2202-14-S1-P316.pdf
dc.identifier.citation: Park, Il M., Evan Archer, and Jonathan Pillow. “Bayesian Entropy Estimators for Spike Trains.” BMC Neuroscience 14, no. Suppl 1 (July 8, 2013): P316. doi:10.1186/1471-2202-14-S1-P316.
dc.identifier.doi: doi:10.1186/1471-2202-14-S1-P316
dc.identifier.uri: http://hdl.handle.net/2152/27957
dc.language.iso: English
dc.publisher: BMC Neuroscience
dc.rights: Administrative deposit of works to UT Digital Repository: This work's author(s) is or was a University faculty member, student, or staff member; this article is already available through open access at http://www.biomedcentral.com. The public license is specified as CC-BY: http://creativecommons.org/licenses/by/4.0/. The library makes the deposit as a matter of fair use (for scholarly, educational, and research purposes), and to preserve the work and further secure public access to the works of the University.
dc.subject: Bayesian entropy
dc.subject: spike trains
dc.subject: neural codes
dc.title: Bayesian entropy estimators for spike trains
dc.type: Article

Access full-text files

Original bundle: 1471-2202-14-S1-P316.pdf (93.72 KB, Adobe Portable Document Format)