
Methods for Neural Ensemble Recordings, Second Edition (Frontiers in Neuroscience)

Over the last ten years, neural ensemble recording has grown into a well-respected and highly data-lucrative science. New experimental paradigms, including the fabrication of high-density microelectrodes, new surgical implantation techniques, multi-channel signal processing, and the establishment of direct real-time brain-machine interfaces, hold promise not just for neurophysiology research, but also for new-generation prosthetic devices aimed at restoring mobility and communication skills in severely disabled patients.

Extensively updated and expanded, Methods for Neural Ensemble Recordings, Second Edition distills the current state-of-the-science and provides the nuts and bolts foundation from which to advance the field for the next ten years. With contributions from pioneering researchers, this second edition begins with an overview of microwire array design for chronic neural recordings. Demonstrating the diversity now enjoyed in the field, the book reviews new surgical techniques for chronic implantation of microwire arrays in not just rodents, but primates as well. It explores microelectrode microstimulation of brain tissue, discusses multielectrode recordings in the somatosensory system and during learning, and analyzes neural ensemble recordings from the central gustatory-reward pathways in awake and behaving animals.

An exploration of new strategies for neural ensemble data analysis for Brain-Machine Interface (BMI) applications foreshadows an investigation into employing BMI to restore neurological function. Using multielectrode field potential recordings, contributions define global brain states and propose conceptual and technical approaches to human neural ensemble recordings in the future.

Figure 5. (A) The variance for draws from a stationary distribution decreases as a function of the size of the sample, and the bias increases as a function of the number of neurons within the ensemble. (B) The bias for draws from a stationary distribution decreases with increasing correlations among the variables.

First, the mean and SD of the KL-divergence are inversely proportional to the window size. Second, the mean is directly proportional to the number of variables in the ensemble (Figure 5A). This means that as the ensemble size increases relative to the sample size, the likelihood of mistaking pairs of samples from a single distribution for samples from different distributions increases.
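For reference, the quantity being estimated is the standard Kullback-Leibler divergence between the two windowed multinomial distributions, here written P and Q over the set of compressed ensemble patterns (the symbols are introduced only for orientation; they are not taken from the original figure):

    D_{KL}(P \parallel Q) = \sum_{i} p_i \log \frac{p_i}{q_i}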

Alternatively, stationary systems appear more variable if brief observations are made instead of longer ones. Third, the mean of our estimates of the KL-divergence is inversely proportional to the degree of correlation among the variables (Figure 5B). In the extreme, if the variables within a system are completely correlated, the distribution reduces to a binomial distribution, greatly reducing the expected KL-divergence.

These intuitions are important if we are to differentiate interesting features of the dataset from expected fluctuations in its activity. It is important to emphasize that the Bayesian estimator of the KL-divergence is positively biased, especially for large ensembles. This is in line with observations made by others about the calculation of Shannon entropy from ensemble data (Paninski). This bias is less of a concern for us because we are interested in tracking differences in the time-series of the KL-divergence values, not their absolute values.
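As a quick numerical illustration of this bias (a minimal sketch using a naive plug-in estimate rather than the Bayesian estimator discussed in the text; all names and values below are arbitrary), the estimated divergence between two finite samples drawn from the same distribution is positive on average and grows with the number of possible ensemble patterns relative to the window size:

    import numpy as np

    rng = np.random.default_rng(0)

    def plugin_kl(counts_p, counts_q, eps=0.5):
        """Naive plug-in KL-divergence with additive smoothing (illustration only)."""
        p = (counts_p + eps) / (counts_p + eps).sum()
        q = (counts_q + eps) / (counts_q + eps).sum()
        return float(np.sum(p * np.log(p / q)))

    window = 200                          # samples per window
    for n_patterns in (8, 64, 512):       # number of distinct ensemble patterns
        true_dist = rng.dirichlet(np.ones(n_patterns))
        estimates = [plugin_kl(rng.multinomial(window, true_dist),
                               rng.multinomial(window, true_dist))
                     for _ in range(100)]
        # Both windows come from the same distribution, yet the mean estimate
        # is positive and increases with the number of patterns.
        print(n_patterns, np.mean(estimates))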

With these observations in hand, we now estimate the rejection threshold for the null hypothesis from the time-series of KL-divergence values. Our general goal is to detect epochs in which the dynamics of the ensemble move away from the distribution of the null hypothesis. The calculation of the rejection threshold depends upon the null hypothesis under consideration. For example, for the null hypothesis of homogeneity among adjacent samples, a surrogate dataset is created by time-shuffling the ensemble patterns to break up any temporal structure in the sequence of ensemble data.

These surrogate data are then processed according to the method, and the mean and SD of the resulting time-series of KL-divergence values are used to set the rejection threshold (see Algorithm 2 for a test of homogeneity among adjacent samples).
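A minimal sketch of this procedure for the homogeneity null, assuming the ensemble data are stored as a (samples x neurons) array; kl_between_windows is a placeholder for the full pipeline of kdq-tree compression plus KL estimation described above, and the two-SD threshold is only one possible choice:

    import numpy as np

    def kl_time_series(data, window, kl_between_windows):
        """Score adjacent windows as they slide over the data, one sample at a time."""
        n = data.shape[0]
        return np.array([kl_between_windows(data[t - 2 * window:t - window],
                                            data[t - window:t])
                         for t in range(2 * window, n + 1)])

    def homogeneity_threshold(data, window, kl_between_windows,
                              n_surrogates=20, n_sd=2.0, seed=None):
        """Time-shuffle whole ensemble patterns to destroy temporal structure,
        then set the rejection threshold from the surrogate KL time-series."""
        rng = np.random.default_rng(seed)
        surrogate_kls = []
        for _ in range(n_surrogates):
            shuffled = data[rng.permutation(data.shape[0])]
            surrogate_kls.append(kl_time_series(shuffled, window, kl_between_windows))
        surrogate_kls = np.concatenate(surrogate_kls)
        return surrogate_kls.mean() + n_sd * surrogate_kls.std()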

When considering the null hypothesis of independence among the neurons within an ensemble, we generate pairs of surrogate independent samples by shuffling the time indices of each neuron within each window. This preserves the firing rates of all the neurons within the window while disrupting any correlations among them.
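A corresponding sketch for the independence null; shuffling the time indices of each neuron separately within the window leaves each column sum, and therefore each firing rate, unchanged while destroying correlations across neurons (the function name is illustrative):

    import numpy as np

    def independence_surrogate(window_data, seed=None):
        """Shuffle each neuron's time indices independently within one window.
        window_data: (n_samples, n_neurons) array of binned activity."""
        rng = np.random.default_rng(seed)
        surrogate = np.empty_like(window_data)
        for j in range(window_data.shape[1]):
            surrogate[:, j] = window_data[rng.permutation(window_data.shape[0]), j]
        return surrogate

Pairs of such surrogates, processed exactly like the real windows, supply the null distribution from which the rejection threshold is computed.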

These data are then processed and the rejection threshold is calculated as above.

Figure 6. The method detects changes in both the strength and structure of ensemble correlations. (A) A subset of the dataset from a simulated ensemble of 10 units wherein each column sum is equal to 5. During one epoch of the complete simulation, the samples exhibit the alternating pattern seen in the middle panel. (B) The resulting KL-divergence after the method was applied to the simulated ensemble. (C) The ensemble firing rate output with a running average. (D) A simulated ensemble exhibiting a change in correlation structure. The four subpanels show subsets of 20 samples, one from each epoch. These epochs were independence, correlation among variables 1-5, correlation among variables 6-10, and independence, respectively. (E) The resulting KL-divergence after evaluating the null hypothesis of independence among the variables. (F) The resulting KL-divergence after evaluating the null hypothesis of homogeneity between adjacent samples. Vertical black lines indicate the beginning and end of the epochs. The horizontal black line and gray shaded areas indicate the mode and the SD, respectively, of the time-series of the analyses.

In Figure 6A, a simulated ensemble of 10 variables was generated so that the column sum for each sample was constant at five. During one epoch, the samples exhibit a stereotyped correlation structure (Figure 6A, middle panel). This was done to provide an example of a disassociation of a change in ensemble firing rate from a change in ensemble correlations. Our method and the ensemble firing rate were calculated from these simulated data. For both analyses, a one-sample bin and the same window size were used.

For the ensemble firing rate, this window was used to calculate a running average.
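A minimal sketch of that running average, under the assumption that the simulated data sit in a (samples x units) array:

    import numpy as np

    def ensemble_firing_rate(binned, window):
        """Running average of the summed population activity.
        binned: (n_samples, n_units) array of counts or binary activations."""
        population = binned.sum(axis=1)                  # total activity per sample
        kernel = np.ones(window) / window
        return np.convolve(population, kernel, mode="valid")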


This matched the time-scales of the analyses. For the kdq-tree, the sample density parameter was set at 5. The null hypothesis tested was homogeneity among samples. The KL-divergence was able to detect this change in the correlation structure (Figure 6B), and by design the ensemble firing rate could not (Figure 6C).
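The kdq-tree (Dasu et al.) recursively partitions the data space and stops splitting a cell once it holds fewer samples than the density parameter, so that sparsely populated regions are aggregated into single bins. A much-simplified sketch of that idea for binary ensemble patterns, splitting on one variable per level, is given below; this is an illustration of the principle, not the authors' implementation:

    import numpy as np

    def kdq_leaf_labels(data, min_count=5):
        """Assign each binary ensemble pattern to a leaf of a simplified kdq-style tree.
        data: (n_samples, n_vars) 0/1 array. Cells with fewer than min_count samples
        are not split further, which aggregates poorly estimated regions."""
        labels = np.zeros(data.shape[0], dtype=int)
        next_label = [0]

        def split(rows, depth):
            if depth == data.shape[1] or len(rows) < min_count:
                labels[rows] = next_label[0]      # this cell becomes one leaf
                next_label[0] += 1
                return
            zeros = rows[data[rows, depth] == 0]
            ones = rows[data[rows, depth] == 1]
            for subset in (zeros, ones):
                if len(subset):
                    split(subset, depth + 1)

        split(np.arange(data.shape[0]), 0)
        return labels

To compare two windows, the tree would be built once and each window's patterns counted per leaf, yielding the pair of multinomial samples whose KL-divergence is then estimated.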

In Figure 6D, a simulated ensemble of 10 variables was generated such that the variables were independent in the first epoch, variables 1-5 were correlated in the second epoch, variables 6-10 were correlated in the third epoch, and all variables were independent again in the final epoch. Here we used a window size of samples. To detect this change in correlation structure using our method, we evaluated two null hypotheses.

The first null hypothesis was that the variables were independent (Figure 6E). The second null hypothesis was that adjacent samples were homogeneous (Figure 6F). Figure 6E shows that our method was able to reject the null of independence during the correlated epochs, and Figure 6F shows a rejection of the null of homogeneity almost immediately as the leading edge of the window enters the first epoch of correlated data. It then decreases as both windows enter this epoch, before increasing again and reaching a maximum as each of the adjacent windows occupies one of the two differently correlated epochs.

Taken together, this demonstrates the ability of our method to detect a change in the structure of neural ensemble correlations. To demonstrate the performance of our method under difficult conditions, Table 1 details the performance of the complete method at detecting small changes in the degree of correlation among variables. Again, the firing rates of the variables within the simulated ensembles were drawn from an empirical distribution derived from chronic extracellular recordings in five behaving rats (unpublished data).


The parameters considered were the number of variables in the ensemble, the window size, the significance level for detection, and the sample density parameter for the kdq-tree. In all cases, epochs of a fixed number of samples each were generated, and a small increase in ensemble correlations from zero was introduced. The null hypothesis evaluated was that the samples all came from the same distribution that generated the samples within the first window.

The significance level was set relative to the rejection threshold of the null distribution, calculated as described above, following the confidence interval method of Dasu et al. Detections were marked when the number of significant KL-divergence values within a window exceeded the proportion expected by chance. The most important result of these simulations is that it is far easier to detect small, widespread changes in the correlations among units within large ensembles than within smaller ensembles. This implies that if small, widespread changes in ensemble correlations are coincident with behavior, then increasing the number of recorded units, in combination with this method, should increase the ability of experimenters to detect this feature.
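A sketch of such a detection rule under illustrative assumptions (a simple binomial criterion on the count of supra-threshold KL values per window; the exact correction used for Table 1 is not reproduced here):

    import numpy as np
    from scipy.stats import binom

    def detect_changes(kl_values, threshold, window, alpha=0.05):
        """Flag windows in which the number of significant KL values exceeds chance.
        kl_values: time-series of KL-divergence estimates.
        threshold: rejection threshold from the surrogate (null) distribution."""
        significant = kl_values > threshold
        # Under the null, each value exceeds the threshold with probability ~alpha,
        # so the count within a window is roughly Binomial(window, alpha).
        max_by_chance = binom.ppf(1 - alpha, window, alpha)
        counts = np.convolve(significant.astype(float), np.ones(window), mode="valid")
        return counts > max_by_chance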

This point will be considered further in the discussion. From the simulations, it was also clear that matching the window size to the duration of the change increased the number of detections, which recommends considering multiple time-scales when investigating ensemble correlations. As might be expected, increasing the significance level for detection decreased the number of false alarms but increased the number of false negatives. The threshold upon sample density was found to have only a small effect upon detection performance for the range of values considered.

Having explicated our method, we now demonstrate its application to real neural ensemble data (Figure 7). To begin, the bin width parameter was set to 20 ms and a window of binned samples was used. For comparison against a comparable estimate of ensemble firing rate, the ensemble data were binned as both binary activations and spike counts. The KL-divergence was set to evaluate the null hypothesis that the neurons within the ensemble fire independently of one another.
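A sketch of that pre-processing step, assuming the raw data are available as one array of spike times (in seconds) per neuron; the 20 ms bin width follows the text:

    import numpy as np

    def bin_spikes(spike_times, t_start, t_stop, bin_width=0.020):
        """Bin a list of per-neuron spike-time arrays into counts and binary activations.
        Returns (counts, binary), each of shape (n_bins, n_neurons)."""
        edges = np.arange(t_start, t_stop + bin_width, bin_width)
        counts = np.column_stack([np.histogram(st, bins=edges)[0] for st in spike_times])
        binary = (counts > 0).astype(int)   # 1 if the neuron fired at least once in the bin
        return counts, binary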

The kdq-tree was constructed using the complete data, sorted in descending order by firing rate (Figure 7B). For the kdq-tree, the integer threshold upon data density was set at five samples. After compression, the resulting multinomial samples were then used to estimate the KL-divergence.

Figure 7. (B) The simultaneously recorded ensemble of 10 units. (C) The ensemble firing rate calculated in a sliding window of spike counts binned at 20 ms and advanced one sample at a time. (D) The resulting KL-divergence after evaluating the null hypothesis of independence. The KL-divergence clearly signals this change while the ensemble firing rate does not. The ensemble firing rate signals the burst of activity. The horizontal black line and gray shaded area indicate the mode and the SD, respectively, of the time-series of the analyses.

These epochs are of particular interest because they signal a disassociation of changes in pairwise interactions from changes in neural firing rates.

An examination of the ensemble rasters validates the description of the dataset provided by these features. Moreover, by augmenting this method with the calculation of the ensemble firing rate, the spike count information lost when transforming the ensemble data to generate the joint distribution is recovered. This example shows that tracking both the KL-divergence (Figure 7D) and the ensemble firing rate (Figure 7C) makes it straightforward to disassociate changes in firing rates from changes in the higher order moments of ensemble data.

Altogether, our method provides an automated process for generating a succinct summary of neural ensemble dynamics. Contemporary neurophysiological techniques for recording from behaving subjects track the output of ensembles of neurons. Put simply, the ensemble is the set of recorded neurons. This is done with minimal knowledge about the anatomical connections among the recorded neurons or any unobserved inputs that drive them. Until the advent of technology capable of detailing the relevant neural networks in vivo, progress will depend upon the ability of neuroscientists to make sound inferences about the structure and influences upon neural ensemble activity.

This is to say, the impressive parametric models that have been developed for describing ensemble interactions (Brown et al.) rest upon inferences that are difficult to verify in this setting. While the body of work demonstrating some relationship between the structure of ensemble data and behavior is growing (Deadwyler and Hampson; Durstewitz et al.), tools for exploring that relationship without strong modeling assumptions remain scarce. This led us to develop a computational method that utilizes unsupervised learning algorithms for the purpose of tracking changes in the dynamics of neural ensembles on time-scales comparable with behavior.

Our approach was to synthesize the non-parametric method of Dasu et al. with a Bayesian estimator of the KL-divergence. These components were chosen to match the practice of exploratory data analysis (Tukey; Mallows). As such, they are non-parametric and unsupervised, reflecting the fact that the mechanisms generating ensemble data are largely unknown and their covariance with behavior remains to be investigated.

The kdq-tree was chosen for its ability to aggregate poorly estimated regions of the data space (Figure 2D). Moreover, because it scales linearly in the number of variables and data points, it is appropriate for a wide range of ensemble and window sizes (Figure 5). Furthermore, the use of the Bayesian estimator of the KL-divergence (Kullback; Wolpert and Wolf) provided us with a sound framework for evaluating possible differences between ensemble data sampled over intervals short enough for making comparisons with behavior.


Moreover, the Bayesian estimator allowed us to incorporate prior information about the dataset. In particular, a symmetric Dirichlet prior was placed over the multinomial parameters. Together, this allowed us to track changes in the dynamics of ensemble data by inspecting the time-series of the KL-divergence values relative to the corresponding expected variance of the null distribution (Figure 7). Methods such as principal component analysis (Jolliffe) and factor analysis (Yu et al.) offer alternative low-dimensional summaries of ensemble data. Because the set of ensemble patterns is unordered, smoothing methods such as kernel density estimation (Rosenblatt; Botev et al.) are not directly applicable.

It is worth noting that the order in which the kdq-tree evaluates variables is arbitrary, and other data compression schemes are worth considering if they are well suited to the peculiarities of ensemble data. Moreover, simulations demonstrated the free parameter upon sample density to be rather robust (Table 1). Lastly, the choice to leave the spike bin and window size as user-specified free parameters reflects the view that these require expert knowledge for their specification and will depend upon the experimental preparation under observation.

The capabilities of our method include detecting changes in the structure of pairwise interactions among neurons within an ensemble, distinct from changes in neural firing rates (Figure 6). Throughout, we made the assumption that the features of interest would manifest as transient changes in the dynamics of the ensemble activity. This reflects the general observation that changes in the dynamics of neural ensembles are observable as transient modulations of neural firing rates and pairwise interactions.

By contrast, one might imagine that an ensemble could shift from one sustained equilibrium state to another. Such a change would be clear from an inspection of the time-series of KL-divergence values under the null hypothesis of stationarity, and would recommend a partitioning of the time-series of these data prior to further analysis. An alternative to this unsupervised approach would be a supervised learning scheme in which a classifier is built using training data to validate whether some ensemble data carries information about a behavior of interest (Churchward et al.).

The KL-divergence has been used extensively for classification (Kullback), and our method could easily be adapted to such a framework by an appropriate partitioning of the dataset to generate training data for each presumed class. There are a few differences between our method and those of others, which are both principled and methodological. For instance, analyses such as the ISI distance method of Kreuz et al. quantify spike train similarity directly from the timing relationships among the recorded spike trains. We did not take this approach for three reasons. First, we wished to avoid treating the recorded ensemble as a neural network, because of the experimental limitations listed above.

Therefore, we adopted a framework that is agnostic to whether changes are caused by interactions between the recorded neurons or by unobserved inputs. Second, sets of neurons do not appear to fire in rigid patterns. Third, outside of a few areas within the brain that do show a high degree of synchrony, the norm is the observation of weak pairwise correlations (Schneidman et al.).

It remains unclear why these periodic synchronizations are not observed at their downstream targets. Is it due to random delays between the neural oscillator and its target(s)? If so, our method is capable of detecting the influence of such an upstream neural oscillator without having to model the explicit neuron-to-neuron interactions. This could be done by first applying our method to neural ensemble data from the downstream area under the null hypothesis of independence and then comparing the resulting time-series of KL-divergence values to the time-series of the neural oscillator.
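A minimal sketch of such a comparison, assuming the KL time-series and the oscillator signal have been resampled onto a common time base (the plain normalized cross-correlation used here is only one of many ways to make the comparison):

    import numpy as np

    def kl_vs_oscillator(kl_series, oscillator, max_lag):
        """Normalized cross-correlation between the KL time-series and an
        upstream oscillator signal, over lags -max_lag..max_lag (in samples)."""
        x = (kl_series - kl_series.mean()) / kl_series.std()
        y = (oscillator - oscillator.mean()) / oscillator.std()
        lags = np.arange(-max_lag, max_lag + 1)
        corr = [np.mean(x[max(0, -l):len(x) - max(0, l)] *
                        y[max(0, l):len(y) - max(0, -l)])
                for l in lags]
        return lags, np.array(corr)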

Methodologically, by being grounded within the framework of statistical hypothesis testing, our method captures a notion of prior expectation many ad hoc methods lack. Some form of a prior expectation is important when analyzing complex systems, because simple changes in variance can result in incredible variability, producing myriad red herrings.

This being said, other methods may be more sensitive to novel patterns of interest in the dataset, and in future work we will extend the set of null hypotheses to include a wider range of neural features. Ultimately, which method will most clearly illustrate the relationship between neural ensemble activity and behavior is an empirical question.


In particular, the use of maximum entropy models in neuroscience has been extended to include temporal interactions among ensemble patterns (Marre et al.). In addition, a forthcoming extension will work backward from significant changes in the ensemble dynamics to the best estimate of the set of neurons that contributed to the change.

In conclusion, we presented a flexible method for signaling changes in the dynamics of neural ensemble data on time-scales comparable with behavior. We demonstrated the validity and utility of this method and recommend its use to complement existing analyses.

This method is particularly sensitive to widespread, transient fluctuations in the correlations among neurons within an ensemble (Table 1). Importantly, it is capable of disassociating changes in ensemble correlation structure from changes in ensemble firing rate (Figure 6).

This makes it an excellent candidate for mining ensemble data in search of evidence for hypotheses ranging from the reader mechanisms governing neural computation (Buzsaki) to the role of oscillations in the brain (Fries). The application left to experimentalists is to observe the covariance between large values of the KL-divergence and other variables of interest.

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

References

Cortical activity flips among quasi-stationary states.
Neural correlations, population coding and computation.
Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment.
Kernel density estimation via diffusion.
An analysis of neural receptive field plasticity by point process adaptive filtering.
A comparison of methods used to detect changes in neuronal discharge patterns.

Elements of Information Theory.
Generalized iterative scaling for log-linear models.
The significance of neural ensemble codes during behavior and cognition.
Activity in posterior parietal cortex is correlated with the relative subjective desirability of action.
Abrupt transitions between prefrontal neural ensemble states accompany behavioral transitions during rule learning.

Identifying functional connectivity in large-scale neural ensemble recordings.
Dynamic optimization of odor representations by slow temporal patterning of mitral cell activity.
A mechanism for cognitive dynamics.
Information theory and statistical mechanics.
Natural stimuli evoke dynamic sequences of states in sensory cortical ensembles.
Measuring spike train synchrony.
The performance of universal encoding.

Cortical ensemble activity increasingly predicts behaviour outcomes during learning of a motor task.
Two enhancements of the gravity algorithm for multiple spike train analysis.
Generating spike trains with specified correlation coefficients.
Prediction of spatiotemporal patterns of neural activity from pairwise correlations.
Methods for Neural Ensemble Recordings, 2nd Edn.
Chronic, multisite, multielectrode recordings in macaque monkeys.

Sparse coding and high-order correlations in fine-scale cortical networks.
On the significance of correlations among neuronal spike trains.
Estimation of entropy and mutual information.
Remarks on some nonparametric estimates of a density function.
Weak pairwise correlations imply strongly correlated network states in a neural population.
The structure of large-scale synchronized firing in primate retina.
Reliability of signals from a chronically implanted, silicon-based electrode array in non-human primate primary motor cortex.

A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro.
Cortical networks produce three distinct Hz rhythms during single sensory responses in the awake rat.



A point process framework for relating neural spiking activity to spiking history, neural ensemble, and extrinsic covariate effects.
Collective dynamics in human and monkey sensorimotor cortex.
The future of data analysis.
Behavioral detection of tactile stimuli during Hz cortical oscillations in awake rats.
Estimating functions of probability distributions from a finite set of samples.

Appendix

In this appendix, the Bayesian estimator for the first and second moments of the KL-divergence over discrete distributions is derived according to the method of Wolpert and Wolf. This appendix is meant to stand alone, and so some of the results of Wolpert and Wolf are recapitulated.

The interested reader should consult the original work of Wolpert and Wolf. The original result of Wolpert and Wolf was derived for a single, discrete distribution. When applying this method to derive the Bayesian estimator for the KL-divergence it is necessary to consider two discrete distributions that share the same domain. We are interested in using a data set n to estimate some function Q(p) of a probability distribution p.

To estimate Q(p) from the data n we must first determine the probability density function P(p | n). Note that because of cancelation, the constant does not appear in P(p | n). In the following, for simplicity, the prior P(p) will be assumed to be uniform. Lastly, to be consistent with the notation of Wolpert and Wolf, we adopt their definitions.
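For orientation, under a uniform prior the posterior over p given the counts n = (n_1, ..., n_m) takes the standard Dirichlet form (a textbook result stated here in our own notation, not a reproduction of the appendix's equations):

    P(p \mid n) = \frac{\Gamma\!\left(\sum_{i=1}^{m} n_i + m\right)}{\prod_{i=1}^{m} \Gamma(n_i + 1)} \prod_{i=1}^{m} p_i^{\,n_i},
    \qquad p_i \ge 0, \quad \sum_{i=1}^{m} p_i = 1 .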

The p_i may not be independently integrated, since the constraint that the p_i sum to one must be respected. This constraint is crucial for deriving the closed form solution, and is reflected in the explicit definition of the integral, which may be rewritten as a convolution. Since the convolution operator is both commutative and associative, we can repeat this procedure and write the integral as an iterated convolution. Theorem 2 restates the important Laplace convolution theorem.

The Laplace transform operator L is defined by L[f](s) = \int_0^\infty e^{-st} f(t) dt. Theorems 1 and 2 allow for the calculation of integrals of the form I[Q_k(p), n] for functions Q(p) that may be factored into terms Q_k, each depending on only a single component of p. Both the Shannon entropy and the KL-divergence may be factored in this manner.
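For concreteness, the kind of closed form this machinery yields can be written for the posterior mean of the KL-divergence when the two windows receive independent Dirichlet posteriors with parameter vectors a and b (this expression follows from standard Dirichlet moment identities and is a summary sketch, not a verbatim restatement of the appendix's result):

    E\big[D_{KL}(p \parallel q) \mid \text{data}\big]
      = \sum_{i=1}^{m} \frac{a_i}{A} \Big[ \psi(a_i + 1) - \psi(A + 1) - \psi(b_i) + \psi(B) \Big],
    \qquad A = \sum_{i} a_i, \quad B = \sum_{i} b_i,

where \psi denotes the digamma function.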