
Neural Engineering

Computational Neuroscience

Terrence J. Sejnowski and Tomaso A. Poggio, editors

Neural Nets in Electric Fish, Walter Heiligenberg, 1991

The Computational Brain, Patricia S. Churchland and Terrence J. Sejnowski, 1992

Dynamic Biological Networks: The Stomatogastric Nervous System, edited by Ronald M. Harris-Warrick, Eve Marder, Allen I. Selverston, and Maurice Moulins, 1992

The Neurobiology of Neural Networks, edited by Daniel Gardner, 1993

Large-Scale Neuronal Theories of the Brain, edited by Christof Koch and Joel L. Davis, 1994

The Theoretical Foundations of Dendritic Function: Selected Papers of Wilfrid Rall with Commentaries, edited by Idan Segev, John Rinzel, and Gordon M. Shepherd, 1995

Models of Information Processing in the Basal Ganglia, edited by James C. Houk, Joel L. Davis, and David G. Beiser, 1995

Spikes: Exploring the Neural Code, Fred Rieke, David Warland, Rob de Ruyter van Steveninck, and William Bialek, 1997

Neurons, Networks, and Motor Behavior, edited by Paul S. Stein, Sten Grillner, Allen I. Selverston, and Douglas G. Stuart, 1997

Methods in Neuronal Modeling: From Ions to Networks, second edition, edited by Christof Koch and Idan Segev, 1998

Fundamentals of Neural Network Modeling: Neuropsychology and Cognitive Neuroscience, edited by Randolph W. Parks, Daniel S. Levine, and Debra L. Long, 1998

Neural Codes and Distributed Representations: Foundations of Neural Computation, edited by Laurence Abbott and Terrence J. Sejnowski, 1999

Unsupervised Learning: Foundations of Neural Computation, edited by Geoffrey Hinton and Terrence J. Sejnowski, 1999

Fast Oscillations in Cortical Circuits, Roger D. Traub, John G. R. Jefferys, and Miles A. Whittington, 1999

Computational Vision: Information Processing in Perception and Visual Behavior, Hanspeter A. Mallot, 2000

Graphical Models: Foundations of Neural Computation, edited by Michael I. Jordan and Terrence J. Sejnowski, 2001

Self-Organizing Map Formation: Foundations of Neural Computation, edited by Klaus Obermayer and Terrence J. Sejnowski, 2001

Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems, Chris Eliasmith and Charles H. Anderson, 2003

Neural Engineering

Computation, Representation, and Dynamics in Neurobiological Systems

Chris Eliasmith and Charles H. Anderson

A Bradford Book

The MIT Press

Cambridge, Massachusetts

London, England

© 2003 Massachusetts Institute of Technology

All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

This book was typeset in Times by the authors using LyX and LaTeX and was printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data

Eliasmith, Chris.
Neural engineering : computation, representation, and dynamics in neurobiological systems / Chris Eliasmith and Charles H. Anderson.
p. cm. – (Computational neuroscience)
“A Bradford book.”
Includes bibliographical references and index.
ISBN 0-262-05071-4 (hc.)
1. Neural networks (Neurobiology) 2. Neural networks (Computer science) 3. Computational neuroscience. I. Anderson, Charles H. II. Title. III. Series.

QP363.3 .E454 2002

573.8’5–dc21

2002070166

10 9 8 7 6 5 4 3 2 1

To Jen, Alana, Alex, and Charlie

and

To David Van Essen

Contents

Preface

Using this book as a course text

Acknowledgments

1 Of neurons and engineers
1.1 Explaining neural systems
1.2 Neural representation
1.2.1 The single neuron
1.2.2 Beyond the single neuron
1.3 Neural transformation
1.4 Three principles of neural engineering
1.4.1 Principle 1
1.4.2 Principle 2
1.4.3 Principle 3
1.4.4 Addendum
1.5 Methodology
1.5.1 System description
1.5.2 Design specification
1.5.3 Implementation
1.5.4 Discussion
1.6 A possible theory of neurobiological systems

I REPRESENTATION

2 Representation in populations of neurons
2.1 Representing scalar magnitudes
2.1.1 Engineered representation
2.1.2 Biological representation
2.2 Noise and precision
2.2.1 Noisy neurons
2.2.2 Biological representation and noise
2.3 An example: Horizontal eye position
2.3.1 System description
2.3.2 Design specification
2.3.3 Implementation
2.3.4 Discussion
2.4 Representing vectors
2.5 An example: Arm movements
2.5.1 System description
2.5.2 Design specification
2.5.3 Implementation
2.5.4 Discussion
2.6 An example: Semicircular canals
2.6.1 System description
2.6.2 Implementation
2.7 Summary

3 Extending population representation
3.1 A representational hierarchy
3.2 Function representation
3.3 Function spaces and vector spaces
3.4 An example: Working memory
3.4.1 System description
3.4.2 Design specification
3.4.3 Implementation
3.4.4 Discussion
3.5 Summary

4 Temporal representation in spiking neurons
4.1 The leaky integrate-and-fire (LIF) neuron
4.1.1 Introduction
4.1.2 Characterizing the LIF neuron
4.1.3 Strengths and weaknesses of the LIF neuron model
4.2 Temporal codes in neurons
4.3 Decoding neural spikes
4.3.1 Introduction
4.3.2 Neuron pairs
4.3.3 Representing time dependent signals with spikes
4.3.4 Discussion
4.4 Information transmission in LIF neurons
4.4.1 Finding optimal decoders in LIF neurons
4.4.2 Information transmission
4.4.3 Discussion
4.5 More complex single neuron models
4.5.1 Adapting LIF neuron
4.5.2 θ-neuron
4.5.3 Adapting, conductance-based neuron
4.5.4 Discussion
4.6 Summary

5 Population-temporal representation
5.1 Putting time and populations together again
5.2 Noise and precision: Dealing with distortions
5.3 An example: Eye position revisited
5.3.1 Implementation
5.3.2 Discussion
5.4 Summary

II TRANSFORMATION

6 Feed-forward transformations
6.1 Linear transformations of scalars
6.1.1 A communication channel
6.1.2 Adding two variables
6.2 Linear transformations of vectors
6.3 Nonlinear transformations
6.3.1 Multiplying two variables
6.4 Negative weights and neural inhibition
6.4.1 Analysis
6.4.2 Discussion
6.5 An example: The vestibular system
6.5.1 System description
6.5.2 Design specification
6.5.3 Implementation
6.5.4 Discussion
6.6 Summary

7 Analyzing representation and transformation
7.1 Basis vectors and basis functions
7.2 Decomposing Γ
7.3 Determining possible transformations
7.3.1 Linear tuning curves
7.3.2 Gaussian tuning curves
7.4 Quantifying representation
7.4.1 Representational capacity
7.4.2 Useful representation
7.5 The importance of diversity
7.6 Summary

8 Dynamic transformations
8.1 Control theory and neural models
8.1.1 Introduction to control theory
8.1.2 A control theoretic description of neural populations
8.1.3 Revisiting levels of analysis
8.1.4 Three principles of neural engineering quantified
8.2 An example: Controlling eye position
8.2.1 Implementation
8.2.2 Discussion
8.3 An example: Working memory
8.3.1 Introduction
8.3.2 Implementation
8.3.2.1 Dynamics of the vector representation
8.3.2.2 Simulation results
8.3.3 Discussion
8.4 Attractor networks
8.4.1 Introduction
8.4.2 Generalizing representation
8.4.3 Generalizing dynamics
8.4.4 Discussion
8.5 An example: Lamprey locomotion
8.5.1 Introduction
8.5.2 System description
8.5.3 Design specification
8.5.4 Implementation
8.5.5 Discussion
8.6 Summary

9 Statistical inference and learning
9.1 Statistical inference and neurobiological systems
9.2 An example: Interpreting ambiguous input
9.3 An example: Parameter estimation
9.4 An example: Kalman filtering
9.4.1 Two versions of the Kalman filter
9.4.2 Discussion
9.5 Learning
9.5.1 Learning a communication channel
9.5.2 Learning from learning
9.6 Summary

Appendix A: Chapter 2 derivations
A.1 Determining optimal decoding weights

Appendix B: Chapter 4 derivations
B.1 Opponency and linearity
B.2 Leaky integrate-and-fire model derivations
B.3 Optimal filter analysis with a sliding window
B.4 Information transmission of linear estimators for nonlinear systems

Appendix C: Chapter 5 derivations
C.1 Residual fluctuations due to spike trains

Appendix D: Chapter 6 derivations
D.1 Coincidence detection

Appendix E: Chapter 7 derivations
E.1 Practical considerations for finding linear decoders for x and f(x)
E.2 Finding the useful representational space

Appendix F: Chapter 8 derivations
F.1 Synaptic dynamics dominate neural dynamics
F.2 Derivations for the lamprey model
F.2.1 Determining muscle tension
F.2.2 Error
F.2.3 Oscillator dynamics
F.2.4 Coordinate changes with matrices
F.2.5 Projection matrices

References
Index

Preface

This book is a rudimentary attempt to generate a coherent understanding of neurobiological

systems from the perspective of what has become known as ‘systems neuroscience.’ What

is described in these pages is the result of a five-year collaboration aimed at trying

to characterize the myriad, fascinating neurobiological systems that we encounter every

day. Not surprisingly, this final (for now) product is vastly different from its ancestors.

But, like them, it is first and foremost a synthesis of current ideas in computational,

or theoretical, neuroscience. We have adopted and extended ideas about neural coding,

neural computation, physiology, communications theory, control theory, representation,

dynamics, and probability theory. The value of presenting a synthesis of this material,

rather than presenting it as a series of loosely connected ideas, is to provide, we hope,

both theoretical and practical insight into the functioning of neural systems not otherwise

available. For example, we are not only interested in knowing what a particular neuron’s

tuning curve looks like, or how much information that neuron could transmit; we want to

understand how to combine this evidence to learn about the possible function of the system,

and the likely physiological characteristics of its component parts. Attempting to construct

a general framework for understanding neurobiological systems provides a novel way to

address these kinds of issues.

Our intended audience is quite broad, ranging from physiologists to physicists, and

advanced undergraduates to seasoned researchers. Nevertheless, we take there to be three

main audiences for this book. The first consists of neuroscientists who are interested in

learning more about how to best characterize the systems they explore experimentally. Often the techniques used by neuroscientists are chosen for their immediate convenience—

e.g., the typical ‘selectivity index’ calculated from some ratio of neuron responses—but

the limitations inherent in these choices for characterizing the systemic coding properties

of populations of neurons are often serious, though not immediately obvious (Mechler and

Ringach 2002). By adopting the three principles of neural engineering that we present,

these sorts of measures can be replaced by others with a more solid theoretical foundation.

More practically speaking, we also want to encourage the recent trend for experimentalists

to take seriously the insights gained from using detailed computational models. Unfortunately, there is little literature aimed at providing clear, general methods for developing

such models at the systems level. The explicit methodology we provide, and the many

examples we present, are intended to show precisely how these three principles can be

used to build the kinds of models that experimental neuroscientists can exploit. To aid the

construction of such models, we have developed a simulation environment for large-scale

neural models that is available at http://compneuro.uwaterloo.ca/.
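
As a concrete taste of the methodology, here is a minimal, self-contained sketch of the kind of computation that recurs throughout the text: encoding a scalar in the steady-state firing rates of a population of leaky integrate-and-fire (LIF) neurons, then recovering it with noise-regularized least-squares linear decoders. The sketch is illustrative only, not code from the book or from that simulation environment; every parameter value below (gains, biases, time constants, assumed noise level) is an assumption chosen for demonstration, and chapter 2 and appendix A.1 give the proper treatment.

import numpy as np

rng = np.random.default_rng(0)
N = 50                        # population size (illustrative)
x = np.linspace(-1, 1, 200)   # scalar values the population represents

# Encoding parameters; these ranges are assumptions, not the book's values.
gain = rng.uniform(1.0, 5.0, size=(N, 1))
bias = rng.uniform(0.0, 3.0, size=(N, 1))
enc = rng.choice([-1.0, 1.0], size=(N, 1))   # preferred direction ("on"/"off" neurons)

# Steady-state LIF rate: G[J] = 1 / (tau_ref - tau_rc * ln(1 - 1/J)) for J > 1.
tau_ref, tau_rc = 0.002, 0.02
J = gain * enc * x + bias                    # (N, 200) soma currents
safe_J = np.maximum(J, 1.0 + 1e-9)           # keep the log argument valid below threshold
rates = np.where(J > 1.0, 1.0 / (tau_ref - tau_rc * np.log1p(-1.0 / safe_J)), 0.0)

# Optimal linear decoders under assumed additive rate noise (ridge regression):
# solve (A A^T + sigma^2 S I) d = A x, where A is the rate matrix over S samples.
sigma = 0.1 * rates.max()
Gamma = rates @ rates.T + sigma**2 * x.size * np.eye(N)
Upsilon = rates @ x
decoders = np.linalg.solve(Gamma, Upsilon)

x_hat = rates.T @ decoders                   # population estimate of the represented scalar
print("decoding RMSE:", np.sqrt(np.mean((x - x_hat) ** 2)))

The regularization term stands in for neuron noise; the book's discussion of noise and precision (section 2.2) is what motivates finding decoders in the presence of noise rather than by exact inversion.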

The second audience consists of the growing number of engineers, physicists, and computer scientists interested in learning more about how their quantitative tools relate to the brain. In our view, the major barrier these researchers face in applying proven mathematical techniques to neurobiological systems is gaining an appreciation of the important differences between biological and traditionally engineered systems. We provide quantitative examples,

and discuss how to understand biological systems using the familiar techniques of linear

algebra, signal processing, control theory, and statistical inference. As well, the examples

we present give a sense of which neural systems are appropriate targets for particular kinds

of computational modeling, and how to go about modeling such systems; this is important

for those readers less familiar with the neurosciences in general.

Our third audience is the computational neuroscience community; i.e., those familiar

with the kind of approach we are taking towards characterizing neurobiological systems.

Because we claim to develop a general approach to understanding neural systems, we suspect that researchers already familiar with the current state of computational neuroscience

may be interested in our particular synthesis, and our various extensions of current results.

These readers will be most interested in how we bring together considerations of single

neuron signal processing and population codes, how we characterize neural systems as

(time-varying nonlinear) control structures, and how we apply our techniques for generating large-scale, realistic simulations. As well, we present a number of novel models of

commonly modeled systems (e.g., the lamprey locomotor system, the vestibular system,

and working memory systems), which should provide these readers with a means of comparing our framework to other approaches.
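
For readers who want a concrete preview of what characterizing neural systems as control structures amounts to, the linear, time-invariant case (derived in chapter 8, under the assumption of a first-order synaptic filter $h(t) = e^{-t/\tau}/\tau$) maps a desired dynamical system directly onto the connection weights a recurrent population must realize:

\[
\dot{x}(t) = A\,x(t) + B\,u(t)
\quad\Longrightarrow\quad
A' = \tau A + I, \qquad B' = \tau B,
\]

where $A'$ and $B'$ are the transformations implemented by the recurrent and input connections, respectively; the time-varying and nonlinear generalizations are worked out in the text.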

Computational neuroscience is a rapidly expanding field, with new books being published at a furious rate. However, we think, as do others, that there is still something missing: a general, unified framework (see section 1.6 for further discussion). For instance,

past books on neural coding tend to focus on the analysis of individual spiking neurons

(or small groups of neurons), and texts on simulation techniques in neuroscience focus either at that same low level or on higher-level cognitive models. In contrast, we attempt to

bridge the gap between low-level and high-level modeling. As well, we do not focus on

models of a specific neural system as a number of recent books have, but rather on principles and methods for modeling and understanding diverse systems. Furthermore, this work

is not a collection of previously published papers, or an edited volume consisting of many,

often conflicting, perspectives. Rather, it presents a single, coherent picture of how to understand neural function from single cells to complex networks. Lastly, books intended as

general overviews of the field tend to provide a summary of common single cell models,

representational assumptions, and analytical and modeling techniques. We have chosen to

present only that material relevant to constructing a unified framework. We do not want

to insinuate that these various approaches are not essential; indeed we draw very heavily

on much of this work. However, these are not attempts to provide a unified framework—

one which synthesizes common models, assumptions, and techniques—for understanding

neural systems. We, in contrast, have this as a central goal.
