Terrence J. Sejnowski

Terrence J. Sejnowski holds the Francis Crick Chair at the Salk Institute for Biological Studies and is a Distinguished Professor at the University of California, San Diego. He was a member of the advisory committee for the Obama administration's BRAIN Initiative and is President of the Neural Information Processing Systems (NIPS) Foundation. He has published twelve books, including (with Patricia Churchland) The Computational Brain (25th Anniversary Edition, MIT Press).

  • The Deep Learning Revolution

    The Deep Learning Revolution

    Terrence J. Sejnowski

    How deep learning—from Google Translate to driverless cars to personal cognitive assistants—is changing our lives and transforming every sector of the economy.

    The deep learning revolution has brought us driverless cars, the greatly improved Google Translate, fluent conversations with Siri and Alexa, and enormous profits from automated trading on the New York Stock Exchange. Deep learning networks can play poker better than professional poker players and defeat a world champion at Go. In this book, Terry Sejnowski explains how deep learning went from being an arcane academic field to a disruptive technology in the information economy.

    Sejnowski played an important role in the founding of deep learning, as one of a small group of researchers in the 1980s who challenged the prevailing logic-and-symbol based version of AI. The new version of AI Sejnowski and others developed, which became deep learning, is fueled instead by data. Deep networks learn from data in the same way that babies experience the world, starting with fresh eyes and gradually acquiring the skills needed to navigate novel environments. Learning algorithms extract information from raw data; information can be used to create knowledge; knowledge underlies understanding; understanding leads to wisdom. Someday a driverless car will know the road better than you do and drive with more skill; a deep learning network will diagnose your illness; a personal cognitive assistant will augment your puny human brain. It took nature many millions of years to evolve human intelligence; AI is on a trajectory measured in decades. Sejnowski prepares us for a deep learning future.

  • The Computational Brain, 25th Anniversary Edition

    The Computational Brain, 25th Anniversary Edition

    Patricia S. Churchland and Terrence J. Sejnowski

    An anniversary edition of the classic work that influenced a generation of neuroscientists and cognitive neuroscientists.

    Before The Computational Brain was published in 1992, conceptual frameworks for brain function were based on the behavior of single neurons, applied globally. In The Computational Brain, Patricia Churchland and Terrence Sejnowski developed a different conceptual framework, based on large populations of neurons. They did this by showing that patterns of activity among the units in trained artificial neural network models had properties resembling those of populations of neurons recorded one at a time. It was one of the first books to bring together computational concepts and behavioral data within a neurobiological framework. Aimed at a broad audience of neuroscientists, computer scientists, cognitive scientists, and philosophers, The Computational Brain is written for both expert and novice. This anniversary edition offers a new preface by the authors that puts the book in the context of current research.

    This approach influenced a generation of researchers. Even today, when neuroscientists can routinely record from hundreds of neurons using optics rather than electricity, and the 2013 White House BRAIN Initiative heralded a new era in innovative neurotechnologies, the main message of The Computational Brain is still relevant.

  • New Directions in Statistical Signal Processing

    New Directions in Statistical Signal Processing

    From Systems to Brains

    Simon Haykin, Jose C. Principe, Terrence J. Sejnowski, and John McWhirter

    Leading researchers in signal processing and neural computation present work aimed at promoting the interaction and cross-fertilization between the two fields.

    Signal processing and neural computation have separately and significantly influenced many disciplines, but the cross-fertilization of the two fields has begun only recently. Research now shows that each has much to teach the other, as we see highly sophisticated kinds of signal processing and elaborate hierarchical levels of neural computation performed side by side in the brain. In New Directions in Statistical Signal Processing, leading researchers from both signal processing and neural computation present new work that aims to promote interaction between the two disciplines. The book's 14 chapters, almost evenly divided between signal processing and neural computation, begin with the brain and move on to communication, signal processing, and learning systems. They examine such topics as how computational models help us understand the brain's information processing, how an intelligent machine could solve the "cocktail party problem" with "active audition" in a noisy environment, graphical and network structure modeling approaches, uncertainty in network communications, the geometric approach to blind signal processing, game-theoretic learning algorithms, and observable operator models (OOMs) as an alternative to hidden Markov models (HMMs).

  • Self-Organizing Map Formation

    Self-Organizing Map Formation

    Foundations of Neural Computation

    Klaus Obermayer and Terrence J. Sejnowski

    This book provides an overview of self-organizing map formation, including recent developments. Self-organizing maps form a branch of unsupervised learning, which is the study of what can be determined about the statistical properties of input data without explicit feedback from a teacher. The articles are drawn from the journal Neural Computation.

    The book consists of five sections. The first section looks at attempts to model the organization of cortical maps and at the theory and applications of the related artificial neural network algorithms. The second section analyzes topographic maps and their formation via objective functions. The third section discusses cortical maps of stimulus features. The fourth section covers self-organizing maps for unsupervised data analysis. The fifth section discusses extensions of self-organizing maps, including two surprising applications of mapping algorithms to standard computer science problems: combinatorial optimization and sorting.

    Contributors J. J. Atick, H. G. Barrow, H. U. Bauer, C. M. Bishop, H. J. Bray, J. Bruske, J. M. L. Budd, M. Budinich, V. Cherkassky, J. Cowan, R. Durbin, E. Erwin, G. J. Goodhill, T. Graepel, D. Grier, S. Kaski, T. Kohonen, H. Lappalainen, Z. Li, J. Lin, R. Linsker, S. P. Luttrell, D. J. C. MacKay, K. D. Miller, G. Mitchison, F. Mulier, K. Obermayer, C. Piepenbrock, H. Ritter, K. Schulten, T. J. Sejnowski, S. Smirnakis, G. Sommer, M. Svensen, R. Szeliski, A. Utsugi, C. K. I. Williams, L. Wiskott, L. Xu, A. Yuille, J. Zhang
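
    The map-formation algorithms surveyed in these sections share a simple core: for each input, find the best-matching unit, then pull that unit and its grid neighbors toward the input. A minimal Kohonen-style sketch in Python (not code from the book; the grid size, learning rate, and neighborhood width are illustrative choices):

```python
import numpy as np

def train_som(data, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a 2-D self-organizing map on `data` (n_samples x n_features)."""
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h * w, data.shape[1]))      # one weight vector per map unit
    # (row, col) position of each unit on the map grid
    coords = np.array([(i, j) for i in range(h) for j in range(w)], dtype=float)
    n_steps = epochs * len(data)
    t = 0
    for _ in range(epochs):
        for x in rng.permutation(data):               # shuffled pass over the inputs
            lr = lr0 * np.exp(-t / n_steps)           # decaying learning rate
            sigma = sigma0 * np.exp(-t / n_steps)     # shrinking neighborhood width
            bmu = np.argmin(((weights - x) ** 2).sum(axis=1))  # best-matching unit
            d2 = ((coords - coords[bmu]) ** 2).sum(axis=1)     # grid distance to BMU
            nb = np.exp(-d2 / (2 * sigma ** 2))                # Gaussian neighborhood
            weights += lr * nb[:, None] * (x - weights)  # pull units toward the input
            t += 1
    return weights.reshape(h, w, -1)
```

    Trained on two-dimensional inputs, neighboring units on the grid end up with neighboring weight vectors, the kind of topographic ordering the second section analyzes via objective functions.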

  • Graphical Models

    Graphical Models

    Foundations of Neural Computation

    Michael I. Jordan and Terrence J. Sejnowski

    This book exemplifies the interplay between the general formal framework of graphical models and the exploration of new algorithms and architectures. The selections range from foundational papers of historical importance to results at the cutting edge of research.

    Graphical models use graphs to represent and manipulate joint probability distributions. They have their roots in artificial intelligence, statistics, and neural networks. The clean mathematical formalism of the graphical models framework makes it possible to understand a wide variety of network-based approaches to computation, and in particular to understand many neural network algorithms and architectures as instances of a broader probabilistic methodology. It also makes it possible to identify novel features of neural network algorithms and architectures and to extend them to more general graphical models.

    Contributors H. Attias, C. M. Bishop, B. J. Frey, Z. Ghahramani, D. Heckerman, G. E. Hinton, R. Hofmann, R. A. Jacobs, Michael I. Jordan, H. J. Kappen, A. Krogh, R. Neal, S. K. Riis, F. B. Rodríguez, L. K. Saul, Terrence J. Sejnowski, P. Smyth, M. E. Tipping, V. Tresp, Y. Weiss

  • Neural Codes and Distributed Representations

    Neural Codes and Distributed Representations

    Foundations of Neural Computation

    Laurence F. Abbott and Terrence J. Sejnowski

    Since its founding in 1989 by Terrence Sejnowski, Neural Computation has become the leading journal in the field. Foundations of Neural Computation collects, by topic, the most significant papers that have appeared in the journal over the past nine years.

    The present volume focuses on neural codes and representations, topics of broad interest to neuroscientists and modelers. The topics addressed are: how neurons encode information through action potential firing patterns, how populations of neurons represent information, and how individual neurons use dendritic processing and biophysical properties of synapses to decode spike trains. The papers encompass a wide range of levels of investigation, from dendrites and neurons to networks and systems.

  • Unsupervised Learning

    Unsupervised Learning

    Foundations of Neural Computation

    Geoffrey Hinton and Terrence J. Sejnowski

    Since its founding in 1989 by Terrence Sejnowski, Neural Computation has become the leading journal in the field. Foundations of Neural Computation collects, by topic, the most significant papers that have appeared in the journal over the past nine years. This volume of Foundations of Neural Computation, on unsupervised learning algorithms, focuses on neural network learning algorithms that do not require an explicit teacher. The goal of unsupervised learning is to extract an efficient internal representation of the statistical structure implicit in the inputs. These algorithms provide insights into the development of the cerebral cortex and implicit learning in humans. They are also of interest to engineers working in areas such as computer vision and speech recognition who seek efficient representations of raw input data.

  • The Computational Brain

    The Computational Brain

    Patricia S. Churchland and Terrence J. Sejnowski

    Churchland and Sejnowski address the foundational ideas of the emerging field of computational neuroscience, examine a diverse range of neural network models, and consider future directions of the field.

    How do groups of neurons interact to enable the organism to see, decide, and move appropriately? What are the principles whereby networks of neurons represent and compute? These are the central questions probed by The Computational Brain. Churchland and Sejnowski address the foundational ideas of the emerging field of computational neuroscience, examine a diverse range of neural network models, and consider future directions of the field. The Computational Brain is the first unified and broadly accessible book to bring together computational concepts and behavioral data within a neurobiological framework.

    Computer models constrained by neurobiological data can help reveal how networks of neurons subserve perception and behavior, how their physical interactions can yield global results, and how their physical properties are used to code information and compute solutions. The Computational Brain focuses mainly on three domains: visual perception, learning and memory, and sensorimotor integration. Examples of recent computer models in these domains are discussed in detail, highlighting strengths and weaknesses, and extracting principles applicable to other domains. Churchland and Sejnowski show how both abstract models and neurobiologically realistic models can have useful roles in computational neuroscience, and they predict the coevolution of models and experiments at many levels of organization, from the neuron to the system.

    The Computational Brain addresses a broad audience: neuroscientists, computer scientists, cognitive scientists, and philosophers. It is written for both the expert and novice. A basic overview of neuroscience and computational theory is provided, followed by a study of some of the most recent and sophisticated modeling work in the context of relevant neurobiological research. Technical terms are clearly explained in the text, and definitions are provided in an extensive glossary. The appendix contains a précis of neurobiological techniques.

    A Bradford Book

    Computational Neuroscience series

    • Hardcover $87.50
    • Paperback $45.00


  • An Introductory Course in Computational Neuroscience

    An Introductory Course in Computational Neuroscience

    Paul Miller

    A textbook for students with limited background in mathematics and computer coding, emphasizing computer tutorials that guide readers in producing models of neural behavior.

    This introductory text teaches students to understand, simulate, and analyze the complex behaviors of individual neurons and brain circuits. It is built around computer tutorials that guide students in producing models of neural behavior, with the associated Matlab code freely available online. From these models students learn how individual neurons function and how, when connected, neurons cooperate in a circuit. The book demonstrates through simulated models how oscillations, multistability, post-stimulus rebounds, and chaos can arise within either single neurons or circuits, and it explores their roles in the brain.

    The book first presents essential background in neuroscience, physics, mathematics, and Matlab, with explanations illustrated by many example problems. Subsequent chapters cover the neuron and spike production; single spike trains and the underlying cognitive processes; conductance-based models; the simulation of synaptic connections; firing-rate models of large-scale circuit operation; dynamical systems and their components; synaptic plasticity; and techniques for analysis of neuron population datasets, including principal components analysis, hidden Markov modeling, and Bayesian decoding.
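
    As a taste of the single-neuron models such tutorials build, here is a leaky integrate-and-fire neuron in Python rather than the book's Matlab (a sketch with illustrative parameter values, not the book's code):

```python
import numpy as np

def lif_spike_times(I, dt=1e-4, tau=0.02, v_rest=-0.070, v_thresh=-0.050,
                    v_reset=-0.080, R=1e7):
    """Leaky integrate-and-fire: dV/dt = (v_rest - V + R*I) / tau.

    `I` is the injected current (amps) at each time step; returns spike times (s).
    """
    v = v_rest
    spikes = []
    for i, current in enumerate(I):
        v += dt * (v_rest - v + R * current) / tau   # Euler step of the membrane equation
        if v >= v_thresh:                            # threshold crossing -> a spike
            spikes.append(i * dt)
            v = v_reset                              # reset the membrane after the spike
    return spikes

# A constant 3 nA current step drives the model to fire regularly.
spikes = lif_spike_times(np.full(5000, 3e-9))
```

    With these values the steady-state voltage (-40 mV) sits above threshold, so the neuron fires repeatedly; with zero input it rests quietly, the simplest version of the input-output behavior the tutorials explore.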

    Accessible to undergraduates in life sciences with limited background in mathematics and computer coding, the book can be used in a “flipped” or “inverted” teaching approach, with class time devoted to hands-on work on the computer tutorials. It can also be a resource for graduate students in the life sciences who wish to gain computing skills and a deeper knowledge of neural function and neural circuits.

    • Hardcover $50.00
  • Quantitative Biology

    Quantitative Biology

    Theory, Computational Methods, and Models

    Brian Munsky, William S. Hlavacek, and Lev S. Tsimring

    An introduction to the quantitative modeling of biological processes, presenting modeling approaches, methodology, practical algorithms, software tools, and examples of current research.

    The quantitative modeling of biological processes promises to expand biological research from a science of observation and discovery to one of rigorous prediction and quantitative analysis. The rapidly growing field of quantitative biology seeks to use biology's emerging technological and computational capabilities to model biological processes. This textbook offers an introduction to the theory, methods, and tools of quantitative biology.

    The book first introduces the foundations of biological modeling, focusing on some of the most widely used formalisms. It then presents essential methodology for model-guided analyses of biological data, covering such methods as network reconstruction, uncertainty quantification, and experimental design; practical algorithms and software packages for modeling biological systems; and specific examples of current quantitative biology research and related specialized methods. Most chapters offer problems, progressing from simple to complex, that test the reader's mastery of such key techniques as deterministic and stochastic simulations and data analysis. Many chapters include snippets of code that can be used to recreate analyses and generate figures related to the text. Examples are presented in three popular computing languages: Matlab, R, and Python. A variety of online resources supplement the text.

    The editors are long-time organizers of the Annual q-bio Summer School, which was founded in 2007. Through the school, the editors have helped to train more than 400 visiting students in Los Alamos, NM, Santa Fe, NM, San Diego, CA, Albuquerque, NM, and Fort Collins, CO. This book is inspired by the school's curricula, and most of the contributors have participated in the school as students, lecturers, or both.

    Contributors John H. Abel, Roberto Bertolusso, Daniela Besozzi, Michael L. Blinov, Clive G. Bowsher, Fiona A. Chandra, Paolo Cazzaniga, Bryan C. Daniels, Bernie J. Daigle, Jr., Maciej Dobrzynski, Jonathan P. Doye, Brian Drawert, Sean Fancer, Gareth W. Fearnley, Dirk Fey, Zachary Fox, Ramon Grima, Andreas Hellander, Stefan Hellander, David Hofmann, Damian Hernandez, William S. Hlavacek, Jianjun Huang, Tomasz Jetka, Dongya Jia, Mohit Kumar Jolly, Boris N. Kholodenko, Marek Kimmel, Michał Komorowski, Ganhui Lan, Heeseob Lee, Herbert Levine, Leslie M. Loew, Jason G. Lomnitz, Ard A. Louis, Grant Lythe, Carmen Molina-París, Ion I. Moraru, Andrew Mugler, Brian Munsky, Joe Natale, Ilya Nemenman, Karol Nienałtowski, Marco S. Nobile, Maria Nowicka, Sarah Olson, Alan S. Perelson, Linda R. Petzold, Sreenivasan Ponnambalam, Arya Pourzanjani, Ruy M. Ribeiro, William Raymond, Herbert M. Sauro, Michael A. Savageau, Abhyudai Singh, James C. Schaff, Boris M. Slepchenko, Thomas R. Sokolowski, Petr Šulc, Andrea Tangherloni, Pieter Rein ten Wolde, Philipp Thomas, Karen Tkach Tuzman, Lev S. Tsimring, Dan Vasilescu, Margaritis Voliotis, Lisa Weber

    • Hardcover $70.00
  • From Neuron to Cognition via Computational Neuroscience

    From Neuron to Cognition via Computational Neuroscience

    Michael A. Arbib and James J. Bonaiuto

    A comprehensive, integrated, and accessible textbook presenting core neuroscientific topics from a computational perspective, tracing a path from cells and circuits to behavior and cognition.

    This textbook presents a wide range of subjects in neuroscience from a computational perspective. It offers a comprehensive, integrated introduction to core topics, using computational tools to trace a path from neurons and circuits to behavior and cognition. Moreover, the chapters show how computational neuroscience—methods for modeling the causal interactions underlying neural systems—complements empirical research in advancing the understanding of brain and behavior.

    The chapters—all by leaders in the field, and carefully integrated by the editors—cover such subjects as action and motor control; neuroplasticity, neuromodulation, and reinforcement learning; vision; and language—the core of human cognition.

    The book can be used for advanced undergraduate or graduate level courses. It presents all necessary background in neuroscience beyond basic facts about neurons and synapses and general ideas about the structure and function of the human brain. Students should be familiar with differential equations and probability theory, and be able to pick up the basics of programming in MATLAB and/or Python. Slides, exercises, and other ancillary materials are freely available online, and many of the models described in the chapters are documented in the brain operation database, BODB (which is also described in a book chapter).

    Contributors Michael A. Arbib, Joseph Ayers, James Bednar, Andrej Bicanski, James J. Bonaiuto, Nicolas Brunel, Jean-Marie Cabelguen, Carmen Canavier, Angelo Cangelosi, Richard P. Cooper, Carlos R. Cortes, Nathaniel Daw, Paul Dean, Peter Ford Dominey, Pierre Enel, Jean-Marc Fellous, Stefano Fusi, Wulfram Gerstner, Frank Grasso, Jacqueline A. Griego, Ziad M. Hafed, Michael E. Hasselmo, Auke Ijspeert, Stephanie Jones, Daniel Kersten, Jeremie Knuesel, Owen Lewis, William W. Lytton, Tomaso Poggio, John Porrill, Tony J. Prescott, John Rinzel, Edmund Rolls, Jonathan Rubin, Nicolas Schweighofer, Mohamed A. Sherif, Malle A. Tagamets, Paul F. M. J. Verschure, Nathan Vierling-Claasen, Xiao-Jing Wang, Christopher Williams, Ransom Winder, Alan L. Yuille

  • Case Studies in Neural Data Analysis

    Case Studies in Neural Data Analysis

    A Guide for the Practicing Neuroscientist

    Mark A. Kramer and Uri T. Eden

    A practical guide to neural data analysis techniques that presents sample datasets and hands-on methods for analyzing the data.

    As neural data becomes increasingly complex, neuroscientists now require skills in computer programming, statistics, and data analysis. This book teaches practical neural data analysis techniques by presenting example datasets and developing techniques and tools for analyzing them. Each chapter begins with a specific example of neural data, which motivates mathematical and statistical analysis methods that are then applied to the data. This practical, hands-on approach is unique among data analysis textbooks and guides, and equips the reader with the tools necessary for real-world neural data analysis.

    The book begins with an introduction to MATLAB, the most common programming platform in neuroscience and the one used throughout the book. (Readers familiar with MATLAB can skip this chapter and may choose instead to focus on a particular data type or analysis method.) The book goes on to cover neural field data and spike train data, spectral analysis, generalized linear models, coherence, and cross-frequency coupling. Each chapter offers a stand-alone case study that can be used separately as part of a targeted investigation. The book includes some mathematical discussion but does not focus on mathematical or statistical theory, emphasizing the practical instead. References are included for readers who want to explore the theory more deeply. The data and accompanying MATLAB code are freely available on the authors' website. The book can be used for upper-level undergraduate or graduate courses or as a professional reference.

  • Visual Cortex and Deep Networks

    Visual Cortex and Deep Networks

    Learning Invariant Representations

    Tomaso A. Poggio and Fabio Anselmi

    A mathematical framework that describes learning of invariant representations in the ventral stream, offering both theoretical development and applications.

    The ventral visual stream is believed to underlie object recognition in primates. Over the past fifty years, researchers have developed a series of quantitative models that are increasingly faithful to the biological architecture. Recently, deep learning convolution networks—which do not reflect several important features of the ventral stream architecture and physiology—have been trained with extremely large datasets, resulting in model neurons that mimic object recognition but do not explain the nature of the computations carried out in the ventral stream. This book develops a mathematical framework that describes learning of invariant representations of the ventral stream and is particularly relevant to deep convolutional learning networks.

    The authors propose a theory based on the hypothesis that the main computational goal of the ventral stream is to compute neural representations of images that are invariant to transformations commonly encountered in the visual environment and are learned from unsupervised experience. They describe a general theoretical framework of a computational theory of invariance (with details and proofs offered in appendixes) and then review the application of the theory to the feedforward path of the ventral stream in the primate visual cortex.

  • Brain Computation as Hierarchical Abstraction

    Brain Computation as Hierarchical Abstraction

    Dana H. Ballard

    An argument that the complexities of brain function can be understood hierarchically, in terms of different levels of abstraction, as silicon computing is.

    The vast differences between the brain's neural circuitry and a computer's silicon circuitry might suggest that they have nothing in common. In fact, as Dana Ballard argues in this book, computational tools are essential for understanding brain function. Ballard shows that the hierarchical organization of the brain has many parallels with the hierarchical organization of computing; as in silicon computing, the complexities of brain computation can be dramatically simplified when its computation is factored into different levels of abstraction.

    Drawing on several decades of progress in computational neuroscience, together with recent results in Bayesian and reinforcement learning methodologies, Ballard factors the brain's principal computational issues in terms of their natural place in an overall hierarchy. Each of these factors leads to a fresh perspective. A neural level focuses on the basic forebrain functions and shows how processing demands dictate the extensive use of timing-based circuitry and an overall organization of tabular memories. An embodiment level organization works in reverse, making extensive use of multiplexing and on-demand processing to achieve fast parallel computation. An awareness level focuses on the brain's representations of emotion, attention and consciousness, showing that they can operate with great economy in the context of the neural and embodiment substrates.

    • Hardcover $58.00
    • Paperback $46.00
  • Principles of Brain Dynamics

    Principles of Brain Dynamics

    Global State Interactions

    Mikhail I. Rabinovich, Karl J. Friston, and Pablo Varona

    Experimental and theoretical approaches to global brain dynamics that draw on the latest research in the field.

    The consideration of time or dynamics is fundamental for all aspects of mental activity—perception, cognition, and emotion—because the main feature of brain activity is the continuous change of the underlying brain states even in a constant environment. The application of nonlinear dynamics to the study of brain activity began to flourish in the 1990s, when it was combined with empirical observations from modern morphological and physiological studies. This book offers perspectives on brain dynamics that draw on the latest advances in research in the field. It includes contributions from both theoreticians and experimentalists, offering an eclectic treatment of fundamental issues.

    Topics addressed range from experimental and computational approaches to transient brain dynamics to the free-energy principle as a global brain theory. The book concludes with a short but rigorous guide to modern nonlinear dynamics and their application to neural dynamics.

  • Biological Learning and Control

    Biological Learning and Control

    How the Brain Builds Representations, Predicts Events, and Makes Decisions

    Reza Shadmehr and Sandro Mussa-Ivaldi

    A novel theoretical framework that describes a possible rationale for the regularity in how we move, how we learn, and how our brain predicts events.

    In Biological Learning and Control, Reza Shadmehr and Sandro Mussa-Ivaldi present a theoretical framework for understanding the regularity of the brain's perceptions, its reactions to sensory stimuli, and its control of movements. They offer an account of perception as the combination of prediction and observation: the brain builds internal models that describe what should happen and then combines this prediction with reports from the sensory system to form a belief.
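
    The prediction-plus-observation account has a standard quantitative form: if both the internal prediction and the sensory report are Gaussian, the resulting belief is their precision-weighted average. A toy sketch in Python (the numbers are illustrative, not taken from the book):

```python
def combine(pred_mean, pred_var, obs_mean, obs_var):
    """Fuse a Gaussian prediction with a Gaussian sensory observation.

    Each source is weighted by its precision (1/variance), so the belief
    lands closer to whichever source is more reliable, and its variance
    is smaller than either source alone.
    """
    w_pred = 1.0 / pred_var
    w_obs = 1.0 / obs_var
    mean = (w_pred * pred_mean + w_obs * obs_mean) / (w_pred + w_obs)
    var = 1.0 / (w_pred + w_obs)
    return mean, var

# The model predicts the hand is at 10 cm (uncertain); vision reports
# 12 cm (reliable). The belief ends up near the more reliable report.
mean, var = combine(10.0, 4.0, 12.0, 1.0)   # -> (11.6, 0.8)
```

    The same precision-weighted update, applied recursively over time, is the scalar Kalman filter, one common formalization of the internal-model idea the authors develop.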

    Considering the brain's control of movements, and the variation in movement among old and young, healthy and unhealthy, and humans and other animals despite their biomechanical similarities, Shadmehr and Mussa-Ivaldi review evidence suggesting that motor commands reflect an economic decision made by our brain weighing reward and effort. This evidence also suggests that the brain prefers to receive a reward sooner rather than later, devaluing or discounting reward with the passage of time; as the value of the expected reward changes in the brain (because of development, disease, or evolution), the shape of our movements also changes.

    The internal models formed by the brain provide the brain with an essential survival skill: the ability to predict based on past observations. The formal concepts presented by Shadmehr and Mussa-Ivaldi offer a way to describe how representations are formed, what structure they have, and how the theoretical concepts can be tested.

  • Neural Control Engineering

    Neural Control Engineering

    The Emerging Intersection between Control Theory and Neuroscience

    Steven J. Schiff

    How powerful new methods in nonlinear control engineering can be applied to neuroscience, from fundamental model formulation to advanced medical applications.

    Over the past sixty years, powerful methods of model-based control engineering have been responsible for such dramatic advances in engineering systems as autolanding aircraft, autonomous vehicles, and even weather forecasting. Over those same decades, our models of the nervous system have evolved from single-cell membranes to neuronal networks to large-scale models of the human brain. Yet until recently control theory was completely inapplicable to the types of nonlinear models being developed in neuroscience. The revolution in nonlinear control engineering in the late 1990s has made the intersection of control theory and neuroscience possible.

    In Neural Control Engineering, Steven Schiff seeks to bridge the two fields, examining the application of new methods in nonlinear control engineering to neuroscience. After presenting extensive material on formulating computational neuroscience models in a control environment—including some fundamentals of the algorithms helpful in crossing the divide from intuition to effective application—Schiff examines a range of applications, including brain-machine interfaces and neural stimulation. He reports on research that he and his colleagues have undertaken showing that nonlinear control theory methods can be applied to models of single cells, small neuronal networks, and large-scale networks in disease states such as Parkinson's disease and epilepsy.

    With Neural Control Engineering the reader acquires a working knowledge of the fundamentals of control theory and computational neuroscience sufficient not only to understand the literature in this transdisciplinary area but also to begin working to advance the field. The book will serve as an essential guide for scientists in either biology or engineering and for physicians who wish to gain expertise in these areas.

  • Visual Population Codes

    Visual Population Codes

    Toward a Common Multivariate Framework for Cell Recording and Functional Imaging

    Nikolaus Kriegeskorte and Gabriel Kreiman

    How visual content is represented in neuronal population codes and how to analyze such codes with multivariate techniques.

    Vision is a massively parallel computational process, in which the retinal image is transformed over a sequence of stages so as to emphasize behaviorally relevant information (such as object category and identity) and deemphasize other information (such as viewpoint and lighting). The processes behind vision operate by concurrent computation and message passing among neurons within a visual area and between different areas. The theoretical concept of "population code" encapsulates the idea that visual content is represented at each stage by the pattern of activity across the local population of neurons. Understanding visual population codes ultimately requires multichannel measurement and multivariate analysis of activity patterns. Over the past decade, the multivariate approach has gained significant momentum in vision research. Functional imaging and cell recording measure brain activity in fundamentally different ways, but they now use similar theoretical concepts and mathematical tools in their modeling and analyses.

    With a focus on the ventral processing stream thought to underlie object recognition, this book presents recent advances in our understanding of visual population codes, novel multivariate pattern-information analysis techniques, and the beginnings of a unified perspective for cell recording and functional imaging. It serves as an introduction, overview, and reference for scientists and students across disciplines who are interested in human and primate vision and, more generally, in understanding how the brain represents and processes information.

  • Dynamic Coordination in the Brain

    Dynamic Coordination in the Brain

    From Neurons to Mind

    Christoph von der Malsburg, William A. Phillips, and Wolf Singer

    An examination of how widely distributed and specialized activities of the brain are flexibly and effectively coordinated.

    A fundamental shift is occurring in neuroscience and related disciplines. In the past, researchers focused on functional specialization of the brain, discovering complex processing strategies based on convergence and divergence in slowly adapting anatomical architectures. Yet for the brain to cope with ever-changing and unpredictable circumstances, it needs strategies with richer interactive short-term dynamics. Recent research has revealed ways in which the brain effectively coordinates widely distributed and specialized activities to meet the needs of the moment. This book explores these findings, examining the functions, mechanisms, and manifestations of distributed dynamical coordination in the brain and mind across different species and levels of organization. The book identifies three basic functions of dynamic coordination: contextual disambiguation, dynamic grouping, and dynamic routing. It considers the role of dynamic coordination in temporally structured activity and explores these issues at different levels, from synaptic and local circuit mechanisms to macroscopic system dynamics, emphasizing their importance for cognition, behavior, and psychopathology.

    Contributors Evan Balaban, György Buzsáki, Nicola S. Clayton, Maurizio Corbetta, Robert Desimone, Kamran Diba, Shimon Edelman, Andreas K. Engel, Yves Fregnac, Pascal Fries, Karl Friston, Ann Graybiel, Sten Grillner, Uri Grodzinski, John-Dylan Haynes, Laurent Itti, Erich D. Jarvis, Jon H. Kaas, J.A. Scott Kelso, Peter König, Nancy J. Kopell, Ilona Kovács, Andreas Kreiter, Anders Lansner, Gilles Laurent, Jörg Lücke, Mikael Lundqvist, Angus MacDonald, Kevan Martin, Mayank Mehta, Lucia Melloni, Earl K. Miller, Bita Moghaddam, Hannah Monyer, Edvard I. Moser, May-Britt Moser, Danko Nikolic, William A. Phillips, Gordon Pipa, Constantin Rothkopf, Terrence J. Sejnowski, Steven M. Silverstein, Wolf Singer, Catherine Tallon-Baudry, Roger D. Traub, Jochen Triesch, Peter Uhlhaas, Christoph von der Malsburg, Thomas Weisswange, Miles Whittington, Matthew Wilson

  • Computational Modeling Methods for Neuroscientists

    Computational Modeling Methods for Neuroscientists

    Erik De Schutter

    A guide to computational modeling methods in neuroscience, covering a range of modeling scales from molecular reactions to large neural networks.

    This book offers an introduction to current methods in computational modeling in neuroscience. The book describes realistic modeling methods at levels of complexity ranging from molecular interactions to large neural networks. A “how to” book rather than an analytical account, it focuses on the presentation of methodological approaches, including the selection of the appropriate method and its potential pitfalls. It is intended for experimental neuroscientists and graduate students who have little formal training in mathematical methods, but it will also be useful for scientists with theoretical backgrounds who want to start using data-driven modeling methods. The mathematics needed are kept to an introductory level; the first chapter explains the mathematical methods the reader needs to master to understand the rest of the book. The chapters are written by scientists who have successfully integrated data-driven modeling with experimental work, so all of the material is accessible to experimentalists. The chapters offer comprehensive coverage with little overlap and extensive cross-references, moving from basic building blocks to more complex applications.

    Contributors Pablo Achard, Haroon Anwar, Upinder S. Bhalla, Michiel Berends, Nicolas Brunel, Ronald L. Calabrese, Brenda Claiborne, Hugo Cornelis, Erik De Schutter, Alain Destexhe, Bard Ermentrout, Kristen Harris, Sean Hill, John R. Huguenard, William R. Holmes, Gwen Jacobs, Gwendal LeMasson, Henry Markram, Reinoud Maex, Astrid A. Prinz, Imad Riachi, John Rinzel, Arnd Roth, Felix Schürmann, Werner Van Geit, Mark C. W. van Rossum, Stefan Wils

  • Bayesian Brain

    Bayesian Brain

    Probabilistic Approaches to Neural Coding

    Kenji Doya, Shin Ishii, Alexandre Pouget, and Rajesh P.N. Rao

    Experimental and theoretical neuroscientists use Bayesian approaches to analyze the brain mechanisms of perception, decision-making, and motor control.

    A Bayesian approach can contribute to an understanding of the brain on multiple levels, by giving normative predictions about how an ideal sensory system should combine prior knowledge and observation, by providing mechanistic interpretation of the dynamic functioning of the brain circuit, and by suggesting optimal ways of deciphering experimental data. Bayesian Brain brings together contributions from both experimental and theoretical neuroscientists that examine the brain mechanisms of perception, decision making, and motor control according to the concepts of Bayesian estimation. After an overview of the mathematical concepts, including Bayes' theorem, that are basic to understanding the approaches discussed, contributors discuss how Bayesian concepts can be used for interpretation of such neurobiological data as neural spikes and functional brain imaging. Next, contributors examine the modeling of sensory processing, including the neural coding of information about the outside world. Finally, contributors explore dynamic processes for proper behaviors, including the mathematics of the speed and accuracy of perceptual decisions and neural models of belief propagation.
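
    The normative prediction described above can be sketched in its simplest form: combining a Gaussian prior with a Gaussian observation. The precision-weighted average below is the standard textbook result; all numeric values are illustrative, not taken from the book.

```python
def combine_gaussian(prior_mean, prior_var, obs_mean, obs_var):
    """Posterior from a Gaussian prior and a Gaussian likelihood.

    The posterior mean is a precision-weighted average of prior and
    observation; the posterior variance is smaller than either input.
    """
    w_prior = 1.0 / prior_var        # precision of the prior
    w_obs = 1.0 / obs_var            # precision of the observation
    post_var = 1.0 / (w_prior + w_obs)
    post_mean = post_var * (w_prior * prior_mean + w_obs * obs_mean)
    return post_mean, post_var

# A reliable observation (low variance) dominates a vague prior:
mean, var = combine_gaussian(prior_mean=0.0, prior_var=4.0,
                             obs_mean=2.0, obs_var=1.0)
print(mean, var)  # mean lies between 0.0 and 2.0, closer to 2.0
```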

    • Hardcover $12.75
    • Paperback $35.00
  • Dynamical Systems in Neuroscience

    Dynamical Systems in Neuroscience

    The Geometry of Excitability and Bursting

    Eugene M. Izhikevich

    Explains the relationship of electrophysiology, nonlinear dynamics, and the computational properties of neurons, with each concept presented in terms of both neuroscience and mathematics and illustrated using geometrical intuition.

    In order to model neuronal behavior or to interpret the results of modeling studies, neuroscientists must call upon methods of nonlinear dynamics. This book offers an introduction to nonlinear dynamical systems theory for researchers and graduate students in neuroscience. It also provides an overview of neuroscience for mathematicians who want to learn the basic facts of electrophysiology.

    Dynamical Systems in Neuroscience presents a systematic study of the relationship of electrophysiology, nonlinear dynamics, and computational properties of neurons. It emphasizes that information processing in the brain depends not only on the electrophysiological properties of neurons but also on their dynamical properties. The book introduces dynamical systems, starting with one- and two-dimensional Hodgkin-Huxley-type models and continuing to a description of bursting systems. Each chapter proceeds from the simple to the complex, and provides sample problems at the end. The book explains all necessary mathematical concepts using geometrical intuition; it includes many figures and few equations, making it especially suitable for non-mathematicians. Each concept is presented in terms of both neuroscience and mathematics, providing a link between the two disciplines.
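
    For a flavor of the reduced two-variable models the book treats geometrically, here is a minimal Euler simulation of Izhikevich's "simple model" of spiking; the parameters are the standard regular-spiking set from his published work, while the constant input current, step size, and duration are illustrative choices.

```python
def simulate(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, dt=0.5, steps=2000):
    """Euler-integrate dv/dt = 0.04 v^2 + 5 v + 140 - u + I,
    du/dt = a (b v - u), with reset v -> c, u -> u + d at each spike."""
    v, u = -65.0, -65.0 * b
    spikes = []
    for t in range(steps):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                # spike cutoff and reset
            spikes.append(t * dt)    # spike time in ms
            v, u = c, u + d
    return spikes

print(len(simulate()))  # a steady train of spikes for constant input
```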

    Nonlinear dynamical systems theory is at the core of computational neuroscience research, but it is not a standard part of the graduate neuroscience curriculum, nor is it taught by math or physics departments in a way that is suitable for students of biology. This book offers neuroscience students and researchers a comprehensive account of concepts and methods increasingly used in computational neuroscience.

    An additional chapter on synchronization, with more advanced material, can be found at the author's website, www.izhikevich.com.

    • Hardcover $13.75
    • Paperback $40.00
  • The Computational Neurobiology of Reaching and Pointing

    The Computational Neurobiology of Reaching and Pointing

    A Foundation for Motor Learning

    Reza Shadmehr and Steven P. Wise

    An introduction to the computational biology of reaching and pointing, with an emphasis on motor learning.

    Neuroscience involves the study of the nervous system, and its topics range from genetics to inferential reasoning. At its heart, however, lies a search for understanding how the environment affects the nervous system and how the nervous system, in turn, empowers us to interact with and alter our environment. This empowerment requires motor learning. The Computational Neurobiology of Reaching and Pointing addresses the neural mechanisms of one important form of motor learning. The authors integrate material from the computational, behavioral, and neural sciences of motor control that is not available in any other single source. The result is a unified, comprehensive model of reaching and pointing. The book is intended to be used as a text by graduate students in both neuroscience and bioengineering and as a reference source by experts in neuroscience, robotics, and other disciplines.

    The book begins with an overview of the evolution, anatomy, and physiology of the motor system, including the mechanisms for generating force and maintaining limb stability. The sections that follow, "Computing Locations and Displacements", "Skills, Adaptations, and Trajectories", and "Predictions, Decisions, and Flexibility", present a theory of sensorially guided reaching and pointing that evolves organically based on computational principles rather than a traditional structure-by-structure approach. The book also includes five appendixes that provide brief refreshers on fundamentals of biology, mathematics, physics, and neurophysiology, as well as a glossary of relevant terms. The authors have also made supplemental materials available on the Internet. These web documents provide source code for simulations, step-by-step derivations of certain mathematical formulations, and expanded explanations of some concepts.

  • Neural Engineering

    Neural Engineering

    Computation, Representation, and Dynamics in Neurobiological Systems

    Chris Eliasmith and Charles H. Anderson

    For years, researchers have used the theoretical tools of engineering to understand neural systems, but much of this work has been conducted in relative isolation. In Neural Engineering, Chris Eliasmith and Charles Anderson provide a synthesis of the disparate approaches current in computational neuroscience, incorporating ideas from neural coding, neural computation, physiology, communications theory, control theory, dynamics, and probability theory. This synthesis, they argue, enables novel theoretical and practical insights into the functioning of neural systems. Such insights are pertinent to experimental and computational neuroscientists and to engineers, physicists, and computer scientists interested in how their quantitative tools relate to the brain.

    The authors present three principles of neural engineering based on the representation of signals by neural ensembles, transformations of these representations through neuronal coupling weights, and the integration of control theory and neural dynamics. Through detailed examples and in-depth discussion, they make the case that these guiding principles constitute a useful theory for generating large-scale models of neurobiological function. A software package written in MATLAB for use with their methodology is available on the Web, along with examples, course notes, exercises, documentation, and other material.
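
    The first of these principles, representation of a signal by an ensemble and its recovery through a linear decoder, can be sketched with a deliberately tiny toy: two rectified-linear "neurons" with opposite preferred directions. The tuning-curve form and decoding weights here are illustrative assumptions, not the authors' code or their MATLAB package.

```python
def encode(x):
    """Firing rates of two opponent rectified-linear neurons for scalar x."""
    return (max(0.0, x), max(0.0, -x))

def decode(rates, decoders=(1.0, -1.0)):
    """Linear readout: a weighted sum of the ensemble's firing rates."""
    return sum(r * d for r, d in zip(rates, decoders))

# The represented value survives the round trip through firing rates:
for x in (-0.8, 0.0, 0.5):
    print(x, "->", encode(x), "->", decode(encode(x)))
```

    Real models use many heterogeneous, noisy tuning curves and solve for the decoders by least squares; the opponent pair above is just the smallest ensemble for which linear decoding recovers the signal.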

    • Hardcover $52.00
    • Paperback $49.00
  • Theoretical Neuroscience

    Theoretical Neuroscience

    Computational and Mathematical Modeling of Neural Systems

    Laurence F. Abbott and Peter Dayan

    Theoretical neuroscience provides a quantitative basis for describing what nervous systems do, determining how they function, and uncovering the general principles by which they operate. This text introduces the basic mathematical and computational methods of theoretical neuroscience and presents applications in a variety of areas including vision, sensory-motor integration, development, learning, and memory.

    The book is divided into three parts. Part I discusses the relationship between sensory stimuli and neural responses, focusing on the representation of information by the spiking activity of neurons. Part II discusses the modeling of neurons and neural circuits on the basis of cellular and synaptic biophysics. Part III analyzes the role of plasticity in development and learning. An appendix covers the mathematical methods used, and exercises are available on the book's Web site.
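
    The starting point of Part I, describing neural responses as stochastic spike trains, can be sketched with the simplest such model, a homogeneous Poisson process; the rate, duration, and seed below are arbitrary illustrative values.

```python
import random

def poisson_spike_train(rate_hz, duration_s, seed=0):
    """Homogeneous Poisson spikes: accumulate exponentially distributed
    inter-spike intervals (mean 1/rate) until the duration is exhausted."""
    rng = random.Random(seed)
    t, spikes = 0.0, []
    while True:
        t += rng.expovariate(rate_hz)
        if t >= duration_s:
            return spikes
        spikes.append(t)

spikes = poisson_spike_train(rate_hz=40.0, duration_s=10.0)
print(len(spikes) / 10.0)  # empirical rate, close to 40 Hz
```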

    • Hardcover $55.00
    • Paperback $55.00
  • Computational Vision

    Computational Vision

    Information Processing in Perception and Visual Behavior

    Hanspeter A. Mallot

    This text provides an introduction to computational aspects of early vision, in particular, color, stereo, and visual navigation. It integrates approaches from psychophysics and quantitative neurobiology, as well as theories and algorithms from machine vision and photogrammetry. When presenting mathematical material, it uses detailed verbal descriptions and illustrations to clarify complex points. The text is suitable for upper-level students in neuroscience, biology, and psychology who have basic mathematical skills and are interested in studying the mathematical modeling of perception.

  • Fast Oscillations in Cortical Circuits

    Fast Oscillations in Cortical Circuits

    Roger D. Traub, Miles A. Whittington, and John G. R. Jefferys

    The study of cortical oscillations is of great interest to those working in many areas of neuroscience. A fast coherent EEG rhythm called gamma or "40 Hz" has been implicated in cognition, as it may play a role in binding together features of objects. This rhythm may also be important for consciousness, as a number of drugs that induce general anesthesia disrupt the synchronization of the rhythm at clinically relevant concentrations. There is also suggestive evidence implicating dysfunction of gamma rhythms in Alzheimer's disease, and perhaps in other neuropsychiatric disorders.

    In Fast Oscillations in Cortical Circuits, the authors use a combination of electrophysiological and computer modeling techniques to analyze how large networks of neurons can produce both epileptic seizures and functionally relevant synchronized oscillations. Specific topics covered include single hippocampal pyramidal cells, hippocampal interneurons, synaptic interactions, gamma oscillations in brain slices as well as in vivo, the mechanisms of oscillation-synchronization (both local and long-range), the switch from gamma to beta frequencies and its implications for memory, and the significance of gamma oscillations for brain function.

  • Fundamentals of Neural Network Modeling

    Fundamentals of Neural Network Modeling

    Neuropsychology and Cognitive Neuroscience

    Randolph Parks, Daniel S. Levine, and Debra L. Long

    Provides an introduction to the neural network modeling of complex cognitive and neuropsychological processes.

    Over the past few years, computer modeling has become more prevalent in the clinical sciences as an alternative to traditional symbol-processing models. This book provides an introduction to the neural network modeling of complex cognitive and neuropsychological processes. It is intended to make the neural network approach accessible to practicing neuropsychologists, psychologists, neurologists, and psychiatrists. It will also be a useful resource for computer scientists, mathematicians, and interdisciplinary cognitive neuroscientists. The editors (in their introduction) and contributors explain the basic concepts behind modeling and avoid the use of high-level mathematics.

    The book is divided into four parts. Part I provides an extensive but basic overview of neural network modeling, including its history, present, and future trends. It also includes chapters on attention, memory, and primate studies. Part II discusses neural network models of behavioral states such as alcohol dependence, learned helplessness, depression, and waking and sleeping. Part III presents neural network models of neuropsychological tests such as the Wisconsin Card Sorting Task, the Tower of Hanoi, and the Stroop Test. Finally, part IV describes the application of neural network models to dementia: models of acetylcholine and memory, verbal fluency, Parkinson's disease, and Alzheimer's disease.

    Contributors J. Wesson Ashford, Rajendra D. Badgaiyan, Jean P. Banquet, Yves Burnod, Nelson Butters, John Cardoso, Agnes S. Chan, Jean-Pierre Changeux, Kerry L. Coburn, Jonathan D. Cohen, Laurent Cohen, Jose L. Contreras-Vidal, Antonio R. Damasio, Hanna Damasio, Stanislas Dehaene, Martha J. Farah, Joaquin M. Fuster, Philippe Gaussier, Angelika Gissler, Dylan G. Harwood, Michael E. Hasselmo, J. Allan Hobson, Sam Leven, Daniel S. Levine, Debra L. Long, Roderick K. Mahurin, Raymond L. Ownby, Randolph W. Parks, Michael I. Posner, David P. Salmon, David Servan-Schreiber, Chantal E. Stern, Jeffrey P. Sutton, Lynette J. Tippett, Daniel Tranel, Bradley Wyble

  • Methods in Neuronal Modeling, Second Edition

    Methods in Neuronal Modeling, Second Edition

    From Ions to Networks

    Christof Koch and Idan Segev

    Much research focuses on the question of how information is processed in nervous systems, from the level of individual ionic channels to large-scale neuronal networks, and from "simple" animals such as sea slugs and flies to cats and primates. New interdisciplinary methods combine a bottom-up experimental methodology with a more top-down-driven computational and modeling approach. This book serves as a handbook of computational methods and techniques for modeling the functional properties of single nerve cells and groups of nerve cells. The contributors highlight several key trends: (1) the tightening link between analytical/numerical models and the associated experimental data, (2) the broadening of modeling methods, at both the subcellular level and the level of large neuronal networks that incorporate real biophysical properties of neurons as well as the statistical properties of spike trains, and (3) the organization of the data gained by physical emulation of nervous system components through the use of very large-scale integration (VLSI) technology. The field of neuroscience has grown dramatically since the first edition of this book was published nine years ago. Half of the chapters of the second edition are completely new; the remaining ones have all been thoroughly revised. Many chapters provide an opportunity for interactive tutorials and simulation programs. They can be accessed via Christof Koch's Website.

    Contributors Larry F. Abbott, Paul R. Adams, Hagai Agmon-Snir, James M. Bower, Robert E. Burke, Erik de Schutter, Alain Destexhe, Rodney Douglas, Bard Ermentrout, Fabrizio Gabbiani, David Hansel, Michael Hines, Christof Koch, Misha Mahowald, Zachary F. Mainen, Eve Marder, Michael V. Mascagni, Alexander D. Protopapas, Wilfrid Rall, John Rinzel, Idan Segev, Terrence J. Sejnowski, Shihab Shamma, Arthur S. Sherman, Paul Smolen, Haim Sompolinsky, Michael Vanier, Walter M. Yamada

    • Hardcover $92.00
    • Paperback $46.00
  • Neurons, Networks, and Motor Behavior

    Neurons, Networks, and Motor Behavior

    Sten Grillner, Allen I. Selverston, Paul S.G. Stein, and Douglas G. Stuart

    Recent advances in motor behavior research rely on detailed knowledge of the characteristics of the neurons and networks that generate motor behavior. At the cellular level, Neurons, Networks, and Motor Behavior describes the computational characteristics of individual neurons and how these characteristics are modified by neuromodulators. At the network and behavioral levels, the volume discusses how network structure is dynamically modulated to produce adaptive behavior. Comparisons of model systems throughout the animal kingdom provide insights into general principles of motor control. Contributors describe how networks generate such motor behaviors as walking, swimming, flying, scratching, reaching, breathing, feeding, and chewing. An emerging principle of organization is that nervous systems are remarkably efficient in constructing neural networks that control multiple tasks and dynamically adapt to change. The volume contains six sections: selection and initiation of motor patterns; generation and formation of motor patterns: cellular and systems properties; generation and formation of motor patterns: computational approaches; modulation and reconfiguration; short-term modulation of pattern generating circuits; and sensory modification of motor output to control whole body orientation.

    • Hardcover $85.00
    • Paperback $9.75
  • Spikes

    Spikes
    Exploring the Neural Code

    William Bialek, Rob de Ruyter van Steveninck, Fred Rieke, and David Warland

    Our perception of the world is driven by input from the sensory nerves. This input arrives encoded as sequences of identical spikes. Much of neural computation involves processing these spike trains. What does it mean to say that a certain set of spikes is the right answer to a computational problem? In what sense does a spike train convey information about the sensory world? Spikes begins by providing precise formulations of these and related questions about the representation of sensory signals in neural spike trains. The answers to these questions are then pursued in experiments on sensory neurons. The authors invite the reader to play the role of a hypothetical observer inside the brain who makes decisions based on the incoming spike trains. Rather than asking how a neuron responds to a given stimulus, the authors ask how the brain could make inferences about an unknown stimulus from a given neural response. The flavor of some problems faced by the organism is captured by analyzing the way in which the observer can make a running reconstruction of the sensory stimulus as it evolves in time. These ideas are illustrated by examples from experiments on several biological systems.

    Intended for neurobiologists with an interest in mathematical analysis of neural data as well as the growing number of physicists and mathematicians interested in information processing by "real" nervous systems, Spikes provides a self-contained review of relevant concepts in information theory and statistical decision theory. A quantitative framework is used to pose precise questions about the structure of the neural code. These questions in turn influence both the design and analysis of experiments on sensory neurons.

    • Hardcover $95.00
    • Paperback $55.00
  • The Theoretical Foundation of Dendritic Function

    The Theoretical Foundation of Dendritic Function

    The Collected Papers of Wilfrid Rall with Commentaries

    Idan Segev, John Rinzel, and Gordon M. Shepherd

    This collection of fifteen previously published papers, some of them not widely available, has been carefully chosen and annotated by Rall's colleagues and other leading neuroscientists.

    Wilfrid Rall was a pioneer in establishing the integrative functions of neuronal dendrites that have provided a foundation for neurobiology in general and computational neuroscience in particular. This collection of fifteen previously published papers, some of them not widely available, has been carefully chosen and annotated by Rall's colleagues and other leading neuroscientists. It brings together Rall's work over more than forty years, including his first papers extending cable theory to complex dendritic trees, his ground-breaking paper introducing compartmental analysis to computational neuroscience, and his studies of synaptic integration in motoneurons, dendrodendritic interactions, plasticity of dendritic spines, and active dendritic properties. Today it is well known that the brain's synaptic information is processed mostly in the dendrites where many of the plastic changes underlying learning and memory take place. It is particularly timely to look again at the work of a major creator of the field, to appreciate where things started and where they have led, and to correct any misinterpretations of Rall's work. The editors' introduction highlights the major insights that were gained from Rall's studies as well as from those of his collaborators and followers. It asks the questions that Rall proposed during his scientific career and briefly summarizes the answers.

    The papers include commentaries by Milton Brightman, Robert E. Burke, William R. Holmes, Donald R. Humphrey, Julian J. B. Jack, John Miller, Stephen Redman, John Rinzel, Idan Segev, Gordon M. Shepherd, and Charles Wilson.

    • Hardcover $80.00
    • Paperback $40.00
  • Models of Information Processing in the Basal Ganglia

    Models of Information Processing in the Basal Ganglia

    James C. Houk, Joel L. Davis, and David G. Beiser

    This book brings together the biology and computational features of the basal ganglia and their related cortical areas along with select examples of how this knowledge can be integrated into neural network models.

    Recent years have seen a remarkable expansion of knowledge about the anatomical organization of the part of the brain known as the basal ganglia, the signal processing that occurs in these structures, and the many relations both to molecular mechanisms and to cognitive functions. This book brings together the biology and computational features of the basal ganglia and their related cortical areas along with select examples of how this knowledge can be integrated into neural network models. Organized in four parts (fundamentals; motor functions and working memories; reward mechanisms; and cognitive and memory operations), the chapters present a unique admixture of theory, cognitive psychology, anatomy, and both cellular- and systems-level physiology written by experts in each of these areas. The editors have provided commentaries as a helpful guide to each part. Many new discoveries about the biology of the basal ganglia are summarized, and their impact on the computational role of the forebrain in the planning and control of complex motor behaviors is discussed. The various findings point toward an unexpected role for the basal ganglia in the contextual analysis of the environment and in the adaptive use of this information for the planning and execution of intelligent behaviors. Parallels are explored between these findings and new connectionist approaches to difficult control problems in robotics and engineering.

    Contributors James L. Adams, P. Apicella, Michael Arbib, Dana H. Ballard, Andrew G. Barto, J. Brian Burns, Christopher I. Connolly, Peter F. Dominey, Richard P. Dum, John Gabrieli, M. Garcia-Munoz, Patricia S. Goldman-Rakic, Ann M. Graybiel, P. M. Groves, Mary M. Hayhoe, J. R. Hollerman, George Houghton, James C. Houk, Stephen Jackson, Minoru Kimura, A. B. Kirillov, Rolf Kotter, J. C. Linder, T. Ljungberg, M. S. Manley, M. E. Martone, J. Mirenowicz, C. D. Myre, Jeff Pelz, Nathalie Picard, R. Romo, S. F. Sawyer, E. Scarnati, Wolfram Schultz, Peter L. Strick, Charles J. Wilson, Jeff Wickens, Donald J. Woodward, S. J. Young

  • Large-Scale Neuronal Theories of the Brain

    Large-Scale Neuronal Theories of the Brain

    Christof Koch and Joel L. Davis

    The authors encompass a broad background, from biophysics and electrophysiology to psychophysics, neurology, and computational vision. However, all the chapters focus on a common issue: the role of the primate (including human) cerebral cortex in memory, visual perception, focal attention, and awareness.

    Large-Scale Neuronal Theories of the Brain brings together thirteen original contributions by some of the top scientists working in neuroscience today. It presents models and theories that will most likely shape and influence the way we think about the brain, the mind, and interactions between the two in the years to come. Chapters consider global theories of the brain from the bottom up—providing theories that are based on real nerve cells, their firing properties, and their anatomical connections. This contrasts with attempts that have been made by psychologists and by theorists in the artificial intelligence community to understand the brain strictly from a psychological or computational point of view. The authors encompass a broad background, from biophysics and electrophysiology to psychophysics, neurology, and computational vision. However, all the chapters focus on a common issue: the role of the primate (including human) cerebral cortex in memory, visual perception, focal attention, and awareness.

    Contributors Horace Barlow. Patricia Churchland, V. S. Ramachandran, and Terrence J. Sejnowski. Antonio R. Damasio and Hanna Damasio. Robert Desimone, Earl K. Miller, and Leonardo Chelazzi. Christof Koch and Francis Crick. Rodolfo R. Llinas and Urs Ribary. David Mumford. Tomaso Poggio and Anya Hurlbert. Michael I. Posner and Mary K. Rothbart. Wolf Singer. Charles F. Stevens. Shimon Ullman. David C. Van Essen, Charles W. Anderson, and Bruno A. Olshausen

    • Hardcover $75.00
    • Paperback $38.00
  • Neurobiology of Neural Networks

    Neurobiology of Neural Networks

    Daniel Gardner

    This timely overview and synthesis of recent work in both artificial neural networks and neurobiology seeks to examine neurobiological data from a network perspective and to encourage neuroscientists to participate in constructing the next generation of neural networks. Individual chapters were commissioned from selected authors to bridge the gap between present neural network models and the needs of neurophysiologists who are trying to use these models as part of their research on how the brain works.

    Contents Introduction: Toward Neural Neural Networks, Daniel Gardner • Two Principles of Brain Organization: A Challenge for Artificial Neural Networks, Charles F. Stevens • Static Determinants of Synaptic Strength, Daniel Gardner • Learning Rules From Neurobiology, Douglas A. Baxter and John H. Byrne • Realistic Network Models of Distributed Processing in the Leech, Shawn R. Lockery and Terrence J. Sejnowski • Neural and Peripheral Dynamics as Determinants of Patterned Motor Behavior, Hillel J. Chiel and Randall D. Beer • Dynamic Neural Network Models of Sensorimotor Behavior, Eberhard E. Fetz

    • Hardcover $52.00
    • Paperback $26.00
  • Neural Nets in Electric Fish

    Neural Nets in Electric Fish

    Walter Heiligenberg

    Heiligenberg's pioneering research on the jamming avoidance response of the electric fish Eigenmannia provides a rich mine of data documenting the first vertebrate example of an entire behavioral system traced from sensory input to motor output. Neural Nets in Electric Fish presents the principles and detailed results that have emerged from this exciting program. Heiligenberg's introduction familiarizes the reader with the unusual sensory modality of electroreception, demonstrating the rationale and motivation behind the research. The text, which includes many helpful new pedagogical graphs, takes up the behavioral work done in the early 1980s, from explorations of peripheral receptors, the hindbrain, the midbrain, and finally the diencephalon, to the most recent studies of motor output. Neural Nets in Electric Fish clearly describes Heiligenberg's analysis of the complex electrical stimulus delivered to Eigenmannia during jamming avoidance, and explains the novel two-parameter notation he uses to represent the different stages of information processing, giving many examples of the notation's power. The book relates all known behavioral phenomena of the jamming avoidance response to specific properties of the underlying neural network organization, and draws interesting parallels between the electric sense and other sensory processing systems, such as the barn owl's sound localization system, motion detection in vision, and bat echolocation.

    • Hardcover $45.00
    • Paperback $26.00