
Springer Series on Bio- and Neurosystems 7

Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence

Nikola K. Kasabov

Springer Series on Bio- and Neurosystems

Volume 7

Series editor

Nikola K. Kasabov, Auckland University of Technology, Auckland, New Zealand

The Springer Series on Bio- and Neurosystems publishes fundamental principles and state-of-the-art research at the intersection of biology, neuroscience, information processing and the engineering sciences. The series covers general informatics methods and techniques, together with their use to answer biological or medical questions. Of interest are both basics and new developments on traditional methods such as machine learning, artificial neural networks, statistical methods, nonlinear dynamics, information processing methods, and image and signal processing. New findings in biology and neuroscience obtained through informatics and engineering methods, topics in systems biology, medicine, neuroscience and ecology, as well as engineering applications such as robotic rehabilitation, health information technologies, and many more, are also examined. The main target group includes informaticians and engineers interested in biology, neuroscience and medicine, as well as biologists and neuroscientists using computational and engineering tools. Volumes published in the series include monographs, edited volumes, and selected conference proceedings. Books purposely devoted to supporting education at the graduate and post-graduate levels in bio- and neuroinformatics, computational biology and neuroscience, systems biology, systems neuroscience and other related areas are of particular interest.

More information about this series at http://www.springer.com/series/15821

Nikola K. Kasabov

Time-Space, Spiking Neural Networks and Brain-Inspired Artificial Intelligence

Nikola K. Kasabov
Knowledge Engineering and Discovery Research Institute (KEDRI)
Auckland University of Technology
Auckland, New Zealand

ISSN 2520-8535 ISSN 2520-8543 (electronic)

Springer Series on Bio- and Neurosystems

ISBN 978-3-662-57713-4 ISBN 978-3-662-57715-8 (eBook)

https://doi.org/10.1007/978-3-662-57715-8

Library of Congress Control Number: 2018946569

© Springer-Verlag GmbH Germany, part of Springer Nature 2019

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed.

The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Printed on acid-free paper

This Springer imprint is published by the registered company Springer-Verlag GmbH, DE, part of Springer Nature. The registered company address is: Heidelberger Platz 3, 14197 Berlin, Germany.

Time lives inside us and we live inside Time.

Vasil Levski-Apostola (1837–1873)

Bulgarian Educator and Revolutionary

To my mother Kapka Nikolova Mankova-Kasabova (1920–2012) and my father Kiril Ivanov Kasabov (1914–1996), who gave me the light of life, and for those who came earlier in time; to my family, Diana, Kapka and Assia, who give me the light of love; and to those who will come later in time; I hope they will enjoy the light of life and the light of love as much as I do.

Foreword

Professor Furber is ICL Professor of Computer Engineering in the School of Computer Science at the University of Manchester, UK. After completing his education at the University of Cambridge (BA, MA, MMath, Ph.D.), he spent the 1980s at Acorn Computers, where he was a principal designer of the BBC Micro and the ARM 32-bit RISC microprocessor. As of 2018, over 120 billion variants of the ARM processor have been manufactured, powering much of the world's mobile computing and embedded systems. He pioneered the development of SpiNNaker, a neuromorphic computer architecture that enables the implementation of massively parallel spiking neural network systems with a wide range of applications.

The last decade has seen an explosion in the deployment of artificial neural networks for machine learning applications, ranging from consumer speech recognition systems through to vision systems for autonomous vehicles. These artificial neural systems differ from biological neural systems in many important aspects, but most notably in their use of neurons with continuously varying outputs, where biology predominantly uses spiking neurons: neurons that emit a pure electro-chemical unit impulse in response to recognising an input pattern. The continuous output of the artificial neuron can be thought of as representing the mean firing rate of its biological equivalent, but in using rates rather than spikes, the artificial network loses the ability to access the detailed spatio-temporal information that can be conveyed in a time sequence of spikes. Biological systems can clearly access this information, but how they use it effectively remains a mystery to science.
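The distinction drawn here between rates and spikes can be made concrete with a toy model. The sketch below is illustrative only and is not taken from the book; the neuron model is a minimal leaky integrate-and-fire (LIF) neuron and the parameter values are arbitrary. The full spike train preserves the timing of every unit impulse, while averaging it collapses everything into the single "mean firing rate" number that a conventional continuous-output artificial neuron would report.

```python
import numpy as np

def lif_spike_train(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron; return a 0/1 spike train."""
    v = 0.0
    spikes = np.zeros(len(input_current), dtype=int)
    for t, i_in in enumerate(input_current):
        # Leak toward rest while integrating the input current.
        v += dt * (-v + i_in) / tau
        if v >= v_thresh:   # threshold crossed: emit a unit impulse
            spikes[t] = 1
            v = v_reset     # reset the membrane potential after firing
    return spikes

# The spike train keeps the precise time of each impulse ...
train = lif_spike_train(np.full(1000, 2.0))
# ... whereas its mean collapses them into one continuous "firing rate".
mean_rate = train.mean()
```

A rate-based network sees only `mean_rate`; the spike train additionally encodes when each impulse occurred, which is the spatio-temporal detail referred to above.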

Nik Kasabov has done as much as anyone to begin to unlock the secrets of the biological spatio-temporal patterns of spikes, and in this book, he reveals what he has learnt about those secrets and how he has applied that knowledge in exciting new ways. This is deep knowledge, and if we can harness such knowledge in brain-inspired AI systems, then the explosion in AI witnessed over the last decade will look like a damp squib in comparison with what is to follow. This book is not just a record of past work, but also a guidebook for an exciting future!

Steve Furber, CBE, FRS, FREng
Computer Science Department
University of Manchester, UK


Preface

Everything exists and evolves within time–space, and time–space is within everything, from a molecule to the universe. Understanding the complex relationship between time and space has been one of the biggest scientific challenges of all times, including understanding and modelling the time–space information processes in the human brain and understanding life. This striving for deep knowledge has always been the main goal of the human race.

Now that an enormous amount of time–space data is available, science needs new methods to deal with the complexity of such data across domain areas. Risk mitigation strategies from health to civil defence often depend on simple models. But recent advances in machine learning offer the intriguing possibility that disastrous events, as diverse as strokes, earthquakes, financial market crises, or degenerative brain diseases, could be predicted early if the patterns hidden deeply in the intricate and complex interactions between spatial and temporal components could be understood. Although such interactions are manifested at different spatial or temporal scales in different applications or domain areas, the same information-processing principles may be applied.

A radically new approach to modelling such data and to obtaining deep knowledge is needed, one that could enable the creation of faster and significantly better machine learning and pattern recognition systems, offering the realistic prospect of much more accurate and earlier event prediction, and a better understanding of causal time–space relationships.

The term time–space, as coined in this book, has two meanings:

– The problem space, where temporal processes evolve in time;
– The functional space of time, as it goes by.

This book looks at evolving processes in time–space. It discusses how deep learning of time–space data is achieved in the human brain and how this results in deep knowledge, which is taken as inspiration to develop methods and systems for deep learning and deep knowledge representation in spiking neural networks (SNN), and, furthermore, how this could be used to develop a new type of artificial intelligence (AI) systems, here called brain-inspired AI (BI-AI). In turn, these BI-AI systems can help us better understand the human brain and the universe, and gain new deep knowledge.

BI-AI systems adopt structures and methods from the human brain to intelligently learn time–space data. BI-AI systems have six main distinctive features:

(1) Their structures and functionality are inspired by the human brain; they consist of spatially located neurons that create connections between them through deep learning in time–space by exchanging information in the form of spikes. They are built of spiking neural networks (SNNs), as explained in Chaps. 4–6 of the book.
(2) Being brain-inspired, BI-AI systems can achieve not only deep learning, but deep knowledge representation in time–space.
(3) They can manifest cognitive behaviour.
(4) They can be used for knowledge transfer between humans and machines as a foundation for the creation of symbiosis between humans and machines, ultimately leading to the integration of human intelligence and artificial intelligence (HI+AI), as discussed in the last chapter of the book.
(5) BI-AI systems are universal data learning machines, superior to traditional machine learning techniques when dealing with time–space data.
(6) BI-AI systems can help us understand, protect and cure the human brain.
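Feature (1), connections created through the exchange of spikes, is commonly realised in SNNs with spike-timing-dependent plasticity (STDP). The sketch below is a generic pair-based STDP rule, given purely for illustration: the constants are hypothetical, and the book's own learning methods (Chaps. 4–6) differ in detail. A synapse is strengthened when the presynaptic spike precedes the postsynaptic one, and weakened otherwise.

```python
import math

def stdp_update(w, t_pre, t_post, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Pair-based STDP: adjust weight w from one pre/post spike-time pair (ms)."""
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * math.exp(-dt / tau)   # causal pair: potentiate
    else:
        w -= a_minus * math.exp(dt / tau)   # anti-causal pair: depress
    return min(max(w, 0.0), 1.0)            # keep the weight in [0, 1]

# Pre fires at 10 ms, post at 15 ms: the connection strengthens.
w_causal = stdp_update(0.5, 10.0, 15.0)
# Reversed order: the connection weakens.
w_anticausal = stdp_update(0.5, 15.0, 10.0)
```

Repeated over many spike pairs, such a rule is what lets spatially located neurons grow and prune connections purely from the timing of the spikes they exchange.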

At the more technical level, the book presents background knowledge, new generic methods for SNN, evolving SNN (eSNN) and brain-inspired SNN (BI-SNN), and new specific methods for the creation of BI-AI systems for modelling and analysis of time–space data across applications.

I strongly believe that progress in information sciences is mostly an evolutionary process, that is, building up on what has already been created. In order to understand the principles of deep learning and deep knowledge, SNN and BI-AI, and to properly apply them to solve problems, one needs to know some basic science principles established in the past, such as epistemology by Aristotle, the perceptron by Rosenblatt, the multilayer perceptron by Rumelhart, Amari, Werbos and others, self-organising maps by Kohonen, fuzzy logic by Zadeh, quantum principles by Einstein and Rutherford, von Neumann computing and the Atanasoff ABC machine, and of course the human brain. All these principles are briefly covered in the book, giving a proper foundation for a better understanding of SNN and BI-AI and how they can be used to understand the time–space puzzles of nature and life and to gain new, deep knowledge.

I have been lucky to meet and talk with some of the pioneers in the fields, such as Shun-ichi Amari, Teuvo Kohonen, Walter Freeman, John Taylor, Lotfi Zadeh, Takeshi Yamakawa, Steve Grossberg, John Andreae, Janusz Kacprzyk and Steve Furber, to mention only a few of them, who gave me the inspiration to go deep in this research. My humble view is that we should not forget our pioneers and teachers who gave us the light of knowledge.


Some of the new methods presented in the book were developed by the author and have already appeared in part in various publications in collaboration with my students and colleagues in the period 2005–2018. I would like to acknowledge the contribution of my colleagues and postdoctoral fellows Lubica Benuskova, Michail Defoin-Platel, Enmei Tu, Zeng-Guang Hou and his students Nelson and James, Jie Yang and his students Lei Zhou and Chengie Gu, Giacomo Indiveri, Qun Song, Paul Pang, Israel Espinosa, Weiqi Yan, Denise Taylor, Grace Wang, Valery Feigin, Rita Krishnamurthi, Carlo Morabito, Nadia Mammone, Veselka Boeva, Marley Vellasco, Andreas Koenig, Mario Fedrizzi, Plamen Angelov, Dimitar Filev, Petia Georgieva, Georgi Bijev, Petia Koprinkova, Chrisina Jayne, Seiichi Ozawa, Cesare Alippi, and many others.

I was privileged to have a large number of Ph.D. students in this period who also contributed to publications used in this book. I acknowledge the contribution of my Ph.D. students Maryam Doborjeh, Neelava Sengupta, Zohre Doborjeh, Anne Abbott, Kaushalya Kumarasinghe, Akshay Gollohalli, Clarence Tan, Vinita Kumar, Wei Cui, Vivienne Breen, Fahad Alvi, Reggio Hartono, Elisa Capecci, Nathan Scott, Norhanifah Murli, Muhaini Othman, Paul Davidson, Kshitij Dhoble, Nuttapod Nuntalid, Linda Liang, Haza Nuzly, Maggie Ma, Gary Chen, Harya Widiputra, Raphael Hu, Stefan Schliebs, Anju Verma, Peter Hwang, Snejana Soltic, Vishal Jain, Simei Wysosky, Liang Goh and others. Special acknowledgement to Helena Bahrami, who helped me with the references and the formatting of each of the 22 chapters.

During my long-time work on the various topics included in this book, and during the writing of the book itself, I have received tremendous support and help from my wife Diana and my daughters Kapka and Assia. I thank them and love them!

I did some of the work on SNN while on a visiting professorship, funded by an EU fellowship named after the great scientist Maria Salomea Skłodowska-Curie (1867–1934). My fellowship was hosted by the Institute of Neuroinformatics (INI) at ETH and the University of Zurich, working in collaboration with Giacomo Indiveri. I am grateful for this wonderful opportunity named after a remarkable scientist.

I did all the work on the book while maintaining my research, teaching and administrative duties at Auckland University of Technology (AUT). I acknowledge the generous funding and support I have received from this vibrant university since my appointment in 2002, and still continuing. As the Founding Director of the Knowledge Engineering and Discovery Research Institute (KEDRI) at AUT for 16 years now, a role that has allowed me to take leadership in research, I have been helped tremendously by the KEDRI Administrative Manager, Joyce D'Mello.

I acknowledge the support and the excellent work of the team behind the Springer Series on Bio- and Neurosystems: the series editorial manager Leontina, and also Arun Kumar, Sabine and the whole team involved in this series.


If I have to summarise the philosophy of this book in one sentence as a motto, I would say:

Inspired by the oneness in nature in time–space, we aim to achieve oneness in data modelling using brain-inspired computation.

August 2018

Nikola K. Kasabov
Fellow IEEE, Fellow RSNZ, Fellow IITP NZ, DVF RAE UK
Director, Knowledge Engineering and Discovery Research Institute (KEDRI)
Auckland University of Technology, Auckland, New Zealand


About the Book Content by Topics and Chapters and the Pathway of Knowledge

The book's pathway of knowledge runs through four strands: foundations; ECOS and SNN methods; applications; and future directions. The topics covered, by chapter, are:

– Evolving processes and their representation as data, information and knowledge (Chapter 1)
– Information theory (Chapters 1, 21)
– ANN and ECOS computational methods (Chapter 2)
– Brain information processing (Chapter 3)
– SNN methods (Chapter 4)
– Evolving SNN (eSNN) (Chapter 5)
– Brain-inspired SNN (BI-SNN) and the design of BI-AI (Chapter 6)
– Evolutionary computation (EC); SNN, eSNN and BI-SNN parameter optimisation with EC (Chapter 7)
– Quantum-inspired computation (Chapters 7, 22)
– Deep learning and deep knowledge from brain data (EEG, fMRI, DTI) (Chapters 8–11)
– Affective computation (Chapters 9, 14)
– Audio- and visual information processing (Chapters 12, 13)
– Brain-computer interfaces with BI-SNN (Chapter 14)
– Molecular information processing (Chapter 15)
– Bioinformatics data modelling (Chapters 15, 17)
– SNN for neuroinformatics and personalised modelling (Chapters 16, 18)
– Predictive modelling in ecology, transport and the environment (Chapter 19)
– Neuromorphic systems and computational architectures (Chapter 20)
– New spike-time information theory for data compression (Chapter 21)
– Integrated quantum-neurogenetic-brain-inspired models (Chapter 22)
– Towards integrated human intelligence and artificial intelligence (HI+AI) (Chapter 22)

Contents

Part I Time-Space and AI. Artificial Neural Networks

1 Evolving Processes in Time-Space. Deep Learning and Deep Knowledge Representation in Time-Space. Brain-Inspired AI .... 3
  1.1 Evolving Processes in Time-Space .... 3
    1.1.1 What Are Evolving Processes? .... 4
    1.1.2 Evolving Processes in Living Organisms .... 5
    1.1.3 Spatio-temporal and Spectro-temporal Evolving Processes .... 8
  1.2 Characteristics of Evolving Processes: Frequency, Energy, Probability, Entropy and Information .... 9
  1.3 Light and Sound .... 15
  1.4 Evolving Processes in Time-Space and Direction .... 18
  1.5 From Data and Information to Knowledge .... 19
  1.6 Deep Learning and Deep Knowledge Representation in Time-Space. How Deep? .... 22
    1.6.1 Defining Deep Knowledge in Time-Space .... 22
    1.6.2 How Deep? .... 25
    1.6.3 Examples of Deep Knowledge Representation in This Book .... 26
  1.7 Statistical, Computational Modelling of Evolving Processes .... 26
    1.7.1 Statistical Methods for Computational Modelling .... 27
    1.7.2 Global, Local and Transductive ("Personalised") Modelling .... 28
    1.7.3 Model Validation .... 31
  1.8 Brain-Inspired AI .... 32
  1.9 Chapter Summary and Further Readings for Deeper Knowledge .... 35
  References .... 36
