Artificial Intelligence in the Age of Neural Networks and Brain Computing

Elsevier (Publisher)
  • 1st edition
  • Published November 13, 2018
  • 352 pages

E-book | PDF with Adobe DRM | System requirements
978-0-12-816250-7 (ISBN)
 

Artificial Intelligence in the Age of Neural Networks and Brain Computing demonstrates that the existing disruptive implications and applications of AI are an outgrowth of the unique attributes of neural networks, chiefly machine learning, distributed architectures, massively parallel processing, black-box inference, intrinsic nonlinearity, and smart autonomous search engines. The book covers the major basic ideas of brain-like computing behind AI, provides a framework for deep learning, and introduces novel and intriguing paradigms as future alternatives. The success of AI-based commercial products offered by top industry leaders such as Google, IBM, Microsoft, Intel, and Amazon can be understood through the lens of this book.

  • Developed from the 30th anniversary of the International Neural Network Society (INNS) and the 2017 International Joint Conference on Neural Networks (IJCNN)
  • Authored by top experts, global field pioneers and researchers working on cutting-edge applications in signal processing, speech recognition, games, adaptive control and decision-making
  • Edited by high-level academics and researchers in intelligent systems and neural networks
  • English
  • San Diego, USA
  • 8.29 MB
978-0-12-816250-7 (9780128162507)
  • Front Cover
  • Artificial Intelligence in the Age of Neural Networks and Brain Computing
  • Copyright
  • Contents
  • List of Contributors
  • Editors' Brief Biographies
  • Introduction
  • 1 - Nature's Learning Rule: The Hebbian-LMS Algorithm
  • 1. INTRODUCTION
  • 2. ADALINE AND THE LMS ALGORITHM, FROM THE 1950S
  • 3. UNSUPERVISED LEARNING WITH ADALINE, FROM THE 1960S
  • 4. ROBERT LUCKY'S ADAPTIVE EQUALIZATION, FROM THE 1960S
  • 5. BOOTSTRAP LEARNING WITH A SIGMOIDAL NEURON
  • 6. BOOTSTRAP LEARNING WITH A MORE "BIOLOGICALLY CORRECT" SIGMOIDAL NEURON
  • 6.1 TRAINING A NETWORK OF HEBBIAN-LMS NEURONS
  • 7. OTHER CLUSTERING ALGORITHMS
  • 7.1 K-MEANS CLUSTERING
  • 7.2 EXPECTATION-MAXIMIZATION ALGORITHM
  • 7.3 DENSITY-BASED SPATIAL CLUSTERING OF APPLICATIONS WITH NOISE ALGORITHM
  • 7.4 COMPARISON BETWEEN CLUSTERING ALGORITHMS
  • 8. A GENERAL HEBBIAN-LMS ALGORITHM
  • 9. THE SYNAPSE
  • 10. POSTULATES OF SYNAPTIC PLASTICITY
  • 11. THE POSTULATES AND THE HEBBIAN-LMS ALGORITHM
  • 12. NATURE'S HEBBIAN-LMS ALGORITHM
  • 13. CONCLUSION
  • APPENDIX: TRAINABLE NEURAL NETWORK INCORPORATING HEBBIAN-LMS LEARNING
  • ACKNOWLEDGMENTS
  • REFERENCES
  • 2 - A Half Century of Progress Toward a Unified Neural Theory of Mind and Brain With Applications to Autonomous Ada ...
  • 1. TOWARDS A UNIFIED THEORY OF MIND AND BRAIN
  • 2. A THEORETICAL METHOD FOR LINKING BRAIN TO MIND: THE METHOD OF MINIMAL ANATOMIES
  • 3. REVOLUTIONARY BRAIN PARADIGMS: COMPLEMENTARY COMPUTING AND LAMINAR COMPUTING
  • 4. THE WHAT AND WHERE CORTICAL STREAMS ARE COMPLEMENTARY
  • 5. ADAPTIVE RESONANCE THEORY
  • 6. VECTOR ASSOCIATIVE MAPS FOR SPATIAL REPRESENTATION AND ACTION
  • 7. HOMOLOGOUS LAMINAR CORTICAL CIRCUITS FOR ALL BIOLOGICAL INTELLIGENCE: BEYOND BAYES
  • 8. WHY A UNIFIED THEORY IS POSSIBLE: EQUATIONS, MODULES, AND ARCHITECTURES
  • 9. ALL CONSCIOUS STATES ARE RESONANT STATES
  • 10. THE VARIETIES OF BRAIN RESONANCES AND THE CONSCIOUS EXPERIENCES THAT THEY SUPPORT
  • 11. WHY DOES RESONANCE TRIGGER CONSCIOUSNESS?
  • 12. TOWARDS AUTONOMOUS ADAPTIVE INTELLIGENT AGENTS AND CLINICAL THERAPIES IN SOCIETY
  • REFERENCES
  • 3 - Third Gen AI as Human Experience Based Expert Systems
  • 1. INTRODUCTION
  • 2. THIRD GEN AI
  • 2.1 MAXWELL-BOLTZMANN HOMEOSTASIS [8]
  • 2.2 THE INVERSE IS CONVOLUTION NEURAL NETWORKS
  • 2.3 FUZZY MEMBERSHIP FUNCTION (FMF AND DATA BASIS)
  • 3. MFE GRADIENT DESCENT
  • 3.1 UNSUPERVISED LEARNING RULE
  • 4. CONCLUSION
  • ACKNOWLEDGMENT
  • REFERENCES
  • FURTHER READING
  • 4 - The Brain-Mind-Computer Trichotomy: Hermeneutic Approach
  • 1. DICHOTOMIES
  • 1.1 THE BRAIN-MIND PROBLEM
  • 1.2 THE BRAIN-COMPUTER ANALOGY/DISANALOGY
  • 1.3 THE COMPUTATIONAL THEORY OF MIND
  • 2. HERMENEUTICS
  • 2.1 SECOND-ORDER CYBERNETICS
  • 2.2 HERMENEUTICS OF THE BRAIN
  • 2.3 THE BRAIN AS A HERMENEUTIC DEVICE
  • 2.4 NEURAL HERMENEUTICS
  • 3. SCHIZOPHRENIA: A BROKEN HERMENEUTIC CYCLE
  • 3.1 HERMENEUTICS, COGNITIVE SCIENCE, SCHIZOPHRENIA
  • 4. TOWARD THE ALGORITHMS OF NEURAL/MENTAL HERMENEUTICS
  • 4.1 UNDERSTANDING SITUATIONS: NEEDS HERMENEUTIC INTERPRETATION
  • ACKNOWLEDGMENTS
  • REFERENCES
  • FURTHER READING
  • 5 - From Synapses to Ephapsis: Embodied Cognition and Wearable Personal Assistants
  • 1. NEURAL NETWORKS AND NEURAL FIELDS
  • 2. EPHAPSIS
  • 3. EMBODIED COGNITION
  • 4. WEARABLE PERSONAL ASSISTANTS
  • REFERENCES
  • 6 - Evolving and Spiking Connectionist Systems for Brain-Inspired Artificial Intelligence
  • 1. FROM ARISTOTLE'S LOGIC TO ARTIFICIAL NEURAL NETWORKS AND HYBRID SYSTEMS
  • 1.1 ARISTOTLE'S LOGIC AND RULE-BASED SYSTEMS FOR KNOWLEDGE REPRESENTATION AND REASONING
  • 1.2 FUZZY LOGIC AND FUZZY RULE-BASED SYSTEMS
  • 1.3 CLASSICAL ARTIFICIAL NEURAL NETWORKS (ANN)
  • 1.4 INTEGRATING ANN WITH RULE-BASED SYSTEMS: HYBRID CONNECTIONIST SYSTEMS
  • 1.5 EVOLUTIONARY COMPUTATION (EC): LEARNING PARAMETER VALUES OF ANN THROUGH EVOLUTION OF INDIVIDUAL MODELS AS PART OF POPULATIO ...
  • 2. EVOLVING CONNECTIONIST SYSTEMS (ECOS)
  • 2.1 PRINCIPLES OF ECOS
  • 2.2 ECOS REALIZATIONS AND AI APPLICATIONS
  • 3. SPIKING NEURAL NETWORKS (SNN) AS BRAIN-INSPIRED ANN
  • 3.1 MAIN PRINCIPLES, METHODS, AND EXAMPLES OF SNN AND EVOLVING SNN (ESNN)
  • 3.2 APPLICATIONS AND IMPLEMENTATIONS OF SNN FOR AI
  • 4. BRAIN-LIKE AI SYSTEMS BASED ON SNN. NEUCUBE. DEEP LEARNING ALGORITHMS
  • 4.1 BRAIN-LIKE AI SYSTEMS. NEUCUBE
  • 4.2 DEEP LEARNING AND DEEP KNOWLEDGE REPRESENTATION IN NEUCUBE SNN MODELS: METHODS AND AI APPLICATIONS [6]
  • 4.2.1 Supervised Learning for Classification of Learned Patterns in a SNN Model
  • 4.2.2 Semisupervised Learning
  • 5. CONCLUSION
  • ACKNOWLEDGMENT
  • REFERENCES
  • 7 - Pitfalls and Opportunities in the Development and Evaluation of Artificial Intelligence Systems
  • 1. INTRODUCTION
  • 2. AI DEVELOPMENT
  • 2.1 OUR DATA ARE CRAP
  • 2.2 OUR ALGORITHM IS CRAP
  • 3. AI EVALUATION
  • 3.1 USE OF DATA
  • 3.2 PERFORMANCE MEASURES
  • 3.3 DECISION THRESHOLDS
  • 4. VARIABILITY AND BIAS IN OUR PERFORMANCE ESTIMATES
  • 5. CONCLUSION
  • ACKNOWLEDGMENT
  • REFERENCES
  • 8 - The New AI: Basic Concepts, and Urgent Risks and Opportunities in the Internet of Things
  • 1. INTRODUCTION AND OVERVIEW
  • 1.1 DEEP LEARNING AND NEURAL NETWORKS BEFORE 2009-11
  • 1.2 THE DEEP LEARNING CULTURAL REVOLUTION AND NEW OPPORTUNITIES
  • 1.3 NEED AND OPPORTUNITY FOR A DEEP LEARNING REVOLUTION IN NEUROSCIENCE
  • 1.4 RISKS OF HUMAN EXTINCTION, NEED FOR NEW PARADIGM FOR INTERNET OF THINGS
  • 2. BRIEF HISTORY AND FOUNDATIONS OF THE DEEP LEARNING REVOLUTION
  • 2.1 OVERVIEW OF THE CURRENT LANDSCAPE
  • 2.2 HOW THE DEEP REVOLUTION ACTUALLY HAPPENED
  • 2.3 BACKPROPAGATION: THE FOUNDATION WHICH MADE THIS POSSIBLE
  • 2.4 CONNS, 3 LAYERS, AND AUTOENCODERS: THE THREE MAIN TOOLS OF TODAY'S DEEP LEARNING
  • 3. FROM RNNS TO MOUSE-LEVEL COMPUTATIONAL INTELLIGENCE: NEXT BIG THINGS AND BEYOND
  • 3.1 TWO TYPES OF RECURRENT NEURAL NETWORK
  • 3.2 DEEP VERSUS BROAD: A FEW PRACTICAL ISSUES
  • 3.3 ROADMAP FOR MOUSE-LEVEL COMPUTATIONAL INTELLIGENCE (MLCI)
  • 3.4 EMERGING NEW HARDWARE TO ENHANCE CAPABILITY BY ORDERS OF MAGNITUDE
  • 4. NEED FOR NEW DIRECTIONS IN UNDERSTANDING BRAIN AND MIND
  • 4.1 TOWARD A CULTURAL REVOLUTION IN HARD NEUROSCIENCE
  • 4.2 FROM MOUSE BRAIN TO HUMAN MIND: PERSONAL VIEWS OF THE LARGER PICTURE
  • 5. INFORMATION TECHNOLOGY (IT) FOR HUMAN SURVIVAL: AN URGENT UNMET CHALLENGE
  • 5.1 EXAMPLES OF THE THREAT FROM ARTIFICIAL STUPIDITY
  • 5.2 CYBER AND EMP THREATS TO THE POWER GRID
  • 5.3 THREATS FROM UNDEREMPLOYMENT OF HUMANS
  • 5.4 PRELIMINARY VISION OF THE OVERALL PROBLEM, AND OF THE WAY OUT
  • REFERENCES
  • 9 - Theory of the Brain and Mind: Visions and History
  • 1. EARLY HISTORY
  • 2. EMERGENCE OF SOME NEURAL NETWORK PRINCIPLES
  • 3. NEURAL NETWORKS ENTER MAINSTREAM SCIENCE
  • 4. IS COMPUTATIONAL NEUROSCIENCE SEPARATE FROM NEURAL NETWORK THEORY?
  • 5. DISCUSSION
  • REFERENCES
  • 10 - Computers Versus Brains: Game Is Over or More to Come?
  • 1. INTRODUCTION
  • 2. AI APPROACHES
  • 3. METASTABILITY IN COGNITION AND IN BRAIN DYNAMICS
  • 4. MULTISTABILITY IN PHYSICS AND BIOLOGY
  • 5. PRAGMATIC IMPLEMENTATION OF COMPLEMENTARITY FOR NEW AI
  • ACKNOWLEDGMENTS
  • REFERENCES
  • 11 - Deep Learning Approaches to Electrophysiological Multivariate Time-Series Analysis
  • 1. INTRODUCTION
  • 2. THE NEURAL NETWORK APPROACH
  • 3. DEEP ARCHITECTURES AND LEARNING
  • 3.1 DEEP BELIEF NETWORKS
  • 3.2 STACKED AUTOENCODERS
  • 3.3 CONVOLUTIONAL NEURAL NETWORKS
  • 4. ELECTROPHYSIOLOGICAL TIME-SERIES
  • 4.1 MULTICHANNEL NEUROPHYSIOLOGICAL MEASUREMENTS OF THE ACTIVITY OF THE BRAIN
  • 4.2 ELECTROENCEPHALOGRAPHY (EEG)
  • 4.3 HIGH-DENSITY ELECTROENCEPHALOGRAPHY
  • 4.4 MAGNETOENCEPHALOGRAPHY
  • 5. DEEP LEARNING MODELS FOR EEG SIGNAL PROCESSING
  • 5.1 STACKED AUTOENCODERS
  • 5.2 SUMMARY OF THE PROPOSED METHOD FOR EEG CLASSIFICATION
  • 5.3 DEEP CONVOLUTIONAL NEURAL NETWORKS
  • 5.4 OTHER DL APPROACHES
  • 6. FUTURE DIRECTIONS OF RESEARCH
  • 6.1 DL INTERPRETABILITY
  • 6.2 ADVANCED LEARNING APPROACHES IN DL
  • 6.3 ROBUSTNESS OF DL NETWORKS
  • 7. CONCLUSIONS
  • REFERENCES
  • FURTHER READING
  • 12 - Computational Intelligence in the Time of Cyber-Physical Systems and the Internet of Things
  • 1. INTRODUCTION
  • 2. SYSTEM ARCHITECTURE
  • 3. ENERGY HARVESTING AND MANAGEMENT
  • 3.1 ENERGY HARVESTING
  • 3.2 ENERGY MANAGEMENT AND RESEARCH CHALLENGES
  • 4. LEARNING IN NONSTATIONARY ENVIRONMENTS
  • 4.1 PASSIVE ADAPTATION MODALITY
  • 4.2 ACTIVE ADAPTATION MODALITY
  • 4.3 RESEARCH CHALLENGES
  • 5. MODEL-FREE FAULT DIAGNOSIS SYSTEMS
  • 5.1 MODEL-FREE FAULT DIAGNOSIS SYSTEMS
  • 5.2 RESEARCH CHALLENGES
  • 6. CYBERSECURITY
  • 6.1 HOW CAN CPS AND IOT BE PROTECTED FROM CYBERATTACKS?
  • 6.2 CASE STUDY: DARKNET ANALYSIS TO CAPTURE MALICIOUS CYBERATTACK BEHAVIORS
  • 7. CONCLUSIONS
  • ACKNOWLEDGMENTS
  • REFERENCES
  • 13 - Multiview Learning in Biomedical Applications
  • 1. INTRODUCTION
  • 2. MULTIVIEW LEARNING
  • 2.1 INTEGRATION STAGE
  • 2.2 TYPE OF DATA
  • 2.3 TYPES OF ANALYSIS
  • 3. MULTIVIEW LEARNING IN BIOINFORMATICS
  • 3.1 PATIENT SUBTYPING
  • 3.2 DRUG REPOSITIONING
  • 4. MULTIVIEW LEARNING IN NEUROINFORMATICS
  • 4.1 AUTOMATED DIAGNOSIS SUPPORT TOOLS FOR NEURODEGENERATIVE DISORDERS
  • 4.2 MULTIMODAL BRAIN PARCELLATION
  • 5. DEEP MULTIMODAL FEATURE LEARNING
  • 5.1 DEEP LEARNING APPLICATION TO PREDICT PATIENT'S SURVIVAL
  • 5.2 MULTIMODAL NEUROIMAGING FEATURE LEARNING WITH DEEP LEARNING
  • 6. CONCLUSIONS
  • REFERENCES
  • 14 - Meaning Versus Information, Prediction Versus Memory, and Question Versus Answer
  • 1. INTRODUCTION
  • 2. MEANING VERSUS INFORMATION
  • 3. PREDICTION VERSUS MEMORY
  • 4. QUESTION VERSUS ANSWER
  • 5. DISCUSSION
  • 6. CONCLUSION
  • ACKNOWLEDGMENTS
  • REFERENCES
  • 15 - Evolving Deep Neural Networks
  • 1. INTRODUCTION
  • 2. BACKGROUND AND RELATED WORK
  • 3. EVOLUTION OF DEEP LEARNING ARCHITECTURES
  • 3.1 EXTENDING NEAT TO DEEP NETWORKS
  • 3.2 COOPERATIVE COEVOLUTION OF MODULES AND BLUEPRINTS
  • 3.3 EVOLVING DNNS IN THE CIFAR-10 BENCHMARK
  • 4. EVOLUTION OF LSTM ARCHITECTURES
  • 4.1 EXTENDING CODEEPNEAT TO LSTMS
  • 4.2 EVOLVING DNNS IN THE LANGUAGE MODELING BENCHMARK
  • 5. APPLICATION CASE STUDY: IMAGE CAPTIONING FOR THE BLIND
  • 5.1 EVOLVING DNNS FOR IMAGE CAPTIONING
  • 5.2 BUILDING THE APPLICATION
  • 5.3 IMAGE CAPTIONING RESULTS
  • 6. DISCUSSION AND FUTURE WORK
  • 7. CONCLUSION
  • REFERENCES
  • Index
  • Back Cover

File format: PDF
Copy protection: Adobe DRM (Digital Rights Management)

System requirements:

Computer (Windows; MacOS X; Linux): Install the free Adobe Digital Editions software before downloading (see e-book help).

Tablet/smartphone (Android; iOS): Install the free Adobe Digital Editions app before downloading (see e-book help).

E-book readers: Bookeen, Kobo, Pocketbook, Sony, Tolino, and many more (not Kindle).

The PDF format displays each page of the book identically on any hardware, which makes PDF well suited to complex layouts such as those used in textbooks and reference works (images, tables, columns, footnotes). On the small displays of e-readers or smartphones, however, PDFs can be tedious because they require a lot of scrolling. Adobe DRM applies a "hard" form of copy protection: if the necessary requirements are not met, you will not be able to open the e-book, so prepare your reading hardware before downloading.

Please note when using the Adobe Digital Editions reading software: we strongly recommend authorizing the software with your personal Adobe ID immediately after installation.

Further information can be found in our e-book help.


Download (available immediately)

€176.12
incl. 19% VAT
Download / single license
PDF with Adobe DRM
see system requirements
Order e-book