Reverse engineering the mind : consciously acting machines and accelerated evolution

Florian Neukart describes methods for interpreting signals in the human brain in combination with state-of-the-art AI, allowing for the creation of artificial conscious entities (ACE). Key methods are to establish a symbiotic relationship between a biological brain, sensors, AI, and quantum hard- and...


Bibliographic Details
Main Author: Neukart, Florian (Author)
Format: Electronic eBook
Language: English
Published: Wiesbaden : Springer, [2017]
Series: AutoUni-Schriftenreihe ; Bd. 94.
Subjects
Online Access: Full text

Table of Contents:
  • 1. Evolution's most extraordinary achievement
  • 1.1. Anatomy of the human brain
  • 1.1.1. Truncus cerebri
  • 1.1.1.1. Cerebellum
  • 1.1.1.2. Mesencephalon
  • 1.1.1.3. Pons
  • 1.1.1.4. Medulla oblongata
  • 1.1.2. Paleomammalian
  • 1.1.2.1. Corpus amygdaloideum
  • 1.1.2.2. Hippocampus
  • 1.1.2.3. Diencephalon
  • 1.1.2.3.1. Hypothalamus
  • 1.1.2.3.2. Subthalamus
  • 1.1.2.3.3. Thalamus dorsalis
  • 1.1.2.3.4. Pineal gland and Epithalamus
  • 1.1.2.4. Cingulate gyrus
  • 1.1.3. Cortex and neocortex
  • 1.1.3.1. Frontal lobe
  • 1.1.3.2. Parietal lobe
  • 1.1.3.3. Temporal lobe
  • 1.1.3.4. Occipital lobe
  • 1.2. Neural information transfer
  • 1.3. Summary
  • 2. Pillars of artificial intelligence
  • 2.1. Machine learning
  • 2.1.1. Supervised learning algorithms
  • 2.1.2. Unsupervised learning algorithms
  • 2.2. Computer vision
  • 2.3. Logic and reasoning
  • 2.4. Language and communication
  • 2.5. Agents and actions
  • 2.5.1. Principles of the new agent-centered approach
  • 2.5.2. Multi-agent behavior
  • 2.5.3. Multi-agent learning
  • 2.6. Summary
  • 3. Outline of artificial neural networks
  • 3.1. Definition
  • 3.2. Paradigms of computational intelligence
  • 3.3. Neural networks
  • 3.3.1. Artificial neural networks
  • 3.3.1.1. Suitable problems
  • 3.3.1.2. Basic knowledge
  • 3.3.1.2.1. Structure
  • 3.3.1.2.2. Bias
  • 3.3.1.2.3. Gradient descent
  • 3.3.1.3. Activation functions
  • 3.3.1.3.1. Linear activation function
  • 3.3.1.3.2. Sigmoid activation function
  • 3.3.1.3.3. Hyperbolic tangent activation function
  • 3.3.1.3.4. Rectifier linear unit
  • 3.3.1.3.5. Gaussian activation function
  • 3.3.1.4. Regularization
  • 3.3.2. Types of artificial neural networks
  • 3.3.2.1. Supervised and unsupervised learning
  • 3.3.2.2. Feed-forward artificial neural network
  • 3.3.2.3. Feed-forward artificial neural network with feedback connections
  • 3.3.2.4. Fully connected artificial neural network
  • 3.3.2.5. Basic artificial neural network structure
  • 3.3.2.6. Perceptron
  • 3.3.2.6.1. Single layer perceptron
  • 3.3.2.6.2. Multi-layer perceptron
  • 3.3.2.6.3. Spiking artificial neural networks
  • 3.3.2.7. Radial basis artificial neural network
  • 3.3.2.8. Recurrent artificial neural network
  • 3.3.2.8.1. Elman recurrent artificial neural network
  • 3.3.2.8.2. Jordan recurrent artificial neural network
  • 3.3.2.9. Fully connected artificial neural network
  • 3.3.2.9.1. Hopfield artificial neural network
  • 3.3.2.9.2. Boltzmann Machine
  • 3.3.2.9.3. Support vector machine
  • 3.3.2.9.4. Self-organizing feature map
  • 3.3.2.9.5. Committee machines
  • 3.3.3. Training and learning
  • 3.3.3.1. Supervised and unsupervised training
  • 3.3.3.2. (Root) mean squared error
  • 3.3.3.3. Estimators
  • 3.3.3.4. Hebb's learning rule
  • 3.3.3.5. Delta rule
  • 3.3.3.6. Propagation learning
  • 3.3.3.6.1. Back propagation training
  • 3.3.3.6.2. Manhattan update rule training
  • 3.3.3.6.3. Resilient propagation
  • 3.3.3.7. Genetic learning (NeuroEvolution)
  • 3.3.3.7.1. Evolutionary search of connection weights
  • 3.3.3.7.2. Evolutionary search of architectures
  • 3.3.3.7.3. Evolutionary search of learning rules
  • 3.3.3.8. Simulated annealing
  • 3.3.3.9. NeuroEvolution of augmenting topologies (NEAT)
  • 3.3.4. Stability-plasticity dilemma
  • 3.4. Summary
  • 4. Advanced artificial perception and pattern recognition
  • 4.1. Convolutional artificial neural networks
  • 4.1.1. Data representation
  • 4.1.2. Structure
  • 4.1.2.1. Convolutional layers
  • 4.1.2.2. Different ways of perception and processing
  • 4.1.2.3. Maxpooling/downsampling layers
  • 4.1.2.4. Feature maps
  • 4.1.2.5. Fully connected layers
  • 4.1.2.6. Number of neurons
  • 4.1.3. Training
  • 4.2. Deep belief artificial neural network
  • 4.2.1. Stacking together RBMs
  • 4.2.2. Training
  • 4.3. Cortical artificial neural network
  • 4.3.1. Structure
  • 4.3.1.1. Cortices
  • 4.3.1.2. Number of neurons
  • 4.3.1.3. Synapses
  • 4.3.2. Generic cortical artificial neural network
  • 4.3.3. Purpose
  • 4.3.4. Evolution and weight initialization
  • 4.4. SHOCID recurrent artificial neural network
  • 4.4.1. Structure
  • 4.4.1.1. Recurrent layer one
  • 4.4.1.2. Recurrent layer two
  • 4.4.1.3. Number of neurons
  • 4.4.1.4. Synapses
  • 4.4.2. Purpose
  • 4.4.3. Evolution and weight initialization
  • 4.5. Summary
  • 5. Advanced nature-inspired evolution and learning strategies
  • 5.1. Transgenetic NeuroEvolution
  • 5.1.1. Fundamentals
  • 5.1.2. Host genetic material
  • 5.1.3. Endosymbiont
  • 5.1.4. Algorithm
  • 5.1.5. Horizontal (endosymbiotic) gene (sequence) transfer
  • 5.1.5.1. Weight plasmid
  • 5.1.5.2. Structure plasmid
  • 5.1.6. Transposon mutation
  • 5.1.6.1. Jump and swap transposon
  • 5.1.6.2. Erase and jump transposon
  • 5.1.7. Usage
  • 5.2. Artificial immune system-inspired NeuroEvolution
  • 5.2.1. Fundamentals
  • 5.2.2. Clonal selection and somatic hypermutation
  • 5.2.3. Danger theory, virus attack and hyperrecombination
  • 5.2.4. Negative selection
  • 5.2.5. Overall algorithm
  • 5.2.6. Causality
  • 5.2.7. Usage
  • 5.3. Structural evolution
  • 5.3.1. Fundamentals
  • 5.3.2. Algorithm
  • 5.3.3. Generic determination of artificial neural network quality
  • 5.3.4. Parameterization
  • 5.3.5. Usage
  • 5.4. Summary
  • 6. Autonomously acting cars and predicting market behaviour: some application scenarios for ANNs
  • 6.1. Analysis and knowledge
  • 6.1.1. Supervised and unsupervised functions
  • 6.1.2. Classification
  • 6.1.3. Regression
  • 6.1.4. Clustering
  • 6.1.5. Attribute importance
  • 6.1.6. Association
  • 6.1.7. Interesting knowledge
  • 6.1.8. Accurate knowledge
  • 6.1.9. Interpretable knowledge
  • 6.1.10. Intelligent processing
  • 6.1.11. Efficient processing
  • 6.2. Autonomously acting cars
  • 6.2.1. V2X-communication
  • 6.2.2. Massively equip car with processing power and AI-algorithms
  • 6.2.3. Artificial intelligence and environment sensing
  • 6.2.3.1. Cameras and how AI is applied to related data
  • 6.2.3.2. RADAR and how AI is applied to related data
  • 6.2.3.3. LiDAR and how AI is applied to related data
  • 6.2.3.4. Additional sensors and how AI is applied to related data
  • 6.2.3.5. GPS and how AI is applied to related data
  • 6.2.3.6. Microphones and how AI is applied to related data
  • 6.2.3.7. Autonomously acting car's brain: the domain controller
  • 6.3. Summary
  • 7. Outline of quantum mechanics
  • 7.1. Quantum systems in general
  • 7.1.1. Quantum theory
  • 7.1.1.1. Quantum states
  • 7.1.1.2. Observables
  • 7.1.1.3. Quantum measurements
  • 7.1.1.4. Quantum dynamics
  • 7.1.2. Quantum operators
  • 7.1.3. Quantum physical effects
  • 7.1.3.1. Quantum interference
  • 7.1.3.2. Quantum linear superposition
  • 7.1.3.3. Quantum entanglement
  • 7.2. Unitary evolution U
  • 7.3. State vector reduction R
  • 7.4. Summary
  • 8. Quantum physics and the biological brain
  • 8.1. Difficulties with U in the macroscopic world
  • 8.2. Hameroff-Penrose model of orchestrated objective reduction
  • 8.2.1. Idea
  • 8.2.2. Microtubules
  • 8.3. Further models
  • 8.4. Summary
  • 9. Matter and consciousness
  • 9.1. Qualia
  • 9.2. Materialism
  • 9.2.1. Eliminative materialism
  • 9.2.2. Noneliminative materialism
  • 9.3. Functionalism
  • 9.3.1. Problem of absent or inverted qualia
  • 9.3.2. Chinese Room argument
  • 9.3.3. Knowledge argument
  • 9.4. Identity theory
  • 9.5. Summary
  • 10. Reverse engineering the mind
  • 10.1. Theory of mind
  • 10.2. Quantum linear superposition in artificial brains
  • 10.3. Self-organization
  • 10.3.1. Structure and system
  • 10.3.1.1. Conservative structure
  • 10.3.1.2. Dissipative structure
  • 10.3.2. Self-organization in computational intelligence
  • 10.3.2.1. Self-organized learning
  • 10.3.2.2. Learning with respect to self-organization
  • 10.3.2.2.1. Competitive learning
  • 10.3.2.2.2. Competitive learning in artificial neural networks
  • 10.3.2.3. Adaptive Resonance Theory
  • 10.3.3. Transition to the human brain
  • 10.3.3.1. Laterally interconnected synergetically self-organizing maps
  • 10.3.3.2. Pruning the neocortex
  • 10.3.3.2.1. Incremental pruning
  • 10.3.3.2.2. Selective pruning
  • 10.3.3.2.3. Pruning and quantum artificial neural networks
  • 10.3.4. Arguments for self-organization in artificial neural systems
  • 10.4. Mechanisms apart from self-organization
  • 10.4.1. Leader
  • 10.4.2. Blueprint
  • 10.4.3. Recipe
  • 10.4.4. Template
  • 10.5. Quantum physics and the artificial brain
  • 10.5.1. Quantum artificial neural network
  • 10.5.1.1. Structure
  • 10.5.1.2. Quantum bits
  • 10.5.1.3. Superposition
  • 10.5.1.3.1. Superposition of dendrites
  • 10.5.1.3.2. Superposition of neurons
  • 10.5.1.3.3. Superposition of the quantum artificial neural network
  • 10.5.1.4. Entanglement
  • 10.5.1.5. Interference
  • 10.5.1.6. Processing
  • 10.5.1.6.1. Entanglement
  • 10.5.1.6.2. Quantum parallelism
  • 10.5.1.6.3. From basic operators to the quantum transfer function
  • 10.5.1.6.4. Reduction of and information about the quantum perceptron equations
  • 10.5.1.6.5. Normalization
  • 10.5.1.7. Measurement
  • 10.5.1.7.1. Quantum artificial neural network configuration search function
  • 10.5.1.7.2. Example processing
  • 10.5.1.8. Envisaged implementations of a quantum artificial neural network
  • 10.5.1.8.1. Adiabatic quantum annealing
  • 10.5.1.8.2. Nuclear magnetic resonance
  • 10.5.1.8.3. Others
  • 10.6. Artificial neocortex
  • 10.6.1. Knowledge and data
  • 10.6.1.1. Knowledge representation
  • 10.6.1.2. Declarative knowledge representation
  • 10.6.1.2.1. Semantic networks
  • 10.6.1.2.2. Object-attribute-value-triplet
  • 10.6.1.2.3. Frames
  • 10.6.2. Context recognition and hierarchical learning
  • 10.6.2.1. Definition of context-sensitive information
  • 10.6.2.2. Information clustering
  • 10.6.2.3. Context analysis
  • 10.6.2.4. Hierarchical learning
  • 10.6.2.5. Interpreting the context
  • 10.6.2.6. Hidden Markov models and conceptual hierarchies in the neocortex
  • 10.6.3. Implementation
  • 10.6.3.1. Acquisition of basic knowledge
  • 10.6.3.2. Encoding the acquired knowledge into pattern recognizers
  • 10.6.3.3. Access to knowledge and how search engines are similar to the brain
  • 10.6.3.4. Language processing and understanding
  • 10.6.3.5. Quantum pattern recognizers
  • 10.6.3.6. Real world input and new experiences
  • 10.6.3.7. Automatic information interconnection
  • 10.6.4. Superior goal
  • 10.7. Distributed mind
  • 10.7.1. Non-invasive transducers
  • 10.7.2. Semi-invasive, invasive transducers and the neural grid
  • 10.7.3. Signal processing
  • 10.7.3.1. Pre-processing
  • 10.7.3.2. Feature extraction
  • 10.7.3.3. Detection and classification
  • 10.7.4. BCI requirements for the distributed mind
  • 10.8. Summary
  • 11. Conclusion.