Difference between revisions of "Neural Network"

From Nordan Symposia
[[File:lighterstill.jpg]][[File:Neural_network_poster2.jpg‎|right|frame]]
  
In [[neuroscience]], a '''neural network''' describes a [[population]] of [[physically]] interconnected [[neurons]] or a group of disparate neurons whose inputs or signalling targets define a recognizable [[circuit]]. [[Communication]] between neurons often involves an electrochemical [[process]]. The [[interface]] through which they interact with surrounding neurons usually consists of several [https://en.wikipedia.org/wiki/Dendrites dendrites] (input connections), which are connected via [https://en.wikipedia.org/wiki/Synapse synapses] to other neurons, and one [https://en.wikipedia.org/wiki/Axon axon] (output connection). If the sum of the input signals surpasses a certain threshold, the neuron generates an [https://en.wikipedia.org/wiki/Action_potential action potential] (AP) at the [https://en.wikipedia.org/wiki/Axon_hillock axon hillock] and transmits this electrical signal along the axon.
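The summation-and-threshold behaviour described above can be sketched as a minimal threshold-unit model (a deliberate abstraction, not a biophysical simulation; the weights and threshold below are arbitrary illustrative values):

```python
# Minimal threshold-neuron sketch: dendritic inputs are weighted,
# summed at the cell body, and an action potential "fires" only if
# the sum exceeds a threshold (all values here are illustrative).

def neuron_output(inputs, weights, threshold=1.0):
    """Return 1 (action potential) if the weighted input sum
    crosses the threshold, else 0 (no spike)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Three dendritic inputs; the last synapse is inhibitory (negative weight).
spike = neuron_output([1.0, 1.0, 1.0], [0.7, 0.6, -0.2])
print(spike)  # weighted sum 1.1 > 1.0, so the neuron fires -> 1
```

Dropping the middle input to 0.0 brings the sum below threshold and the unit stays silent, mirroring the all-or-none character of the action potential.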
  
 
In contrast, a ''neuronal circuit'' is a [[function]]al [[entity]] of interconnected neurons that [[influence]] each other (similar to a control loop in [[cybernetics]]).
 
==Early study==
Early treatments of neural networks can be found in [https://en.wikipedia.org/wiki/Herbert_Spencer Herbert Spencer]'s ''Principles of Psychology'', 3rd edition (1872), [https://en.wikipedia.org/wiki/Theodor_Meynert Theodor Meynert]'s ''Psychiatry'' (1884), [https://en.wikipedia.org/wiki/William_James William James]' ''Principles of Psychology'' (1890), and [[Sigmund Freud]]'s ''Project for a Scientific Psychology'' (composed 1895). The first rule of neuronal learning, Hebbian learning, was described by [https://en.wikipedia.org/wiki/Donald_Olding_Hebb Hebb] in 1949: Hebbian pairing of pre-synaptic and post-synaptic [[activity]] can substantially alter the [[dynamic]] characteristics of the synaptic [[connection]] and thereby facilitate or inhibit [[signal]] transmission. The neuroscientists [https://en.wikipedia.org/wiki/Warren_Sturgis_McCulloch Warren Sturgis McCulloch] and [https://en.wikipedia.org/wiki/Walter_Pitts Walter Pitts] published early [[work]]s on processing in neural networks, including "What the Frog's Eye Tells the Frog's Brain." They showed [[theoretically]] that networks of [[artificial]] neurons could implement [[logical]], arithmetic, and [[symbolic]] [[functions]]. Simplified [[models]] of [[biological]] neurons were set up, now usually called perceptrons or artificial neurons. These simple models accounted for neural summation, i.e., [[potentials]] at the post-synaptic membrane summate in the [[cell]] [[body]]. Later models also provided for excitatory and inhibitory synaptic transmission.
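Hebb's rule, in which a synapse strengthens when pre-synaptic and post-synaptic activity are paired, can be sketched in a few lines (the learning rate and initial weight below are arbitrary illustrative values):

```python
# Hebbian weight update: delta_w = eta * pre * post.
# Correlated pre/post activity strengthens the synapse; activity in
# only one of the two neurons leaves the weight unchanged.

def hebbian_update(weight, pre, post, eta=0.1):
    """Return the synaptic weight after one Hebbian pairing."""
    return weight + eta * pre * post

w = 0.5
for _ in range(3):                        # three paired activations
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 2))  # 0.8 -- repeated pairing has facilitated the synapse

w = hebbian_update(w, pre=1.0, post=0.0)  # unpaired activity: no change
print(round(w, 2))  # still 0.8
```

This toy update captures only the facilitatory half of the story; the inhibitory transmission mentioned above would require signed activities or a decay term, which later models added.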
 
==Representations in neural networks==
A [https://en.wikipedia.org/wiki/Receptive_field receptive field] is a small region within the entire visual field. Any given neuron only responds to a subset of stimuli within its receptive field. This property is called tuning. In the earlier visual areas, neurons have simpler tuning. For example, a neuron in V1 may fire to any vertical stimulus in its receptive field. In the higher visual areas, neurons have [[complex]] tuning. For example, in the fusiform gyrus, a neuron may only fire when a certain face appears in its receptive field. It is also known that many parts of the brain generate patterns of electrical activity that correspond closely to the layout of the retinal image (this is known as [https://en.wikipedia.org/wiki/Retinotopy retinotopy]). It seems further that imagery that [[origin]]ates from the [[senses]] and internally generated imagery may have a shared [https://en.wikipedia.org/wiki/Ontology ontology] at higher levels of [https://en.wikipedia.org/wiki/Cerebral_cortex cortical] processing (see e.g. [https://en.wikipedia.org/wiki/Language_of_thought Language of thought]). For many parts of the [[brain]], some characterization has been made of which tasks correlate with their [[activity]].
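The idea of a V1-like unit tuned to vertical stimuli in its receptive field can be illustrated with a toy filter (a hand-built 3×3 vertical-bar template, not a fitted model of any real neuron):

```python
# Toy "receptive field": a 3x3 patch of the visual field is compared
# against a vertical-bar template. The unit responds strongly to its
# preferred (vertical) stimulus and weakly to others (illustrative only).

VERTICAL_TEMPLATE = [
    [-1, 2, -1],
    [-1, 2, -1],
    [-1, 2, -1],
]

def response(patch):
    """Dot product of the stimulus patch with the tuning template."""
    return sum(p * t
               for prow, trow in zip(patch, VERTICAL_TEMPLATE)
               for p, t in zip(prow, trow))

vertical_bar   = [[0, 1, 0]] * 3                    # preferred stimulus
horizontal_bar = [[0, 0, 0], [1, 1, 1], [0, 0, 0]]  # non-preferred

print(response(vertical_bar))    # 6 -> strong response (tuned)
print(response(horizontal_bar))  # 0 -> no response
```

A retinotopic map, in this picture, would be a grid of many such units, each with its template centred on a different patch of the visual field.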
  
 
In the [[brain]], [[memories]] are very likely represented by [[patterns]] of [[activation]] amongst networks of neurons. However, how these representations are formed, retrieved and reach conscious [[awareness]] is not completely [[understood]]. [[Cognition|Cognitive]] [[processes]] that characterize [[human]] [[intelligence]] are mainly ascribed to the emergent properties of the complex [[dynamic]] [[systems]] that constitute '''neural networks'''. Therefore, the [[study]] and [[modeling]] of these networks have attracted broad interest under different [[paradigms]], and many [[different]] [[theories]] have been formulated to explain various aspects of their [[behavior]]. One of these — and the subject of several theories — is considered a special property of a neural network: the ability to learn [[complex]] [[patterns]].
 
==Philosophical issues==
Today most [[research]]ers believe in representations of some kind ([https://en.wikipedia.org/wiki/Representationalism representationalism]) or, more generally, in particular mental [[states]] ([https://en.wikipedia.org/wiki/Cognitivism cognitivism]). For instance, [[perception]] can be viewed as [[information]] processing: information is transferred from the world into the [[brain]]/[[mind]], where it is further processed and related to other information. A few others envisage a direct path back into the external world in the form of [[action]] (radical [https://en.wikipedia.org/wiki/Behaviourism behaviourism]).
  
Another issue, called the binding problem, relates to the question of how the [[activity]] of more or less distinct [[populations]] of neurons dealing with different aspects of perception is combined to form a unified perceptual [[experience]] possessing [https://en.wikipedia.org/wiki/Qualia qualia].
  
 
Neuronal networks are not full reconstructions of any cognitive [[system]] found in the [[human]] [[brain]], and are therefore unlikely to form a complete representation of human perception. Some [[research]]ers [[argue]] that human perception must be studied as a whole; hence, the system cannot be taken apart and studied without destroying its [[original]] [[function]]ality. Furthermore, there is [[evidence]] that [[cognition]] is gained through a well-orchestrated barrage of sub-threshold synaptic activity throughout the network.
 
==Study methods==
Different neuroimaging [[techniques]] have been [[developed]] to investigate the [[activity]] of neural networks. The use of 'brain scanners' or functional neuroimaging to investigate the [[structure]] or function of the brain is common, either simply as a way of better assessing brain injury with high-resolution pictures, or by examining the [[relative]] activations of different brain areas. Such [[technologies]] include [https://en.wikipedia.org/wiki/FMRI fMRI] (functional magnetic resonance imaging), [https://en.wikipedia.org/wiki/Positron_Emission_Tomography PET] (positron emission tomography) and [https://en.wikipedia.org/wiki/Computed_axial_tomography CAT] (computed axial tomography). Functional neuroimaging takes scans of the brain, usually while a person is performing a particular task, in an attempt to understand how the activation of particular brain areas relates to the task. It relies especially on fMRI, which measures hemodynamic activity closely linked to neural activity, as well as on PET and electroencephalography (EEG).
  
[https://en.wikipedia.org/wiki/Connectionism Connectionist] [[models]] serve as a test platform for different [[hypothesis|hypotheses]] about representation, [[information]] processing, and [[signal]] transmission. Lesioning studies in such models, e.g. artificial neural networks, where some of the nodes are deliberately destroyed to see how the network [[performs]], can also yield important [[insights]] into the workings of [[cell]] assemblies. Similarly, simulations of dysfunctional neurotransmitters in neurological conditions (e.g., dopamine in the basal ganglia of [https://en.wikipedia.org/wiki/Parkinson%27s_disease Parkinson]'s patients) can yield insights into the underlying [[mechanisms]] for the patterns of cognitive deficits observed in the particular patient [[group]]. Predictions from these models can be tested in patients and/or via pharmacological manipulations, and these studies can in turn inform the models, making the process recursive.
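A lesioning study of the kind described can be sketched on a tiny feed-forward network: "destroying" one hidden unit and comparing the output before and after. The network, its weights, and its inputs below are hypothetical, chosen only to illustrate the method:

```python
# Lesioning sketch: a 2-input, 3-hidden-unit, 1-output network.
# Lesioning a hidden node silences it entirely, and we measure how
# much the network's output changes (all weights are arbitrary).

import math

W_IN  = [[0.5, -0.3], [0.8, 0.1], [-0.2, 0.9]]  # hidden-unit input weights
W_OUT = [0.6, 0.4, 0.7]                          # hidden-to-output weights

def forward(x, lesioned=()):
    """Run the network; hidden units listed in `lesioned` are destroyed."""
    hidden = [0.0 if i in lesioned
              else math.tanh(sum(w * xi for w, xi in zip(W_IN[i], x)))
              for i in range(3)]
    return sum(w * h for w, h in zip(W_OUT, hidden))

x = [1.0, 1.0]
intact = forward(x)
for i in range(3):                     # lesion one unit at a time
    deficit = intact - forward(x, lesioned=(i,))
    print(f"lesion unit {i}: output changes by {deficit:.3f}")
```

Comparing the per-unit deficits is the model analogue of inferring a brain region's role from the pattern of impairment after damage to it.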
* [https://www.benbest.com/science/anatmind/anatmd3.html Learning, Memory and Plasticity]  
* [https://www.his.sunderland.ac.uk/ps/worksh2/denham.pdf Comparison of Neural Networks in the Brain and Artificial Neural Networks]
* [https://ocw.mit.edu/OcwWeb/Brain-and-Cognitive-Sciences/9-95-AResearch-Topics-in-NeuroscienceJanuary--IAP-2003/LectureNotes/ Lecture notes at MIT OpenCourseWare]
* [https://www.willamette.edu/~gorr/classes/cs449/brain.html Computation in the Brain]
* [https://ifcsun1.ifisiol.unam.mx/Brain/neuron2.htm Signaling Properties of the Neuron]
* [https://diwww.epfl.ch/~gerstner/SPNM/node6.html The Problem of Neuronal Coding]
* [https://www.ymer.org/amir/software/biological-neural-networks-toolbox/ Biological Neural Network Toolbox] - A free Matlab toolbox for simulating networks of several different types of neurons  
* [https://wormweb.org/neuralnet.html WormWeb.org: Interactive Visualization of the C. elegans Neural Network] - C. elegans, a nematode with 302 neurons, is the only organism whose entire neural network has been mapped. Use this site to browse the network and to search for paths between any two neurons.
  
 
[[Category: Biology]]

Latest revision as of 01:27, 13 December 2020
