Spreading activation
Spreading activation is a method for searching associative networks, biological and artificial neural networks, or semantic networks. The search process is initiated by labeling a set of source nodes (e.g. concepts in a semantic network) with weights or "activation" and then iteratively propagating or "spreading" that activation out to other nodes linked to the source nodes. Most often these "weights" are real values that decay as activation propagates through the network. When the weights are discrete, this process is often referred to as marker passing. Activation may originate from alternate paths, identified by distinct markers, and terminate when two alternate paths reach the same node. However, brain studies show that several different brain areas play an important role in semantic processing.
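The marker-passing variant can be illustrated with a minimal sketch in Python. The association graph, the node names, and the breadth-first search order below are illustrative assumptions, not part of any standard formulation: a distinct marker spreads outward from each origin node, and the search terminates when two markers meet at the same node.

```python
from collections import deque

# Toy undirected association graph; the node names are purely illustrative.
graph = {
    "tiger":   ["stripes", "cat"],
    "lion":    ["mane", "cat"],
    "cat":     ["tiger", "lion", "pet"],
    "stripes": ["tiger"],
    "mane":    ["lion"],
    "pet":     ["cat"],
}

def marker_passing(origins):
    """Spread a distinct marker outward from each origin node, breadth first,
    and stop as soon as some node has been reached from two different origins."""
    markers = {origin: {origin} for origin in origins}       # node -> markers seen so far
    frontier = deque((origin, origin) for origin in origins)
    while frontier:
        node, marker = frontier.popleft()
        for neighbour in graph[node]:
            seen = markers.setdefault(neighbour, set())
            if marker in seen:
                continue
            seen.add(marker)
            if len(seen) > 1:       # two alternate paths meet here: terminate
                return neighbour
            frontier.append((neighbour, marker))
    return None

print(marker_passing(["tiger", "lion"]))   # prints "cat"
```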
Spreading activation in semantic networks was invented in cognitive psychology as a model of the fan-out effect.
Spreading activation can also be applied in information retrieval, by means of a network of nodes representing documents and terms contained in those documents.
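As a concrete illustration, the following sketch ranks documents by spreading activation from query terms through a term-document network. The three-document corpus, the link construction, the number of hops, and the decay value are all made-up assumptions for the example, not a prescribed retrieval model.

```python
# Term-document link structure built from a made-up three-document corpus.
docs = {
    "d1": ["neural", "network", "activation"],
    "d2": ["semantic", "network", "memory"],
    "d3": ["memory", "priming"],
}
links = {}
for doc, terms in docs.items():
    for term in terms:
        links.setdefault(term, []).append(doc)   # term -> document
        links.setdefault(doc, []).append(term)   # document -> term

def retrieve(query_terms, hops=2, decay=0.8):
    """Activate the query terms, spread activation for a fixed number of hops,
    then rank documents by the activation they have accumulated."""
    activation = {term: 1.0 for term in query_terms}
    for _ in range(hops):
        updated = dict(activation)
        for node, value in activation.items():
            for neighbour in links.get(node, []):
                updated[neighbour] = updated.get(neighbour, 0.0) + value * decay
        activation = updated
    ranked = [(doc, activation[doc]) for doc in docs if doc in activation]
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)

print(retrieve(["network", "memory"]))   # d2, which contains both query terms, ranks first
```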
Cognitive psychology
As it relates to cognitive psychology, spreading activation is the theory of how the brain iterates through a network of associated ideas to retrieve specific information. The spreading activation theory presents the array of concepts within our memory as cognitive units, each consisting of a node and its associated elements or characteristics, all connected together by edges. A spreading activation network can be represented schematically as a sort of web diagram, in which shorter lines between two nodes mean the ideas are more closely related and will typically be associated more quickly with the original concept. In memory psychology, the spreading activation model holds that people organize their knowledge of the world based on their personal experience; in other words, those personal experiences form the network of ideas that constitutes a person's knowledge of the world.
When a word (the target) is preceded by an associated word (the prime) in word recognition tasks, participants respond more quickly. For instance, subjects respond faster to the word "doctor" when it is preceded by "nurse" than when it is preceded by an unrelated word like "carrot". This semantic priming effect for words that are close in meaning within the cognitive network has been observed in a wide range of experimental tasks, ranging from sentence verification to lexical decision and naming.
As another example, if the original concept is "red" and the concept "vehicles" is primed, subjects are much more likely to say "fire engine" instead of something unrelated to vehicles, such as "cherries". If "fruits" were primed instead, they would likely name "cherries" and continue on from there. The activation of pathways in the network depends on how closely linked two concepts are by meaning, as well as on how the subject is primed.
Algorithm
A directed graph is populated by nodes[1...N], each having an associated activation value A[i], a real number in the range [0.0 ... 1.0]. A link[i, j] connects source node[i] with target node[j]. Each link has an associated weight W[i, j], usually a real number in the range [0.0 ... 1.0].
Parameters:
- Firing threshold F, a real number in the range [0.0 ... 1.0]
- Decay factor D, a real number in the range [0.0 ... 1.0]
Steps:
- Initialize the graph by setting all activation values A[i] to zero. Set one or more origin nodes to an initial activation value greater than the firing threshold F. A typical initial value is 1.0.
- For each unfired node[i] in the graph having an activation value A[i] greater than the node firing threshold F:
  - For each link[i, j] connecting the source node[i] with target node[j], adjust A[j] = A[j] + (A[i] * W[i, j] * D), where D is the decay factor.
  - If a target node receives an adjustment that would raise its activation value above 1.0, set its new activation value to 1.0. Likewise, maintain 0.0 as a lower bound should an adjustment take the target node's activation value below 0.0.
- Once a node has fired it may not fire again, although variations of the basic algorithm permit repeated firings and loops through the graph.
- Nodes receiving a new activation value that exceeds the firing threshold F are marked for firing on the next spreading activation cycle.
- If activation originates from more than one node, a variation of the algorithm permits marker passing to distinguish the paths by which activation spreads over the graph.
- The procedure terminates either when there are no more nodes to fire or, in the case of marker passing from multiple origins, when a node is reached from more than one path. Variations of the algorithm that permit repeated node firings and activation loops in the graph terminate when a steady activation state (with respect to some delta) is reached, or when a maximum number of iterations is exceeded.
Examples
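The steps above can be transcribed directly into Python. In the sketch below, the adjacency-dictionary encoding of the graph, the example node names and weights, and the particular values chosen for the firing threshold F and decay factor D are illustrative assumptions rather than part of the algorithm.

```python
def spreading_activation(links, origins, firing_threshold=0.3, decay=0.85):
    """links: dict mapping a source node to a list of (target node, weight W[i, j]) pairs.
    origins: nodes given an initial activation of 1.0.
    Returns the final activation value A[i] of every node in the graph."""
    # Initialize all activation values to zero, then activate the origin nodes.
    activation = {node: 0.0 for node in links}
    for targets in links.values():
        for target, _ in targets:
            activation.setdefault(target, 0.0)
    for origin in origins:
        activation[origin] = 1.0

    fired = set()
    # Nodes whose activation exceeds the firing threshold fire on the next cycle.
    to_fire = {node for node, value in activation.items() if value > firing_threshold}
    while to_fire:
        fired |= to_fire                    # each node fires at most once
        next_cycle = set()
        for source in sorted(to_fire):      # sorted only to make runs reproducible
            for target, weight in links.get(source, []):
                # A[j] = A[j] + (A[i] * W[i, j] * D), clamped to [0.0 ... 1.0]
                adjusted = activation[target] + activation[source] * weight * decay
                activation[target] = min(1.0, max(0.0, adjusted))
                if activation[target] > firing_threshold and target not in fired:
                    next_cycle.add(target)
        to_fire = next_cycle
    return activation

# A small hand-made network: the origin "doctor" activates its associates.
links = {
    "doctor":   [("nurse", 0.9), ("hospital", 0.6)],
    "nurse":    [("hospital", 0.7)],
    "hospital": [("ambulance", 0.5)],
}
print(spreading_activation(links, origins=["doctor"]))
```

With these illustrative parameters, "nurse" and "hospital" receive strong activation and fire, while "ambulance" receives activation below the firing threshold and never fires, reflecting how activation decays with distance from the origin.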
References
- Nilsson, Nils J. Artificial Intelligence: A New Synthesis. Morgan Kaufmann Publishers, San Francisco, 1998, pp. 121-122.
- Rodriguez, M. A. "Grammar-Based Random Walkers in Semantic Networks". Knowledge-Based Systems, 21(7), 727-739, 2008. doi:10.1016/j.knosys.2008.03.030.
- Patterson, Karalyn; Nestor, Peter J.; Rogers, Timothy T. "Where do you know what you know? The representation of semantic knowledge in the human brain". Nature Reviews Neuroscience, 8, 976-987, December 2007.