The NEWSOM function creates a self-organizing map (SOM) network. The syntax for this function is as follows:
net = newsom(PR,[d1,d2,...],tfcn,dfcn,olr,osteps,tlr,tnd)
Like competitive layers, self-organizing maps are used to solve classification problems.
NET = NEWSOM(PR,[D1,D2,...],TFCN,DFCN,OLR,OSTEPS,TLR,TND) takes,
- PR - Rx2 matrix of min and max values for R input elements.
- Di - Size of ith layer dimension, default = [5 8].
- TFCN - Topology function, default = 'hextop'.
- DFCN - Distance function, default = 'linkdist'.
- OLR - Ordering phase learning rate, default = 0.9.
- OSTEPS - Ordering phase steps, default = 1000.
- TLR - Tuning phase learning rate, default = 0.02.
- TND - Tuning phase neighborhood distance, default = 1.
and returns a new self-organizing map.
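As a sketch of typical usage (the training data P and the grid dimensions below are illustrative assumptions, not part of the function's documentation):

```matlab
% Illustrative example: 400 two-element input vectors in [0,1] x [0,2],
% mapped onto a 3-by-5 SOM grid using the default hextop/linkdist settings.
P = [rand(1,400); 2*rand(1,400)];   % input data, one column per vector
net = newsom(minmax(P), [3 5]);     % PR taken from the data; other args default
net = train(net, P);                % train through ordering and tuning phases
```

After training, each column of P is mapped to the output unit whose weight vector lies closest to it.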
How the winning neuron is selected
When an input vector is presented to a SOM network, the units in the output layer compete with each other for the right to be declared the winner. The winning output unit is the one whose incoming connection weights are closest to the input vector in terms of Euclidean distance. Thus, each time an input is presented, every output unit competes to match it, and the unit closest to the input pattern is declared the winner. The connection weights of the winning unit are then adjusted, i.e., moved in the direction of the input pattern by a factor determined by the learning rate. This is the basic nature of competitive neural networks.

A SOM also has the capability to generalize: the network can recognize or characterize inputs it has never encountered before. A new input is assimilated with the map unit it is mapped to.
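The selection and update steps described above can be sketched directly (a minimal illustration of the competitive rule; the variable names and the vectorized form are assumptions, not the toolbox's internal code):

```matlab
% x  : R-by-1 input vector
% W  : S-by-R weight matrix, one row of incoming weights per output unit
% lr : learning rate
dists = sqrt(sum((W - repmat(x', size(W,1), 1)).^2, 2));  % Euclidean distance
[mindist, winner] = min(dists);         % unit closest to the input wins
W(winner,:) = W(winner,:) + lr * (x' - W(winner,:));  % move winner toward x
```

In a full SOM, units within the winner's neighborhood are updated as well; the fragment above shows only the basic winner-take-all step described in the text.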