# learnp

Perceptron weight and bias learning function

## Syntax

```
[dW,LS] = learnp(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnp('code')
```

## Description

`learnp` is the perceptron weight/bias learning function.

`[dW,LS] = learnp(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)` takes several inputs,

| Input | Description |
| --- | --- |
| `W` | `S`-by-`R` weight matrix (or `b`, an `S`-by-`1` bias vector) |
| `P` | `R`-by-`Q` input vectors (or `ones(1,Q)`) |
| `Z` | `S`-by-`Q` weighted input vectors |
| `N` | `S`-by-`Q` net input vectors |
| `A` | `S`-by-`Q` output vectors |
| `T` | `S`-by-`Q` layer target vectors |
| `E` | `S`-by-`Q` layer error vectors |
| `gW` | `S`-by-`R` weight gradient with respect to performance |
| `gA` | `S`-by-`Q` output gradient with respect to performance |
| `D` | `S`-by-`S` neuron distances |
| `LP` | Learning parameters, none; `LP = []` |
| `LS` | Learning state, initially `[]` |

and returns

| Output | Description |
| --- | --- |
| `dW` | `S`-by-`R` weight (or bias) change matrix |
| `LS` | New learning state |

`info = learnp('code')` returns useful information for each `code` character vector:

| Code | Returns |
| --- | --- |
| `'pnames'` | Names of learning parameters |
| `'pdefaults'` | Default learning parameters |
| `'needg'` | 1 if this function uses `gW` or `gA` |
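
As a brief sketch (assuming `learnp` is available on the MATLAB path), these codes can be queried directly at the command line:

```
% Sketch: query learnp metadata
names    = learnp('pnames')      % names of learning parameters (the perceptron rule has none)
defaults = learnp('pdefaults')   % default learning parameters
usesGrad = learnp('needg')       % 1 if learnp uses gW or gA
```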

## Examples

Here you define a random input `P` and error `E` for a layer with a two-element input and three neurons.

```
p = rand(2,1);
e = rand(3,1);
```

Because `learnp` needs only these values to calculate a weight change (see “Algorithms” below), use them to do so.

```
dW = learnp([],p,[],[],[],[],e,[],[],[],[],[])
```
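
As a follow-up sketch, the returned change could be applied to a perceptron's weights. The weight matrix `W`, its size, and the additive update below are illustrative assumptions rather than part of the original example.

```
% Sketch: apply the returned change to a weight matrix (W and the update step are assumptions)
W  = zeros(3,2);                                % 3 neurons, 2-element input
dW = learnp(W,p,[],[],[],[],e,[],[],[],[],[]);  % weight change from input p and error e
W  = W + dW                                     % perceptron-style additive weight update
```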

## Algorithms

`learnp` calculates the weight change `dW` for a given neuron from the neuron’s input `P` and error `E` according to the perceptron learning rule:

```
dw = 0,   if e = 0
   = p',  if e = 1
   = -p', if e = -1
```

This can be summarized as

```
dw = e*p'
```
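
As a quick sanity-check sketch (reusing the example's `p` and `e`), the summarized rule can be compared against the change `learnp` returns; the comparison assumes `learnp` applies the outer-product form directly.

```
% Sketch: compare the summarized rule with learnp's output
dW_rule  = e*p';                                       % outer product from the rule above
dW_learn = learnp([],p,[],[],[],[],e,[],[],[],[],[]);  % change computed by learnp
max(abs(dW_rule(:) - dW_learn(:)))                     % expected to be zero (or numerically tiny)
```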

## References

Rosenblatt, F., *Principles of Neurodynamics*, Washington, D.C., Spartan Press, 1961.