Optimal number of hidden nodes

Hamza Ali on 8 Jul 2017
Commented: Greg Heath on 10 Oct 2017
Hello everyone, I would like to find the optimal number of hidden nodes using structured trial and error. I ran the following simulation:
Hmin = 1;
Hmax = 30;
dH = 1;
NTrials = 5;
I took the minimum error over the 5 trials at each value of H to plot the following graph:
My question is: how do I determine the optimal number of hidden nodes from this graph? Thank you.
  4 Comments
Joshua on 8 Jul 2017
Hamza, what specifically are you trying to get from the graph (minimum, inflection point, zero slope, etc.)? Unless you provide detailed information about what you're actually looking for and what algorithm was used to make the graph, we can't help you.
Hamza Ali on 9 Jul 2017
Thank you Joshua for your answer. I would like to find the number of hidden nodes that gives the least testing error.


Accepted Answer

Greg Heath on 9 Jul 2017
This is the approach I use. First I determine how many training equations are used
Ntrneq = Ntrn*O
Ntrn = number of training input/target pairs
O = dimension of the output target
Then, for a given MLP with an I-H-O network topology, the number of unknown weights is
Nw = (I+1)*H + (H+1)*O.
When
H <= Hub = (Ntrneq-O)/(I+O+1),
the number of unknown weights does not exceed the number of training equations. Therefore, numerically estimated minimum error solutions are stable.
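For example, a minimal MATLAB sketch of that bookkeeping (the sizes below are made-up placeholders, not values from this thread):
% Check whether an I-H-O net keeps Nw within the number of training equations
I = 5; O = 1; Ntrn = 1000;            % placeholder sizes
Ntrneq = Ntrn*O;                      % number of training equations
Hub = (Ntrneq - O)/(I + O + 1);       % largest H for which Nw <= Ntrneq
H = 10;                               % candidate number of hidden nodes
Nw = (I+1)*H + (H+1)*O;               % unknown weights of the I-H-O MLP
isStable = Nw <= Ntrneq               % true when H <= Hub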
Otherwise Nw > Ntrneq and the net is OVERFIT. Then, unless precautions are taken, the dreaded phenomenon of
OVERTRAINING an OVERFIT NETWORK
can occur and solutions can be useless.
Two ways to avoid this are
1. Use a validation design subset that will cause training to be stopped
when the validation subset error increases continually for 6 (the MATLAB default) consecutive epochs.
2. Bayesian Regularization via either
a. The TRAINBR training algorithm
b. Using TRAINLM (the default) with the error function MSEREG
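As a rough illustration of both options (x and t are placeholder input/target matrices and H a placeholder hidden-layer size; 'msereg' dates from older toolbox releases):
% Option 1: early stopping on a validation subset (max_fail = 6 is the MATLAB default)
net = fitnet(H);
net.divideFcn = 'dividerand';                % random train/val/test split
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
net.trainParam.max_fail    = 6;              % stop after 6 consecutive validation-error increases
[net, tr] = train(net, x, t);
% Option 2a: Bayesian regularization
netbr = fitnet(H, 'trainbr');
netbr = train(netbr, x, t);
% Option 2b: TRAINLM with the regularized error function MSEREG
netreg = fitnet(H, 'trainlm');
netreg.performFcn = 'msereg';                % newer releases regularize 'mse' via net.performParam.regularization
netreg = train(netreg, x, t);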
My approach is to use a double loop solution to
1. Minimize H
2. Subject to the training target subset constraint
mse(errortrn) <= 0.01 * mean(var(targettrn',1))
3. This results in a training subset Rsquare that is greater than 0.99 !!!
4. The outer loop is over h = Hmin:dH:Hmax
5. The inner loop is over Ntrials of random initial weight assignments.
Even though I always use a validation subset, I try 10 initial random weight trials for each trial value of H with H <= Hub. If unsuccessful, I then consider H > Hub.
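A condensed sketch of that double loop (x and t are placeholders; the training-target variance is approximated here by the full-target variance; Greg's posted examples under Hmin:dH:Hmax are more complete):
Hmin = 1; dH = 1; Hmax = 30; Ntrials = 10;
MSEgoal = 0.01*mean(var(t',1));               % training-subset goal (Rsquare >= 0.99)
bestH = NaN;
for h = Hmin:dH:Hmax                          % outer loop: candidate hidden nodes
    for trial = 1:Ntrials                     % inner loop: random initial weights
        net = fitnet(h);                      % fresh net => new random initialization
        [net, tr] = train(net, x, t);
        e = t - net(x);
        etrn = e(:, tr.trainInd);             % errors on the training subset only
        if mean(etrn(:).^2) <= MSEgoal
            bestH = h;                        % smallest H meeting the goal
            break
        end
    end
    if ~isnan(bestH), break, end
end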
I have posted hundreds of examples in the NEWSGROUP and ANSWERS. A good search term is
Hmin:dH:Hmax
Hope this helps.
Thank you for formally accepting my answer
Greg.

More Answers (2)

Walter Roberson on 8 Jul 2017
When you use neural networks, the lowest theoretical error always occurs at the point where the state matrices are large enough to include exact copies of every sample that you ever trained on, plus the known output for each of those samples. For example if you train on 50000 samples each of 17 features, then a neural network that is 50000 * 17 large (exact copy of input data) + 50000 large (exact copy of output data) will have an error rate of 0 for that data.
Such a system might be pretty useless on other data.
Likewise, if you were doing clustering, then you could achieve 100% accuracy by using one cluster per unique input sample.
So... before you can talk about "optimal", you need to define exactly what you mean by that.
There are a lot of things for which the Pareto Rule applies: "for many events, roughly 80% of the effects come from 20% of the causes". This applies recursively -- of the 20% that remains after the first pass, 80% will be explained by 20% of the second layer of causes. And you can keep going with that. But it is common that the cost of each layer you go through is roughly the same, so addressing the first 80% of the second 20% of the original costs about as much as dealing with the original 80% did, and doing the next step costs about as much as everything already spent, and so on. Basically, for each step closer to 100% accuracy you get, the costs double.
Where is the "optimal"? Well that depends on whether you have resource limitations or if you prize 100% accuracy more than anything.

Greg Heath on 13 Jul 2017
Edited: Greg Heath on 13 Jul 2017
I = 5, O = 1, N = 46824
Ntrn ~ 0.7*N = 32777
Ntrneq = Ntrn*O = 32777
Hub = (Ntrneq-O)/(I+O+1) = 32776/7 ~ 4682
For H << Hub, try H <= Hub/10, i.e., H <= 468
The reason for the quirky numbers is that your database is HUGE!
I would just start with the default H = 10 with Ntrials = 10 and continue doubling H until success. Then I would consider reducing by filling in the gaps between values already tried.
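A sketch of that doubling strategy, reusing the MSEgoal from the accepted answer (x and t are placeholders):
H = 10; Ntrials = 10;                         % default H, 10 random restarts per H
MSEgoal = 0.01*mean(var(t',1));
success = false;
while ~success && H <= 468                    % 468 ~ Hub/10 for this data set
    for trial = 1:Ntrials
        net = fitnet(H);
        [net, tr] = train(net, x, t);
        e = t - net(x);
        etrn = e(:, tr.trainInd);
        if mean(etrn(:).^2) <= MSEgoal
            success = true;
            break
        end
    end
    if ~success, H = 2*H; end                 % 10, 20, 40, 80, ... until success
end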
Hope this helps.
Greg
  3 Comments
Hamza Ali on 9 Oct 2017
Hello Mr Greg,
According to your indications, I computed Hub for the following values (I = 5, O = 1, N = 46824) and found approximately Hub = 71. I then varied H from (Hmin = 10) to (Hmax = 80), continuing to double H with (Ntrials = 10) for each value of H tested. I got these results and I think the best value of H is 50, because as I increase the number of neurons the test error decreases until H reaches 50, and beyond that value the error increases again.
Please find enclosed the table containing the results and a screenshot of the plot (test error versus the number of hidden nodes).
Best regards.
Greg Heath on 10 Oct 2017
The best way to judge is to state, a priori, how much error you will accept.
The simplest model is output = constant. To minimize the mean-square-error, that constant should be the target mean
output = mean(target,2)
and the resulting MSE is the mean biased target variance.
vart1 = mse(target - mean(target,2))
= mean(var(target',1))
I am usually satisfied with the goal
MSEgoal = 0.01 * vart1
which yields the R-squared statistic (see Wikipedia)
Rsquare = 1 - MSE/vart1 >= 0.99
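In code form (t are the targets and y the network outputs, both placeholders):
vart1   = mean(var(t',1));         % MSE of the constant model output = mean(target,2)
MSEgoal = 0.01*vart1;              % accept the net when MSE <= MSEgoal
e       = t - y;
MSE     = mean(e(:).^2);
Rsquare = 1 - MSE/vart1            % >= 0.99 whenever the goal is met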
Hope this helps.
Greg

