Problem with weights in neural network training
Hi all!
I am using neural network models in MATLAB, and I am now facing a problem with the weights during NN training.
Basically, I have a multiple-input, multiple-output recurrent neural network, generated as
net = narxnet(inputDelays,feedbackDelays,hiddenLayerSize),
and the corresponding mathematical model is like
Y(k+1) = A*Y(k) + f(U),
where U is the input of the model and Y is the output (which is also fed back to the input), A is an unknown constant matrix, and f() is an unknown nonlinear function of U.
The training process itself goes fine, and I get a very small training error. However, when I use this network for prediction, I run into a problem: the influence of the feedback is too strong.
For example, let's say the network is net, and I have two different inputs U1 = [0 1 0] and U2 = [1 1 1]. To make a one-step prediction for these two inputs, I use
netc = closeloop(net);                          % convert the open-loop net to closed-loop form
u1 = tonndata(U1,false,false);                  % convert matrices to cell-array (nndata) form
u2 = tonndata(U2,false,false);
[a1,b1,c1,d1] = preparets(netc,u1,{},targets);  % shifted inputs, initial input/layer states, shifted targets
[a2,b2,c2,d2] = preparets(netc,u2,{},targets);
outputs1 = netc(a1,b1,c1);                      % simulate with the prepared initial delay states
outputs2 = netc(a2,b2,c2);
The result is that outputs1 is exactly identical to outputs2. I have tried many times with all sorts of different inputs: whenever the feedback Y(k) is the same, the final predicted output Y(k+1) is always the same. I guessed this might be caused by noise in the training data, but my training data set is very clean, with very little noise.
So now I am wondering whether there is any method to increase the influence of the input U and decrease the influence of Y, while keeping a certain training accuracy.
Thank you very much for the help!
Accepted Answer
Greg Heath
on 24 Oct 2013
After you close the loop:
Test the CL (closed-loop) net on the original data. It will probably not perform as well as the OL (open-loop) net.
If dissatisfied, train the CL design on the original data, using the final OL weights as the initial weights for the CL training.
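A minimal sketch of that workflow, assuming the original open-loop training data X (inputs) and T (targets) are already in cell-array (nndata) form; note that closeloop preserves the trained weights, so closed-loop training automatically starts from the final OL weights:

```matlab
% Close the loop; netc inherits the final open-loop weights
netc = closeloop(net);

% Evaluate the closed-loop net on the original data
[Xs,Xi,Ai,Ts] = preparets(netc,X,{},T);
Yc   = netc(Xs,Xi,Ai);
perf = perform(netc,Ts,Yc)      % compare against the open-loop performance

% If dissatisfied, retrain in closed-loop form from those weights
netc = train(netc,Xs,Ts,Xi,Ai);
```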
Check my recent posts in the NEWSREADER and ANSWERS regarding closed-loop designs.
Thank you for formally accepting my answer
Greg
4 Comments
Greg Heath
on 30 Jul 2014
1. According to the given equation, the correct delay settings should be
ID = 0, FD = 1
2. I cannot comment on your choice of H=15 because I don't know
[ I N ] = size(input)
[ O N ] = size(target)
Hub = -1 + ceil( (0.7*N*O - O) / (I + O + 1) )
3. Training success is determined by the nontraining performance, MSEval and MSEtst, not the training performance, MSEtrn.
4. You should not use 'dividerand' because it destroys the inherent auto- and cross-correlations of the original data. Your net may accommodate the order encountered in scrambled training data; however, don't expect it to work on nontraining data.
5. I suggested the comparison with timedelaynet and narnet to answer your question of input vs feedback importance. That is not what you did.
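A hedged sketch of points 4 and 5 above, assuming cell-array inputs X and targets T and the delay/size choices from items 1 and 2 (H = 15 is the poster's choice, not a recommendation): use 'divideblock' to keep the train/val/test split in temporal order, and compare a timedelaynet (input U only) against a narnet (feedback Y only) to gauge their relative importance.

```matlab
% Item 4: contiguous data division preserves serial correlations
net = narxnet(0,1,15);                  % ID = 0, FD = 1, as in item 1
net.divideFcn = 'divideblock';

% Item 5a: input-only model (no feedback) -- how much does U explain?
ntd = timedelaynet(0:1,15);
ntd.divideFcn = 'divideblock';
[Xs,Xi,Ai,Ts] = preparets(ntd,X,T);
ntd = train(ntd,Xs,Ts,Xi,Ai);
perfU = perform(ntd,Ts,ntd(Xs,Xi,Ai))   % performance using U alone

% Item 5b: feedback-only model (no external input) -- how much does Y explain?
nar = narnet(1,15);
nar.divideFcn = 'divideblock';
[Xs,Xi,Ai,Ts] = preparets(nar,{},{},T);
nar = train(nar,Xs,Ts,Xi,Ai);
perfY = perform(nar,Ts,nar(Xs,Xi,Ai))   % performance using Y alone
```

Comparing perfU with perfY (and with the full narxnet performance) indicates how much of the fit each signal path accounts for.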
Sorry for the delayed response; it was not intentional.
Greg