How do I plot NAR predicted values in MATLAB?

Hi there. After a fair amount of reading, I realise that for my project I need to use a NAR network, because I am predicting from historical values. I have monthly data for the previous months, and I need to predict the next 12 months.
T = tonndata(Target_Set1,false,false);
% Create a Nonlinear Autoregressive Network
feedbackDelays = 1:2;
hiddenLayerSize = 2;
net = narnet(feedbackDelays,hiddenLayerSize);
% Prepare the Data for Training and Simulation
% The function PREPARETS prepares timeseries data for a particular network,
% shifting time by the minimum amount to fill input states and layer states.
% Using PREPARETS allows you to keep your original time series data unchanged, while
% easily customizing it for networks with differing numbers of delays, with
% open loop or closed loop feedback modes.
[x,xi,ai,t] = preparets(net,{},{},T);
% Setup Division of Data for Training, Validation, Testing
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% Train the Network
[net,tr] = train(net,x,t,xi,ai);
% Test the Network
y = net(x,xi,ai);
e = gsubtract(t,y);
performance = perform(net,t,y)
% View the Network
view(net)
% Plots
% Uncomment these lines to enable various plots.
%figure, plotperform(tr)
%figure, plottrainstate(tr)
%figure, plotresponse(t,y)
%figure, ploterrcorr(e)
%figure, plotinerrcorr(x,e)
% Closed Loop Network
% Use this network to do multi-step prediction.
% The function CLOSELOOP replaces the feedback input with a direct
% connection from the output layer.
netc = closeloop(net);
[xc,xic,aic,tc] = preparets(netc,{},{},T);
yc = netc(xc,xic,aic);
perfc = perform(net,tc,yc)
% Step-Ahead Prediction Network
% For some applications it helps to get the prediction a timestep early.
% The original network returns predicted y(t+1) at the same time it is given y(t+1).
% For some applications such as decision making, it would help to have predicted
% y(t+1) once y(t) is available, but before the actual y(t+1) occurs.
% The network can be made to return its output a timestep early by removing one delay
% so that its minimal tap delay is now 0 instead of 1. The new network returns the
% same outputs as the original network, but outputs are shifted left one timestep.
nets = removedelay(net);
[xs,xis,ais,ts] = preparets(nets,{},{},T);
ys = nets(xs,xis,ais);
stepAheadPerformance = perform(net,ts,ys)
How do I plot the predicted values after training and validation? From the examples I have seen for NARXNET, they normally use the code below.
plot([cell2mat(T),nan(1,N);
nan(1,length(T)),cell2mat(yPred);
nan(1,length(T)),cell2mat(targetSeriesVal)]')
legend('Original Targets','Network Predictions','Expected Outputs');
Thanks in advance.

 Accepted Answer

1. Use the Target autocorrelation function to determine a good set of feedback delays. Search the NEWSGROUP and ANSWERS using
greg nncorr
2. Find a good value for H, the number of hidden nodes, by using a double loop over candidate values and initial weights. Search N & A:
greg narnet Ntrials
greg narxnet Ntrials
3. Plot y and yc. If perfc is much worse than performance, continue training with
[netc,trc,yc,ec,xcf,acf] = train(netc,xc,tc,xic,aic);
4. Search N & A:
greg closeloop
Hope this helps.
Greg
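For reference, a minimal sketch of the double loop Greg describes in step 2. The bounds Hmin/Hmax, the Ntrials value, and the names bestNet/bestH are illustrative choices, not fixed rules, and this version ranks candidates by overall MSE rather than by validation or test performance:

```matlab
% Sketch: double loop over hidden-layer size H and random initializations.
FD = 1:2;                             % candidate feedback delays (example)
Hmin = 1; Hmax = 10; Ntrials = 10;
bestPerf = Inf;
for H = Hmin:Hmax
    for trial = 1:Ntrials
        net = narnet(FD, H);          % fresh net => fresh random weights
        [x, xi, ai, t] = preparets(net, {}, {}, T);
        [net, tr] = train(net, x, t, xi, ai);
        y = net(x, xi, ai);
        perf = perform(net, t, y);
        if perf < bestPerf
            bestPerf = perf;
            bestNet  = net;
            bestH    = H;
        end
    end
end
```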

4 Comments

Hi Greg. Thank you for your prompt reply. It was a big help, as I am really new to this. Here is my code for now.
% 1. Importing data
Data_Inputs=xlsread('C:\Users\user\Desktop\FYP\CaseStudy\CaseStudyCol.xlsx'); % Import file
Training_Set1=Data_Inputs(1:end,3);%specific training set
Target_Set=Data_Inputs(1:end,2); %specific target set
Input=Training_Set1'; %Convert to row
Target=Target_Set'; %Convert to row
N = length(Target)
zy = zscore(Target,1);
autocorry = ifft( abs(fft(zy)).^2 )/N
X = con2seq(Input); %Convert to cell
T = con2seq(Target); %Convert to cell
tr = Data_Inputs(1:end,1);
tr1 = tr';
TIME = con2seq(tr1);
figure(2);
plot(tr,Target_Set);
xlabel('Time(years)');
ylabel('Energy Demand(MTOE)');
%AUTOCORRELATION
ZT=zscore(Target,1);
autocorrT = nncorr(ZT,ZT,N-1,'biased')
figure(3)
plot(autocorrT)
%title('ACF')
%To find H
[I ,N]=size(Input);
[O ,N]=size(Target);
Neq=N*O;
Hub=floor((N-1)*O/(I+O+1)) %max H for Neq>=Nw
% 2. Create a Nonlinear Autoregressive Network
feedbackDelays = 1;
hiddenLayerSize = 3;
net = narnet(feedbackDelays,hiddenLayerSize);
% 3. Configure the network
% Choose Feedback Pre/Post-Processing Functions
% Settings for feedback input are automatically applied to feedback output
% For a list of all processing functions type: help nnprocess
net.input.processFcns = {'removeconstantrows','mapminmax'};
% 4. Initialize the weights and biases
% Prepare the Data for Training and Simulation
% The function PREPARETS prepares timeseries data for a particular network,
% shifting time by the minimum amount to fill input states and layer states.
% Using PREPARETS allows you to keep your original time series data unchanged, while
% easily customizing it for networks with differing numbers of delays, with
% open loop or closed loop feedback modes.
[x,xi,ai,t] = preparets(net,{},{},T);
% Setup Division of Data for Training, Validation, Testing
%net.divideFcn='divideblock';
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;
% Choose a Training Function
% For a list of all training functions type: help nntrain
%net.trainFcn = 'trainlm'; % Levenberg-Marquardt
% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
%net.performFcn = 'mse'; % Mean squared error
% Choose Plot Functions
% For a list of all plot functions type: help nnplot
%net.plotFcns = {'plotperform','plottrainstate','plotresponse', ...
% 'ploterrcorr', 'plotinerrcorr'};
% 5. Train the Network
[net,tr] = train(net,x,t,xi,ai);
% 6. Test the Network
y = net(x,xi,ai);
e = gsubtract(t,y);
performance = perform(net,t,y);
% Recalculate Training, Validation and Test Performance
trainTargets = gmultiply(t,tr.trainMask);
valTargets = gmultiply(t,tr.valMask);
testTargets = gmultiply(t,tr.testMask);
trainPerformance = perform(net,trainTargets,y)
valPerformance = perform(net,valTargets,y)
testPerformance = perform(net,testTargets,y)
% View the Network
view(net)
% Plots
% Uncomment these lines to enable various plots.
figure, plotperform(tr)
%figure, plottrainstate(tr)
figure, plotresponse(t,y)
%figure, ploterrcorr(e)
%figure, plotinerrcorr(x,e)
% Closed Loop Network
% Use this network to do multi-step prediction.
% The function CLOSELOOP replaces the feedback input with a direct
% connection from the output layer.
netc = closeloop(net);
[xc,xic,aic,tc] = preparets(netc,{},{},T);
yc = netc(xc,xic,aic);
perfc = perform(net,tc,yc)
% Step-Ahead Prediction Network
% For some applications it helps to get the prediction a timestep early.
% The original network returns predicted y(t+1) at the same time it is given y(t+1).
% For some applications such as decision making, it would help to have predicted
% y(t+1) once y(t) is available, but before the actual y(t+1) occurs.
% The network can be made to return its output a timestep early by removing one delay
% so that its minimal tap delay is now 0 instead of 1. The new network returns the
% same outputs as the original network, but outputs are shifted left one timestep.
nets = removedelay(net);
[xs,xis,ais,ts] = preparets(nets,{},{},T);
ys = nets(xs,xis,ais);
stepAheadPerformance = perform(net,ts,ys)
figure(4);
subplot(3,1,1);
plot(cell2mat(y),'b','DisplayName','Network Predictions (open loop)')
hold on
plot(cell2mat(T),'r','DisplayName','Original Targets')
title({'Prediction'})
legend('show')
hold off
subplot(3,1,2)
plot(cell2mat(yc),'b','DisplayName','Network Predictions (closed loop)')
hold on
plot(cell2mat(T),'r','DisplayName','Original Targets')
title({'Prediction'})
legend('show')
hold off
subplot(3,1,3);
plot(cell2mat(ys),'b','DisplayName','Network Predictions (step ahead)')
hold on
plot(cell2mat(T),'r','DisplayName','Original Targets')
title({'Prediction'})
legend('show')
hold off
Results are as follows.
N =
12
autocorry =
    1.0000    0.5972    0.2746   -0.1545   -0.3676   -0.5456   -0.6083   -0.5456   -0.3676   -0.1545    0.2746    0.5972
autocorrT =
   -0.1303   -0.2242   -0.3714   -0.3585   -0.3555   -0.3041   -0.1901   -0.0091    0.2169    0.4988    0.7275    1.0000    0.7275    0.4988    0.2169   -0.0091   -0.1901   -0.3041   -0.3555   -0.3585   -0.3714   -0.2242   -0.1303
Hub =
3
trainPerformance =
2.4042e-17
valPerformance =
1.7224
testPerformance =
0.1221
perfc =
0.3404
stepAheadPerformance =
0.3354
My Questions
1. When I plotresponse my output y, it gives me values shifted relative to the original data. For example, my first data point is 3.2, but the plot starts at 4.2. Can you briefly explain why this happens?
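For what it's worth, the shift comes from preparets: with feedbackDelays = 1, the first target sample is consumed to fill the initial delay state, so y(1) lines up with T(2), not T(1). A sketch of an aligned plot, assuming the variables from the code above:

```matlab
% preparets drops the first numLayerDelays samples to fill the delay
% states, so the k-th network output corresponds to target sample k + d.
d = net.numLayerDelays;          % 1 here, since feedbackDelays = 1
N = numel(T);
plot(1:N, cell2mat(T), 'r', (1+d):N, cell2mat(y), 'b')
legend('Original Targets', 'Network Predictions')
```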
2. How do I know whether perfc is much worse than performance? My results are the trainPerformance, valPerformance, testPerformance, perfc, and stepAheadPerformance values shown above.
3. For now I have data from 1970 to 2001, but I need to predict through 2014. How do I code that, given that NAR uses only past outputs?
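One common pattern for question 3 is to run the closed-loop network over the known data, keep its final states, and then continue with empty inputs. This is a sketch; numSteps, xfc, afc, and yFuture are illustrative names, and numSteps = 13 simply covers 2002 through 2014:

```matlab
netc = closeloop(net);
[xc, xic, aic, tc] = preparets(netc, {}, {}, T);
% run over the known data to obtain the final input/layer states
[yc, xfc, afc] = netc(xc, xic, aic);
% continue the closed loop beyond the data: no inputs, just the states
numSteps = 13;                       % e.g. 2002..2014
yFuture = netc(cell(0, numSteps), xfc, afc);
plot([cell2mat(tc), nan(1, numSteps); ...
      nan(1, numel(tc)), cell2mat(yFuture)]')
legend('Known Targets', 'Future Predictions')
```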
Thanks in advance. Hope to hear from you soon.
I would prefer that you practice on one of the MATLAB narnet data sets (help nndatasets) so that we can compare answers.
Search
greg narnet
and try to choose a recent one that is relevant.
Greg
Hi Greg! I have gone through some of your answers, and you consistently say that we can find feedback delays by plotting the autocorrelation function. I am currently working on narnet and am unable to find the optimum number of delays. How exactly do I find it? Steps or any links would help. I have gone through the NEWSGROUP and ANSWERS. Thanks!
In general, it is too hard to find the optimal set of hidden nodes and delays. I am just happy to find the minimum number of delays and hidden nodes that will model 95% of the average target variance.
I have many postings dealing with that search NEWSGROUP and ANSWERS using
narnet nncorr
Write back in a NEW POST if you have problems understanding or using it.
Hope this helps
Greg
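As a concrete sketch of the nncorr approach Greg refers to: compute the target autocorrelation and keep the lags that stand out. The 1.96/sqrt(N) bound is an approximate 95% significance level for the autocorrelation of white noise, so treat it as a heuristic; `zt` is assumed to be the standardized row-vector target:

```matlab
zt  = zscore(cell2mat(T), 1);            % standardized target series
N   = numel(zt);
acf = nncorr(zt, zt, N-1, 'biased');     % 2N-1 lags, lag 0 at index N
posLags = acf(N+1:end);                  % lags 1 .. N-1
sigLags = find(abs(posLags) > 1.96/sqrt(N));
% candidate feedback delays: the significant low lags, e.g. sigLags(1:few)
```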
