How can I fix the train, validation and test indices in the train function, and how can I return the test accuracy exactly as it appears in the plotconfusion confusion matrix?

Dear friends, I'm using the code below to classify the Berlin audio dataset. My dataset is not shuffled: every group of related audio files is separated from the other groups.
I have two questions: why isn't the fixed indexing working, and how can I access the test performance that appears in plotconfusion's test confusion matrix? Please help me, I need it urgently for my work.
close all;
clear all;
clc;
%loading the berlin dataset
load berlin.mat;
%loading the target 0,1 matrix
load binary_matrix_berlin;
%rotating the both matrices
x = rot90(cell2mat(fin));
t = binary_matrix_berlin';
[ I N ] = size(x); % [ 2 1000]
[ O N ] = size(t); % [ 4 1000]
%There are seven groups in my dataset and I'm trying to classify them. Set 1 is
%files 1 to 69, set 2 is files 70 to 115, and so on, so net.divideFcn = 'dividerand'
%does not suit my data; I have to use 'divideind', but it is not working. Please help.
% Choose a Training Function
% For a list of all training functions type: help nntrain
% 'trainlm' is usually fastest.
% 'trainbr' takes longer but may be better for challenging problems.
% 'trainscg' uses less memory. Suitable in low memory situations.
trainFcn = 'trainscg'; % Scaled conjugate gradient backpropagation.
% Create a Pattern Recognition Network
hiddenLayerSize = [80 40];
net = patternnet(hiddenLayerSize, trainFcn);
% Choose Input and Output Pre/Post-Processing Functions
% For a list of all processing functions type: help nnprocess
net.input.processFcns = {'removeconstantrows','mapminmax'};
% Setup Division of Data for Training, Validation, Testing
% For a list of all data division functions type: help nndivision
net.divideFcn = 'divideind'; % Divide data by index
% The parameter fields must be named trainInd/valInd/testInd; fields such as
% 'trainnum' are silently ignored, which is why the fixed indexing did not work
net.divideParam.trainInd = [1:42,70:97,116:158,187:235,268:315,347:384,409:485];
net.divideParam.valInd   = [43:49,98:102,159:166,236:244,316:323,385:391,486:498];
net.divideParam.testInd  = [50:69,103:115,167:186,245:267,324:346,392:408,499:535];
% Choose a Performance Function
% For a list of all performance functions type: help nnperformance
net.performFcn = 'mse'; % Mean squared error ('crossentropy' is the patternnet default)
% Choose Plot Functions
% For a list of all plot functions type: help nnplot
net.plotFcns = {'plotperform','plottrainstate','ploterrhist', ...
'plotconfusion', 'plotroc'};
% Train the Network
[net,tr] = train(net,x,t);
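As a quick sanity check (a sketch, not part of the original script): the training record `tr` stores the indices that were actually used, so you can confirm the fixed split took effect. If `divideind` was configured with the wrong field names, these counts will reflect a default random split instead.

```matlab
% tr records the data division actually applied during training
fprintf('train: %d, val: %d, test: %d samples\n', ...
    numel(tr.trainInd), numel(tr.valInd), numel(tr.testInd));
```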
%.........................................................
% Test the Network
y = net(x);
e = gsubtract(t,y);
performance = perform(net,t,y)
tind = vec2ind(t);
%a vector representing all the samples of the dataset predicted after
%training the NN.
yind = vec2ind(y);
%the percentage error of the all confusion matrix
percentErrors = sum(tind ~= yind)/numel(tind);
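For the numbers that plotconfusion draws, the toolbox's `confusion` function returns the same quantities directly (a minimal sketch):

```matlab
% c is the fraction of misclassified samples; cm is the confusion matrix,
% with cm(i,j) = number of class-i samples classified as class j
[c, cm] = confusion(t, y);
overallAccuracy = 100 * (1 - c); % matches the overall cell of plotconfusion(t, y)
```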
%...........................................................
% Recalculate Training, Validation and Test Performance
%None of the functions below returns the test performance exactly as it appears
%in plotconfusion's "Test Confusion Matrix". I need a function that returns the
%test performance only (or its error percentage), i.e. the result of applying
%only the test samples to the trained network.
trainTargets = t .* tr.trainMask{1};
valTargets = t .* tr.valMask{1};
testTargets = t .* tr.testMask{1};
trainPerformance = perform(net,trainTargets,y)
valPerformance = perform(net,valTargets,y)
testPerformance = perform(net,testTargets,y)
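To get the test accuracy exactly as it appears in plotconfusion's "Test Confusion Matrix", evaluate only the test samples, using the test indices stored in `tr` (a sketch built on the variables already defined above):

```matlab
% Restrict targets and outputs to the test samples only
yTest = y(:, tr.testInd);
tTest = t(:, tr.testInd);
% Percentage of correctly classified test samples, as plotconfusion reports
testAccuracy = 100 * sum(vec2ind(yTest) == vec2ind(tTest)) / numel(tr.testInd);
% Or draw the test confusion matrix on its own:
% plotconfusion(tTest, yTest);
```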
% View the Network
view(net)
  3 Comments
Husam Ali on 8 Jun 2020
%Sorry for the delay; I'm a PhD student and I'm overloaded.
%First I divided the training, validation and test sets by index:
%70% of the dataset for training, 15% for validation and 15% for testing
net.divideParam.trainInd = [ 1:49, 70:102, 116:166, 187:242, 268:322, 347:389, 409:497] ;
net.divideParam.valInd = [ 50:59, 103:108, 167:177, 243:255, 323:334, 389:397, 498:516];
net.divideParam.testInd = [60:69, 109:115, 178:186, 256:267, 335:346, 398:408, 517:535];
%then I prepared my training data (map and len are user-defined helpers, not built-ins)
fi_test = fin(:, map(rt, 1:len(rt)));
x = rot90(cell2mat(fi_test));
%then I trained the network: x is the input data, t the targets
[net,tr] = train(net, x, t, 'useGPU', 'yes');
%then computed y, the network's predictions for every sample after training
y = net(x);
yind = vec2ind(y);
%the overall percentage error of the confusion matrix
percentErrors = sum(tind ~= yind)/numel(tind);
%.......................................................
%calculating the training, validation and testing accuracy will depend on yind
test_indx = [];
train_indx = [];
val_indx = [];
%yind is a vector; each cell holds the network's prediction for that specific
%sample, so you need to know which samples map to which cells of yind. For
%example, cells 60 to 69 are the test samples for class one: inspect those
%cells to see what the network predicted them as, and count how many 1's
%appear in that portion of yind. Cells 109 to 115 are the test samples for
%class two, and so on for all the classes; each class has a training,
%validation and test portion. The training accuracy computed this way matches
%what you see when you push the Confusion button in the nntraintool, but the
%validation accuracy sometimes differs because it changes frequently during
%training. Note that yind is filled with numbers from 1 to 7 if you have seven
%classes to classify, 1 to 6 if you have six classes, and so on.
train_indx = [1 70 116 187 268 347 409;49 102 165 243 323 390 497];
val_indx = [50 103 166 244 324 391 498 ;59 108 175 255 334 399 516];
test_indx = [60 109 176 256 335 400 517; 69 115 186 267 346 408 535];
no_emo_sect = 7;
j=1;
per=zeros(7,1);
per1=zeros(7,1);
per2=zeros(7,1);
for i = 1 : no_emo_sect
    for j = test_indx(1,i) : test_indx(2,i)
        if yind(j) == i
            per(i) = per(i) + 1;
        end
    end
    for j = train_indx(1,i) : train_indx(2,i)
        if yind(j) == i
            per1(i) = per1(i) + 1;
        end
    end
    for j = val_indx(1,i) : val_indx(2,i)
        if yind(j) == i
            per2(i) = per2(i) + 1;
        end
    end
end
%70%,15%,15%
% total number of samples in each subset, derived from the start/end rows above
total_train = sum(train_indx(2,:) - train_indx(1,:) + 1);
total_val   = sum(val_indx(2,:)   - val_indx(1,:)   + 1);
total_test  = sum(test_indx(2,:)  - test_indx(1,:)  + 1);
train_acc = (sum(per1)*100) / total_train;
val_acc   = (sum(per2)*100) / total_val;
test_acc  = (sum(per)*100)  / total_test;
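When the training record `tr` is available, the same three accuracies can be computed without any hard-coded ranges (a sketch assuming `tind` and `yind` from the code above):

```matlab
% Compare predictions to targets within each subset's own indices
train_acc = 100 * mean(yind(tr.trainInd) == tind(tr.trainInd));
val_acc   = 100 * mean(yind(tr.valInd)   == tind(tr.valInd));
test_acc  = 100 * mean(yind(tr.testInd)  == tind(tr.testInd));
```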


Answers (0)
