Why is my CNN overfitting?
I'm trying to train a convolutional neural network (CNN) to classify raw ECG signals: a success means defibrillation worked on the patient, and a failure means it did not. I've included the .mat file with all the data. The matrix vf holds the signals and the string array R holds the labels; vf has 2078 rows (one per patient) and 760 columns (samples per signal).
My training plot looks like a classic case of overfitting (image included below): the validation accuracy is consistently lower than the training accuracy, and the validation loss increases over time. I've tried introducing batch normalization, L2 regularization, and data augmentation into my network, to no avail.
Are there errors in my network layers? No matter what I do, the validation accuracy won't go above 55%.
Thanks in advance.
load cnnData
miniBatchSize = 128;
maxEpochs = 150;
learnRate = 0.001;
out = mat2gray(vf); % Min-max normalizes the signals (note: mat2gray scales by the global min/max of the whole matrix)
data = reshape(out,[1 size(out,2) 1 size(out,1)]); % 1-by-760-by-1-by-N stack for imageInputLayer
labels = categorical(R);
% Determining indices of training, validation, and test data
[trainInd,valInd,testInd] = dividerand(size(out,1),0.7,0.15,0.15);
% Gathers training data and corresponding labels
XTrain = data(:,:,1,trainInd);
YTrain = labels(trainInd);
% Gathers testing data and corresponding labels
XTest = data(:,:,1,testInd);
YTest = labels(testInd);
% Gathers validation data and corresponding labels
XValidation = data(:,:,1,valInd);
YValidation = labels(valInd);
% Layers of Neural Network
layers = [
    imageInputLayer([1 size(out,2) 1])
    convolution2dLayer([1 15],16,'Padding',1)
    reluLayer
    dropoutLayer(0.4)
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer([1 15],32,'Padding',1)
    reluLayer
    dropoutLayer(0.4)
    maxPooling2dLayer(2,'Stride',2)
    convolution2dLayer([1 15],64,'Padding',1)
    reluLayer
    dropoutLayer(0.4)
    maxPooling2dLayer(2,'Stride',2)
    fullyConnectedLayer(2)
    softmaxLayer
    classificationLayer];
% Training Options of Neural Network
options = trainingOptions('adam', ...
    'MiniBatchSize',miniBatchSize, ...
    'MaxEpochs',maxEpochs, ...
    'InitialLearnRate',learnRate, ...
    'ValidationData',{XValidation,YValidation}, ...
    'Plots','training-progress', ...
    'Shuffle','every-epoch', ...
    'Verbose',true);
% Trains network and classifies test data
net = trainNetwork(XTrain,YTrain,layers,options);
Pred = classify(net,XTest);
% Creates confusion matrix
confMat = confusionmat(YTest,Pred);
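For reference, this is roughly how I added batch normalization and L2 regularization when I tried them (a sketch only; the padding style, pooling shape, and regularization strength here are illustrative, not exactly what I ran):

```matlab
% Sketch: batch normalization after each convolution, before the ReLU,
% with width-only pooling for the 1-by-760 signals (illustrative values).
layersBN = [
    imageInputLayer([1 760 1])
    convolution2dLayer([1 15],16,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer([1 2],'Stride',[1 2])
    convolution2dLayer([1 15],32,'Padding','same')
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer([1 2],'Stride',[1 2])
    dropoutLayer(0.4)
    fullyConnectedLayer(2)
    softmaxLayer
    classificationLayer];

% L2 weight decay is set through trainingOptions.
optionsL2 = trainingOptions('adam', ...
    'MiniBatchSize',128, ...
    'MaxEpochs',150, ...
    'InitialLearnRate',0.001, ...
    'L2Regularization',1e-3, ...   % illustrative strength
    'Shuffle','every-epoch', ...
    'Verbose',false);
```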

3 Comments
Matt J
on 23 Jun 2021
"I've tried to introduce batch normalization, L2 regularization, and data augmentation into my neural network to no avail."
I think we need more details on the sweep you did over these parameters. What range of regularization strengths did you test?
It is interesting that your validation loss climbs even though the accuracy stays about the same.
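For example, a sweep over log-spaced L2 strengths could look something like this (a sketch only, assuming the same XTrain/YTrain, XValidation/YValidation, and layers variables from your script):

```matlab
% Hypothetical sweep over L2 regularization strengths.
lambdas = [1e-4 1e-3 1e-2 1e-1];
valAcc = zeros(size(lambdas));
for k = 1:numel(lambdas)
    opts = trainingOptions('adam', ...
        'MiniBatchSize',128, ...
        'MaxEpochs',50, ...               % shorter runs for the sweep
        'L2Regularization',lambdas(k), ...
        'ValidationData',{XValidation,YValidation}, ...
        'Shuffle','every-epoch', ...
        'Verbose',false);
    net = trainNetwork(XTrain,YTrain,layers,opts);
    valAcc(k) = mean(classify(net,XValidation) == YValidation);
end
[bestAcc,bestIdx] = max(valAcc); % pick the strength with the best validation accuracy
```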
Jared Stinson
on 23 Jun 2021 (edited)
ytzhak goussha
on 30 Jun 2021
You can try reducing the number of filters or their sizes.
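For example, a lower-capacity network might look like this (a sketch only; the filter counts and sizes are illustrative, and fewer parameters may overfit less on roughly 2000 signals):

```matlab
% Sketch: fewer, smaller filters and one less conv block than the original.
smallLayers = [
    imageInputLayer([1 760 1])
    convolution2dLayer([1 7],8,'Padding','same')
    reluLayer
    maxPooling2dLayer([1 2],'Stride',[1 2])
    convolution2dLayer([1 7],16,'Padding','same')
    reluLayer
    maxPooling2dLayer([1 2],'Stride',[1 2])
    dropoutLayer(0.5)
    fullyConnectedLayer(2)
    softmaxLayer
    classificationLayer];
```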
Answers (0)