CNN-LSTM validation data underperforming compared to training data

James Lu on 4 Feb 2022
Edited: Imola Fodor on 8 Apr 2022
Hi all,
I am working on a CNN-LSTM for classifying audio spectrograms. I am having an issue where, during training, my training data curve performs very well (accuracy increases fast and converges to ~100%, loss decreases quickly and converges to ~0). However, my validation curve struggles (accuracy remains around 50% and loss slowly increases). I have run this several times, randomly choosing the training and validation data sets. I also included a dropout layer after LSTM layer. Hence, I am convinced the odd behavior isn't from data anomolies or overfitting. A screenshot is shown below.
When running classify() using the trained network and validation data, does MATLAB run the validation data through my convolutional layers? If not, I suspect it is attempting to classify data that hasn't been convolved despite being trained on convolved spectrograms. This would explain the stark contrast between the training and validation curves.
If classify() does run validation data through my convolutional layers, the network would indeed be working with the same kind of data it was trained on and still giving poor results, which may indicate I am overfitting somehow. However, I have no way of verifying these suspicions.
Thank you for your help. My CNN-LSTM code is given below for reference.
numHiddenUnits1 = 200;
numClasses = 2;
inputSize = [683 lengthMax 1];
layers = [
    % input matrix of spectrogram values
    sequenceInputLayer(inputSize,"Name","input")
    sequenceFoldingLayer("Name","fold")
    % convolutional layers
    convolution2dLayer([5 5],10,"Name","conv1","Stride",[2 1])
    maxPooling2dLayer([5 5],"Name","maxpool1","Padding","same","Stride",[2 1])
    convolution2dLayer([5 5],10,"Name","conv2","Stride",[2 1])
    maxPooling2dLayer([5 5],"Name","maxpool2","Padding","same","Stride",[2 1])
    convolution2dLayer([3 3],1,"Name","conv3","Padding",[1 1 1 1])
    maxPooling2dLayer([2 2],"Name","maxpool3","Padding","same","Stride",[2 1])
    % unfold and feed into LSTM
    sequenceUnfoldingLayer("Name","unfold")
    flattenLayer("Name","flatten")
    lstmLayer(numHiddenUnits1,"OutputMode","last","Name","lstm")
    dropoutLayer(0.5,"Name","dropout") % dropout after the LSTM layer (probability 0.5 here)
    fullyConnectedLayer(numClasses,"Name","fc")
    softmaxLayer("Name","softmax")
    classificationLayer("Name","classification")];
lgraph = layerGraph(layers);
lgraph = connectLayers(lgraph,'fold/miniBatchSize','unfold/miniBatchSize');
% Training
maxEpochs = 200;
learningRate = 0.001;
miniBatchSize = 15; % is this needed?
options = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'gpu', ...
    'GradientThreshold', 1, ...
    'MaxEpochs', maxEpochs, ...
    'MiniBatchSize', miniBatchSize, ... % yes: pass it here, otherwise the default (128) is used
    'SequenceLength', 'longest', ...
    'Verbose', 0, ...
    'ValidationData', {xVal, yVal}, ...
    'ValidationFrequency', 30, ...
    'InitialLearnRate', learningRate, ...
    'Plots', 'training-progress', ...
    'Shuffle', 'every-epoch');
net = trainNetwork(xTrain, yTrain, lgraph, options);
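One direct check of the suspicion above, sketched here using the variables from the code (classify() applies the full trained layer graph, convolutions included, to whatever data it is given), is to compare accuracy on the training and validation sets after training:

```matlab
% Sanity check after training: run both sets through the trained network
% and compare accuracies. classify() always applies the entire layer
% graph, so the validation data is convolved exactly like the training data.
yPredTrain = classify(net, xTrain);
yPredVal   = classify(net, xVal);
trainAcc = mean(yPredTrain == yTrain);
valAcc   = mean(yPredVal   == yVal);
fprintf('Training accuracy: %.2f%%, validation accuracy: %.2f%%\n', ...
    100*trainAcc, 100*valAcc);
% A large gap between the two numbers is the classic signature of overfitting.
```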

Accepted Answer

yanqi liu on 7 Feb 2022
Yes sir, maybe add some dropoutLayer and batchNormalizationLayer layers to make the model more general.
If possible, maybe upload your data so it can be debugged.
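For example, a sketch of the first convolutional block from the question with a batchNormalizationLayer (and an optional reluLayer, added here for illustration) inserted after the convolution; the same pattern would repeat for conv2 and conv3:

```matlab
% Batch normalization after each convolution typically stabilizes training
% and improves generalization to validation data. Layer names are illustrative.
convBlock1 = [
    convolution2dLayer([5 5],10,"Name","conv1","Stride",[2 1])
    batchNormalizationLayer("Name","bn1")
    reluLayer("Name","relu1")   % optional nonlinearity between conv and pooling
    maxPooling2dLayer([5 5],"Name","maxpool1","Padding","same","Stride",[2 1])];
```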
  1 Comment
James Lu on 8 Feb 2022
Thanks! Adding a batchNormalizationLayer after every convolutional layer now gives consistent training behavior.


More Answers (2)

Joss Knight on 7 Feb 2022
I don't know why your model seems to be overfitting, but I can confirm that your validation data is being run through the exact same network as your training data.

Imola Fodor on 8 Apr 2022
Edited: Imola Fodor on 8 Apr 2022
I had a similar problem, and it was because generating a particular spectrogram inside a parfor loop and inside an ordinary for loop gave different "images". I had generated the spectrograms in a parfor loop for the training/validation set (since there were a lot of them) and in a for loop for the test set (obviously fewer). For the test set I did not get good results, whereas for the validation set I did. The MathWorks team advised me to read the audio files in parallel and then compute the spectrograms, using PARFEVAL.
I would also recommend checking whether you prepared your audio inputs in the same way for training and validation (normalization, etc.).
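A minimal sketch of that approach (file names and spectrogram settings are hypothetical): read the audio in parallel with parfeval, then compute every spectrogram with the same call in the client, so training, validation, and test data all go through an identical code path:

```matlab
% Read audio files in parallel with parfeval, then compute all spectrograms
% with one and the same function call in the client, so every data split is
% preprocessed identically.
files = ["clip1.wav" "clip2.wav"];   % hypothetical file names
for k = numel(files):-1:1
    futures(k) = parfeval(@audioread, 2, files(k));  % audioread returns [y, fs]
end
specs = cell(1, numel(files));
for k = 1:numel(files)
    [idx, y, fs] = fetchNext(futures);
    specs{idx} = spectrogram(y(:,1), 256, 128, 256, fs);  % same settings for every split
end
```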
