Training a Variational Autoencoder (VAE) on sine waves

Hi,
Basically, I am testing a variational autoencoder on sine waves. I have a training set and a testing set, each containing 100 similar sine waves of 1100 samples. However, when I run the code, I get the following error:
Error using nnet.internal.cnn.dlnetwork/forward (line 194)
Layer 'fc_encoder': Invalid input data. The number of weights (17600) for each output feature must match the number of elements (204800) in each observation
of the first argument.
Error in dlnetwork/forward (line 165)
[varargout{1:nargout}] = forward(net.PrivateNetwork, x, layerIndices, layerOutputIndices);
Error in sampling (line 2)
compressed = forward(encoderNet, x);
Error in modelGradients (line 2)
[z, zMean, zLogvar] = sampling(encoderNet, x);
Error in deep.internal.dlfeval (line 18)
[varargout{1:nout}] = fun(x{:});
Error in dlfeval (line 40)
[varargout{1:nout}] = deep.internal.dlfeval(fun,varargin{:});
Error in ConvAE (line 57)
[infGrad, genGrad] = dlfeval(...
When I run the same code with XBatch = XTrain instead, I get the same error, but with 440000 elements instead of 204800.
When I run it with XBatch = XTrain(idx,:) instead, I get the error: Index in position 1 exceeds array bounds (must not exceed 100).
Can anyone help? I have used the exact same helper functions as in the link.
Thanks!
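For reference, the data looks roughly like this (an illustrative sketch only, not my exact generation script; the amplitude/phase jitter values here are made up):

```matlab
% Illustrative sketch only -- not the exact generation script.
% 100 similar sine waves of 1100 samples each, one wave per row.
t = linspace(0, 2*pi, 1100);
sineTrain = zeros(100, 1100);
sineTest  = zeros(100, 1100);
for k = 1:100
    sineTrain(k,:) = sin(t + 0.01*randn);   % small random phase jitter
    sineTest(k,:)  = sin(t + 0.01*randn);
end
```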
latentDim = 50;

encoderLG = layerGraph([
    imageInputLayer([1 1100],'Name','input_encoder','Normalization','none')
    convolution2dLayer([1 100], 32, 'Padding','same', 'Stride', 2, 'Name', 'conv1')
    reluLayer('Name','relu1')
    convolution2dLayer([1 100], 64, 'Padding','same', 'Stride', 2, 'Name', 'conv2')
    reluLayer('Name','relu2')
    fullyConnectedLayer(2 * latentDim, 'Name', 'fc_encoder')
    ]);

decoderLG = layerGraph([
    imageInputLayer([1 1 latentDim],'Name','i','Normalization','none')
    transposedConv2dLayer([1 100], 32, 'Cropping', 'same', 'Stride', 2, 'Name', 'transpose1')
    reluLayer('Name','relu1')
    transposedConv2dLayer([1 100], 64, 'Cropping', 'same', 'Stride', 2, 'Name', 'transpose2')
    reluLayer('Name','relu2')
    transposedConv2dLayer([1 100], 32, 'Cropping', 'same', 'Stride', 2, 'Name', 'transpose3')
    reluLayer('Name','relu3')
    transposedConv2dLayer([1 100], 1, 'Cropping', 'same', 'Name', 'transpose4')
    ]);

encoderNet = dlnetwork(encoderLG);
decoderNet = dlnetwork(decoderLG);

executionEnvironment = "auto";

XTrain = sineTrain;
XTest = sineTest;

numTrainImages = 1100;
numEpochs = 50;
miniBatchSize = 512;
lr = 1e-3;
numIterations = floor(numTrainImages/miniBatchSize);
iteration = 0;

avgGradientsEncoder = [];
avgGradientsSquaredEncoder = [];
avgGradientsDecoder = [];
avgGradientsSquaredDecoder = [];

for epoch = 1:numEpochs
    tic;
    for i = 1:numIterations
        iteration = iteration + 1;
        idx = (i-1)*miniBatchSize+1:i*miniBatchSize;
        XBatch = XTrain(:,idx);
        XBatch = dlarray(single(XBatch), 'SSCB');
        if (executionEnvironment == "auto" && canUseGPU) || executionEnvironment == "gpu"
            XBatch = gpuArray(XBatch);
        end
        [infGrad, genGrad] = dlfeval(...
            @modelGradients, encoderNet, decoderNet, XBatch);
        [decoderNet.Learnables, avgGradientsDecoder, avgGradientsSquaredDecoder] = ...
            adamupdate(decoderNet.Learnables, ...
            genGrad, avgGradientsDecoder, avgGradientsSquaredDecoder, iteration, lr);
        [encoderNet.Learnables, avgGradientsEncoder, avgGradientsSquaredEncoder] = ...
            adamupdate(encoderNet.Learnables, ...
            infGrad, avgGradientsEncoder, avgGradientsSquaredEncoder, iteration, lr);
    end
    elapsedTime = toc;
    [z, zMean, zLogvar] = sampling(encoderNet, XTest);
    xPred = sigmoid(forward(decoderNet, z));
    elbo = ELBOloss(XTest, xPred, zMean, zLogvar);
    disp("Epoch : "+epoch+" Test ELBO loss = "+gather(extractdata(elbo))+...
        ". Time taken for epoch = "+elapsedTime+"s")
end

Accepted Answer

Joss Knight on 15 Nov 2019 (edited 15 Nov 2019)
It looks like your input data size is wrong. Your format string says the 4th dimension is the batch dimension, but it appears the batch dimension is actually the second. You could try labelling the data as 'SBCS' instead, but I can't be sure because I don't know what's in sineTrain. You may need to permute your data, or all your filters, so that the convolution dimension matches the correct input dimension.
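For example, if sineTrain is a 100-by-1100 matrix with one sine wave per row (which your "must not exceed 100" index error suggests, though I can't be sure), one way to get the 1-by-1100 input size your imageInputLayer expects, with the batch along the 4th dimension, would be:

```matlab
% Assumption: sineTrain is 100x1100, one sine wave per row.
% Reshape so each observation is a 1x1100x1 "image" and the
% batch runs along the 4th dimension, matching the 'SSCB' label.
XTrain = reshape(sineTrain.', [1 1100 1 size(sineTrain,1)]);  % 1x1100x1x100

% A mini-batch is then indexed along the 4th (batch) dimension:
idx = 1:32;                                    % illustrative batch indices
XBatch = dlarray(single(XTrain(:,:,:,idx)), 'SSCB');
```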
  8 Comments
Uerm on 21 Nov 2019
Hi, I have done the things you suggested and it seems to run now. Thank you for that! Unfortunately, I do not have access to technical support. However, there is something wrong with the training: the ELBO loss is identical across all epochs, which is strange. When I run the code with the data from the MathWorks page (the MNIST database), it works and the ELBO loss decreases over the epochs, but it does not work with my own data. I attach two of the scripts: Sinus.m generates the sine waves I use as data, and ConvAE.m is the main code, where I have added two more layers and adjusted some numbers to match my own data. I have not attached the other helper functions, as they are identical to the ones on the webpage.
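One thing I wonder about (just a guess on my part): the decoder output goes through sigmoid, so xPred lies in [0,1], while raw sine waves span [-1,1]; the MNIST pixels in the original example are already in [0,1]. A possible rescale before training would be:

```matlab
% Rescale the sine waves from [-1,1] into [0,1] so they are
% comparable with the sigmoid output of the decoder.
sineTrain01 = (sineTrain + 1) / 2;
sineTest01  = (sineTest  + 1) / 2;
```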
Can you spot the problem?
Joss Knight on 21 Nov 2019
I suggest you ask a new question, to see if anyone wants to help you with this new problem.


More Answers (0)


Release: R2019b
