Invalid training data. The output size of the last layer does not match the response size
So I'm getting this error on one computer, while on another I don't:
Invalid training data. The output size ([32 32 2]) of the last layer does not match the response size ([32 32 1]).
The program takes an image and segments out a part of it. The image and the mask that represents the correct segmentation are both [32 32 1].
The result it spits out is [32 32 2] because it holds the % chance of each pixel being the thing to segment out and the % chance of it being the background, which is why it has 2 channels instead of 1 at the end. I just don't understand why it runs fine on one computer but throws this error on another when it's the exact same code. The only difference is that the computer where it works is running R2020b and the one where it fails is running R2021a.
Edit: I tested with R2020b on both, and the second computer still gets the same error while the first doesn't.
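For reference, the 2 in [32 32 2] comes from the final 1-by-1 convolution that unetLayers builds. A minimal sketch to check its filter count (assuming the default layer name 'Final-ConvolutionLayer' that unetLayers assigns):
lgraph = unetLayers([32 32], 2);
% locate the final 1x1 convolution by its default name
idx = arrayfun(@(l) strcmp(l.Name,'Final-ConvolutionLayer'), lgraph.Layers);
disp(lgraph.Layers(idx).NumFilters)   % 2 filters -> [32 32 2] output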
% location of training images
imageDir = fullfile('..\Data_Processing\trainingImages');
% location of masks
labelDir = fullfile('..\Data_Processing\trainingLabels');
imds = imageDatastore(imageDir);
classNames = ["fluid", "background"];
labelIDs = [254 1];
pxds = pixelLabelDatastore(labelDir,classNames,labelIDs);
augmenter = imageDataAugmenter('RandRotation',[0 90],'RandXReflection',true);
% create patches of training data
patchSize = [32 32];
patchPerImage = 160;
miniBatchSize = 32;
patchds = randomPatchExtractionDatastore(imds,pxds,patchSize, ...
    'PatchesPerImage',patchPerImage,'DataAugmentation',augmenter);
patchds.MiniBatchSize = miniBatchSize;
minibatch = preview(patchds);
disp(minibatch);
% create U-Net
imageSize = [32 32];
numClasses = 2;
lgraph = unetLayers(imageSize, numClasses);
options = trainingOptions('sgdm', ...
    'InitialLearnRate',1e-3, ...
    'MaxEpochs',5, ...
    'VerboseFrequency',10);
predictPatchSize = [32 32]; % patch size used later at prediction time
[net,info] = trainNetwork(patchds,lgraph,options);
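A quick sanity check before calling trainNetwork is to compare what the network emits with what the datastore returns as the response (a sketch; ResponsePixelLabelImage is the table variable name randomPatchExtractionDatastore uses when the response comes from a pixelLabelDatastore):
% opens an interactive summary, including the output size of the last layer
analyzeNetwork(lgraph);
% the response patches are categorical arrays of size patchSize
sample = preview(patchds);
disp(size(sample.ResponsePixelLabelImage{1}))   % expected: 32 32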
Accepted Answer
KSSV on 28 Jul 2021
This is due to the number of filters at the Final-ConvolutionLayer. By default the number of filters is 2; you need to change it to 1. You can do the following:
- Type deepNetworkDesigner at the command window.
- A GUI named Deep Network Designer will open (be patient, it may take some time).
- Once the GUI is open, import the created lgraph and you will see your network architecture.
- Use the zoom in, zoom out, and fit-to-view options to go to the Final-ConvolutionLayer (third from the last).
- Click on that block; its properties appear on the right.
- NumFilters is 2; change it to 1.
- Export the network to the workspace; it will be saved as lgraph_1.
- Now try training using the exported lgraph_1:
[net,info] = trainNetwork(patchds,lgraph_1,options);
Note that all the above steps can also be done from the command line; see the documentation. A rough equivalent is sketched below.
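For reference, a command-line sketch of the same change (assuming the default layer name 'Final-ConvolutionLayer' produced by unetLayers; adjust the name if your graph differs):
% build a 1x1 convolution with a single filter and swap it into the graph
newFinalConv = convolution2dLayer(1, 1, 'Name','Final-ConvolutionLayer');
lgraph_1 = replaceLayer(lgraph, 'Final-ConvolutionLayer', newFinalConv);
% then train with lgraph_1 as shown above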