How to connect a fully connected layer with a convolutional layer
Hello,
I am trying to train this network:
layers = layerGraph([ ...
imageInputLayer([1,1,128],"Name",'imgIn',"Normalization","none")
fullyConnectedLayer(256,'Name','bufferFc')
leakyReluLayer(0.01,"Name",'leakyrelu')
batchNormalizationLayer("Name",'batchnorm')
fullyConnectedLayer(128,'Name','fc_decoder1')
leakyReluLayer(0.01,"Name",'leakyrelu_1')
batchNormalizationLayer("Name",'batchnorm_1')
fullyConnectedLayer(1024,'Name','fc_decoder2')
leakyReluLayer(0.01,"Name",'leakyrelu_2')
batchNormalizationLayer("Name",'batchnorm_2')
transposedConv2dLayer(1,512,"Stride",2,"Name",'transpose_conv_7') ...
])
The annoying part is that this network should be valid: these layers are part of a bigger network that I am trying to split into two networks (an encoder and a decoder), and I only added an "imageInputLayer" and one fully connected layer.
When I analyze the network using "analyzeNetwork(layers)", it checks out OK: the output size of the last fully connected layer is [1,1,1024], as it is for the following batch normalization and leaky ReLU layers.
However, when I try to convert this into a dlnetwork
decoderNet = dlnetwork(layers)
I get this error:
Error using dlnetwork/initialize (line 405)
Invalid network.
Error in dlnetwork (line 191)
net = initialize(net, dlX{:});
Caused by:
Layer 'transpose_conv_7': Input size mismatch. Size of input to this layer is different from the expected input
size.
Inputs to this layer:
from layer 'batchnorm_2' (size 1024(C) × 1(B))
Apparently, the output of the fully connected layers loses its spatial dimensions and keeps only the channel and batch dimensions.
Here is a list of things that I tried that didn't work:
- Converting it to a layerGraph first
- Using depthToSpace2dLayer(blockSize) with a block size of [1 1]
- Adding a custom layer (AddDim) that creates a new dlarray with the correct dimensions and copies the values over. This raised a different error:
Caused by:
Layer 'AddDim': Input size mismatch. Incorrect type of 'Z' for 'predict' in Layer 'addDimLayer'. Expected an
unformatted dlarray, but instead was formatted.
When I changed it to an unformatted dlarray, this was the error:
Caused by:
Layer 'AddDim': Input size mismatch. Size of input to this layer is different from the expected input size.
Inputs to this layer:
from layer 'batchnorm_2' (size 1024(C) × 1(B))
Thanks in advance
3 Comments
Mohammad Sami
on 30 Jun 2021
Can I ask if you need the fully connected layers ? You can potentially have a Fully Convolutional Encoder Decoder Network instead by replacing all your fully connected layers with the transposeConv2dLayer.
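That suggestion can be sketched roughly as follows (layer names here are hypothetical, not from the original network). A transposed convolution with a 1-by-1 filter applied to a 1-by-1 input behaves like a fully connected layer, but its output keeps the spatial 'SSCB' format that the later transposed convolutions expect:

```matlab
% Hypothetical fully convolutional decoder head: 1x1 transposed
% convolutions stand in for the fully connected layers, so the data
% keeps its spatial dimensions all the way to 'transpose_conv_7'.
layers = [
    imageInputLayer([1 1 128], 'Name', 'imgIn', 'Normalization', 'none')
    transposedConv2dLayer(1, 256, 'Name', 'tconv_buffer')
    leakyReluLayer(0.01, 'Name', 'leakyrelu')
    batchNormalizationLayer('Name', 'batchnorm')
    transposedConv2dLayer(1, 1024, 'Name', 'tconv_decoder')
    leakyReluLayer(0.01, 'Name', 'leakyrelu_2')
    batchNormalizationLayer('Name', 'batchnorm_2')
    transposedConv2dLayer(1, 512, 'Stride', 2, 'Name', 'transpose_conv_7')
    ];
decoderNet = dlnetwork(layerGraph(layers));
```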
Accepted Answer
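One way to resolve the format mismatch, sketched here under the assumption of R2021a or later (where custom layers can inherit from the nnet.layer.Formattable mixin and therefore receive and return formatted dlarrays), is a custom layer that relabels the 'CB' output of the fully connected stack as 'SSCB' with 1-by-1 spatial dimensions:

```matlab
classdef addDimLayer < nnet.layer.Layer & nnet.layer.Formattable
    % Relabels a 'CB' (channel x batch) dlarray as 'SSCB' with 1x1
    % spatial dimensions, so a transposed convolution can follow a
    % fully connected layer inside a dlnetwork.
    methods
        function layer = addDimLayer(name)
            layer.Name = name;
        end
        function Z = predict(~, X)
            % X arrives formatted as 'CB' from the fully connected /
            % batch normalization layers; strip the labels, reshape to
            % 1x1xCxB, and relabel as 'SSCB'.
            X = stripdims(X);
            Z = dlarray(reshape(X, 1, 1, size(X, 1), size(X, 2)), 'SSCB');
        end
    end
end
```

Placing `addDimLayer('addDim')` between 'batchnorm_2' and 'transpose_conv_7' should then let `dlnetwork(layers)` initialize, since the transposed convolution sees the spatial format it expects. The earlier "Expected an unformatted dlarray" error is exactly what the Formattable mixin addresses.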