How do I output hidden layers to a custom loss function in a regularized autoencoder?
I am creating a regularized autoencoder in which the latent layer outputs the results of a regression task while the decoder reconstructs the input image. I would like the network to pass both the latent-layer output and the image reconstruction to a mean-squared-error loss function. The documentation for variational autoencoders suggests using a custom training loop, but I am concerned about debugging a custom loop on top of the custom layers I need in order to tie the encoder and decoder weights. Surely an architecture that is over a decade old has been integrated into the dlnetwork, custom-loss, and trainnet functionality? Is there some way to handle both outputs in a custom loss function that can then be passed to trainnet?
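For concreteness, here is a sketch of the kind of loss I have in mind (all names are placeholders, and I am not certain trainnet accepts exactly this form for a two-output network):

```matlab
% Hypothetical sketch: a custom loss combining the reconstruction error
% with the latent-regression error. My understanding is that trainnet
% passes each network output followed by its matching target, in order.
lossFcn = @(YRecon, YLatent, TRecon, TLatent) ...
    mse(YRecon, TRecon) + 0.5*mse(YLatent, TLatent);

% Assuming cds is a combined datastore yielding
% {inputImage, targetImage, regressionTarget} on each read:
% net = trainnet(cds, net, lossFcn, trainingOptions("adam"));
```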
I was anticipating syntax like outputs = outputLayer(numberOfOutputs, Name='out'), which could then be added and tied to the model with net = addLayers(net, outputs); net = connectLayers(net, 'encoded', 'out/in1'); net = connectLayers(net, 'reconstruction', 'out/in2');. The TensorFlow models of this type return a list of outputs, and the multiple-output documentation suggests there is similar functionality somewhere in MATLAB's deep learning tools.
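In other words, I was hoping the two branches could simply be left as unconnected outputs, roughly like this (a sketch only, assuming 'encoded' and 'reconstruction' are the final layers of the two branches):

```matlab
% Sketch: with dlnetwork, no explicit output-layer object appears to be
% needed; any layer output left unconnected becomes a network output.
% lgraph here is assumed to already hold both branches of the network.
net = dlnetwork(lgraph, Initialize=false);
net = initialize(net);

% net.OutputNames should now list both branches, in the order the
% custom loss function would receive their predictions.
disp(net.OutputNames)
```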
