nftool output transfer function

Hello,
I am using nftool to build a network that takes a vector of 8192 data points (in reality spread across a frequency dimension), passes it through a hidden layer with the standard tansig transfer function, and reduces it to 3 outputs that describe the intensity of certain features in the frequency-domain data. The three outputs must sum to 1, and each can take any value from 0 to 1. My training data samples this space in increments of 0.01 for each of the three parameter values. I am currently using nftool with 30 nodes in the hidden layer and 3 outputs. I tried using two outputs, since the third variable is not independent, but that seemed to make things worse.
Upon training, my network converges rapidly and the regression plot looks good. When I test the net with new simulation data, it is remarkably robust, extracting the correct values even when the noise makes the data "bad" to my eye. However, when I use my actual data, the network returns negative values for some of the variables, which should not be possible. The output layer has a linear transfer function. I am wondering if using a sigmoidal output function with values of 0 to 1 would solve this issue in a straightforward way. I am also exploring the possibility that my simulated noise (randn) does not recapitulate the noise in my system well, but using a sigmoidal output layer might be the easiest way to take care of this. Once I generate my network using nftool, can I change the output layer transfer function prior to training? If I can change that transfer function, could I complete the training inside nftool? I find it very easy to use and would like to stick with it for simplicity.
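For reference, the nftool setup above is roughly equivalent to this command-line sketch (inputs and targets are placeholder names for my 8192-by-N data matrix and 3-by-N parameter matrix):
% Rough equivalent of the nftool configuration: 30 tansig hidden nodes, linear (purelin) output
net = fitnet(30);
[net, tr] = train(net, inputs, targets);
pred = net(inputs);    % 3-by-N predictions; nothing bounds these to [0,1]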
Thanks

Accepted Answer

Madhav Thakker on 15 Mar 2021
Hi Matthew,
The range of the neural network's outputs depends on the transfer function of the output layer, so yes, changing the output transfer function is a good idea.
nftool is intended for training a simple two-layer feed-forward network; you cannot change the output transfer function or load a custom-made network from within the GUI.
If you want to handcraft a regression network according to your requirements, you can look into https://in.mathworks.com/help/deeplearning/ug/train-a-convolutional-neural-network-for-regression.html, which should give you more freedom for experimentation.
You can easily change the transfer function at the command line:
net.layers{end}.transferFcn = 'logsig';
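For example, a minimal end-to-end sketch (here x and t stand in for your 8192-by-N inputs and 3-by-N targets):
net = fitnet(30);                          % same two-layer network nftool builds
net.layers{2}.transferFcn = 'logsig';      % layer 2 is the output layer; swap purelin for logsig
[net, tr] = train(net, x, t);              % train from the command line as usual
y = net(x);                                % outputs are now bounded to (0,1)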
Hope this helps.
  2 Comments
Matthew Merritt on 6 Apr 2021
I have a follow-up question. I think that a sigmoidLayer could serve my purposes well, but I need 3 outputs for my problem. It looks like sigmoidLayer only allows a single output. Is this true?


More Answers (0)

Release

R2020b
