
Deep Neural LSTM Network Issues

Pappu Murthy on 31 Aug 2022
Commented: Pappu Murthy on 11 Sep 2023
I am training a deep neural network with a regression layer at the end. I have 20 inputs and a sequence output with 10 steps. I have tried both LSTM and BiLSTM layers with roughly 100 to 200 hidden units, and I have also included two fully connected layers with about 100 hidden nodes each. But no matter what I change (learning rate, mini-batch size, number of hidden units, adding or removing ReLU layers, etc.), training plateaus very quickly, within a couple of epochs, and stays there for the rest of training. I cannot get the validation MSE below about 0.9. Is there anything else I can try to improve it?
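For context, the setup described above (20 input features, an LSTM or BiLSTM with 100 to 200 hidden units, two fully connected layers, and a regression output for a 10-step target) would look roughly like this in MATLAB. This is a sketch reconstructed from the description, not the actual code used; the layer sizes are the approximate values mentioned:

```matlab
% Hypothetical reconstruction of the described network.
numFeatures    = 20;   % number of input features
numHiddenUnits = 100;  % question mentions 100-200
numResponses   = 10;   % 10-step sequence target

layers = [
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits, 'OutputMode', 'sequence')  % or bilstmLayer(...)
    fullyConnectedLayer(100)
    reluLayer
    fullyConnectedLayer(numResponses)
    regressionLayer];
```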

Answers (1)

Udit06 on 6 Sep 2023
Hi Pappu,
I understand that you aim to improve the validation performance of your LSTM-based deep neural network. You can try the following approaches to reduce the validation MSE:
1) Normalize your input features to the range [0, 1] to stabilize the training process and improve convergence.
2) Add dropout layers to your network to reduce overfitting and obtain a more general model. You can refer to the MathWorks documentation on the "dropoutLayer" function to understand more about dropout.
3) Instead of using a fixed learning rate, use the Adam optimizer, which adapts per-parameter learning rates based on estimates of the gradients' moments. You can refer to the MathWorks documentation on "trainingOptions" to understand more about the Adam optimizer.
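The three suggestions above can be sketched together as follows. This is an illustrative example, not code from the original thread; the layer sizes, dropout probability, and training parameters are assumptions you would tune for your own data (`XTrain`, `YTrain`, `XVal`, `YVal` are placeholder variable names):

```matlab
% Sketch combining the suggested tweaks (sizes are illustrative):
% 1) rescale inputs to [0, 1] directly in the input layer,
% 2) add dropout to reduce overfitting,
% 3) train with the Adam optimizer.
layers = [
    sequenceInputLayer(20, 'Normalization', 'rescale-zero-one')
    lstmLayer(128, 'OutputMode', 'sequence')
    dropoutLayer(0.2)                 % drop 20% of activations during training
    fullyConnectedLayer(100)
    reluLayer
    fullyConnectedLayer(10)
    regressionLayer];

options = trainingOptions('adam', ...
    'InitialLearnRate', 1e-3, ...
    'MaxEpochs', 50, ...
    'MiniBatchSize', 32, ...
    'Shuffle', 'every-epoch', ...
    'ValidationData', {XVal, YVal}, ...   % placeholder validation set
    'Plots', 'training-progress');

% net = trainNetwork(XTrain, YTrain, layers, options);
```

Adam tends to be more forgiving of the initial learning rate than plain SGD, so it is a reasonable first change when training plateaus early.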
I hope this helps.
  1 Comment
Pappu Murthy on 11 Sep 2023
I have already tried the ideas you suggested, but they did not help much.


Release

R2021b
