How to handle soft weight constraints in a neural network

Let us assume that there is a feedforward neural network with two layers, and the weights of each layer are constrained so that the sum of the weights in each layer equals a constant value and all weights are non-negative. You may wonder why we should make such assumptions. The answer: I have an optimization problem whose unknown variables can be mapped onto a neural network, so that the weights represent my variables. Can anyone suggest a way to handle these constraints? For now, I have integrated the constraints into the cost function as penalty terms, though the way I did it is not working very well: I add each constraint to the main cost function using max. For example, for the constraint A(x) < x, I add the penalty max(A(x)/x - 1, 0) to the main cost function, as sketched below.
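For concreteness, here is a minimal sketch of the kind of penalty terms I am adding; the names W, C, lambda, and mainLoss are illustrative, not my actual code:

% Minimal sketch of the soft-penalty approach (names are illustrative).
% W: one layer's weight vector, C: the required sum,
% lambda: a penalty weight chosen by hand.
penaltySum    = lambda * (sum(W) - C)^2;      % nonzero when sum(W) ~= C
penaltyNonneg = lambda * sum(max(-W, 0).^2);  % nonzero when any W < 0
loss = mainLoss + penaltySum + penaltyNonneg; % soft-constrained cost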

Accepted Answer

Matt J on 4 Jul 2022
Edited: Matt J on 4 Jul 2022
If you wish to train with standard unconstrained stochastic gradient descent algorithms, you will probably have to make a custom layer in which the score function is calculated according to

$$z = C \, \frac{\sum_i e^{v_i} \, x_i}{\sum_j e^{v_j}},$$

where the $v_i$ are the learnable parameters. This is equivalent to weighting the inputs $x_i$ with positive weights that sum to C.
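For example, a minimal sketch of such a layer in Deep Learning Toolbox might look like the following; the class name, the constructor arguments, and the zero initialization of v (which starts every effective weight at C/numFeatures) are illustrative assumptions, not a definitive implementation:

classdef sumToCLayer < nnet.layer.Layer
    properties
        C % required sum of the effective weights (fixed, not learned)
    end
    properties (Learnable)
        v % unconstrained learnable parameters, one per input feature
    end
    methods
        function layer = sumToCLayer(numFeatures, C, name)
            layer.Name = name;
            layer.C = C;
            layer.v = zeros(numFeatures, 1); % uniform effective weights C/numFeatures
        end
        function Z = predict(layer, X)
            % Softmax reparameterization: w is positive and sums to C
            % for any real v, so the constraints hold by construction.
            w = layer.C * exp(layer.v) / sum(exp(layer.v));
            Z = w' * X; % weighted sum; X is numFeatures-by-batchSize
        end
    end
end

Because w is positive and sums to C for every value of v, ordinary SGD or Adam updates on v can never violate the constraints, so no penalty terms are needed in the loss.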
