PPO Agent - Initialization of actor and critic networks
Whenever a PPO agent is initialized in MATLAB, according to the documentation the parameters of both the actor and the critic are set randomly. However, I know this is not the only possible choice: other initialization schemes exist (e.g., orthogonal initialization), and they can sometimes improve the agent's eventual performance.
- Is there a reason why random initialization was chosen as the default method here?
- Is it possible to easily specify a different initialization method within the Reinforcement Learning Toolbox, without starting from scratch?
Accepted Answer
Venu
on 19 Mar 2024
Random initialization encourages early exploration by starting the policy and value functions from an unbiased, non-deterministic state. It also requires no tuning or assumptions about the network architecture, which makes it a reasonable default choice.
MATLAB's Reinforcement Learning Toolbox does not expose a high-level option for specifying the initialization method of the actor and critic networks within a PPO agent (or other agents).
So, short of starting from scratch as you mentioned: when you build the actor and critic networks yourself with MATLAB's Deep Learning Toolbox (e.g., using layerGraph, dlnetwork, or a layer array), you can set the initializer for each layer individually and then pass the finished networks to the PPO agent creation function, as in the sketch below.
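Here is a minimal sketch of that workflow, assuming a toy environment with a 4-element observation vector and a two-element discrete action set (the specs below are placeholders; use getObservationInfo and getActionInfo on your own environment). The built-in orthogonal initializer is requested per layer through the 'WeightsInitializer' name-value argument of fullyConnectedLayer:

% Placeholder specs for illustration only.
obsInfo = rlNumericSpec([4 1]);
actInfo = rlFiniteSetSpec([-1 1]);

% Critic (value function) network with orthogonal weight initialization.
criticNet = [
    featureInputLayer(prod(obsInfo.Dimension))
    fullyConnectedLayer(64,'WeightsInitializer','orthogonal')
    reluLayer
    fullyConnectedLayer(1,'WeightsInitializer','orthogonal')];
critic = rlValueFunction(dlnetwork(criticNet),obsInfo);

% Actor (policy) network using the same initializer.
actorNet = [
    featureInputLayer(prod(obsInfo.Dimension))
    fullyConnectedLayer(64,'WeightsInitializer','orthogonal')
    reluLayer
    fullyConnectedLayer(numel(actInfo.Elements),'WeightsInitializer','orthogonal')
    softmaxLayer];
actor = rlDiscreteCategoricalActor(dlnetwork(actorNet),obsInfo,actInfo);

% The pre-initialized networks are wrapped into the agent as usual.
agent = rlPPOAgent(actor,critic);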
For further reading, MathWorks has a documentation example comparing three weight initializers when training LSTMs ("Compare Layer Weight Initializers" in the Deep Learning Toolbox documentation).
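If none of the built-in choices ('glorot', 'he', 'orthogonal', 'narrow-normal', ...) fits your needs, 'WeightsInitializer' also accepts a function handle of the form weights = func(sz). As an illustrative (not official) sketch, here is a gain-scaled orthogonal initializer of the kind sometimes recommended for PPO policy output layers; the function name, the 0.01 gain, and numActions are all placeholders:

% Usage: pass a handle that closes over the desired gain.
% numActions is a placeholder for your action count.
outLayer = fullyConnectedLayer(numActions, ...
    'WeightsInitializer',@(sz) scaledOrthogonal(sz,0.01));

function W = scaledOrthogonal(sz,gain)
    % QR-based orthogonal initialization scaled by a gain factor.
    % For a fully connected layer, sz is [outputSize inputSize].
    flat = [sz(1), prod(sz(2:end))];
    Z = randn(max(flat),min(flat));
    [Q,~] = qr(Z,0);        % columns of Q are orthonormal
    if flat(1) < flat(2)
        Q = Q';             % transpose when the matrix is wide
    end
    W = gain*reshape(Q,sz);
end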
Hope this helps to an extent!