Issue with Q0 Convergence during Training using PPO Agent
Muhammad Fairuz Abdul Jalal
on 10 Jul 2023
Edited: Emmanouil Tzorakoleftherakis
on 12 Jul 2023
Hi guys,
I have developed a model and trained it using a PPO agent. Overall, the training has been successful. However, I have encountered an issue with the Q0 values. The maximum achievable reward is 6000, and I set the training to stop at 98.5% of that maximum (5910).
During training, I noticed that the Q0 values did not converge as expected. In fact, they appear to be capped at 100, as shown in the figures. I am looking for an explanation of this behavior and trying to understand why the Q0 values are not reaching the expected convergence.
My agent options are as follows:
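(The exact configuration was attached as a screenshot; purely for reference, a PPO agent in MATLAB is typically set up along these lines, where every value below is an illustrative placeholder rather than my actual setting:)

```matlab
% Illustrative placeholders only -- not the actual settings used here.
agentOpts = rlPPOAgentOptions( ...
    'SampleTime',Ts, ...                    % Ts as defined in the model
    'ExperienceHorizon',512, ...
    'MiniBatchSize',128, ...
    'ClipFactor',0.2, ...
    'EntropyLossWeight',0.01, ...
    'NumEpoch',3, ...
    'AdvantageEstimateMethod','gae', ...
    'GAEFactor',0.95, ...
    'DiscountFactor',0.97);
agent = rlPPOAgent(actor,critic,agentOpts);
```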
If anyone has insights or explanations regarding the behavior of Q0 during training with the PPO agent, I would greatly appreciate your input. Your expertise and guidance would be invaluable in helping me understand and address this issue.
Thank you.
Accepted Answer
Emmanouil Tzorakoleftherakis
on 11 Jul 2023
Edited: Emmanouil Tzorakoleftherakis
on 12 Jul 2023
It seems you set the training to stop when the episode reward reaches the value of 0.985*(Tf/Ts)*3. I cannot comment on the value itself, but it is usually better to use the average reward as the stopping indicator, because averaging helps filter out outlier episodes.
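For example, a stopping criterion based on the average reward could be set up along these lines (a minimal sketch; the episode limits and averaging window are assumptions):

```matlab
% Sketch: stop training on the moving-average reward rather than a
% single episode's reward. Episode limits and window are assumptions.
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',5000, ...
    'MaxStepsPerEpisode',ceil(Tf/Ts), ...
    'ScoreAveragingWindowLength',20, ...         % episodes in the average
    'StopTrainingCriteria','AverageReward', ...
    'StopTrainingValue',0.985*(Tf/Ts)*3);        % same target as before
trainingStats = train(agent,env,trainOpts);
```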
Aside from that, in case it wasn't clear, the stopping criterion is not based on Q0 but on the light blue value (the individual episode reward) that you see in the plots you shared above. Q0 will improve as the critic trains, but it does not necessarily need to "converge" for training to stop. A better critic means more stable training, but at the end of the day you only care about your actor. This is usually why it takes a few trials to see which stopping criteria make sense.
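As for the apparent cap at 100: Q0 is the critic's estimate of the discounted long-term reward from the initial observation, so its magnitude is bounded by roughly rStep/(1 - gamma) rather than by the undiscounted episode total. Purely as a hypothetical illustration (your actual discount factor and per-step reward are not shown here):

```matlab
% Hypothetical numbers -- not taken from the question above.
gamma = 0.97;               % assumed discount factor
rStep = 3;                  % assumed per-step reward, from (Tf/Ts)*3
qMax  = rStep/(1 - gamma)   % = 100, matching the apparent Q0 ceiling
```

If your discount factor happens to be around 0.97, that alone would explain why Q0 plateaus near 100 even though the (undiscounted) episode reward climbs toward 5910.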