Reinforcement Learning with Parallel Computing

PB75 on 9 Aug 2021
Hi All,
I have been training a TD3 RNN agent on my local PC for months now. Because of the long training time on my PC, I have been saving the experience buffer so that I can reload the pretrained agent and restart training.
I now have access to my University's HPC server, so I can now use parallel computing to speed up the training process.
However, when I attempt to restart training with the pretrained agent, now with parallel computing on the HPC server (it previously ran on my local PC with no issues, without parallel computing), it flags the following issue.
Do I need to start with a fresh agent now that I am using parallel computing?
Also is the following code to start parallel computing correct?
trainingOpts.UseParallel = true;
trainingOpts.ParallelizationOptions.Mode = 'async';
trainingOpts.ParallelizationOptions.DataToSendFromWorkers = 'Experiences';
Thanks
Patrick

Answers (1)

Drew Davis on 9 Aug 2021
As of R2021a, Reinforcement Learning Toolbox does not support parallel training with recurrent (RNN) networks.
You can still reuse your current experience buffer to train new networks by replacing the TD3 agent's actor and critic with stateless (non-recurrent) versions:
% Keep the saved experiences when training restarts
agent.AgentOptions.ResetExperienceBufferBeforeTraining = false;
% Swap in networks without recurrent (LSTM) layers
setActor(agent,statelessActor);
setCritic(agent,statelessCritic);
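For illustration, here is a minimal sketch of what a stateless actor could look like. The specs, layer sizes, and names below are placeholders (not from your model; in practice use getObservationInfo/getActionInfo on your environment); the key point is that the network uses featureInputLayer and feedforward layers only, with no sequenceInputLayer or lstmLayer:
% Hypothetical specs -- replace with your environment's actual specs
obsInfo = rlNumericSpec([4 1]);
actInfo = rlNumericSpec([1 1]);
% Feedforward (stateless) actor network
actorNet = [
    featureInputLayer(prod(obsInfo.Dimension),'Name','obs')
    fullyConnectedLayer(64,'Name','fc1')
    reluLayer('Name','relu1')
    fullyConnectedLayer(prod(actInfo.Dimension),'Name','fc2')
    tanhLayer('Name','action')];
statelessActor = rlDeterministicActorRepresentation(actorNet, ...
    obsInfo,actInfo,'Observation',{'obs'},'Action',{'action'});
A stateless critic can be built the same way with rlQValueRepresentation; once neither network carries recurrent state, there is nothing that needs to be synchronized across parallel workers.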
Your snippet to set up TD3 parallel training looks correct.
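In full context, the setup could look like the sketch below; env and agent are assumed to already exist, and the episode/step budgets are placeholder values:
% Start a pool on the cluster first if one is not already open, e.g. parpool(8)
trainingOpts = rlTrainingOptions( ...
    'MaxEpisodes',5000, ...              % placeholder budget
    'MaxStepsPerEpisode',500, ...        % placeholder episode length
    'StopTrainingCriteria','AverageReward', ...
    'StopTrainingValue',1000);           % placeholder target
trainingOpts.UseParallel = true;
trainingOpts.ParallelizationOptions.Mode = 'async';
trainingOpts.ParallelizationOptions.DataToSendFromWorkers = 'Experiences';
trainingStats = train(agent,env,trainingOpts);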
Hope this helps
Drew
  1 Comment
PB75 on 9 Aug 2021
Hi Drew,
Thanks for your reply. So I cannot use LSTM layers with parallel training?
