RL DDPG agent does not seem to learn, aircraft control problem
Hello everyone,
I’m back with some updates on my mixed Reinforcement Learning (RL) and Supervised Learning training. A few days ago, I posted a question here on MathWorks about the working principle of “external actions” in the RL training block. Based on the suggestions I received, I have started a hybrid training approach.
I begin by injecting external actions from the controller for 75 seconds (1/4 of the entire episode length). After this, the agent takes action until the pitch rate error reaches 5 degrees per second. When this threshold is reached, the external controller takes over again. The external actions are then cut off once the pitch rate has stayed very close to 0 degrees per second for about 40 seconds. The agent then takes control again, and this cycle continues.
I have also introduced a maximum number of allowed interventions. If the agent exceeds this threshold, the simulation stops and a penalty is applied. I also apply a penalty every time the external controller must intervene again, while a bonus is given every time the agent makes progress within the time window when it is left alone. This system of bonuses and penalties is added to the standard reward, which takes into account the altitude error, the flight path angle error, and the pitch rate error. The weight coefficients for these errors are 1, 1, and 10, respectively, because I want to emphasize that the aircraft must maintain level wings.
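For reference, the bookkeeping described above can be sketched roughly as follows. All values and thresholds are illustrative placeholders, not my actual model parameters:

```matlab
% Sketch of the intervention/reward bookkeeping described above.
% All signal values and penalty magnitudes are illustrative placeholders.
altError       = 12;    % altitude error, m
fpaError       = 0.5;   % flight path angle error, deg
pitchRateError = 6;     % pitch rate error, deg/s

maxInterventions  = 10;     % assumed cap on external take-overs
interventionCount = 0;
externalActive    = false;  % agent currently in control
isDone            = false;

% Weighted tracking reward (weights 1, 1, 10 as in the post)
w = [1 1 10];
reward = -(w(1)*abs(altError) + w(2)*abs(fpaError) + w(3)*abs(pitchRateError));

% External controller takes over when pitch rate error exceeds 5 deg/s
if ~externalActive && abs(pitchRateError) > 5
    externalActive    = true;
    interventionCount = interventionCount + 1;
    reward = reward - 50;            % penalty per intervention (placeholder)
    if interventionCount > maxInterventions
        isDone = true;               % stop the simulation
        reward = reward - 500;       % terminal penalty (placeholder)
    end
end
```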
The initial conditions are always random, and the setpoint for altitude is always set 50 meters above the initial altitude.
Unfortunately, after the first training session, I haven’t seen any progress. In your opinion, is it worth making another attempt, or is the whole setup wrong? Thank you.
6 Comments
Umar
on 3 Aug 2024
Based on the described training setup, it seems that the integration of external actions and the reward system is well thought out. However, the lack of progress indicates potential issues in the training process. Here is an example code snippet to help you structure your RL training loop.
% Define a custom RL training loop (sketch; assumes env and agent
% objects expose reset/step/train-style methods)
num_episodes = 1000;
for episode = 1:num_episodes
    % Reset the environment at the start of each episode
    state = env.reset();
    done = false;
    while ~done
        % Select an action from the current policy
        action = agent.choose_action(state);
        % Apply the action and observe the transition
        [next_state, reward, done] = env.step(action);
        % Update the agent with the observed transition
        agent.train(state, action, reward, next_state);
        state = next_state;
    end
end
Hope this helps.
Leonardo Molino
on 4 Aug 2024
Umar
on 4 Aug 2024
@Leonardo Molino,
Please see my response to your comments below.
Is it actually possible to implement such a training loop in MATLAB?
Yes, it is possible to implement a training loop in MATLAB.
Anyway my agent setup is the following
The code you provided demonstrates a MATLAB training loop for RL using the DDPG (Deep Deterministic Policy Gradient) algorithm. You have configured the agent with options such as sample time, noise options, experience buffer length, and mini-batch size; the actor and critic networks are configured as well, and your optimizer options are set, which all looks good. Good luck 👍
Leonardo Molino
on 5 Aug 2024
Umar
on 6 Aug 2024
Hi @Leonardo Molino,
My suggestion would be to start by checking the learning rates, network complexity, and the quality of the training data. Additionally, monitor the loss curves and rewards during training, which can provide insight into the model's performance. Hope this helps.
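For example, these are the kinds of settings I would check first, using the Reinforcement Learning Toolbox options objects (the values below are common starting points, not tuned for your model):

```matlab
% Example optimizer settings to inspect: learning rates and gradient
% clipping for the critic and actor (values are starting points only)
criticOpts = rlOptimizerOptions('LearnRate',1e-3,'GradientThreshold',1);
actorOpts  = rlOptimizerOptions('LearnRate',1e-4,'GradientThreshold',1);

% Training options that surface the reward and loss behavior:
% the Episode Manager plot shows episode reward and Q0 estimates
trainOpts = rlTrainingOptions( ...
    'MaxEpisodes',1000, ...
    'Plots','training-progress', ...
    'Verbose',true);   % also log episode rewards to the console
```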
Leonardo Molino
on 6 Aug 2024
Answers (0)