Could I learn from past data INCLUDING actions? Could I make a vector of actions to be used in a certain order?
Cecilia S. on 16 Jun 2021
If I have a complete set of past data (observations) and a list of the actions taken by some agent (or human), could I update my policy using that instead of running my simulated environment dynamics?
I have a DQN agent that was initially trained on simulated data. As usual, my agent chose actions following some policy and some action-selection method (in my case, epsilon-greedy selection). Now I would like to update my DQN with real-world past data; how could that be done?
I don't seem to be able to pass the action as an input to the step function (I could modify it afterwards, but then the agent would be evaluating the wrong action). Is there a way to "force" the action value (at the input of the step function) so that the system evaluates that action instead of the one selected by my current exploration/exploitation method?
Emmanouil Tzorakoleftherakis on 22 Jun 2021
If the historical observations do not depend on the actions taken (think of stock values, or historical power demand), you could set up your environment so that the agent uses this data for observations. The agent will still be taking actions, though.
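To make the idea concrete, here is a minimal sketch (in Python, not Reinforcement Learning Toolbox API) of such an environment: the logged observation sequence drives the state transitions, the agent's action only affects the reward. All names (`HistoricalObsEnv`, `reward_fn`) and the data are illustrative assumptions.

```python
import numpy as np

class HistoricalObsEnv:
    """Replays a fixed log of observations (e.g. historical power demand).

    The agent still chooses actions, which affect the reward, but the
    next observation is simply the next entry in the log -- it does not
    depend on the action.
    """
    def __init__(self, observations, reward_fn):
        self.observations = observations  # array of shape (T, obs_dim)
        self.reward_fn = reward_fn        # reward_fn(obs, action) -> float
        self.t = 0

    def reset(self):
        self.t = 0
        return self.observations[0]

    def step(self, action):
        # Reward depends on the agent's action; the transition does not.
        reward = self.reward_fn(self.observations[self.t], action)
        self.t += 1
        done = self.t >= len(self.observations) - 1
        next_obs = self.observations[min(self.t, len(self.observations) - 1)]
        return next_obs, reward, done

# Illustrative use: a 5-step logged series, reward = negative tracking error.
obs_log = np.arange(5.0).reshape(-1, 1)
env = HistoricalObsEnv(obs_log, reward_fn=lambda o, a: -abs(float(o[0]) - a))
obs = env.reset()
next_obs, reward, done = env.step(action=0.0)
```

In the Toolbox you would express the same structure with a custom environment whose step function advances an index into your logged data.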
If the above is not the case, what you are referring to is often called offline RL. This is something we are looking at, but we do not have functionality that supports it right now.
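Conceptually, an offline update does exactly the "forcing" the question describes: the update uses the action recorded in the log rather than one drawn from an exploration policy. A minimal tabular Q-learning sketch in Python (the transition log, state/action counts, and hyperparameters are all made up for illustration; this is not Toolbox functionality):

```python
import numpy as np

# Logged transitions: (state, action, reward, next_state, done).
# These came from some past agent or human -- we never query a policy.
log = [(0, 1, 1.0, 1, False),
       (1, 0, 0.0, 2, True)]

n_states, n_actions = 3, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9  # learning rate, discount factor

for s, a, r, s_next, done in log:
    # The logged action `a` replaces epsilon-greedy selection entirely.
    target = r if done else r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])
```

A DQN version is the same loop with the table replaced by a network and a gradient step toward `target`; the key caveat of offline RL is that the learned policy may prefer actions the log never covers, which is why naive replay of historical data can misbehave.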
Hope this helps