
rlMBPOAgent

Model-based policy optimization reinforcement learning agent

    Description

    A model-based policy optimization (MBPO) agent is an online, off-policy, model-based reinforcement learning agent. An MBPO agent contains an internal model of the environment, which it uses to generate additional experiences without interacting with the environment.

    During training, the MBPO agent generates real experiences by interacting with the environment. These experiences are used to train the internal environment model, which is used to generate additional experiences. The training algorithm then uses both the real and generated experiences to update the agent policy.
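    The following MATLAB-style sketch outlines this loop at a high level. It is illustrative only: collectRealExperience, trainEnvironmentModel, rolloutModel, and updatePolicy are hypothetical placeholder functions, not Reinforcement Learning Toolbox APIs.

    % Illustrative pseudocode only -- the helper functions below are
    % hypothetical placeholders, not toolbox functions.
    for episode = 1:maxEpisodes
        % 1. Interact with the real environment and store real experiences.
        realBuffer = collectRealExperience(agent,realEnv,realBuffer);

        % 2. Fit the internal environment model (transition, reward, and
        %    is-done functions) to the real experiences.
        envModel = trainEnvironmentModel(envModel,realBuffer);

        % 3. Roll out the learned model for the current roll-out horizon to
        %    generate additional, model-based experiences.
        modelBuffer = rolloutModel(envModel,agent,realBuffer,modelBuffer);

        % 4. Update the base off-policy agent using both real and generated
        %    experiences.
        agent = updatePolicy(agent,realBuffer,modelBuffer);
    end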

    Creation

    Description

    example

    agent = rlMBPOAgent(baseAgent,envModel) creates a model-based policy optimization agent with default options and sets the BaseAgent and EnvModel properties.

    agent = rlMBPOAgent(___,agentOptions) creates a model-based policy optimization agent using specified options and sets the AgentOptions property.

    Properties


    Base reinforcement learning agent, specified as an off-policy agent object.

    For environments with a discrete action space, specify a DQN agent using an rlDQNAgent object.

    For environments with a continuous action space, specify a DDPG, TD3, or SAC agent using an rlDDPGAgent, rlTD3Agent, or rlSACAgent object, respectively.

    Environment model, specified as an rlNeuralNetworkEnvironment object. This environment contains transition functions, a reward function, and an is-done function.

    Agent options, specified as an rlMBPOAgentOptions object.

    Current roll-out horizon value, specified as a positive integer. For more information on setting the initial horizon value and the horizon update method, see rlMBPOAgentOptions.

    Model experience buffer, specified as an rlReplayMemory object. During training the agent stores each of its generated experiences (S,A,R,S',D) in a buffer. Here:

    • S is the current observation of the environment.

    • A is the action taken by the agent.

    • R is the reward for taking action A.

    • S' is the next observation after taking action A.

    • D is the is-done signal after taking action A.
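    As a hedged sketch (assuming obsInfo and actInfo as in the example below; the buffer length and random data are illustrative, and the field names follow the rlReplayMemory experience format), a single experience with these five elements could be stored as follows.

    % Create a model experience buffer and append one illustrative experience.
    modelBuffer = rlReplayMemory(obsInfo,actInfo,100000);

    exp.Observation     = {rand(obsInfo.Dimension)};   % S  -- current observation
    exp.Action          = {rand(actInfo.Dimension)};   % A  -- action taken
    exp.Reward          = 1;                           % R  -- reward for taking A
    exp.NextObservation = {rand(obsInfo.Dimension)};   % S' -- next observation
    exp.IsDone          = 0;                           % D  -- is-done signal

    append(modelBuffer,exp);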

    Option to use exploration policy when selecting actions, specified as one of the following logical values.

    • true — Use the base agent exploration policy when selecting actions.

    • false — Use the base agent greedy policy when selecting actions.

    The initial value of UseExplorationPolicy matches the value specified in BaseAgent. If you change the value of UseExplorationPolicy in either the base agent or the MBPO agent, the same value is used for the other agent.
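    As a hedged sketch (assuming agent is an MBPO agent created as in the example below and obsInfo holds its observation specification), you can switch to the greedy policy for evaluation; because the property is shared, the base agent switches as well.

    % Select actions greedily for evaluation. The observation passed to
    % getAction here is random and purely illustrative.
    agent.UseExplorationPolicy = false;
    action = getAction(agent,{rand(obsInfo.Dimension)});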

    This property is read-only.

    Observation specifications, specified as a reinforcement learning specification object or an array of specification objects defining properties such as dimensions, data type, and names of the observation signals.

    The value of ObservationInfo matches the corresponding value specified in BaseAgent.

    This property is read-only.

    Action specifications, specified as a reinforcement learning specification object or an array of specification objects defining properties such as dimensions, data type, and names of the action signals.

    The value of ActionInfo matches the corresponding value specified in BaseAgent.

    Sample time of agent, specified as a positive scalar or as -1. Setting this parameter to -1 allows for event-based simulations.

    The initial value of SampleTime matches the value specified in BaseAgent. If you change the value of SampleTime in either the base agent or the MBPO agent, the same value is used for the other agent.

    Within a Simulink® environment, the RL Agent block in which the agent is specified executes every SampleTime seconds of simulation time. If SampleTime is -1, the block inherits the sample time from its parent subsystem.

    Within a MATLAB® environment, the agent is executed every time the environment advances. In this case, SampleTime is the time interval between consecutive elements in the output experience returned by sim or train. If SampleTime is -1, the time interval between consecutive elements in the returned output experience reflects the timing of the event that triggers the agent execution.
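    As a minimal sketch (the numeric value is arbitrary), you can set the sample time directly on the MBPO agent; per the description above, the base agent then uses the same value.

    % Use a fixed 0.05 s sample time (arbitrary illustrative value) ...
    agent.SampleTime = 0.05;
    % ... or use -1 for inherited or event-based execution.
    agent.SampleTime = -1;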

    Object Functions

    train — Train reinforcement learning agents within a specified environment
    sim — Simulate trained reinforcement learning agents within a specified environment

    Examples


    Create an environment interface and extract the observation and action specifications.

    env = rlPredefinedEnv("CartPole-Continuous");
    obsInfo = getObservationInfo(env);
    numObservations = obsInfo.Dimension(1);
    actInfo = getActionInfo(env);
    numActions = actInfo.Dimension(1);

    Create a base off-policy agent. For this example, use a SAC agent.

    agentOpts = rlSACAgentOptions;
    agentOpts.MiniBatchSize = 256;
    initOpts = rlAgentInitializationOptions(NumHiddenUnit=64);
    baseagent = rlSACAgent(obsInfo,actInfo,initOpts,agentOpts);

    Define a transition model for the neural network environment. For this example, create a single transition model. To account for modeling uncertainty, you can specify multiple transition models.

    statePath = featureInputLayer(numObservations, ...
        Normalization="none",Name="state");
    actionPath = featureInputLayer(numActions, ...
        Normalization="none",Name="action");
    commonPath = [concatenationLayer(1,2,Name="concat")
        fullyConnectedLayer(64,Name="FC1")
        reluLayer(Name="CriticRelu1")
        fullyConnectedLayer(64,Name="FC3")
        reluLayer(Name="CriticCommonRelu2")
        fullyConnectedLayer(numObservations,Name="nextObservation")];
    
    transitionNetwork = layerGraph(statePath);
    transitionNetwork = addLayers(transitionNetwork,actionPath);
    transitionNetwork = addLayers(transitionNetwork,commonPath);
    
    transitionNetwork = connectLayers(transitionNetwork,"state","concat/in1");
    transitionNetwork = connectLayers(transitionNetwork,"action","concat/in2");
    
    transitionNetwork = dlnetwork(transitionNetwork);
    
    transitionFcn = rlContinuousDeterministicTransitionFunction( ...
        transitionNetwork,obsInfo,actInfo,...
        ObservationInputNames="state",...
        ActionInputNames="action",...
        NextObservationOutputNames="nextObservation");
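    The example above uses a single transition model. As a hedged sketch of the ensemble approach mentioned earlier (transitionLayerGraph is a hypothetical variable holding the layer graph built above before conversion to a dlnetwork, and the array syntax assumes that rlNeuralNetworkEnvironment accepts a vector of transition function objects), you could add a second, independently initialized transition model:

    % Each dlnetwork call draws its own random initial weights, so the two
    % transition models start from different parameters.
    transitionNetwork2 = dlnetwork(transitionLayerGraph);
    transitionFcn2 = rlContinuousDeterministicTransitionFunction( ...
        transitionNetwork2,obsInfo,actInfo,...
        ObservationInputNames="state",...
        ActionInputNames="action",...
        NextObservationOutputNames="nextObservation");

    % Later, pass both models to the environment as an array:
    % generativeEnv = rlNeuralNetworkEnvironment(obsInfo,actInfo, ...
    %     [transitionFcn transitionFcn2],rewardFcn,isdoneFcn);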

    Create a reward model for the neural network environment.

    actionPath = featureInputLayer(numActions,...
        Normalization="none",Name="action");
    nextStatePath = featureInputLayer(numObservations,...
        Normalization="none",Name="nextState");
    commonPath = [concatenationLayer(1,2,Name="concat")
        fullyConnectedLayer(64,Name="FC1")
        reluLayer(Name="CriticRelu1")
        fullyConnectedLayer(64,Name="FC2")
        reluLayer(Name="CriticCommonRelu2")
        fullyConnectedLayer(64,Name="FC3")
        reluLayer(Name="CriticCommonRelu3")
        fullyConnectedLayer(1,Name="reward")];
    
    rewardNetwork = layerGraph(nextStatePath);
    rewardNetwork = addLayers(rewardNetwork,actionPath);
    rewardNetwork = addLayers(rewardNetwork,commonPath);
    
    rewardNetwork = connectLayers(rewardNetwork,"nextState","concat/in1");
    rewardNetwork = connectLayers(rewardNetwork,"action","concat/in2");
    
    rewardNetwork = dlnetwork(rewardNetwork);
    
    rewardFcn = rlContinuousDeterministicRewardFunction( ...
        rewardNetwork,obsInfo,actInfo, ...
        ActionInputNames="action",...
        NextObservationInputNames="nextState");

    Create an is-done model for the neural network environment.

    commonPath = [featureInputLayer(numObservations, ...
        Normalization="none",Name="nextState")
        fullyConnectedLayer(64,Name="FC1")
        reluLayer(Name="CriticRelu1")
        fullyConnectedLayer(64,Name="FC3")
        reluLayer(Name="CriticCommonRelu2")
        fullyConnectedLayer(2,Name="isdone0")
        softmaxLayer(Name="isdone")];
    
    isDoneNetwork = layerGraph(commonPath);
    isDoneNetwork = dlnetwork(isDoneNetwork);
    
    isdoneFcn = rlIsDoneFunction(isDoneNetwork,obsInfo,actInfo, ...
        NextObservationInputNames="nextState");

    Create the neural network environment.

    generativeEnv = rlNeuralNetworkEnvironment(obsInfo,actInfo,...
        transitionFcn,rewardFcn,isdoneFcn);

    Specify options for creating an MBPO agent. Specify the optimizer options for the transition network and use default values for all other options.

    MBPOAgentOpts = rlMBPOAgentOptions;
    MBPOAgentOpts.TransitionOptimizerOptions = rlOptimizerOptions(...
        LearnRate=1e-4,...
        GradientThreshold=1.0);

    Create the MBPO agent.

    agent = rlMBPOAgent(baseagent,generativeEnv,MBPOAgentOpts);
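    A hedged sketch of how this agent could then be trained and evaluated in the real cart-pole environment follows; the episode budget, stopping criterion, and step limits are arbitrary choices for illustration, not values from the original example.

    % Illustrative training configuration.
    trainOpts = rlTrainingOptions( ...
        MaxEpisodes=300, ...
        MaxStepsPerEpisode=500, ...
        StopTrainingCriteria="AverageReward", ...
        StopTrainingValue=470, ...
        Plots="training-progress");

    trainResults = train(agent,env,trainOpts);

    % Simulate the trained agent in the real environment.
    simOpts = rlSimulationOptions(MaxSteps=500);
    experience = sim(env,agent,simOpts);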

    Version History

    Introduced in R2022a