rlAgentInitializationOptions

Options for initializing reinforcement learning agents

Since R2020b

Description

Use the rlAgentInitializationOptions object to specify initialization options for an agent. To create an agent, use an agent creation function such as rlACAgent.

Creation

Description

initOpts = rlAgentInitializationOptions creates a default options object for initializing a reinforcement learning agent with default networks. Use the initialization options to specify agent initialization parameters, such as the number of units for each hidden layer of the agent networks and whether to use a recurrent neural network.

initOpts = rlAgentInitializationOptions(Name=Value) creates an initialization options object and sets its properties using one or more name-value arguments.

Properties

NumHiddenUnit — Number of units in each hidden fully connected layer of the agent networks, except for the fully connected layer just before the network output, specified as a positive integer. The value you set also applies to any LSTM layers.

Example: 64

UseRNN — Flag to use a recurrent neural network, specified as a logical value.

If you set UseRNN to true, during agent creation, the software inserts a recurrent LSTM layer with the output mode set to sequence in the output path of the agent networks. For more information on LSTM, see Long Short-Term Memory Neural Networks.

Note

TRPO agents do not support recurrent networks.

Example: true
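
As a sketch, you can pass an options object with UseRNN enabled to an agent constructor to obtain default recurrent networks. The observation and action specifications below are hypothetical placeholders, not part of this reference page.

```matlab
% Hypothetical specifications; replace with the ones from your environment.
obsInfo = rlNumericSpec([4 1]);        % 4-dimensional continuous observation
actInfo = rlFiniteSetSpec([-1 0 1]);   % discrete action with three possible values

initOpts = rlAgentInitializationOptions(UseRNN=true);
agent = rlPPOAgent(obsInfo,actInfo,initOpts);  % default networks now contain an LSTM layer
```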

Normalization — Normalization method, specified as one of the following values:

  • "none" — Do not normalize the input of the function approximator object.

  • "rescale-zero-one" — Normalize the input by rescaling it to the interval between 0 and 1. The normalized input Y is (U – LowerLimit)./(UpperLimit – LowerLimit), where U is the nonnormalized input. Note that nonnormalized input values lower than LowerLimit result in normalized values lower than 0. Similarly, nonnormalized input values higher than UpperLimit result in normalized values higher than 1. Here, UpperLimit and LowerLimit are the corresponding properties defined in the specification object of the input channel.

  • "rescale-symmetric" — Normalize the input by rescaling it to the interval between –1 and 1. The normalized input Y is 2(U – LowerLimit)./(UpperLimit – LowerLimit) – 1, where U is the nonnormalized input. Note that nonnormalized input values lower than LowerLimit result in normalized values lower than –1. Similarly, nonnormalized input values higher than UpperLimit result in normalized values higher than 1. Here, UpperLimit and LowerLimit are the corresponding properties defined in the specification object of the input channel.
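
In MATLAB syntax, the two rescaling formulas amount to the following. The channel limits and input value here are hypothetical, chosen only to illustrate the arithmetic.

```matlab
% Hypothetical channel limits, as defined in an rlNumericSpec object.
LowerLimit = -5;
UpperLimit = 5;
U = 2.5;                                                   % nonnormalized input

Y01  = (U - LowerLimit)./(UpperLimit - LowerLimit);        % "rescale-zero-one": 0.75
Ysym = 2*(U - LowerLimit)./(UpperLimit - LowerLimit) - 1;  % "rescale-symmetric": 0.5
```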

Note

When you specify the Normalization property of rlAgentInitializationOptions, normalization is applied only to the approximator input channels corresponding to rlNumericSpec specification objects in which both the UpperLimit and LowerLimit properties are defined. After you create the agent, you can use setNormalizer to assign normalizers that use any normalization method. For more information on normalizer objects, see rlNormalizer.

Example: "rescale-symmetric"

Object Functions

rlACAgent — Actor-critic (AC) reinforcement learning agent
rlPGAgent — Policy gradient (PG) reinforcement learning agent
rlDDPGAgent — Deep deterministic policy gradient (DDPG) reinforcement learning agent
rlDQNAgent — Deep Q-network (DQN) reinforcement learning agent
rlPPOAgent — Proximal policy optimization (PPO) reinforcement learning agent
rlTD3Agent — Twin-delayed deep deterministic (TD3) policy gradient reinforcement learning agent
rlSACAgent — Soft actor-critic (SAC) reinforcement learning agent
rlTRPOAgent — Trust region policy optimization (TRPO) reinforcement learning agent

Examples

Create an agent initialization options object. Specify the number of hidden units for each fully connected layer and specify that the agent networks use a recurrent neural network.

initOpts = rlAgentInitializationOptions(NumHiddenUnit=64,UseRNN=true)
initOpts = 
  rlAgentInitializationOptions with properties:

    NumHiddenUnit: 64
           UseRNN: 1
    Normalization: "none"

You can modify the options using dot notation. For example, set the number of hidden units to 128.

initOpts.NumHiddenUnit = 128
initOpts = 
  rlAgentInitializationOptions with properties:

    NumHiddenUnit: 128
           UseRNN: 1
    Normalization: "none"

To create your agent, use initOpts as an input argument of an agent constructor function.
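
For example, the following sketch passes an initialization options object to an agent constructor; the environment specifications are hypothetical placeholders.

```matlab
% Hypothetical specifications; replace with the ones from your environment.
obsInfo = rlNumericSpec([8 1],LowerLimit=-10,UpperLimit=10);
actInfo = rlNumericSpec([2 1]);

initOpts = rlAgentInitializationOptions( ...
    NumHiddenUnit=128, ...
    Normalization="rescale-symmetric");
agent = rlSACAgent(obsInfo,actInfo,initOpts);  % default networks built from initOpts
```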

Version History

Introduced in R2020b