# rlPPOAgentOptions

Options for PPO agent

## Description

Use an `rlPPOAgentOptions` object to specify options for proximal policy optimization (PPO) agents. To create a PPO agent, use `rlPPOAgent`.

For more information on PPO agents, see Proximal Policy Optimization Agents.

For more information on the different types of reinforcement learning agents, see Reinforcement Learning Agents.

## Creation

### Syntax

`opt = rlPPOAgentOptions`

`opt = rlPPOAgentOptions(Name,Value)`

### Description

`opt = rlPPOAgentOptions` creates an `rlPPOAgentOptions` object for use as an argument when creating a PPO agent, using all default settings. You can modify the object properties using dot notation.


`opt = rlPPOAgentOptions(Name,Value)` sets option properties using name-value pairs. For example, `rlPPOAgentOptions('DiscountFactor',0.95)` creates an option set with a discount factor of `0.95`. You can specify multiple name-value pairs. Enclose each property name in quotes.
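For example, the following sketch sets two options at creation and a third with dot notation (the option values shown are arbitrary illustrations):

```matlab
% Sketch: create a PPO options object; the values are arbitrary.
opt = rlPPOAgentOptions('ClipFactor',0.1,'EntropyLossWeight',0.02);
opt.DiscountFactor = 0.97;  % remaining properties keep default values
```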

## Properties


`ExperienceHorizon`

Number of steps the agent interacts with the environment before learning from its experience, specified as a positive integer.

The `ExperienceHorizon` value must be greater than or equal to the `MiniBatchSize` value.
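For example, the following sketch keeps that constraint satisfied when changing both values (the numbers are arbitrary illustrations):

```matlab
% Sketch: ExperienceHorizon must be >= MiniBatchSize.
opt = rlPPOAgentOptions;
opt.ExperienceHorizon = 512;
opt.MiniBatchSize     = 64;   % 64 <= 512, so the constraint holds
```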

`ClipFactor`

Clip factor for limiting the change in each policy update step, specified as a positive scalar less than `1`.

`EntropyLossWeight`

Entropy loss weight, specified as a scalar value between `0` and `1`. A higher loss weight value promotes agent exploration by applying a penalty for being too certain about which action to take. Doing so can help the agent move out of local optima.

For episode step $t$, the entropy loss function, which is added to the loss function for actor updates, is:

$H_t = E \sum_{k=1}^{M} \mu_k(S_t|\theta_\mu) \ln \mu_k(S_t|\theta_\mu)$

Here:

• $E$ is the entropy loss weight.

• $M$ is the number of possible actions.

• $\mu_k(S_t|\theta_\mu)$ is the probability of taking action $A_k$ when in state $S_t$, following the current policy.

When gradients are computed during training, an additional gradient component is computed for minimizing this loss function.
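As a rough numeric illustration of the formula above (the probability values are arbitrary assumptions, not toolbox internals):

```matlab
% Sketch: entropy loss for a discrete policy in one state.
mu = [0.7 0.2 0.1];          % assumed action probabilities mu_k(S_t)
E  = 0.01;                   % entropy loss weight
H  = E * sum(mu .* log(mu)); % entropy loss H_t added to the actor loss
% A more uniform mu makes H more negative, so minimizing the total
% loss penalizes overconfident policies and promotes exploration.
```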

`MiniBatchSize`

Mini-batch size used for each learning epoch, specified as a positive integer. When the agent uses a recurrent neural network, `MiniBatchSize` is treated as the training trajectory length.

The `MiniBatchSize` value must be less than or equal to the `ExperienceHorizon` value.

`NumEpoch`

Number of epochs for which the actor and critic networks learn from the current experience set, specified as a positive integer.

`AdvantageEstimateMethod`

Method for estimating advantage values, specified as one of the following:

• `"gae"` — Generalized advantage estimator

• `"finite-horizon"` — Finite horizon estimation

For more information on these methods, see the training algorithm information in Proximal Policy Optimization Agents.

`GAEFactor`

Smoothing factor for the generalized advantage estimator, specified as a scalar value between `0` and `1`, inclusive. This option applies only when the `AdvantageEstimateMethod` option is `"gae"`.
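As a minimal sketch of how the smoothing factor enters generalized advantage estimation, the following function computes advantages from hypothetical reward and value vectors. The function name, inputs, and implementation are illustrative assumptions, not the agent's internal code:

```matlab
% Sketch: generalized advantage estimation (GAE), saved as gaeAdvantage.m.
function adv = gaeAdvantage(rewards,values,gamma,lambda)
% rewards : 1-by-N vector of rewards r_t
% values  : 1-by-(N+1) vector of critic estimates V(S_t), including
%           the bootstrap value V(S_{N+1})
% gamma   : discount factor (DiscountFactor)
% lambda  : smoothing factor (GAEFactor)
N = numel(rewards);
adv = zeros(1,N);
gae = 0;
for t = N:-1:1
    delta = rewards(t) + gamma*values(t+1) - values(t); % TD error
    gae = delta + gamma*lambda*gae;  % smoothed sum of future TD errors
    adv(t) = gae;
end
end
```

With `lambda` equal to `0`, the estimate reduces to the one-step temporal-difference error; values closer to `1` incorporate more of the future TD errors.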

`UseDeterministicExploitation`

Option to return the action with maximum likelihood for simulation and policy generation, specified as a logical value. When `UseDeterministicExploitation` is set to `true`, the action with maximum likelihood is always used in `sim` and `generatePolicyFunction`, which causes the agent to behave deterministically.

When `UseDeterministicExploitation` is set to `false`, the agent samples actions from probability distributions, which causes the agent to behave stochastically.
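For example, to evaluate a trained agent using its most likely actions, you might toggle the option before calling `sim` (here `agent` and `env` are assumed to be a previously created PPO agent and environment):

```matlab
% Sketch: assumes a trained PPO agent and an environment already exist.
agent.AgentOptions.UseDeterministicExploitation = true; % greedy actions
experience = sim(env,agent);  % the agent now behaves deterministically
```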

`SampleTime`

Sample time of the agent, specified as a positive scalar.

Within a Simulink® environment, the agent executes every `SampleTime` seconds of simulation time.

Within a MATLAB® environment, the agent executes every time the environment advances. However, `SampleTime` is the time interval between consecutive elements in the output experience returned by `sim` or `train`.

`DiscountFactor`

Discount factor applied to future rewards during training, specified as a positive scalar less than or equal to `1`.
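As a quick illustration of how the discount factor weights future rewards (the reward sequence is an arbitrary assumption):

```matlab
% Sketch: discounted return for an assumed constant reward sequence.
rewards = ones(1,5);                              % r_1 ... r_5
gamma   = 0.99;                                   % DiscountFactor
G = sum(gamma.^(0:numel(rewards)-1) .* rewards);  % about 4.90
% A gamma near 1 weights distant rewards almost as heavily as
% immediate ones; a smaller gamma makes the agent more myopic.
```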

## Object Functions

 `rlPPOAgent` Proximal policy optimization reinforcement learning agent

## Examples


Create a PPO agent options object, specifying the experience horizon.

`opt = rlPPOAgentOptions('ExperienceHorizon',256)`

```
opt = 
  rlPPOAgentOptions with properties:

               ExperienceHorizon: 256
                   MiniBatchSize: 128
                      ClipFactor: 0.2000
               EntropyLossWeight: 0.0100
                        NumEpoch: 3
         AdvantageEstimateMethod: "gae"
                       GAEFactor: 0.9500
    UseDeterministicExploitation: 0
                      SampleTime: 1
                  DiscountFactor: 0.9900
```

You can modify options using dot notation. For example, set the agent sample time to `0.5`.

`opt.SampleTime = 0.5;`

## See Also

`rlPPOAgent`

### Topics

• Proximal Policy Optimization Agents

• Reinforcement Learning Agents

Introduced in R2019b