Create Agents Using Reinforcement Learning Designer

The Reinforcement Learning Designer app supports the following types of agents: DQN, DDPG, TD3, and PPO.

To train an agent using Reinforcement Learning Designer, you must first create or import an environment. For more information, see Create MATLAB Environments for Reinforcement Learning Designer and Create Simulink Environments for Reinforcement Learning Designer.
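For instance, you can create an environment at the command line so that it appears in the MATLAB workspace and is available for import. A minimal sketch, assuming Reinforcement Learning Toolbox and its predefined cart-pole environment:

```matlab
% Create a predefined environment in the MATLAB workspace so it is
% available for import in Reinforcement Learning Designer.
env = rlPredefinedEnv("CartPole-Discrete");

% The environment specifications determine which agent algorithms
% are compatible with it.
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);
```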

Create Agent

To create an agent, on the Reinforcement Learning tab, in the Agent section, click New.

Dialog box for specifying options for creating a default agent.

In the Create agent dialog box, specify the following information.

  • Agent name — Specify the name of your agent.

  • Environment — Select an environment that you previously created or imported.

  • Compatible algorithm — Select an agent training algorithm. This list contains only algorithms that are compatible with the environment you select.

The Reinforcement Learning Designer app creates agents with default deep neural network actor and critic representations. You can specify the following options for the default networks.

  • Number of hidden units — Specify the number of units in each fully connected or LSTM layer of the actor and critic networks.

  • Use recurrent neural network — Select this option to create actor and critic representations with recurrent neural networks that contain an LSTM layer.

To create the agent, click OK.

The app adds the new default agent to the Agents pane and opens a document for editing the agent options.
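The steps above have a command-line counterpart, sketched below; the environment and the property values are illustrative, and assume Reinforcement Learning Toolbox.

```matlab
% Command-line counterpart of the Create agent dialog box (a sketch;
% the environment and property values are illustrative).
env = rlPredefinedEnv("CartPole-Discrete");
obsInfo = getObservationInfo(env);
actInfo = getActionInfo(env);

% Mirror the dialog box options: number of hidden units per layer and
% whether to use a recurrent (LSTM) network.
initOpts = rlAgentInitializationOptions( ...
    "NumHiddenUnit",256, ...
    "UseRNN",false);

% DQN is one of the algorithms compatible with this discrete-action
% environment, so it appears in the Compatible algorithm list.
agent = rlDQNAgent(obsInfo,actInfo,initOpts);
```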

Document for viewing and editing properties of new agent.

Import Agent

You can also import an agent from the MATLAB® workspace into Reinforcement Learning Designer. To do so, on the Reinforcement Learning tab, click Import. Then, under Select Agent, select the agent to import.

Select one of the listed agents, which are available in the MATLAB workspace.

The app adds the new imported agent to the Agents pane and opens a document for editing the agent options.

Edit Agent Options

In Reinforcement Learning Designer, you can edit agent options in the corresponding agent document.

Document for viewing and editing properties of new agent.

You can edit the following options for each agent.

  • Agent Options — Agent options, such as the sample time and discount factor. Specify these options for all supported agent types.

  • Exploration Model — Exploration model options. PPO agents do not have an exploration model.

  • Target Policy Smoothing Model — Options for target policy smoothing, which is supported for only TD3 agents.

For more information on these options, see the corresponding agent options object.
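As a sketch, options objects such as the following can be created in the MATLAB workspace and then imported into the app (the property values here are illustrative):

```matlab
% DQN agent options: sample time, discount factor, and the
% epsilon-greedy exploration model.
dqnOpts = rlDQNAgentOptions("SampleTime",1,"DiscountFactor",0.99);
dqnOpts.EpsilonGreedyExploration.EpsilonDecay = 1e-3;

% TD3 agent options: target policy smoothing is specific to TD3 agents.
td3Opts = rlTD3AgentOptions;
td3Opts.TargetPolicySmoothModel.StandardDeviation = 0.2;
```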

You can import agent options from the MATLAB workspace. To create options for an agent, use the options object that corresponds to its agent type. You can also import options that you previously exported from the Reinforcement Learning Designer app.

To import the options, on the corresponding Agent tab, click Import. Then, under Options, select an options object. The app lists only compatible options objects from the MATLAB workspace.

Select one of the listed options objects, which are available in the MATLAB workspace.

The app configures the agent options to match those in the selected options object.

Edit Actor and Critic

You can edit the properties of the actor and critic representations for each agent.

  • DQN agents have only a critic representation.

  • DDPG and PPO agents have an actor representation and a critic representation.

  • TD3 agents have an actor representation and two critic representations. When you modify the critic representation options for a TD3 agent, the changes apply to both critics.

You can also import actor and critic representations from the MATLAB workspace. For more information on creating actor and critic representations, see Create Policy and Value Function Representations. You can also import representations that you previously exported from the Reinforcement Learning Designer app.

To import an actor or critic representation, on the corresponding Agent tab, click Import. Then, under either Actor or Critic, select a representation object with action and observation specifications that are compatible with the specifications of the agent.

Select one of the listed actor or critic representation objects, which are available in the MATLAB workspace.

The app replaces the actor or critic representation in the agent with the selected representation. If you import a critic representation for a TD3 agent, the app replaces the network for both critics.
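At the command line, the analogous replacement uses getCritic and setCritic (or getActor and setActor for agents that have an actor). A sketch, assuming a default DQN agent for a predefined environment:

```matlab
% Sketch: replace an agent's critic representation programmatically.
env = rlPredefinedEnv("CartPole-Discrete");
agent = rlDQNAgent(getObservationInfo(env),getActionInfo(env));

critic = getCritic(agent);        % extract the current critic
% ... inspect or modify the critic here ...
agent = setCritic(agent,critic);  % put the (modified) critic back
```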

Modify Deep Neural Networks

To use a nondefault deep neural network for an actor or critic, you must import the network from the MATLAB workspace. One common strategy is to export the default deep neural network, modify it using the Deep Network Designer app, and then import it back into Reinforcement Learning Designer. For more information on creating deep neural networks for actors and critics, see Create Policy and Value Function Representations.
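A command-line sketch of this export, modify, and import round trip, again assuming a default DQN agent created for a predefined environment:

```matlab
% Export the critic network, optionally edit it, and reinsert it.
env = rlPredefinedEnv("CartPole-Discrete");
agent = rlDQNAgent(getObservationInfo(env),getActionInfo(env));

critic = getCritic(agent);
net = getModel(critic);         % extract the deep neural network
% deepNetworkDesigner(net)      % edit interactively, if desired
critic = setModel(critic,net);  % reinsert the (modified) network
agent = setCritic(agent,critic);
```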

To import a deep neural network, on the corresponding Agent tab, click Import. Then, under either Actor Neural Network or Critic Neural Network, select a network with input and output layers that are compatible with the observation and action specifications of the agent.

Select one of the listed deep neural networks, which are available in the MATLAB workspace.

The app replaces the deep neural network in the corresponding actor or critic representation. If you import a critic network for a TD3 agent, the app replaces the network for both critics.

Export Agents and Agent Components

For a given agent, you can export any of the following to the MATLAB workspace.

  • Agent

  • Agent options

  • Actor or critic representation

  • Actor or critic deep neural network

To export an agent or agent component, on the corresponding Agent tab, click Export. Then, select the item to export.

Select the agent or agent component to export to the MATLAB workspace.

The app saves a copy of the agent or agent component in the MATLAB workspace.
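Once exported, an agent behaves like any agent created at the command line; for example, you can simulate it against an environment. A sketch, in which the default agent below stands in for an exported one:

```matlab
% Simulate an agent exported from Reinforcement Learning Designer.
env = rlPredefinedEnv("CartPole-Discrete");
agent = rlDQNAgent(getObservationInfo(env),getActionInfo(env));

simOpts = rlSimulationOptions("MaxSteps",500);
experience = sim(env,agent,simOpts);
```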

See Also

Related Topics