
Create Custom Simulink Environments

To create a custom Simulink® environment, first create a Simulink environment model that represents the world as seen from the agent. Such a system is often referred to as the plant or open-loop system, while the whole (integrated) system that includes both the agent and the environment is often referred to as the closed-loop system.

Your environment model must have an input signal, the action, which influences (through discrete, continuous, or mixed dynamics) its next internal state and its outputs: the observation, the reward, and the is-done signals. The is-done signal is a scalar that indicates the termination of an episode; when its value is true, the simulation stops.

Note

The reward signal at time t must be the one corresponding to the transition between the observation output at time t-1 and the observation output at time t. Therefore, the observation output of the environment corresponds to the signal labeled Next Observation in the agent-environment illustration presented in Reinforcement Learning Environments.

If your observation contains multiple channels, group the signals carried by the channels into a single observation bus. For more information about bus signals, see Explore Simulink Bus Capabilities (Simulink).

For critical considerations on defining reward and observation signals in custom environments, see Define Reward and Observation Signals in Custom Environments.

Once you have created the Simulink model that represents the environment, you must add the RL Agent block to it. You can do so automatically or manually.

  • To automatically create a new closed-loop Simulink model that contains an RL Agent block and references your environment model from its Environment block, use createIntegratedEnv.

    You can specify as input arguments the names of the action, observation, is-done, and reward ports in your environment model. If your action or observation space is finite, you can also specify its possible values (otherwise the signals are assumed to be continuous). The function returns an environment object, the block path of the agent, and the observation and action specifications of the environment. For more information on model referencing, see Model Reference Basics (Simulink). A minimal call is sketched after this list.

  • To manually add the agent to your model, drag and drop the RL Agent block from the Reinforcement Learning Simulink library. Connect the action, observation, reward and is-done signals to the appropriate output and input ports of the block.

    Unless you already have an agent object for this environment in the MATLAB® workspace, you must create specification objects for the action and observation signals using rlNumericSpec (for continuous signals) or rlFiniteSetSpec (for discrete signals). For bus signals, create specifications using bus2RLSpec.

    Once you connect the blocks, create an environment object using rlSimulinkEnv, specifying the model name, the block path to the RL Agent block within the model, and the specification objects for the observation and action channels, respectively. If your agent block already references an agent object in the MATLAB workspace, you do not need to supply the specification objects as input arguments. A minimal sketch of this workflow also appears after this list.

    For an example, see Water Tank Reinforcement Learning Environment Model.
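For reference, the following is a minimal sketch of the automatic workflow. The model names are assumptions, and the port names in the environment model are assumed to match the defaults expected by createIntegratedEnv.

    % Create a closed-loop model that references the environment model
    % "myEnvModel" (hypothetical name) and contains an RL Agent block.
    [env, agentBlk, obsInfo, actInfo] = createIntegratedEnv( ...
        "myEnvModel", "myIntegratedModel");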
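Similarly, the following is a minimal sketch of the manual workflow, assuming a model named "myEnvModel" that already contains a connected RL Agent block. The signal dimensions and action values are illustrative assumptions.

    % Specification objects for one continuous observation channel and one
    % discrete action channel (dimensions and values are assumptions).
    obsInfo = rlNumericSpec([3 1]);
    actInfo = rlFiniteSetSpec([-1 0 1]);

    % Create the environment object from the model name and the block path
    % of the RL Agent block within the model.
    env = rlSimulinkEnv("myEnvModel", "myEnvModel/RL Agent", obsInfo, actInfo);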

Both rlSimulinkEnv and createIntegratedEnv return a custom Simulink environment as a SimulinkEnvWithAgent object. This environment object acts as an interface so that when you call sim or train, these functions in turn call the (compiled) Simulink model associated with the object to generate experiences for the agents. You can use this object to train and simulate agents in the same way as with any other environment.
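For example, given the environment object and its specifications, a minimal and purely illustrative training and simulation sketch looks like the following. The agent type and options are assumptions; any agent compatible with the specifications works.

    % Create a default agent from the environment specifications (the PPO
    % agent here is just an example choice).
    obsInfo = getObservationInfo(env);
    actInfo = getActionInfo(env);
    agent = rlPPOAgent(obsInfo, actInfo);

    % Train, then simulate; both calls run the associated Simulink model
    % to generate experiences.
    trainOpts = rlTrainingOptions(MaxEpisodes=100);
    trainResults = train(agent, env, trainOpts);
    simResults = sim(env, agent);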

You can also create a multiagent Simulink environment. To do so, create a Simulink model that has one action input and one set of outputs (observation, reward, and is-done) for every agent. Then manually add an agent block for each agent. Once you connect the blocks, create an environment object using rlSimulinkEnv. Unless each agent block already references an agent object in the MATLAB workspace, you must supply to rlSimulinkEnv two cell arrays containing the observation and action specification objects, respectively, as input arguments, as shown below. For an example, see Train Multiple Agents to Perform Collaborative Task.
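As an illustration, the following sketch creates a two-agent environment. The model name, block paths, signal dimensions, and action values are assumptions.

    % One observation and one action specification per agent.
    obsInfo1 = rlNumericSpec([4 1]);
    obsInfo2 = rlNumericSpec([4 1]);
    actInfo1 = rlFiniteSetSpec([-1 1]);
    actInfo2 = rlFiniteSetSpec([-1 1]);

    % Pass the agent block paths together with cell arrays of specifications.
    env = rlSimulinkEnv("myMultiAgentModel", ...
        ["myMultiAgentModel/RL Agent1", "myMultiAgentModel/RL Agent2"], ...
        {obsInfo1, obsInfo2}, {actInfo1, actInfo2});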

Your environment can also include third-party functionality. For more information, see Integrate with Existing Simulation or Environment (Simulink).

Algebraic Loops Between Environment and Agent

To avoid (potentially unsolvable) algebraic loops, you must avoid any direct feedthrough (that is, any direct dependency within the same time step) from the action to any of the environment output signals. This is because Simulink treats the agent block as having direct feedthrough from all its inputs (that is, the action output at a given time step is considered to be directly dependent on the observation, reward, and is-done inputs at the same time step).

Additionally, for models created using createIntegratedEnv, the environment block references your environment model. Such a block is also normally treated as a direct feedthrough block unless the Minimize algebraic loop occurrences parameter is enabled.

In general, adding a Delay (Simulink) or Memory (Simulink) block to the action signal between the agent block and environment block removes the algebraic loop. Alternatively you can add delay or memory blocks to all the environment output signals after the environment block. For more information on algebraic loops and how to remove some of them, see Algebraic Loop Concepts (Simulink) and Remove Algebraic Loops (Simulink).
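Although these blocks are typically added interactively in the Simulink Editor, you can also insert one programmatically. The following is a purely illustrative sketch; the model and block names are assumptions.

    % Insert a Memory block on the action line of a hypothetical model.
    mdl = "myIntegratedModel";
    open_system(mdl)
    add_block("simulink/Discrete/Memory", mdl + "/Action Memory");

    % Rewire so that the agent's action output passes through the Memory
    % block before reaching the environment block (names are assumptions).
    delete_line(mdl, "RL Agent/1", "Environment/1");
    add_line(mdl, "RL Agent/1", "Action Memory/1");
    add_line(mdl, "Action Memory/1", "Environment/1");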
