show

Visualize a training result object in a new Reinforcement Learning Training Monitor window

Since R2024a

    Description

    By default, the train function shows the training progress and results in the Reinforcement Learning Training Monitor during training. If you configure training not to show the Reinforcement Learning Training Monitor, or if you close its window, you can view the training results after training using the show function. You can also use show to view the training results for agents saved during training.

    show(trainingResults) visualizes the training result object trainingResults in a new Reinforcement Learning Training Monitor window.

    Examples

    For this example, load the environment and agent for the Train Reinforcement Learning Agent in MDP Environment example.

    load mdpAgentAndEnvironment

    Specify options for training the agent. Stop the episode after 50 steps, and stop training after 50 episodes.

    trOpts = rlTrainingOptions;
    trOpts.MaxStepsPerEpisode = 50;
    trOpts.MaxEpisodes = 50;

    Suppress the display of training progress during training.

    trOpts.Plots = "none";

    Configure the SaveAgentCriteria and SaveAgentValue options to save all agents after episode 30. Agents are saved in the savedAgents folder.

    trOpts.SaveAgentCriteria = "EpisodeCount";
    trOpts.SaveAgentValue = 30;

    Train the agent.

    trRes = train(qAgent,env,trOpts);

    Show the training results in a new Reinforcement Learning Training Monitor window.

    show(trRes)

    Load the file Agent40.mat in the savedAgents folder. The file contains two variables: saved_agent, which contains the Q-learning agent saved at the end of the 40th episode, and savedAgentResult, which contains the training results at the end of the 40th episode.

    load savedAgents/Agent40

    To display the training results, use show.

    show(savedAgentResult)

    The Reinforcement Learning Training Monitor shows the training progress up to the episode in which the agent was saved.
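    As a quick check, and assuming the training result object exposes an EpisodeIndex property listing the completed episodes (property names can vary by release), you can confirm that the saved results stop at the episode in which the agent was saved:

    ```matlab
    % Sketch: verify where the saved training results end.
    % EpisodeIndex is an assumed property name of the result object.
    lastEpisode = savedAgentResult.EpisodeIndex(end);
    fprintf("Saved results cover episodes 1 through %d.\n",lastEpisode)
    ```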

    Input Arguments

    collapse all

    trainingResults — Training results
    rlTrainingResult object | rlMultiAgentTrainingResult object | rlEvolutionStrategyTrainingResult object

    Training results, specified as an rlTrainingResult, rlMultiAgentTrainingResult, or rlEvolutionStrategyTrainingResult object. For more information, see train and trainWithEvolutionStrategy.
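    For illustration, a training result object returned by train can also be inspected programmatically before opening the monitor. The property names used below (EpisodeIndex, EpisodeReward) are assumptions and may differ by release; this is a sketch, not the documented interface:

    ```matlab
    % Sketch: inspect a training result object, then visualize it.
    trRes = train(qAgent,env,trOpts);               % returns an rlTrainingResult object
    disp(trRes.EpisodeIndex(end))                   % number of completed episodes (assumed property)
    plot(trRes.EpisodeIndex,trRes.EpisodeReward)    % quick reward curve without the monitor (assumed properties)
    show(trRes)                                     % open the full Reinforcement Learning Training Monitor
    ```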

    Version History

    Introduced in R2024a