inspectTrainingResult

Plot training information from a previous training session

    Description

    By default, the train function shows the training progress and results in the Episode Manager during training. If you configure training not to show the Episode Manager, or if you close the Episode Manager after training, you can view the training results using the inspectTrainingResult function, which reopens the Episode Manager. You can also use inspectTrainingResult to view the training results for agents saved during training.
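    For instance, a typical workflow might look like the following sketch; the agent and environment names (agent, env) are placeholders for objects created beforehand, not part of this reference page:

    ```matlab
    % Sketch of a typical workflow (agent and env are placeholder names
    % for an agent and environment created elsewhere).
    opts = rlTrainingOptions;
    opts.Plots = "none";                  % do not show the Episode Manager while training

    trainResults = train(agent,env,opts); % train without the progress display

    % Later, reopen the Episode Manager to review the results.
    inspectTrainingResult(trainResults)
    ```
    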


    inspectTrainingResult(trainResults) opens the Episode Manager and plots the training results from a previous training session.


    inspectTrainingResult(agentResults) opens the Episode Manager and plots the training results from a previously saved agent structure.

    Examples


    View Results from Previous Training Session

    For this example, assume that you have trained the agent in the Train Reinforcement Learning Agent in MDP Environment example and subsequently closed the Episode Manager.

    Load the training information returned by the train function.

    load mdpTrainingStats trainingStats

    Reopen the Episode Manager for this training session.

    inspectTrainingResult(trainingStats)

    View Training Results for Saved Agent

    For this example, load the environment and agent from the Train Reinforcement Learning Agent in MDP Environment example.

    load mdpAgentAndEnvironment

    Specify options for training the agent. Configure the SaveAgentCriteria and SaveAgentValue options to save all agents that achieve an episode reward greater than or equal to 13.

    trainOpts = rlTrainingOptions;
    trainOpts.MaxStepsPerEpisode = 50;
    trainOpts.MaxEpisodes = 50;
    trainOpts.Plots = "none";
    trainOpts.SaveAgentCriteria = "EpisodeReward";
    trainOpts.SaveAgentValue = 13;

    Train the agent. During training, whenever an episode has a reward greater than or equal to 13, a copy of the agent is saved in the savedAgents folder.

    rng('default') % for reproducibility
    trainingStats = train(qAgent,env,trainOpts);

    Load the training results for one of the saved agents. This command loads both the agent and a structure that contains the corresponding training results.

    load savedAgents/Agent30

    View the training results from the saved agent result structure.

    inspectTrainingResult(savedAgentResultStruct)

    The Episode Manager shows the training progress up to the episode in which the agent was saved.

    Input Arguments


    trainResults — Training episode data

    Training episode data, specified as a structure or structure array returned by the train function.

    agentResults — Saved agent results

    Saved agent results, specified as a structure previously saved by the train function. The train function saves agents when you specify the SaveAgentCriteria and SaveAgentValue options in the rlTrainingOptions object used during training.

    When you load a saved agent, the agent and its training results are added to the MATLAB® workspace as saved_agent and savedAgentResultStruct, respectively. To plot the training data for this agent, use the following command.

    inspectTrainingResult(savedAgentResultStruct)

    For multi-agent training, savedAgentResultStruct contains structure fields with training results for all the trained agents.

    Introduced in R2021a