Model Testing Metrics

The Model Testing Dashboard collects metric data from the model design and testing artifacts in a project, such as requirements, models, and test results. Use the metric data to assess the status and quality of your model testing. Each metric in the dashboard measures a different aspect of the quality of the testing of your model and reflects guidelines in industry-recognized software development standards, such as ISO 26262 and DO-178. Use the widgets in the Model Testing Dashboard to see high-level metric results and testing gaps, as described in Explore Status and Quality of Testing Activities Using Model Testing Dashboard.

Alternatively, you can collect metric results programmatically by using the API functions. When you use the API, refer to each metric by its metric ID. For an example, see Collect Metrics on Model Testing Artifacts Programmatically. To return a list of available metric identifiers, use the getAvailableMetricIds function.
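
A minimal sketch of that workflow, assuming a project that is already set up for the dashboard (the project name passed to openProject and the metric chosen for display are placeholders):

    % Open the project and create a metric engine for it.
    openProject("MyDashboardProject");   % hypothetical project name
    engine = metric.Engine();

    % List the metric identifiers that the engine can collect.
    ids = getAvailableMetricIds(engine);

    % Collect results for one metric, then retrieve them.
    execute(engine, "TestCaseStatus");
    results = getMetrics(engine, "TestCaseStatus");
    disp(results(1).Value)               % status of the first test case result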

The model testing metrics are grouped into these categories:

Metrics for Requirements Linked to Tests

The metrics associated with the Requirements Linked to Tests section of the dashboard include:

  • RequirementWithTestCase — Determine whether a requirement is linked to test cases.

  • RequirementWithTestCasePercentage — Calculate the percentage of requirements that are linked to test cases.

  • RequirementWithTestCaseDistribution — Distribution of the number of requirements linked to test cases compared to the number of requirements that are missing test cases.

  • TestCasesPerRequirement — Count the number of test cases linked to each requirement.

  • TestCasesPerRequirementDistribution — Distribution of the number of test cases linked to each requirement.

For more information, see Requirements Linked to Tests.

Metrics for Tests Linked to Requirements

The metrics associated with the Tests Linked to Requirements section of the dashboard include:

  • TestCaseWithRequirement — Determine whether a test case is linked to requirements.

  • TestCaseWithRequirementPercentage — Calculate the fraction of test cases that are linked to requirements.

  • TestCaseWithRequirementDistribution — Distribution of the number of test cases linked to requirements compared to the number of test cases that are missing links to requirements.

  • RequirementsPerTestCase — Count the number of requirements linked to each test case.

  • RequirementsPerTestCaseDistribution — Distribution of the number of requirements linked to each test case.

For more information, see Tests Linked to Requirements.

Metrics for Test Case Breakdown

The metrics associated with the Test Case Breakdown section of the dashboard include:

  • TestCaseType — Return the type of the test case.

  • TestCaseTypeDistribution — Distribution of the types of the test cases for the unit.

  • TestCaseTag — Return the tags for a test case.

  • TestCaseTagDistribution — Distribution of the tags of the test cases for the unit.

For more information, see Test Case Breakdown.

Metrics for Model Test Status

The metrics associated with the Model Test Status section of the dashboard include:

  • TestCaseStatus — Return the status of the test case result.

  • TestCaseStatusPercentage — Calculate the fraction of test cases that passed.

  • TestCaseStatusDistribution — Distribution of the statuses of the test case results for the unit.

  • TestCaseVerificationStatus — Determine whether a test case has pass/fail criteria such as verify statements, verification blocks, custom criteria, and logical or temporal assessments.

  • TestCaseVerificationStatusDistribution — Distribution of the number of test cases that do not have pass/fail criteria compared to the number of test cases that do have pass/fail criteria.

For more information, see Model Test Status.

Metrics for Model Coverage for the Unit

The Model Coverage section of the dashboard shows the aggregated coverage for the unit.

The metrics associated with the Model Coverage section of the dashboard include:

  • ExecutionCoverageBreakdown — Overall model execution coverage achieved, justified, or missed by the tests in the unit.

  • DecisionCoverageBreakdown — Overall model decision coverage achieved, justified, or missed by the tests in the unit.

  • ConditionCoverageBreakdown — Overall model condition coverage achieved, justified, or missed by the tests in the unit.

  • MCDCCoverageBreakdown — Overall model modified condition and decision coverage (MC/DC) achieved, justified, or missed by the tests in the unit.

For more information, see Model Coverage for the Unit.

Metrics for Model Coverage for Each Model in the Unit

When you click a bar in the Model Coverage bar chart, the Metric Details show the coverage for each model in the unit.

The metrics associated with the model coverage for each model in the unit include:

  • ExecutionCoverageFragment — Execution coverage for each model in the unit.

  • DecisionCoverageFragment — Decision coverage for each model in the unit.

  • ConditionCoverageFragment — Condition coverage for each model in the unit.

  • MCDCCoverageFragment — Modified condition/decision coverage (MC/DC) for each model in the unit.

For more information, see Model Coverage for each Model in the Unit.

Metrics for Requirements-Based Tests for the Unit

The Achieved Coverage Ratio section of the dashboard shows the sources of achieved coverage for the unit. The Requirements-Based Tests section shows how much of the overall achieved coverage comes from requirements-based tests.

The metrics associated with the Requirements-Based Tests section of the dashboard include:

  • RequirementsExecutionCoverageBreakdown — Fraction of the overall achieved execution coverage that comes from requirements-based tests in the unit.

  • RequirementsDecisionCoverageBreakdown — Fraction of the overall achieved decision coverage that comes from requirements-based tests in the unit.

  • RequirementsConditionCoverageBreakdown — Fraction of the overall achieved condition coverage that comes from requirements-based tests in the unit.

  • RequirementsMCDCCoverageBreakdown — Fraction of the overall achieved MC/DC coverage that comes from requirements-based tests in the unit.

For more information, see Requirements-Based Tests for the Unit.

Metrics for Requirements-Based Tests for Each Model in the Unit

When you click a bar in the Requirements-Based Tests section, the Metric Details show the coverage ratio for each model in the unit. For requirements-based tests, the coverage ratio is the percentage of the overall achieved coverage that comes from requirements-based tests.

The metrics associated with requirements-based coverage for each model in the unit include:

  • RequirementsExecutionCoverageFragment — Fraction of the overall achieved execution coverage that comes from requirements-based tests in each model in the unit.

  • RequirementsDecisionCoverageFragment — Fraction of the overall achieved decision coverage that comes from requirements-based tests in each model in the unit.

  • RequirementsConditionCoverageFragment — Fraction of the overall achieved condition coverage that comes from requirements-based tests in each model in the unit.

  • RequirementsMCDCCoverageFragment — Fraction of the overall achieved MC/DC coverage that comes from requirements-based tests in each model in the unit.

For more information, see Requirements-Based Tests for each Model in the Unit.

Metrics for Unit-Boundary Tests for the Unit

The Achieved Coverage Ratio section of the dashboard shows the sources of achieved coverage for the unit. The Unit-Boundary Tests section shows how much of the overall achieved coverage comes from unit-boundary tests.

The metrics associated with the Unit-Boundary Tests section of the dashboard include:

  • UnitBoundaryExecutionCoverageBreakdown — Fraction of the overall achieved execution coverage that comes from unit-boundary tests in the unit.

  • UnitBoundaryDecisionCoverageBreakdown — Fraction of the overall achieved decision coverage that comes from unit-boundary tests in the unit.

  • UnitBoundaryConditionCoverageBreakdown — Fraction of the overall achieved condition coverage that comes from unit-boundary tests in the unit.

  • UnitBoundaryMCDCCoverageBreakdown — Fraction of the overall achieved MC/DC coverage that comes from unit-boundary tests in the unit.

For more information, see Unit-Boundary Tests for the Unit.

Metrics for Unit-Boundary Tests for Each Model in the Unit

When you click a bar in the Unit-Boundary Tests section, the Metric Details show the coverage ratio for each model in the unit. For unit-boundary tests, the coverage ratio is the percentage of the overall achieved coverage that comes from unit-boundary tests.

The metrics associated with unit-boundary coverage for each model in the unit include:

  • UnitBoundaryExecutionCoverageFragment — Fraction of the overall achieved execution coverage that comes from unit-boundary tests in each model in the unit.

  • UnitBoundaryDecisionCoverageFragment — Fraction of the overall achieved decision coverage that comes from unit-boundary tests in each model in the unit.

  • UnitBoundaryConditionCoverageFragment — Fraction of the overall achieved condition coverage that comes from unit-boundary tests in each model in the unit.

  • UnitBoundaryMCDCCoverageFragment — Fraction of the overall achieved MC/DC coverage that comes from unit-boundary tests in each model in the unit.

For more information, see Unit-Boundary Tests for each Model in the Unit.

Requirements Linked to Tests

The metrics associated with the Requirements Linked to Tests section of the dashboard are:

RequirementWithTestCase

Determine whether a requirement is linked to test cases.

Metric Information

Metric ID: RequirementWithTestCase

Description:

Use this metric to determine whether a requirement is linked to a test case with a link where the Type is set to Verifies. The metric analyzes only requirements where the Type is set to Functional and that are linked to the unit with a link where the Type is set to Implements.

To collect data for this metric:

  • In the Model Testing Dashboard, click a metric in the Requirements Linked to Tests section and, in the table, see the Test Link Status column.

  • Use getMetrics with the metric ID RequirementWithTestCase.

Collecting data for this metric loads the model file and requires a Requirements Toolbox™ license.

Results:

For this metric, instances of metric.Result return Value as one of these logical outputs:

  • 0 — The requirement is not linked to test cases in the project.

  • 1 — The requirement is linked to at least one test case with a link where the Type is set to Verifies.
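
For instance, a sketch of listing the requirements that are not yet linked to a test case, assuming the project is open; the Artifacts(1).Name access is illustrative:

    engine = metric.Engine();
    execute(engine, "RequirementWithTestCase");
    results = getMetrics(engine, "RequirementWithTestCase");

    % Keep only the results whose logical value is 0 (no verifying test case).
    missing = results([results.Value] == 0);
    for r = missing
        disp(r.Artifacts(1).Name)   % requirement that is missing a test case
    end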

Capabilities and Limitations:

The metric:

  • Analyzes only requirements where the Type is set to Functional and that are linked to the unit with a link where the Type is set to Implements.

  • Counts links to test cases in the project where the link type is set to Verifies, including links to test cases that test other models or subsystems. For each requirement that is linked to test cases, check that the links are to test cases that run on the unit that implements the requirement.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

RequirementWithTestCasePercentage

Calculate the percentage of requirements that are linked to test cases.

Metric Information

Metric ID: RequirementWithTestCasePercentage

Description:

This metric counts the fraction of requirements that are linked to at least one test case with a link where the Type is set to Verifies. The metric analyzes only requirements where the Type is set to Functional and that are linked to a unit with a link where the Type is set to Implements.

This metric calculates the results by using the results of the RequirementWithTestCase metric.

To collect data for this metric:

  • In the Model Testing Dashboard, view the Requirements with Tests widget.

  • Use getMetrics with the metric ID RequirementWithTestCasePercentage.

Collecting data for this metric loads the model file and requires a Requirements Toolbox license.

Results:

For this metric, instances of metric.Result return Value as a fraction structure that contains these fields:

  • Numerator — The number of implemented requirements that are linked to at least one test case.

  • Denominator — The total number of functional requirements implemented in the unit with a link where the Type is set to Implements.
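
A sketch of converting that fraction structure into a percentage, assuming the project is open:

    engine = metric.Engine();
    execute(engine, "RequirementWithTestCasePercentage");
    res  = getMetrics(engine, "RequirementWithTestCasePercentage");

    % Value is a structure with Numerator and Denominator fields.
    frac = res(1).Value;
    pct  = 100 * frac.Numerator / frac.Denominator;
    fprintf("%.1f%% of implemented requirements are linked to tests\n", pct)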

Compliance Thresholds:

The default compliance thresholds for this metric are:

  • Compliant — 100% of unit requirements are linked to test cases

  • Non-Compliant — Less than 100% of unit requirements are linked to test cases

  • Warning — None

Capabilities and Limitations:

The metric:

  • Analyzes only requirements where the Type is set to Functional and that are linked to a unit with a link where the Type is set to Implements.

  • Counts links to test cases in the project where the link type is set to Verifies, including links to test cases that test other models or subsystems. For each requirement that is linked to test cases, check that the links are to test cases that run on the unit that implements the requirement.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

RequirementWithTestCaseDistribution

Distribution of the number of requirements linked to test cases compared to the number of requirements that are missing test cases.

Metric Information

Metric ID: RequirementWithTestCaseDistribution

Description:

Use this metric to count the number of requirements that are linked to test cases and the number of requirements that are missing links to test cases. The metric analyzes only requirements where the Type is set to Functional and that are linked to a unit with a link where the Type is set to Implements. A requirement is linked to a test case if it has a link where the Type is set to Verifies.

This metric returns the result as a distribution of the results of the RequirementWithTestCase metric.

To collect data for this metric:

  • In the Model Testing Dashboard, place your cursor over the Requirements with Tests widget.

  • Use getMetrics with the metric ID RequirementWithTestCaseDistribution.

Collecting data for this metric loads the model file and requires a Requirements Toolbox license.

Results:

For this metric, instances of metric.Result return Value as a distribution structure that contains these fields:

  • BinCounts — The number of requirements in each bin, returned as an integer vector.

  • BinEdges — The logical output results of the RequirementWithTestCase metric, returned as a vector with entries 0 (false) and 1 (true).

The first bin includes requirements that are not linked to test cases. The second bin includes requirements that are linked to at least one test case.
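
A sketch of reading the two bins, assuming the project is open; the bin order follows the BinEdges values described above:

    engine = metric.Engine();
    execute(engine, "RequirementWithTestCaseDistribution");
    res  = getMetrics(engine, "RequirementWithTestCaseDistribution");

    dist = res(1).Value;
    nMissing = dist.BinCounts(1);   % requirements with no linked test case
    nLinked  = dist.BinCounts(2);   % requirements linked to at least one test case
    fprintf("%d requirements missing tests, %d linked\n", nMissing, nLinked)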

Compliance Thresholds:

The default compliance thresholds for this metric are:

  • Compliant — 0 requirements are missing links to test cases

  • Non-Compliant — 1 or more requirements are missing links to test cases

  • Warning — None

Capabilities and Limitations:

The metric:

  • Analyzes only requirements where the Type is set to Functional and that are linked to a unit with a link where the Type is set to Implements.

  • Counts links to test cases in the project where the link type is set to Verifies, including links to test cases that test other models or subsystems. For each requirement that is linked to test cases, check that the links are to test cases that run on the unit that implements the requirement.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

TestCasesPerRequirement

Count the number of test cases linked to each requirement.

Metric Information

Metric ID: TestCasesPerRequirement

Description:

Use this metric to count the number of test cases linked to each requirement. The metric analyzes only requirements where the Type is set to Functional and that are linked to the unit with a link where the Type is set to Implements. A test case is linked to a requirement if it has a link where the Type is set to Verifies.

To collect data for this metric:

  • In the Model Testing Dashboard, click a metric in the section Tests per Requirement to display the results in a table.

  • Use getMetrics with the metric ID TestCasesPerRequirement.

Collecting data for this metric loads the model file and requires a Requirements Toolbox license.

Results:

For this metric, instances of metric.Result return Value as an integer.

Capabilities and Limitations:

The metric:

  • Analyzes only requirements where the Type is set to Functional and that are linked to the unit with a link where the Type is set to Implements.

  • Counts links to test cases in the project where the link type is set to Verifies, including links to test cases that test other models or subsystems. For each requirement that is linked to test cases, check that the links are to test cases that run on the unit that implements the requirement.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

TestCasesPerRequirementDistribution

Distribution of the number of test cases linked to each requirement.

Metric Information

Metric ID: TestCasesPerRequirementDistribution

Description:

This metric returns a distribution of the number of test cases linked to each requirement. Use this metric to determine if requirements are linked to a disproportionate number of test cases. The metric analyzes only requirements where the Type is set to Functional and that are linked to the unit with a link where the Type is set to Implements. A test case is linked to a requirement if it has a link where the Type is set to Verifies.

This metric returns the result as a distribution of the results of the TestCasesPerRequirement metric.

To collect data for this metric:

  • In the Model Testing Dashboard, view the Tests per Requirement widget.

  • Use getMetrics with the metric ID TestCasesPerRequirementDistribution.

Collecting data for this metric loads the model file and requires a Requirements Toolbox license.

Results:

For this metric, instances of metric.Result return Value as a distribution structure that contains these fields:

  • BinCounts — The number of requirements in each bin, returned as an integer vector.

  • BinEdges — Bin edges for the number of test cases linked to each requirement, returned as an integer vector. BinEdges(1) is the left edge of the first bin, and BinEdges(end) is the right edge of the last bin. The length of BinEdges is one more than the length of BinCounts.

The bins in the result of this metric correspond to the bins 0, 1, 2, 3, and >3 in the Tests per Requirement widget.
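
A sketch of printing the histogram behind the widget, assuming the project is open:

    engine = metric.Engine();
    execute(engine, "TestCasesPerRequirementDistribution");
    res  = getMetrics(engine, "TestCasesPerRequirementDistribution");

    dist = res(1).Value;
    for k = 1:numel(dist.BinCounts)
        fprintf("[%d, %d): %d requirements\n", ...
            dist.BinEdges(k), dist.BinEdges(k+1), dist.BinCounts(k));
    end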

Compliance Thresholds:

This metric does not have predefined thresholds. Consequently, the compliance threshold overlay icon appears when you click Uncategorized in the Overlays section of the toolstrip.

Capabilities and Limitations:

The metric:

  • Analyzes only requirements where the Type is set to Functional and that are linked to the unit with a link where the Type is set to Implements.

  • Counts links to test cases in the project where the link type is set to Verifies, including links to test cases that test other models or subsystems. For each requirement that is linked to test cases, check that the links are to test cases that run on the unit that implements the requirement.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

Tests Linked to Requirements

The metrics associated with the Tests Linked to Requirements section of the dashboard are:

TestCaseWithRequirement

Determine whether a test case is linked to requirements.

Metric Information

Metric ID: TestCaseWithRequirement

Description:

Use this metric to determine whether a test case is linked to a requirement with a link where the Type is set to Verifies. The metric analyzes only test cases that run on the model or subsystems in the unit for which you collect metric data.

To collect data for this metric:

  • In the Model Testing Dashboard, click a metric in the Tests Linked to Requirements section and, in the table, see the Requirement Link Status column.

  • Use getMetrics with the metric ID TestCaseWithRequirement.

Collecting data for this metric loads the model file and requires a Simulink® Test™ license.

Results:

For this metric, instances of metric.Result return Value as one of these logical outputs:

  • 0 — The test case is not linked to requirements that are implemented in the unit.

  • 1 — The test case is linked to at least one requirement with a link where the Type is set to Verifies.

Capabilities and Limitations:

The metric:

  • Analyzes only test cases in the project that test:

    • Unit models

    • Atomic subsystems

    • Atomic subsystem references

    • Atomic Stateflow® charts

    • Atomic MATLAB® Function blocks

    • Referenced models

  • Counts only links where the Type is set to Verifies that link to requirements where the Type is set to Functional. This includes links to requirements that are not linked to the unit or are linked to other units. For each test case that is linked to requirements, check that the links are to requirements that are implemented by the unit that the test case runs on.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

TestCaseWithRequirementPercentage

Calculate the fraction of test cases that are linked to requirements.

Metric Information

Metric ID: TestCaseWithRequirementPercentage

Description:

This metric counts the fraction of test cases that are linked to at least one requirement with a link where the Type is set to Verifies. The metric analyzes only test cases that run on the model or subsystems in the unit for which you collect metric data.

This metric calculates the results by using the results of the TestCaseWithRequirement metric.

To collect data for this metric:

  • In the Model Testing Dashboard, view the Tests with Requirements widget.

  • Use getMetrics with the metric ID TestCaseWithRequirementPercentage.

Collecting data for this metric loads the model file and requires a Simulink Test license.

Results:

For this metric, instances of metric.Result return Value as a fraction structure that contains these fields:

  • Numerator — The number of test cases that are linked to at least one requirement with a link where the Type is set to Verifies.

  • Denominator — The total number of test cases that test the unit.

Compliance Thresholds:

The default compliance thresholds for this metric are:

  • Compliant — 100% of test cases are linked to requirements

  • Non-Compliant — Less than 100% of test cases are linked to requirements

  • Warning — None

Capabilities and Limitations:

The metric:

  • Analyzes only test cases in the project that test:

    • Unit models

    • Atomic subsystems

    • Atomic subsystem references

    • Atomic Stateflow charts

    • Atomic MATLAB Function blocks

    • Referenced models

  • Counts only links where the Type is set to Verifies that link to requirements where the Type is set to Functional. This includes links to requirements that are not linked to the unit or are linked to other units. For each test case that is linked to requirements, check that the links are to requirements that are implemented by the unit that the test case runs on.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

TestCaseWithRequirementDistribution

Distribution of the number of test cases linked to requirements compared to the number of test cases that are missing links to requirements.

Metric Information

Metric ID: TestCaseWithRequirementDistribution

Description:

Use this metric to count the number of test cases that are linked to requirements and the number of test cases that are missing links to requirements. The metric analyzes only test cases that run on the model or subsystems in the unit for which you collect metric data. A test case is linked to a requirement if it has a link where the Type is set to Verifies.

This metric returns the result as a distribution of the results of the TestCaseWithRequirement metric.

To collect data for this metric:

  • In the Model Testing Dashboard, place your cursor over the Tests with Requirements widget.

  • Use getMetrics with the metric ID TestCaseWithRequirementDistribution.

Collecting data for this metric loads the model file and requires a Simulink Test license.

Results:

For this metric, instances of metric.Result return the Value as a distribution structure that contains these fields:

  • BinCounts — The number of test cases in each bin, returned as an integer vector.

  • BinEdges — The logical output results of the TestCaseWithRequirement metric, returned as a vector with entries 0 (false) and 1 (true).

The first bin includes test cases that are not linked to requirements. The second bin includes test cases that are linked to at least one requirement.

Compliance Thresholds:

The default compliance thresholds for this metric are:

  • Compliant — 0 unit tests are missing links to requirements

  • Non-Compliant — 1 or more unit tests are missing links to requirements

  • Warning — None

Capabilities and Limitations:

The metric:

  • Analyzes only test cases in the project that test:

    • Unit models

    • Atomic subsystems

    • Atomic subsystem references

    • Atomic Stateflow charts

    • Atomic MATLAB Function blocks

    • Referenced models

  • Counts only links where the Type is set to Verifies that link to requirements where the Type is set to Functional. This includes links to requirements that are not linked to the unit or are linked to other units. For each test case that is linked to requirements, check that the links are to requirements that are implemented by the unit that the test case runs on.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

RequirementsPerTestCase

Count the number of requirements linked to each test case.

Metric Information

Metric ID: RequirementsPerTestCase

Description:

Use this metric to count the number of requirements linked to each test case. The metric analyzes only test cases that run on the model or subsystems in the unit for which you collect metric data. A test case is linked to a requirement if it has a link where the Type is set to Verifies.

To collect data for this metric:

  • In the Model Testing Dashboard, click a metric in the section Requirements per Test to display the results in a table.

  • Use getMetrics with the metric ID RequirementsPerTestCase.

Collecting data for this metric loads the model file and requires a Simulink Test license.

Results:

For this metric, instances of metric.Result return Value as an integer.

Capabilities and Limitations:

The metric:

  • Analyzes only test cases in the project that test:

    • Unit models

    • Atomic subsystems

    • Atomic subsystem references

    • Atomic Stateflow charts

    • Atomic MATLAB Function blocks

    • Referenced models

  • Counts only links where the Type is set to Verifies that link to requirements where the Type is set to Functional. This includes links to requirements that are not linked to the unit or are linked to other units. For each test case that is linked to requirements, check that the links are to requirements that are implemented by the unit that the test case runs on.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

RequirementsPerTestCaseDistribution

Distribution of the number of requirements linked to each test case.

Metric Information

Metric ID: RequirementsPerTestCaseDistribution

Description:

This metric returns a distribution of the number of requirements linked to each test case. Use this metric to determine if test cases are linked to a disproportionate number of requirements. The metric analyzes only test cases that run on the model or subsystems in the unit for which you collect metric data. A test case is linked to a requirement if it has a link where the Type is set to Verifies.

This metric returns the result as a distribution of the results of the RequirementsPerTestCase metric.

To collect data for this metric:

  • In the Model Testing Dashboard, view the Requirements per Test widget.

  • Use getMetrics with the metric ID RequirementsPerTestCaseDistribution.

Collecting data for this metric loads the model file and requires a Simulink Test license.

Results:

For this metric, instances of metric.Result return Value as a distribution structure that contains these fields:

  • BinCounts — The number of test cases in each bin, returned as an integer vector.

  • BinEdges — Bin edges for the number of requirements linked to each test case, returned as an integer vector. BinEdges(1) is the left edge of the first bin, and BinEdges(end) is the right edge of the last bin. The length of BinEdges is one more than the length of BinCounts.

The bins in the result of this metric correspond to the bins 0, 1, 2, 3, and >3 in the Requirements per Test widget.

Compliance Thresholds:

This metric does not have predefined thresholds. Consequently, the compliance threshold overlay icon appears when you click Uncategorized in the Overlays section of the toolstrip.

Capabilities and Limitations:

The metric:

  • Analyzes only test cases in the project that test:

    • Unit models

    • Atomic subsystems

    • Atomic subsystem references

    • Atomic Stateflow charts

    • Atomic MATLAB Function blocks

    • Referenced models

  • Counts only links where the Type is set to Verifies that link to requirements where the Type is set to Functional. This includes links to requirements that are not linked to the unit or are linked to other units. For each test case that is linked to requirements, check that the links are to requirements that are implemented by the unit that the test case runs on.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

Test Case Breakdown

The metrics associated with the Test Case Breakdown section of the dashboard are:

TestCaseType

Return the type of the test case.

Metric Information

Metric ID: TestCaseType

Description:

This metric returns the type of the test case. A test case is either a baseline, equivalence, or simulation test.

  • Baseline tests compare outputs from a simulation to expected results stored as baseline data.

  • Equivalence tests compare the outputs from two different simulations. Simulations can run in different modes, such as normal simulation and software-in-the-loop.

  • Simulation tests run the system under test and capture simulation data. If the system under test contains blocks that verify simulation, such as Test Sequence and Test Assessment blocks, the pass/fail results are reflected in the simulation test results.

To collect data for this metric:

  • In the Model Testing Dashboard, click a widget in the section Tests by Type to display the results in a table.

  • Use getMetrics with the metric ID TestCaseType.

Collecting data for this metric loads the model file and test files and requires a Simulink Test license.

Results:

For this metric, instances of metric.Result return Value as one of these integer outputs:

  • 0 — Simulation test

  • 1 — Baseline test

  • 2 — Equivalence test
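
A sketch of mapping those integers to readable labels, assuming the project is open; the Artifacts(1).Name access is illustrative:

    engine = metric.Engine();
    execute(engine, "TestCaseType");
    res = getMetrics(engine, "TestCaseType");

    typeNames = ["Simulation" "Baseline" "Equivalence"];   % values 0, 1, 2
    for r = res
        fprintf("%s: %s test\n", r.Artifacts(1).Name, typeNames(r.Value + 1));
    end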

Capabilities and Limitations:

The metric includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

TestCaseTypeDistribution

Distribution of the types of the test cases for the unit.

Metric Information

Metric ID: TestCaseTypeDistribution

Description:

This metric returns a distribution of the types of test cases that run on the unit. A test case is either a baseline, equivalence, or simulation test. Use this metric to determine if there is a disproportionate number of test cases of one type.

  • Baseline tests compare outputs from a simulation to expected results stored as baseline data.

  • Equivalence tests compare the outputs from two different simulations. Simulations can run in different modes, such as normal simulation and software-in-the-loop.

  • Simulation tests run the system under test and capture simulation data. If the system under test contains blocks that verify simulation, such as Test Sequence and Test Assessment blocks, the pass/fail results are reflected in the simulation test results.

This metric returns the result as a distribution of the results of the TestCaseType metric.

To collect data for this metric:

  • In the Model Testing Dashboard, view the Tests by Type widget.

  • Use getMetrics with the metric ID TestCaseTypeDistribution.

Collecting data for this metric loads the model file and requires a Simulink Test license.

Results:

For this metric, instances of metric.Result return Value as a distribution structure that contains these fields:

  • BinCounts — The number of test cases in each bin, returned as an integer vector.

  • BinEdges — The outputs of the TestCaseType metric, returned as an integer vector. The integer outputs represent the three test case types:

    • 0 — Simulation test

    • 1 — Baseline test

    • 2 — Equivalence test

Compliance Thresholds:

This metric does not have predefined thresholds. Consequently, the compliance threshold overlay icon appears when you click Uncategorized in the Overlays section of the toolstrip.

Capabilities and Limitations:

The metric includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

TestCaseTag

Return the tags for a test case.

Metric Information

Metric ID: TestCaseTag

Description:

This metric returns the tags for a test case. You can add custom tags to a test case by using the Test Manager.

To collect data for this metric:

  • In the Model Testing Dashboard, click a widget in the Tests with Tag section to display the results in a table.

  • Use getMetrics with the metric ID TestCaseTag.

Collecting data for this metric loads the model file and test files and requires a Simulink Test license.

Results:

For this metric, instances of metric.Result return Value as a string.

Capabilities and Limitations:

The metric includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

TestCaseTagDistribution

Distribution of the tags of the test cases for the unit.

Metric Information

Metric ID: TestCaseTagDistribution

Description:

This metric returns a distribution of the tags on the test cases that run on the unit. For a test case, you can specify custom tags in a comma-separated list in the Test Manager. Use this metric to determine if there is a disproportionate number of test cases that have a particular tag.

This metric returns the result as a distribution of the results of the TestCaseTag metric.

To collect data for this metric:

  • In the Model Testing Dashboard, view the Tests with Tag widget.

  • Use getMetrics with the metric ID TestCaseTagDistribution.

Collecting data for this metric loads the model file and requires a Simulink Test license.

Results:

For this metric, instances of metric.Result return Value as a distribution structure that contains these fields:

  • BinCounts — The number of test cases in each bin, returned as an integer vector.

  • BinEdges — The bin edges for the tags that are specified for the test cases, returned as a string array.
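
A sketch of tabulating the test-case count per tag, assuming the project is open:

    engine = metric.Engine();
    execute(engine, "TestCaseTagDistribution");
    res  = getMetrics(engine, "TestCaseTagDistribution");

    dist = res(1).Value;
    for k = 1:numel(dist.BinCounts)
        fprintf("%s: %d test cases\n", dist.BinEdges(k), dist.BinCounts(k));
    end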

Compliance Thresholds:

This metric does not have predefined thresholds. Consequently, the compliance threshold overlay icon appears when you click Uncategorized in the Overlays section of the toolstrip.

Capabilities and Limitations:

The metric includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

Model Test Status

The metrics associated with the Model Test Status section of the dashboard are:

TestCaseStatus

Return the status of the test case result.

Metric Information

Metric ID: TestCaseStatus

Description:

This metric returns the status of the test case result. A test status is passed, failed, disabled, or untested.

To collect data for this metric:

  • In the Model Testing Dashboard, click a widget in the Model Test Status section to display the results in a table.

  • Use getMetrics with the metric ID TestCaseStatus.

Collecting data for this metric loads the model file and test result files and requires a Simulink Test license.

Results:

For this metric, instances of metric.Result return Value as one of these integer outputs:

  • 0 — The test case failed.

  • 1 — The test case passed.

  • 2 — The test case was disabled.

  • 3 — The test case was not run (untested).

Capabilities and Limitations:

The metric:

  • Includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.

  • Does not count the status of test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode. The metric shows these test cases as untested.

  • Reflects the status of the whole test case if the test case includes multiple iterations.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

TestCaseStatusPercentage

Calculate the fraction of test cases that passed.

Metric Information

Metric ID: TestCaseStatusPercentage

Description:

This metric counts the fraction of test cases that passed in the test results.

This metric calculates the results by using the results of the TestCaseStatus metric.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Model Test Status section, place your cursor over the Passed widget.

  • Use getMetrics with the metric ID TestCaseStatusPercentage.

Collecting data for this metric loads the model file and requires a Simulink Test license.

Results:

For this metric, instances of metric.Result return Value as a fraction structure that contains these fields:

  • Numerator — The number of test cases that passed.

  • Denominator — The total number of test cases that test the unit.

Compliance Thresholds:

The default compliance thresholds for this metric are:

  • Compliant — 100% of test cases passed

  • Non-Compliant — Less than 100% of test cases passed

  • Warning — None

Capabilities and Limitations:

The metric:

  • Includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.

  • Does not count the status of test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode. The metric shows these test cases as untested.

  • Reflects the status of the whole test case if the test case includes multiple iterations.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

TestCaseStatusDistribution

Distribution of the statuses of the test case results for the unit.

Metric Information

Metric ID: TestCaseStatusDistribution

Description:

This metric returns a distribution of the status of the results of test cases that run on the unit. A test status is passed, failed, disabled, or untested.

This metric returns the result as a distribution of the results of the TestCaseStatus metric.

To collect data for this metric:

  • In the Model Testing Dashboard, use the widgets in the Model Test Status section to see the results.

  • Use getMetrics with the metric ID TestCaseStatusDistribution.

Collecting data for this metric loads the model file and requires a Simulink Test license.

Results:

For this metric, instances of metric.Result return Value as a distribution structure that contains these fields:

  • BinCounts — The number of test cases in each bin, returned as an integer vector.

  • BinEdges — The outputs of the TestCaseStatus metric, returned as an integer vector. The integer outputs represent the test result statuses:

    • 0 — The test case failed.

    • 1 — The test case passed.

    • 2 — The test case was disabled.

    • 3 — The test case was not run (untested).

Compliance Thresholds:

The default compliance thresholds for this metric are:

  • Compliant:

    • 0 unit tests are untested.

    • 0 unit tests failed.

    • 0 unit tests are disabled.

  • Non-Compliant — 1 or more unit tests are untested, disabled, or have failed.

  • Warning — None

Capabilities and Limitations:

The metric:

  • Includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.

  • Does not count the status of test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode. The metric shows these test cases as untested.

  • Reflects the status of the whole test case if the test case includes multiple iterations.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

TestCaseVerificationStatus

Determine whether a test case has pass/fail criteria such as verify statements, verification blocks, custom criteria, and logical or temporal assessments.

Metric Information

Metric ID: TestCaseVerificationStatus

Description:

Use this metric to determine whether a test case has pass/fail criteria.

A test case has pass/fail criteria if it has at least one of the following:

  • An executed verify statement

  • An executed logical or temporal assessment

  • Custom criteria that have a pass/fail status in the Simulink Test Manager

  • Baseline criteria that determine the pass/fail status of the test case

To collect data for this metric:

  • In the Model Testing Dashboard, in the Model Test Status section, click the Inconclusive widget to view the TestCaseVerificationStatus results in a table.

  • Use getMetrics with the metric ID TestCaseVerificationStatus.

Collecting data for this metric loads the model file and test result files and requires a Simulink Test license.

Results:

For this metric, instances of metric.Result return Value as one of these integer outputs:

  • 0 — The test case is missing pass/fail criteria.

  • 1 — The test case has pass/fail criteria.

  • 2 — The test case was not run.

Capabilities and Limitations:

The metric:

  • Includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.

  • Does not count the pass/fail criteria of test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode. The metric shows these test cases as Missing Pass/Fail Criteria.

  • Reflects the status of the whole test case if the test case includes multiple iterations.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

TestCaseVerificationStatusDistribution

Distribution of the number of test cases that do not have pass/fail criteria compared to the number of test cases that do have pass/fail criteria.

Metric Information

Metric ID: TestCaseVerificationStatusDistribution

Description:

Use this metric to count the number of test cases that do not have pass/fail criteria and the number of test cases that do have pass/fail criteria.

A test case has pass/fail criteria if it has at least one of the following:

  • An executed verify statement

  • An executed logical or temporal assessment

  • Custom criteria that have a pass/fail status in the Simulink Test Manager

  • Baseline criteria that determine the pass/fail status of the test case

This metric returns the result as a distribution of the results of the TestCaseVerificationStatus metric.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Model Test Status section, place your cursor over the Inconclusive widget.

  • Use getMetrics with the metric ID TestCaseVerificationStatusDistribution.

Collecting data for this metric loads the model file and test files and requires a Simulink Test license.

Results:

For this metric, instances of metric.Result return Value as a distribution structure that contains these fields:

  • BinCounts — The number of test cases in each bin, returned as an integer vector.

  • BinEdges — The outputs of the TestCaseVerificationStatus metric, returned as an integer vector. The integer outputs represent the three test case verification statuses:

    • 0 — The test case is missing pass/fail criteria.

    • 1 — The test case has pass/fail criteria.

    • 2 — The test case was not run.

Compliance Thresholds:

The default compliance thresholds for this metric are:

  • Compliant — 0 unit tests are missing pass/fail criteria

  • Non-Compliant — 1 or more unit tests do not have pass/fail criteria

  • Warning — None

Capabilities and Limitations:

The metric:

  • Includes only test cases in the project that test the model or subsystems in the unit for which you collect metric data.

  • Does not count the pass/fail criteria of test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode. The metric shows these test cases as Missing Pass/Fail Criteria.

  • Reflects the status of the whole test case if the test case includes multiple iterations.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

Model Coverage for the Unit

The Model Coverage section of the dashboard shows the aggregated coverage for the unit.

The metrics associated with the Model Coverage section of the dashboard are:

ExecutionCoverageBreakdown

Overall model execution coverage achieved, justified, or missed by the tests in the unit.

Metric Information

Metric ID: ExecutionCoverageBreakdown

Description:

This metric returns the model execution coverage measured in the test results, aggregated across the unit. The metric result includes the percentage of execution coverage achieved by the test cases, the percentage of coverage justified in coverage filters, and the percentage of execution coverage missed by the tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Model Coverage section, place your cursor over the bars in the Execution widget.

  • Use getMetrics with the metric ID ExecutionCoverageBreakdown.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage™ license.

Results:

For this metric, instances of metric.Result return the Value as a double vector that contains these elements.

  • Value(1) — The percentage of execution coverage achieved by the tests.

  • Value(2) — The percentage of execution coverage justified by coverage filters.

  • Value(3) — The percentage of execution coverage missed by the tests.
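
A sketch of checking the aggregated result for missed coverage, assuming the project is open:

    engine = metric.Engine();
    execute(engine, "ExecutionCoverageBreakdown");
    res = getMetrics(engine, "ExecutionCoverageBreakdown");

    cov = res(1).Value;   % [achieved, justified, missed], in percent
    if cov(3) > 0
        fprintf("Missed execution coverage: %.1f%%\n", cov(3));
    else
        disp("No missed execution coverage: achieved plus justified covers the model")
    end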

Compliance Thresholds:

The default compliance thresholds for this metric are:

  • Compliant — Test results return 0% missed coverage

  • Non-Compliant — Test results return missed coverage

  • Warning — None

Capabilities and Limitations:

The metric:

  • Returns aggregated coverage results.

  • Does not include coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

  • Returns 100% coverage for models that do not have execution points.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

DecisionCoverageBreakdown

Overall model decision coverage achieved, justified, or missed by the tests in the unit.

Metric Information

Metric ID: DecisionCoverageBreakdown

Description:

This metric returns the model decision coverage measured in the test results, aggregated across the unit. The metric result includes the percentage of decision coverage achieved by the test cases, the percentage of coverage justified in coverage filters, and the percentage of decision coverage missed by the tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Model Coverage section, place your cursor over the bars in the Decision widget.

  • Use getMetrics with the metric ID DecisionCoverageBreakdown.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a double vector that contains these elements:

  • Value(1) — The percentage of decision coverage achieved by the tests.

  • Value(2) — The percentage of decision coverage justified by coverage filters.

  • Value(3) — The percentage of decision coverage missed by the tests.

Compliance Thresholds:

The default compliance thresholds for this metric are:

  • Compliant — Test results return 0% missed coverage

  • Non-Compliant — Test results return missed coverage

  • Warning — None

Capabilities and Limitations:

The metric:

  • Returns aggregated coverage results.

  • Does not include coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

  • Returns 100% coverage for models that do not have decision points.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

ConditionCoverageBreakdown

Overall model condition coverage achieved, justified, or missed by the tests in the unit.

Metric Information

Metric ID: ConditionCoverageBreakdown

Description:

This metric returns the model condition coverage measured in the test results, aggregated across the unit. The metric result includes the percentage of condition coverage achieved by the test cases, the percentage of coverage justified in coverage filters, and the percentage of condition coverage missed by the tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Model Coverage section, place your cursor over the bars in the Condition widget.

  • Use getMetrics with the metric ID ConditionCoverageBreakdown.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a double vector that contains these elements:

  • Value(1) — The percentage of condition coverage achieved by the tests.

  • Value(2) — The percentage of condition coverage justified by coverage filters.

  • Value(3) — The percentage of condition coverage missed by the tests.

Compliance Thresholds:

The default compliance thresholds for this metric are:

  • Compliant — Test results return 0% missed coverage

  • Non-Compliant — Test results return missed coverage

  • Warning — None

Capabilities and Limitations:

The metric:

  • Returns aggregated coverage results.

  • Does not include coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

  • Returns 100% coverage for models that do not have condition points.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

MCDCCoverageBreakdown

Overall model modified condition and decision coverage (MC/DC) achieved, justified, or missed by the tests in the unit.

Metric Information

Metric ID: MCDCCoverageBreakdown

Description:

This metric returns the modified condition and decision coverage (MC/DC) measured in the test results, aggregated across the unit. The metric result includes the percentage of MC/DC coverage achieved by the test cases, the percentage of coverage justified in coverage filters, and the percentage of MC/DC coverage missed by the tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Model Coverage section, place your cursor over the bars in the MC/DC widget.

  • Use getMetrics with the metric ID MCDCCoverageBreakdown.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a double vector that contains these elements:

  • Value(1) — The percentage of MC/DC coverage achieved by the tests.

  • Value(2) — The percentage of MC/DC coverage justified by coverage filters.

  • Value(3) — The percentage of MC/DC coverage missed by the tests.

Compliance Thresholds:

The default compliance thresholds for this metric are:

  • Compliant — Test results return 0% missed coverage

  • Non-Compliant — Test results return missed coverage

  • Warning — None

Capabilities and Limitations:

The metric:

  • Returns aggregated coverage results.

  • Does not include coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

  • Returns 100% coverage for models that do not have condition/decision points.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

Model Coverage for each Model in the Unit

When you click a bar in the Model Coverage bar chart, the Metric Details show the coverage for each model in the unit.

The metrics associated with the model coverage for each model in the unit are:

ExecutionCoverageFragment

Execution coverage for each model in the unit.

Metric Information

Metric ID: ExecutionCoverageFragment

Description:

This metric returns the model execution coverage measured in the test results for each model in the unit. The metric result includes the percentage of execution coverage achieved by the test cases, the percentage of coverage justified in coverage filters, and the percentage of execution coverage missed by the tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Model Coverage section, click one of the bars in the Execution widget.

  • Use getMetrics with the metric ID ExecutionCoverageFragment.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a double vector that contains these elements.

  • Value(1) — The percentage of execution coverage achieved by the tests.

  • Value(2) — The percentage of execution coverage justified by coverage filters.

  • Value(3) — The percentage of execution coverage missed by the tests.

Compliance Thresholds:

This metric does not have predefined thresholds.

Capabilities and Limitations:

The metric:

  • Does not include coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

  • Returns 100% coverage for models that do not have execution points.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

DecisionCoverageFragment

Decision coverage for each model in the unit.

Metric Information

Metric ID: DecisionCoverageFragment

Description:

This metric returns the model decision coverage measured in the test results for each model in the unit. The metric result includes the percentage of decision coverage achieved by the test cases, the percentage of coverage justified in coverage filters, and the percentage of decision coverage missed by the tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Model Coverage section, click one of the bars in the Decision widget.

  • Use getMetrics with the metric ID DecisionCoverageFragment.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a double vector that contains these elements:

  • Value(1) — The percentage of decision coverage achieved by the tests.

  • Value(2) — The percentage of decision coverage justified by coverage filters.

  • Value(3) — The percentage of decision coverage missed by the tests.

Compliance Thresholds:

This metric does not have predefined thresholds.

Capabilities and Limitations:

The metric:

  • Does not include coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

  • Returns 100% coverage for models that do not have decision points.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

ConditionCoverageFragment

Condition coverage for each model in the unit.

Metric Information

Metric ID: ConditionCoverageFragment

Description:

This metric returns the model condition coverage measured in the test results for each model in the unit. The metric result includes the percentage of condition coverage achieved by the test cases, the percentage of coverage justified in coverage filters, and the percentage of condition coverage missed by the tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Model Coverage section, click one of the bars in the Condition widget.

  • Use getMetrics with the metric ID ConditionCoverageFragment.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a double vector that contains these elements:

  • Value(1) — The percentage of condition coverage achieved by the tests.

  • Value(2) — The percentage of condition coverage justified by coverage filters.

  • Value(3) — The percentage of condition coverage missed by the tests.

Compliance Thresholds:

This metric does not have predefined thresholds.

Capabilities and Limitations:

The metric:

  • Does not include coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

  • Returns 100% coverage for models that do not have condition points.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

MCDCCoverageFragment

Modified condition/decision coverage (MC/DC) for each model in the unit.

Metric Information

Metric ID: MCDCCoverageFragment

Description:

This metric returns the modified condition/decision (MC/DC) coverage measured in the test results for each model in the unit. The metric result includes the percentage of MC/DC coverage achieved by the test cases, the percentage of coverage justified in coverage filters, and the percentage of MC/DC coverage missed by the tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Model Coverage section, click one of the bars in the MC/DC widget.

  • Use getMetrics with the metric ID MCDCCoverageFragment.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a double vector that contains these elements:

  • Value(1) — The percentage of MC/DC coverage achieved by the tests.

  • Value(2) — The percentage of MC/DC coverage justified by coverage filters.

  • Value(3) — The percentage of MC/DC coverage missed by the tests.

Compliance Thresholds:

This metric does not have predefined thresholds.

Capabilities and Limitations:

The metric:

  • Does not include coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

  • Returns 100% coverage for models that do not have condition/decision points.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

Requirements-Based Tests for the Unit

The Achieved Coverage Ratio section of the dashboard shows the sources of achieved coverage for the unit. The Requirements-Based Tests section shows how much of the overall achieved coverage comes from requirements-based tests.

The metrics associated with the Requirements-Based Tests section of the dashboard are:

RequirementsExecutionCoverageBreakdown

Fraction of the overall achieved execution coverage that comes from requirements-based tests.

Metric Information

Metric ID: RequirementsExecutionCoverageBreakdown

Description:

This metric returns the fraction of overall achieved execution coverage that comes from requirements-based tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Achieved Coverage Ratio section, point to the widget for Requirements-Based Tests, point to the three dots, and click the run button.

  • Use getMetrics with the metric ID RequirementsExecutionCoverageBreakdown.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a structure that contains these fields (see the sketch after this list):

  • Numerator — The number of requirements-based tests that contribute to the overall achieved execution coverage.

  • Denominator — The total number of tests (requirements-based and non-requirements-based) that contribute to the overall achieved execution coverage.
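As a sketch of how you might consume these fields, the following code computes the percentage of achieved execution coverage that comes from requirements-based tests and compares it against the default 100% compliance threshold described below. The engine setup with metric.Engine and execute, and the explicit threshold check, are illustrative assumptions rather than part of the metric API.

    % Collect the breakdown metric for the unit.
    metricEngine = metric.Engine();
    execute(metricEngine, "RequirementsExecutionCoverageBreakdown");
    results = getMetrics(metricEngine, "RequirementsExecutionCoverageBreakdown");

    for n = 1:numel(results)
        value = results(n).Value;
        if value.Denominator == 0
            continue   % no tests contribute to the achieved coverage
        end
        ratio = 100 * value.Numerator / value.Denominator;
        % Default thresholds: the result is Compliant only at 100%.
        if value.Numerator == value.Denominator
            status = "Compliant";
        else
            status = "Non-Compliant";
        end
        fprintf("%.1f%% of achieved execution coverage comes from requirements-based tests (%s)\n", ...
            ratio, status);
    end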

Compliance Thresholds:

The default compliance thresholds for this metric are:

  • Compliant — 100% of the overall achieved execution coverage comes from requirements-based tests

  • Non-Compliant — Less than 100% of the overall achieved execution coverage comes from requirements-based tests

  • Warning — None

Capabilities and Limitations:

The metric:

  • Analyzes the overall aggregated coverage results.

  • Does not analyze coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

RequirementsDecisionCoverageBreakdown

Fraction of the overall achieved decision coverage that comes from requirements-based tests.

Metric Information

Metric ID: RequirementsDecisionCoverageBreakdown

Description:

This metric returns the fraction of overall achieved decision coverage that comes from requirements-based tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Achieved Coverage Ratio section, point to the widget for Requirements-Based Tests, point to the three dots, and click the run button.

  • Use getMetrics with the metric ID RequirementsDecisionCoverageBreakdown.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a structure that contains these fields:

  • Numerator — The number of requirements-based tests that contribute to the overall achieved decision coverage.

  • Denominator — The total number of tests (requirements-based and non-requirements-based) that contribute to the overall achieved decision coverage.

Compliance Thresholds:

The default compliance thresholds for this metric are:

  • Compliant — 100% of the overall achieved decision coverage comes from requirements-based tests

  • Non-Compliant — Less than 100% of the overall achieved decision coverage comes from requirements-based tests

  • Warning — None

Capabilities and Limitations:

The metric:

  • Analyzes the overall aggregated coverage results.

  • Does not analyze coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

RequirementsConditionCoverageBreakdown

Fraction of the overall achieved condition coverage that comes from requirements-based tests.

Metric Information

Metric ID: RequirementsConditionCoverageBreakdown

Description:

This metric returns the fraction of overall achieved condition coverage that comes from requirements-based tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Achieved Coverage Ratio section, point to the widget for Requirements-Based Tests, point to the three dots, and click the run button.

  • Use getMetrics with the metric ID RequirementsConditionCoverageBreakdown.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a structure that contains these fields:

  • Numerator — The number of requirements-based tests that contribute to the overall achieved condition coverage.

  • Denominator — The total number of tests (requirements-based and non-requirements-based) that contribute to the overall achieved condition coverage.

Compliance Thresholds:

The default compliance thresholds for this metric are:

  • Compliant — 100% of the overall achieved condition coverage comes from requirements-based tests

  • Non-Compliant — Less than 100% of the overall achieved condition coverage comes from requirements-based tests

  • Warning — None

Capabilities and Limitations:

The metric:

  • Analyzes the overall aggregated coverage results.

  • Does not analyze coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

RequirementsMCDCCoverageBreakdown

Fraction of the overall achieved MC/DC coverage that comes from requirements-based tests.

Metric Information

Metric ID: RequirementsMCDCCoverageBreakdown

Description:

This metric returns the fraction of overall achieved MC/DC coverage that comes from requirements-based tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Achieved Coverage Ratio section, point to the widget for Requirements-Based Tests, point to the three dots, and click the run button.

  • Use getMetrics with the metric ID RequirementsMCDCCoverageBreakdown.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a structure that contains these fields:

  • Numerator — The number of requirements-based tests that contribute to the overall achieved MC/DC coverage.

  • Denominator — The total number of tests (requirements-based and non-requirements-based) that contribute to the overall achieved MC/DC coverage.

Compliance Thresholds:

The default compliance thresholds for this metric are:

  • Compliant — 100% of the overall achieved MC/DC coverage comes from requirements-based tests

  • Non-Compliant — Less than 100% of the overall achieved MC/DC coverage comes from requirements-based tests

  • Warning — None

Capabilities and Limitations:

The metric:

  • Analyzes the overall aggregated coverage results.

  • Does not analyze coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

Requirements-Based Tests for each Model in the Unit

When you click on a bar in the Requirements-Based Tests section, the Metric Details show the coverage ratio for each model in the unit. For requirements-based tests, the coverage ratio is the percentage of the overall achieved coverage that comes from requirements-based tests. Requirements-based tests are test cases that are linked to at least one requirement in the project.

The metrics associated with requirements-based coverage for each model in the unit are:

RequirementsExecutionCoverageFragment

Fraction of the overall achieved execution coverage that comes from requirements-based tests.

Metric Information

Metric ID: RequirementsExecutionCoverageFragment

Description:

This metric returns the fraction of overall achieved execution coverage that comes from requirements-based tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Achieved Coverage Ratio section, point to the widget for Requirements-Based Tests, point to the three dots, and click the run button.

  • Use getMetrics with the metric ID RequirementsExecutionCoverageFragment.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a structure that contains these fields (see the sketch after this list):

  • Numerator — The number of requirements-based tests that contribute to the overall achieved execution coverage.

  • Denominator — The total number of tests (requirements-based and non-requirements-based) that contribute to the overall achieved execution coverage.
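Because this metric returns one result per model in the unit, it can help to report which model each result describes. The sketch below does that using the Artifacts property of metric.Result, as used in the programmatic collection example; the Name field, the metric.Engine setup, and the execute call are assumptions for illustration and may differ in your release.

    % Report the requirements-based execution coverage ratio per model.
    metricEngine = metric.Engine();
    execute(metricEngine, "RequirementsExecutionCoverageFragment");
    results = getMetrics(metricEngine, "RequirementsExecutionCoverageFragment");

    for n = 1:numel(results)
        value = results(n).Value;
        if value.Denominator > 0
            ratio = 100 * value.Numerator / value.Denominator;
        else
            ratio = NaN;   % no contributing tests for this model
        end
        modelName = results(n).Artifacts(1).Name;   % model that this result describes
        fprintf("%s: %.1f%% of achieved execution coverage from requirements-based tests\n", ...
            modelName, ratio);
    end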

Compliance Thresholds:

This metric does not have predefined thresholds.

Capabilities and Limitations:

The metric does not analyze coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

RequirementsDecisionCoverageFragment

Fraction of the overall achieved decision coverage that comes from requirements-based tests.

Metric Information

Metric ID: RequirementsDecisionCoverageFragment

Description:

This metric returns the fraction of overall achieved decision coverage that comes from requirements-based tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Achieved Coverage Ratio section, point to the widget for Requirements-Based Tests, point to the three dots, and click the run button.

  • Use getMetrics with the metric ID RequirementsDecisionCoverageFragment.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a structure that contains these fields:

  • Numerator — The number of requirements-based tests that contribute to the overall achieved decision coverage.

  • Denominator — The total number of tests (requirements-based and non-requirements-based) that contribute to the overall achieved decision coverage.

Compliance Thresholds:

This metric does not have predefined thresholds.

Capabilities and Limitations:

The metric does not analyze coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

RequirementsConditionCoverageFragment

Fraction of the overall achieved condition coverage that comes from requirements-based tests.

Metric Information

Metric ID: RequirementsConditionCoverageFragment

Description:

This metric returns the fraction of overall achieved condition coverage that comes from requirements-based tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Achieved Coverage Ratio section, point to the widget for Requirements-Based Tests, point to the three dots, and click the run button.

  • Use getMetrics with the metric ID RequirementsConditionCoverageFragment.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a structure that contains these fields:

  • Numerator — The number of requirements-based tests that contribute to the overall achieved condition coverage.

  • Denominator — The total number of tests (requirements-based and non-requirements-based) that contribute to the overall achieved condition coverage.

Compliance Thresholds:

This metric does not have predefined thresholds.

Capabilities and Limitations:

The metric does not analyze coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

RequirementsMCDCCoverageFragment

Fraction of the overall achieved modified condition and decision (MC/DC) coverage that comes from requirements-based tests.

Metric Information

Metric ID: RequirementsMCDCCoverageFragment

Description:

This metric returns the fraction of overall achieved MC/DC coverage that comes from requirements-based tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Achieved Coverage Ratio section, point to the widget for Requirements-Based Tests, point to the three dots, and click the run button.

  • Use getMetrics with the metric ID RequirementsMCDCCoverageFragment.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a structure that contains these fields:

  • Numerator — The number of requirements-based tests that contribute to the overall achieved MC/DC coverage.

  • Denominator — The total number of tests (requirements-based and non-requirements-based) that contribute to the overall achieved MC/DC coverage.

Compliance Thresholds:

This metric does not have predefined thresholds.

Capabilities and Limitations:

The metric does not analyze coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

Unit-Boundary Tests for the Unit

The Achieved Coverage Ratio section of the dashboard shows the sources of achieved coverage for the unit. The Unit-Boundary Tests section shows how much of the overall achieved coverage comes from unit-boundary tests.

The metrics associated with the Unit-Boundary Tests section of the dashboard are:

UnitBoundaryExecutionCoverageBreakdown

Fraction of the overall achieved execution coverage that comes from unit-boundary tests.

Metric Information

Metric ID: UnitBoundaryExecutionCoverageBreakdown

Description:

This metric returns the fraction of overall achieved execution coverage that comes from unit-boundary tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Achieved Coverage Ratio section, point to the widget for Unit-Boundary Tests, point to the three dots, and click the run button.

  • Use getMetrics with the metric ID UnitBoundaryExecutionCoverageBreakdown.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a structure that contains these fields:

  • Numerator — The number of unit-boundary tests that contribute to the overall achieved execution coverage.

  • Denominator — The total number of tests (unit-boundary and non-unit-boundary) that contribute to the overall achieved execution coverage.

Compliance Thresholds:

This metric does not have predefined thresholds. Consequently, the compliance threshold overlay icon appears when you click Uncategorized in the Overlays section of the toolstrip.

Capabilities and Limitations:

The metric:

  • Analyzes the overall aggregated coverage results.

  • Does not analyze coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

UnitBoundaryDecisionCoverageBreakdown

Fraction of the overall achieved decision coverage that comes from unit-boundary tests.

Metric Information

Metric ID: UnitBoundaryDecisionCoverageBreakdown

Description:

This metric returns the fraction of overall achieved decision coverage that comes from unit-boundary tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Achieved Coverage Ratio section, point to the widget for Unit-Boundary Tests, point to the three dots, and click the run button.

  • Use getMetrics with the metric ID UnitBoundaryDecisionCoverageBreakdown.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a structure that contains these fields:

  • Numerator — The number of unit-boundary tests that contribute to the overall achieved decision coverage.

  • Denominator — The total number of tests (unit-boundary and non-unit-boundary) that contribute to the overall achieved decision coverage.

Compliance Thresholds:

This metric does not have predefined thresholds. Consequently, the compliance threshold overlay icon appears when you click Uncategorized in the Overlays section of the toolstrip.

Capabilities and Limitations:

The metric:

  • Analyzes the overall aggregated coverage results.

  • Does not analyze coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

UnitBoundaryConditionCoverageBreakdown

Fraction of the overall achieved condition coverage that comes from unit-boundary tests.

Metric Information

Metric ID: UnitBoundaryConditionCoverageBreakdown

Description:

This metric returns the fraction of overall achieved condition coverage that comes from unit-boundary tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Achieved Coverage Ratio section, point to the widget for Unit-Boundary Tests, point to the three dots, and click the run button.

  • Use getMetrics with the metric ID UnitBoundaryConditionCoverageBreakdown.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a structure that contains these fields:

  • Numerator — The number of unit-boundary tests that contribute to the overall achieved condition coverage.

  • Denominator — The total number of tests (unit-boundary and non-unit-boundary) that contribute to the overall achieved condition coverage.

Compliance Thresholds:

This metric does not have predefined thresholds. Consequently, the compliance threshold overlay icon appears when you click Uncategorized in the Overlays section of the toolstrip.

Capabilities and Limitations:

The metric:

  • Analyzes the overall aggregated coverage results.

  • Does not analyze coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

UnitBoundaryMCDCCoverageBreakdown

Fraction of the overall achieved MC/DC coverage that comes from unit-boundary tests.

Metric Information

Metric ID: UnitBoundaryMCDCCoverageBreakdown

Description:

This metric returns the fraction of overall achieved MC/DC coverage that comes from unit-boundary tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Achieved Coverage Ratio section, point to the widget for Unit-Boundary Tests, point to the three dots, and click the run button.

  • Use getMetrics with the metric ID UnitBoundaryMCDCCoverageBreakdown.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a structure that contains these fields:

  • Numerator — The number of unit-boundary tests that contribute to the overall achieved MC/DC coverage.

  • Denominator — The total number of tests (unit-boundary and non-unit-boundary) that contribute to the overall achieved MC/DC coverage.

Compliance Thresholds:

This metric does not have predefined thresholds. Consequently, the compliance threshold overlay icon appears when you click Uncategorized in the Overlays section of the toolstrip.

Capabilities and Limitations:

The metric:

  • Analyzes the overall aggregated coverage results.

  • Does not analyze coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.
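Because the four unit-boundary breakdown metrics return the same Numerator/Denominator structure, you can collect and summarize them together. This sketch assumes that execute and getMetrics accept a list of metric identifiers and that metric.Result exposes a MetricID property, as in the programmatic collection example; adapt the setup to your project.

    % Collect all four unit-boundary breakdown metrics in one pass.
    ids = ["UnitBoundaryExecutionCoverageBreakdown", ...
           "UnitBoundaryDecisionCoverageBreakdown", ...
           "UnitBoundaryConditionCoverageBreakdown", ...
           "UnitBoundaryMCDCCoverageBreakdown"];

    metricEngine = metric.Engine();
    execute(metricEngine, ids);
    results = getMetrics(metricEngine, ids);

    for n = 1:numel(results)
        value = results(n).Value;
        if value.Denominator > 0
            ratio = 100 * value.Numerator / value.Denominator;
        else
            ratio = NaN;   % no tests contribute to this coverage type
        end
        fprintf("%s: %.1f%% of achieved coverage from unit-boundary tests\n", ...
            results(n).MetricID, ratio);
    end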

Unit-Boundary Tests for each Model in the Unit

When you click on a bar in the Unit-Boundary Tests section, the Metric Details show the coverage ratio for each model in the unit. For unit-boundary tests, the coverage ratio is the percentage of the overall achieved coverage that comes from unit-boundary tests. Unit-boundary tests are test cases that test the whole unit (and not just lower-level subsystems of the unit).

The metrics associated with unit-boundary coverage for each model in the unit are:

UnitBoundaryExecutionCoverageFragment

Fraction of the overall achieved execution coverage that comes from unit-boundary tests.

Metric Information

Metric ID: UnitBoundaryExecutionCoverageFragment

Description:

This metric returns the fraction of overall achieved execution coverage that comes from unit-boundary tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Achieved Coverage Ratio section, point to the widget for Unit-Boundary Tests, point to the three dots, and click the run button.

  • Use getMetrics with the metric ID UnitBoundaryExecutionCoverageFragment.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a structure that contains these fields:

  • Numerator — The number of unit-boundary tests that contribute to the overall achieved execution coverage.

  • Denominator — The total number of tests (unit-boundary and non-unit-boundary) that contribute to the overall achieved execution coverage.

Compliance Thresholds:

This metric does not have predefined thresholds.

Capabilities and Limitations:

The metric does not analyze coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

UnitBoundaryDecisionCoverageFragment

Fraction of the overall achieved decision coverage that comes from unit-boundary tests.

Metric Information

Metric ID: UnitBoundaryDecisionCoverageFragment

Description:

This metric returns the fraction of overall achieved decision coverage that comes from unit-boundary tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Achieved Coverage Ratio section, point to the widget for Unit-Boundary Tests, point to the three dots, and click the run button.

  • Use getMetrics with the metric ID UnitBoundaryDecisionCoverageFragment.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a structure that contains these fields:

  • Numerator — The number of unit-boundary tests that contribute to the overall achieved decision coverage.

  • Denominator — The total number of tests (unit-boundary and non-unit-boundary) that contribute to the overall achieved decision coverage.

Compliance Thresholds:

This metric does not have predefined thresholds.

Capabilities and Limitations:

The metric does not analyze coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

UnitBoundaryConditionCoverageFragment

Fraction of the overall achieved condition coverage that comes from unit-boundary tests.

Metric Information

Metric ID: UnitBoundaryConditionCoverageFragment

Description:

This metric returns the fraction of overall achieved condition coverage that comes from unit-boundary tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Achieved Coverage Ratio section, point to the widget for Unit-Boundary Tests, point to the three dots, and click the run button.

  • Use getMetrics with the metric ID UnitBoundaryConditionCoverageFragment.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a structure that contains these fields:

  • Numerator — The number of unit-boundary tests that contribute to the overall achieved condition coverage.

  • Denominator — The total number of tests (unit-boundary and non-unit-boundary) that contribute to the overall achieved condition coverage.

Compliance Thresholds:

This metric does not have predefined thresholds.

Capabilities and Limitations:

The metric does not analyze coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.

UnitBoundaryMCDCCoverageFragment

Fraction of the overall achieved MC/DC coverage that comes from unit-boundary tests.

Metric Information

Metric ID: UnitBoundaryMCDCCoverageFragment

Description:

This metric returns the fraction of overall achieved MC/DC coverage that comes from unit-boundary tests.

To collect data for this metric:

  • In the Model Testing Dashboard, in the Achieved Coverage Ratio section, point to the widget for Unit-Boundary Tests, point to the three dots, and click the run button.

  • Use getMetrics with the metric ID UnitBoundaryMCDCCoverageFragment.

Collecting data for this metric loads the model file and test results files and requires a Simulink Coverage license.

Results:

For this metric, instances of metric.Result return the Value as a structure that contains these fields:

  • Numerator — The number of unit-boundary tests that contribute to the overall achieved MC/DC coverage.

  • Denominator — The total number of tests (unit-boundary and non-unit-boundary) that contribute to the overall achieved MC/DC coverage.

Compliance Thresholds:

This metric does not have predefined thresholds.

Capabilities and Limitations:

The metric does not analyze coverage from test cases that run in software-in-the-loop (SIL) or processor-in-the-loop (PIL) mode.

See Also: For an example of collecting metrics programmatically, see Collect Metrics on Model Testing Artifacts Programmatically.