padv.builtin.task.CollectMetrics Class
Namespace: padv.builtin.task
Superclasses: padv.Task
Task for collecting model design and testing metrics
Description
This class requires CI/CD Automation for Simulink Check.
The padv.builtin.task.CollectMetrics class provides a task that can collect model design and testing metrics by using the metric.Engine API for the Model Design and Model Testing Dashboards. By default, the task collects model maintainability metrics that can help you monitor the size, architecture, and complexity of the software units and components in your project. You can reconfigure the task to collect model testing, SIL code testing, or PIL code testing metrics by using the Dashboard property to specify which dashboard you want to collect metrics for.
You can add this task to your process model by using the addTask method. After you add the task to your process model, you can run the task from the Process Advisor app or by using the runprocess function.
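As a minimal sketch of those two steps, assuming the standard process model function signature that Process Advisor generates in processmodel.m (a padv.ProcessModel object passed in as pm):

```matlab
% processmodel.m -- a minimal sketch, assuming the standard
% Process Advisor process model function signature.
function processmodel(pm)
    arguments
        pm padv.ProcessModel
    end
    % Add the built-in metric collection task to the process.
    pm.addTask(padv.builtin.task.CollectMetrics());
end
```

After the task is in the process model, you can run the process from the MATLAB Command Window by calling runprocess with no arguments to run all tasks.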
To view the source code for this built-in task, in the MATLAB® Command Window, enter:
open padv.builtin.task.CollectMetrics
The padv.builtin.task.CollectMetrics class is a handle class.
Creation
Description
task = padv.builtin.task.CollectMetrics() creates a task for collecting model maintainability metrics like size, architecture, and complexity. These are the same metric results that the Model Maintainability Dashboard uses.
task = padv.builtin.task.CollectMetrics(Name=Value) sets certain properties using one or more name-value arguments. For example, task = padv.builtin.task.CollectMetrics(Name = "MyCollectMetricsTask") creates a task with the specified name.
You can use this syntax to set property values for InputQueries, Name, IterationQuery, InputDependencyQuery, Licenses, LaunchToolAction, and LaunchToolText.
The padv.builtin.task.CollectMetrics class also has other properties, but you cannot set those properties during task creation.
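For example, a task instance created with several of the settable properties might look like this (a sketch; the task name is a hypothetical example):

```matlab
% Sketch: create a named task instance that iterates over units
% instead of the default design models. "MyCollectMetricsTask"
% is a hypothetical name.
task = padv.builtin.task.CollectMetrics( ...
    Name="MyCollectMetricsTask", ...
    IterationQuery=padv.builtin.query.FindUnits);
```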
Properties
The CollectMetrics class inherits properties from padv.Task. The properties listed in Specialized Inherited Properties are padv.Task properties that the CollectMetrics task overrides.
The task also has properties for specifying Metric Collection Options. The task uses these properties to specify input arguments for the getAvailableMetricIds, execute, and generateReport functions of the metric.Engine API.
Name — Unique identifier for task in process
"padv.builtin.task.CollectMetrics" (default) | string
Unique identifier for the task in the process, specified as a string.
Example: "MyCollectMetricsTask"
Data Types: string
Title — Human-readable name that appears in Process Advisor app
"Collect Model Maintainability Metrics" (default) | string
Human-readable name that appears in the Process Advisor app, specified as a string.
Example: "My Metric Collection Task"
Data Types: string
DescriptionText — Task description
"This task collects and reports metric data used by the model design and testing dashboards." (default) | string
Task description, specified as a string.
When you point to a task in Process Advisor and click the information icon, the tooltip shows the task description.
Example: "This task collects and reports metric data used by the model design and testing dashboards."
Data Types: string
DescriptionCSH — Path to task documentation
path to CollectMetrics documentation (default) | string
Path to task documentation, specified as a string.
When you point to a task in Process Advisor, click the ellipsis (...), and then click Help, Process Advisor opens the task documentation.
Example: fullfile(pwd,"taskHelpFiles","myTaskDocumentation.pdf")
Data Types: string
RequiredIterationArtifactType — Artifact type that task can run on
"sl_model_file" (default) | ...
Artifact type that the task can run on, specified as one or more of the values listed in this table. To specify multiple values, use an array.

| Category | Artifact Type | Description |
| --- | --- | --- |
| MATLAB | "m_class" | MATLAB class |
| | "m_file" | MATLAB file |
| | "m_func" | MATLAB function |
| | "m_method" | MATLAB class method |
| | "m_property" | MATLAB class property |
| Model Advisor | "ma_config_file" | Model Advisor configuration file |
| | "ma_justification_file" | Model Advisor justification file |
| Process Advisor | "padv_dep_artifacts" | Related artifacts that current artifact depends on |
| | "padv_output_file" | Process Advisor output file |
| Project | "project" | Current project file |
| Requirements | "mwreq_item" | Requirement (since R2024b) |
| | | Requirement (for R2024a and earlier) |
| | "sl_req_file" | Requirement file |
| | "sl_req_table" | Requirements Table |
| Stateflow® | "sf_chart" | Stateflow chart |
| | "sf_graphical_fcn" | Stateflow graphical function |
| | "sf_group" | Stateflow group |
| | "sf_state" | Stateflow state |
| | "sf_state_transition_chart" | Stateflow state transition chart |
| | "sf_truth_table" | Stateflow truth table |
| Simulink® | "sl_block_diagram" | Block diagram |
| | "sl_data_dictionary_file" | Data dictionary file |
| | "sl_embedded_matlab_fcn" | MATLAB function |
| | "sl_library_file" | Library file |
| | "sl_model_file" | Simulink model file |
| | "sl_protected_model_file" | Protected Simulink model file |
| | "sl_subsystem" | Subsystem |
| | "sl_subsystem_file" | Subsystem file |
| System Composer™ | "zc_block_diagram" | System Composer architecture |
| | "zc_component" | System Composer architecture component |
| | "zc_file" | System Composer architecture file |
| Tests | "harness_info_file" | Harness info file |
| | "sl_harness_block_diagram" | Harness block diagram |
| | "sl_harness_file" | Test harness file |
| | "sl_test_case" | Simulink Test™ case |
| | "sl_test_case_result" | Simulink Test case result |
| | "sl_test_file" | Simulink Test file |
| | "sl_test_iteration" | Simulink Test iteration |
| | "sl_test_iteration_result" | Simulink Test iteration result |
| | "sl_test_report_file" | Simulink Test result report |
| | "sl_test_result_file" | Simulink Test result file |
| | "sl_test_resultset" | Simulink Test result set |
| | "sl_test_seq" | Test Sequence |
| | "sl_test_suite" | Simulink Test suite |
| | "sl_test_suite_result" | Simulink Test suite result |

Example: "sl_model_file"
Example: ["sl_model_file" "zc_file"]
IterationQuery — Find artifacts that task iterates over
padv.builtin.query.FindDesignModels (default) | padv.Query object | name of padv.Query object
Query that finds the artifacts that the task iterates over, specified as a padv.Query object or the name of a padv.Query object. When you specify IterationQuery, the task runs one time for each artifact returned by the query. In the Process Advisor app, the artifacts returned by IterationQuery appear under the task title.
For more information about task iterations, see Overview of Process Model.
Example: padv.builtin.query.FindUnits
InputDependencyQuery — Find artifact dependencies for task inputs
padv.Query object | name of padv.Query object
Query that finds artifact dependencies for task inputs, specified as a padv.Query object or the name of a padv.Query object.
The build system runs the query specified by InputDependencyQuery to find the dependencies for the task inputs, since those dependencies can affect whether task results are up to date.
For more information about task inputs, see Overview of Process Model.
Example: padv.builtin.query.GetDependentArtifacts
LaunchToolAction — Function that launches tool
@launchToolAction (default) | function handle
Function that launches a tool, specified as a function handle.
When you point to a task in the Process Advisor app, you can click the ellipsis (...) to see more options. For built-in tasks, you have the option to launch a tool associated with the task.
By default, the CollectMetrics task can launch the Model Maintainability Dashboard. If you specify the Dashboard property as a value other than "ModelMaintainability", the task can launch the Model Testing Dashboard instead.
Data Types: function_handle
LaunchToolText — Description of action that LaunchToolAction property performs
"Open Dashboard" (default) | string
Description of the action that the LaunchToolAction property performs, specified as a string.
Data Types: string
InputQueries — Inputs to task
padv.Query object | name of padv.Query object | array of padv.Query objects
Inputs to the task, specified as:
- a padv.Query object
- the name of a padv.Query object
- an array of padv.Query objects
- an array of names of padv.Query objects
By default, the CollectMetrics task gets the current model that the task is iterating over by using the built-in query padv.builtin.query.GetIterationArtifact.
OutputDirectory — Location for standard task outputs
string(fullfile('$DEFAULTOUTPUTDIR$','$ITERATIONARTIFACT$','metrics')) (default) | string
Location for standard task outputs, specified as a string.
The built-in tasks use tokens, like $DEFAULTOUTPUTDIR$, as placeholders for dynamic path resolution at run time. For more information, see Dynamically Resolve Paths with Tokens.
Data Types: string
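For instance, a process model could redirect the standard task outputs to a different subfolder while keeping the tokens so that paths still resolve per iteration artifact (a sketch; the folder name is a hypothetical example):

```matlab
% Sketch: redirect standard task outputs to a custom subfolder.
% The tokens are resolved by the build system at run time;
% the folder name "custom_metrics" is a hypothetical example.
metricTask = pm.addTask(padv.builtin.task.CollectMetrics());
metricTask.OutputDirectory = string(fullfile( ...
    '$DEFAULTOUTPUTDIR$', '$ITERATIONARTIFACT$', 'custom_metrics'));
```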
Dashboard — Dashboard metrics to collect
"ModelMaintainability" (default) | "ModelUnitPILTesting" | "ModelUnitSILTesting" | "ModelUnitTesting"
Dashboard metrics to collect, specified as one of these values:
- "ModelMaintainability" — Analyze the size, architecture, and complexity of the MATLAB, Simulink, and Stateflow artifacts in your project by using the Model Maintainability Metrics.
- "ModelUnitPILTesting" — Assess the quality and completeness of processor-in-the-loop (PIL) code testing by using the Code Testing Metrics. Collecting these metrics requires a Simulink Test license.
- "ModelUnitSILTesting" — Assess the quality and completeness of software-in-the-loop (SIL) code testing by using the Code Testing Metrics. Collecting these metrics requires a Simulink Test license.
- "ModelUnitTesting" — Assess the quality, traceability, and completeness of your models, requirements, tests, and test results by using the Model Testing Metrics. By default, collecting these metrics requires a Requirements Toolbox™ license and a Simulink Test license. If you do not want to collect requirements metrics, you can specify the property IncludeRequirements as false. When IncludeRequirements is false, the task does not require a Requirements Toolbox license.
The task uses this property to get the available metrics by using the function getAvailableMetricIds.
Note
If you specify a value other than "ModelMaintainability", make sure to specify the task iteration query as padv.builtin.query.FindUnits since you can only collect model testing and code testing metrics on units and not components.
Example: "ModelUnitTesting"
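Combining the note above with the IncludeRequirements property, a task instance that collects model testing metrics for units without requiring a Requirements Toolbox license could be configured like this (a sketch; the task name is a hypothetical example):

```matlab
% Sketch: collect model testing metrics for units, excluding
% requirements metrics so that a Requirements Toolbox license
% is not required. "UnitTestingMetrics" is a hypothetical name.
mtTask = pm.addTask(padv.builtin.task.CollectMetrics( ...
    Name="UnitTestingMetrics", ...
    IterationQuery=padv.builtin.query.FindUnits));
mtTask.Dashboard = "ModelUnitTesting";
mtTask.IncludeRequirements = false;
```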
Installed — Filter metrics based on MathWorks® product installation status
1 (true) (default) | 0 (false)
Filter metrics based on whether the associated MathWorks product is installed, specified as either:
- 1 (true) — Only collect metrics associated with MathWorks products installed on the current machine.
- 0 (false) — Try to collect each of the available metrics, even if the associated MathWorks products are not installed on the current machine.
Example: false
Data Types: logical
IncludeRequirements — Include requirements metrics
1 (true) (default) | 0 (false)
Include requirements metrics in model testing metric results, specified as either:
- 1 (true) — If you specified the property Dashboard as "ModelUnitTesting", the task includes requirements metrics in the model testing metric results. Collecting requirements metrics requires a Requirements Toolbox license.
- 0 (false) — The task does not collect requirements metrics. The task excludes metrics where the metric ID contains the word requirement (case insensitive).
Example: false
Data Types: logical
ReportPath — Path to report output by task
string(fullfile('$DEFAULTOUTPUTDIR$','$ITERATIONARTIFACT$','metrics')) (default) | string
Path to the report output by the task, specified as a string.
The task generates the report by using the function generateReport.
Data Types: string
ReportName — Name of output report
"$ITERATIONARTIFACT$_ModelMaintainability" (default) | string
Name of the output report, specified as a string.
Data Types: string
ReportFormat — Format of output report
"pdf" (default) | "html-file"
Format of the output report, specified as either:
- "pdf" — PDF file.
- "html-file" — HTML report.
Example: "html-file"
Data Types: string
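Putting the report properties together, a process model could customize where the metric results report is generated, what it is named, and its format (a sketch; the folder and report names are hypothetical examples):

```matlab
% Sketch: customize the generated metric results report.
% The "reports" folder name and report name are hypothetical.
metricTask = pm.addTask(padv.builtin.task.CollectMetrics());
metricTask.ReportPath = string(fullfile( ...
    '$DEFAULTOUTPUTDIR$', '$ITERATIONARTIFACT$', 'reports'));
metricTask.ReportName = "$ITERATIONARTIFACT$_Maintainability";
metricTask.ReportFormat = "html-file";
```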
FilteredMetrics — List of metrics to filter out
string.empty (default) | string
List of metrics to filter out, specified as a string.
For example, if you are collecting model maintainability metrics (Dashboard property specified as "ModelMaintainability"), you can skip metric collection for a metric by specifying the value of FilteredMetrics as the metric ID for the metric.
Example: "slcomp.ComponentInterfaceSignals"
Data Types: string
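For instance, skipping collection of the interface-signals metric shown in the example above might look like this in a process model (a sketch using the metric ID from this page):

```matlab
% Sketch: skip collection of one maintainability metric by its
% metric ID while collecting the remaining default metrics.
metricTask = pm.addTask(padv.builtin.task.CollectMetrics());
metricTask.FilteredMetrics = "slcomp.ComponentInterfaceSignals";
```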
Methods
Specialized Public Methods
This class overrides the following inherited methods.
run — Collect metrics. The metrics that the task collects depend on the value of the Dashboard property. Note: You do not need to manually invoke this method. The build system runs this method when you run the task by using the Process Advisor app or the runprocess function.
function taskResult = run(obj, input) ... end

dryRun — Dry run the task to validate task inputs and generate representative task outputs without actually running the task.
function taskResult = dryRun(obj, input) ... end

launchToolAction — By default, the task launches the Model Maintainability Dashboard. If you specified the Dashboard property as a value other than "ModelMaintainability", the task launches the Model Testing Dashboard instead. Process Advisor uses this method when you open the tool associated with a task.
Examples
Collect Model Maintainability Metrics During Process
Add a task that can collect model maintainability metrics by using the metric.Engine API for the Model Maintainability Dashboard.
Open the process model for your project. If you do not have a process model, open the Process Advisor app to automatically create a process model.
In the process model file, add the CollectMetrics task to your process model by using the addTask method. By default, the CollectMetrics task collects model maintainability metrics.
mmMetricTask = pm.addTask(padv.builtin.task.CollectMetrics());
You can reconfigure the task behavior by using the task properties. For example, to have the task return the generated metric results report as an HTML file instead of a PDF:
mmMetricTask.ReportFormat = "html-file";
Collect Model Testing and Code Testing Metrics During Process
By default, the CollectMetrics task collects model maintainability metrics. To collect different types of metrics, you can add multiple instances of the CollectMetrics task to the process and reconfigure those instances to collect different metrics. For example, you can add tasks for model testing, SIL code testing, and PIL code testing metrics.
Each task instance needs a unique value for the Name property. To specify which metrics you want the task to collect, use the Dashboard property of the task. Since the dashboards collect model testing and code testing metrics for units, and not components, you need to specify the IterationQuery as padv.builtin.query.FindUnits. The other changes to the task property values give the task instances unique titles in Process Advisor and unique names for the reports that the task generates.
%% Collect Model Testing Metrics
mtMetricTask = pm.addTask(padv.builtin.task.CollectMetrics(...
    Name="ModelTestingMetrics",...
    IterationQuery=padv.builtin.query.FindUnits));
mtMetricTask.Title = "Collect Model Testing Metrics";
mtMetricTask.Dashboard = "ModelUnitTesting";
mtMetricTask.ReportName = "$ITERATIONARTIFACT$_ModelTesting";

%% Collect SIL Code Testing Metrics
stMetricTask = pm.addTask(padv.builtin.task.CollectMetrics(...
    Name="SILTestingMetrics",...
    IterationQuery=padv.builtin.query.FindUnits));
stMetricTask.Title = "Collect SIL Code Testing Metrics";
stMetricTask.Dashboard = "ModelUnitSILTesting";
stMetricTask.ReportName = "$ITERATIONARTIFACT$_SILTesting";

%% Collect PIL Code Testing Metrics
ptMetricTask = pm.addTask(padv.builtin.task.CollectMetrics(...
    Name="PILTestingMetrics",...
    IterationQuery=padv.builtin.query.FindUnits));
ptMetricTask.Title = "Collect PIL Code Testing Metrics";
ptMetricTask.Dashboard = "ModelUnitPILTesting";
ptMetricTask.ReportName = "$ITERATIONARTIFACT$_PILTesting";
To specify a preferred execution order for your tasks, you can use the runsAfter method. For example, if you want your process to merge test results before collecting model testing, SIL code testing, and PIL code testing metrics:
mtMetricTask.runsAfter(mergeTestTask);
stMetricTask.runsAfter(mtMetricTask);
ptMetricTask.runsAfter(stMetricTask);