Integrating Cloud-Based Continuous Integration - MATLAB & Simulink

Technical Articles

Performing Virtual Testing and Simulation in a Cloud-Based Continuous Integration Environment

By Felix Wempe and Anas Hamrouni, AGSOTEC


“MATLAB and Simulink products are characterized by their platform-agnostic nature, with support for Windows, Linux, and macOS operating systems. This versatility is key when planning integration into a cloud solution, as it offers the flexibility to choose the most fitting infrastructure.”

As automotive companies continue using software to deliver more value to customers, engineering teams are increasingly incorporating continuous integration (CI) pipelines into their workflows. At the same time, the need for virtual testing and simulation is also increasing as teams use Model-Based Design to deliver higher quality products on tighter schedules. Combined, these two trends have led many teams to look for cloud-based CI solutions that can scale automatically as needed.

Cloud-based solutions offer significant advantages in terms of cost, scalability, and flexibility. Compared to hardware, which needs time and investment to procure, set up, and maintain on premises, cloud platforms offer a range of services that enable CI teams to only pay for the computing power required as needed. With support for a variety of infrastructure components and multiple versions of servers, cloud platforms provide the flexibility to adapt to changing CI technology stack, configuration, and environment requirements.

While the benefits of cloud-based CI are clear, the path to implementation is less so. Teams face key decisions, including which cloud provider to use, what CI technology to deploy, and whether to use containers or virtual machines (VMs), to name a few. Because the answers depend on each team’s specific scenario, our team at AGSOTEC has built cloud-based CI proof-of-concept implementations for a variety of use cases to better understand the advantages and drawbacks of the available options.

This article walks through a case study for testing a Simulink® model for an automatic emergency braking (AEB) system (Figure 1) with cloud-based CI using a variety of technologies including GitHub®, GitLab®, and Jenkins®, as well as VM and container instances launched as needed in the cloud. After the walkthrough, it covers key takeaways that engineers can use to help make the right technology choices for their team’s specific cloud-based CI needs.

Note: We have implemented the examples presented in this article on both Microsoft® Azure® and Amazon® Web Services (AWS). For simplicity, the focus here is on AWS. Similarly, although MATLAB® and Simulink products support multiple operating systems, the examples show only Linux® VMs and containers to keep them simple.

Note: The examples and recommendations in this article are based on cloud services and capabilities as of winter 2023–2024. As cloud technology evolves and vendors enhance their offerings, the optimal cloud-based CI solution for a particular team’s needs may change.


Figure 1. Simulink model used in the testing of scenario variants of an AEB system.

What Makes Some Tools a Good Fit for Cloud-Based CI?

Before diving into the technical details, it is important to note that not all software is well-suited for use with CI or in a cloud environment.

MATLAB and Simulink products are characterized by their platform-agnostic nature, with support for Windows®, Linux, and macOS operating systems. This versatility is key when planning integration into a cloud solution, as it offers the flexibility to choose the most fitting infrastructure. This, in turn, can lead to cost savings, as Linux is often a more cost-effective option than the alternatives.

You can also use MATLAB Package Manager (MPM) to simplify the installation of MATLAB and Simulink products on Linux virtual machines and containers, including Docker® containers.

Lastly, MathWorks offers plugins and other integrations for a variety of widely used CI software and services. These provide an abstraction layer for engineering teams and can simplify the use of CI for those who are not experts in the chosen CI platform. MathWorks also provides the Process Advisor app for creating pipelines directly in MATLAB and Simulink. These pipelines support incremental builds via the digital thread of artifacts inside a MATLAB project. Additionally, the app can be used interactively with MATLAB for pre-submit verification and in an automated way on CI platforms. On some platforms, it is also possible to automatically generate multistage CI pipelines from the MATLAB pipeline definition. Process Advisor is part of CI/CD Automation for Simulink Check™.
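As an illustrative sketch (not taken from the case study), a minimal processmodel.m for Process Advisor might add the built-in task that runs each Simulink Test case; the class and task names below follow the CI/CD Automation for Simulink Check documentation and should be verified against your installed release:

```matlab
function processmodel(pm)
    % Sketch of a Process Advisor process model: run each test case in
    % the project as a separate task so CI builds can be incremental.
    arguments
        pm padv.ProcessModel
    end
    % Built-in task provided by CI/CD Automation for Simulink Check
    pm.addTask(padv.builtin.task.RunTestsPerTestCase);
end
```

Placing this file in the project root lets Process Advisor display the pipeline interactively and lets CI platforms run the same tasks in batch mode.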

VMs and Containers for Virtual Testing Pipelines

A key component of a cloud-based CI pipeline is the environment in which virtual tests are run. One option is to create a virtual machine image and then configure the pipeline to automatically launch instances of this VM to execute queued jobs. The alternative is to use containers instead of VMs and integrate a serverless container execution service, which imports preconfigured containers from a registry into the CI pipeline.

Creating a VM Image for a Virtual Testing Pipeline

AWS provides a number of supported virtual machine images called Amazon Machine Images (AMIs), which can be modified to create custom AMIs.

To create a custom AMI for use in a CI pipeline, follow these steps:

  1. Launch an instance of an existing AMI with the desired configuration.
  2. Connect to the instance and install all software needed.

You can install MATLAB, Simulink, and any required toolboxes in a Linux environment using the MPM. All dependencies for a specific MATLAB release can be found in the relevant dependencies file available in the MathWorks MATLAB Dependencies GitHub repository.

For example, the following commands install MATLAB r2022b and its dependencies:

export DEBIAN_FRONTEND=noninteractive

apt-get update

# Download dependencies

apt-get install --no-install-recommends -y ca-certificates unzip libasound2 libc6 libcairo2 libcairo-gobject2 libcap2 libcrypt1 libcrypt-dev libcups2 libdrm2 libdw1 libgbm1 libgdk-pixbuf2.0-0 libgl1 libglib2.0-0 libgomp1 libgstreamer1.0-0 libgstreamer-plugins-base1.0-0 libgtk-3-0 libice6 libnspr4 libnss3 libodbc1 libpam0g libpango-1.0-0 libpangocairo-1.0-0 libpangoft2-1.0-0 libsndfile1 libsystemd0 libuuid1 libwayland-client0 libxcomposite1 libxcursor1 libxdamage1 libxfixes3 libxft2 libxinerama1 libxrandr2 libxt6 libxtst6 libxxf86vm1 linux-libc-dev locales locales-all make net-tools odbcinst1debian2 procps sudo wget zlib1g

# Download and install the patched glibc packages required by MATLAB on Ubuntu 20.04

wget -q https://github.com/mathworks/build-glibc-bz-19329-patch/releases/download/ubuntu-focal/all-packages.tar.gz

tar -x -f all-packages.tar.gz --exclude glibc-*.deb --exclude libc6-dbg*.deb

apt-get install --yes --no-install-recommends ./*.deb

# Download MATLAB Package Manager

wget -q https://www.mathworks.com/mpm/glnxa64/mpm

chmod +x mpm

# Use MPM to download MATLAB, Simulink and model required toolboxes

./mpm install --release=r2022b --destination=/opt/matlab --products MATLAB Simulink Simulink_Test Image_Processing_Toolbox Computer_Vision_Toolbox Automated_Driving_Toolbox Control_System_Toolbox Model_Predictive_Control_Toolbox

  3. Stop the instance and create an AMI from it using the AWS Management Console.
  4. Save the AMI ID to be used later when configuring the CI system.
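The console steps above can also be scripted with the AWS CLI; the instance ID and image name below are placeholders:

```shell
# Stop the configured instance and wait until it has fully stopped
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0

# Create a custom AMI from the stopped instance; the command prints the
# new AMI ID, which is the value to save for the CI configuration
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "matlab-r2022b-ci-runner"
```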

After the custom image is created, it can be deployed into a VM-based pipeline that implements the following workflow (Figure 2):

  1. The test pipeline is triggered when developers commit changes to a source code repository.
  2. A new VM based on the custom AMI is launched (using the saved AMI ID).
  3. Project files are cloned from a web-based repository such as GitLab, GitHub, or another cloud file hosting service.
  4. MATLAB is executed with the -batch argument to load testing scenarios into Simulink Test™ and initiate their execution.
  5. A report containing the results of the tests is generated and stored as an artifact and the VM is terminated.

Figure 2. A VM-based virtual testing pipeline.

Creating a Container for a Virtual Testing Pipeline

Preparing to implement a CI pipeline based on containers with AWS is a two-step process. First, create a container image locally and then upload the image to a web-based registry, such as Amazon Elastic Container Registry (Amazon ECR).

The first part of this process is greatly simplified by following the instructions documented in the Create a MATLAB Container Image GitHub repository.
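As a rough sketch of what that repository automates, a minimal Dockerfile could install MATLAB with MPM on top of the MathWorks dependencies base image; the release tag and product list below are assumptions mirroring the earlier AMI example:

```dockerfile
# Base image that already contains the MATLAB system dependencies
FROM mathworks/matlab-deps:r2022b

# Download MATLAB Package Manager and install the required products
RUN wget -q https://www.mathworks.com/mpm/glnxa64/mpm \
    && chmod +x mpm \
    && ./mpm install --release=r2022b --destination=/opt/matlab \
       --products MATLAB Simulink Simulink_Test \
    && ln -s /opt/matlab/bin/matlab /usr/local/bin/matlab
```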

For the second part—uploading the image to Amazon ECR—follow these steps:

  1. Create an ECR image repository using the AWS Management Console.
  2. Tag the local container for the newly created cloud repository.

The following command tags a local container image with the ID e5be3y248h47 as aws_account_id.dkr.ecr.us-west-2.amazonaws.com/my-repository:tag.

docker tag e5be3y248h47 aws_account_id.dkr.ecr.us-west-2.amazonaws.com/my-repository:tag

  3. Push the newly tagged container image to the repository:

docker push aws_account_id.dkr.ecr.us-west-2.amazonaws.com/my-repository:tag
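Note that docker push will be rejected unless the Docker client is first authenticated to the registry. With the AWS CLI, this is typically done by piping a temporary password to docker login:

```shell
# Authenticate Docker to the ECR registry (the token is valid for 12 hours)
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.us-west-2.amazonaws.com
```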

After a container image is created and pushed to an Amazon ECR repository it can be deployed in a container-based pipeline that implements the following workflow (Figure 3):

  1. The test pipeline is triggered when developers commit changes to a source code repository.
  2. The AWS serverless compute engine, Fargate, is notified, and a container is launched from the container image registry (ECR).
  3. Project files are cloned from a web-based repository such as GitLab, GitHub, or another cloud file hosting service.
  4. MATLAB is executed with the -batch argument to load testing scenarios into Simulink Test and initiate their execution.
  5. A report containing the results of the tests is generated and stored as an artifact and the serverless container is terminated.

Figure 3. Container-based virtual testing pipeline.

Configuring CI Systems to Use Virtual Testing Pipelines

Once a team has the capacity to run virtual testing in the cloud via a container-based or VM-based pipeline, it can then make use of that capacity to autoscale CI processes. It is possible to use GitHub, GitLab, and Jenkins in combination with either VMs or containers, but we have found that the following three options are generally easier to configure and work with.

Option A: Configuring GitHub to Autoscale Using VMs

A variety of GitHub Actions available from the GitHub Marketplace are designed to facilitate the integration of GitHub with cloud services.

To integrate GitHub with our AWS virtual testing pipeline, we used a GitHub Action that simplifies tasks such as dynamically starting, stopping, and managing AWS EC2 runner instances during a CI process.

To configure this GitHub Action, we created a new workflow file named workflow.yml in the .github/workflows/ directory of our GitHub repository. Within this file, we defined our CI workflow (Figure 4), including the actions to be performed, their sequence, and the conditions under which they should run.


Figure 4. CI workflow for GitHub and VMs.

The example code below illustrates a typical GitHub Action configuration combined with GitHub secrets to encrypt credentials and sensitive parameters. It specifies the necessary AWS credentials, defines the mode of operation (starting the EC2 runner), and provides the necessary inputs to the process, including the GitHub personal access token, image ID (AMI ID), and instance type.

start-runner: 
    name: Create and start an EC2 runner 
    runs-on: ubuntu-latest 
    outputs: 
      label: ${{ steps.start-ec2-runner.outputs.label }} 
      ec2-instance-id: ${{ steps.start-ec2-runner.outputs.ec2-instance-id }} 
    steps: 
      - name: Configure AWS credentials 
        # Configure the AWS credentials with the required access key, secret key, and token as secrets 
        uses: aws-actions/configure-aws-credentials@v1-node16 
        with: 
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }} 
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }} 
          aws-session-token: ${{ secrets.AWS_SESSION_TOKEN }} 
          aws-region: ${{ secrets.AWS_REGION }} 
      - name: Start EC2 runner 
        id: start-ec2-runner 
        # Configure the auto-spawned instance settings such as image-id and security-group-id 
        uses: machulav/ec2-github-runner@v2 
        with: 
          mode: start 
          github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }} 
          ec2-image-id: ${{ secrets.IMAGE_ID }} 
          ec2-instance-type: m5.large 
          subnet-id: ${{ secrets.SUBNET_ID }} 
          security-group-id: ${{ secrets.SECURITY_GROUP_ID }} 
          iam-role-name: ${{ secrets.ROLE_NAME }} # optional, IAM role that allows the instance to use other AWS resources 
          aws-resource-tags: > # optional, tag the instance with extra info for clarity 
            [ 
              {"Key": "Name", "Value": "ec2-github-runner"}, 
              {"Key": "GitHubRepository", "Value": "${{ github.repository }}"} 
            ]

Next, we added the following GitHub action code to execute the MATLAB script, which initiates the testing process:

testing:  
  name: Start MATLAB  
  needs: start-runner # required to start the main job when the runner is ready  
  runs-on: ${{ needs.start-runner.outputs.label }} # run the job on the newly created runner  
  steps:  
    - name: Execute MATLAB tests  
      run: | 
        export MLM_LICENSE_FILE=27000@ec2-35-156-32-190.eu-central-1.compute.amazonaws.com  
        matlab -batch "addpath(genpath(cd)); \ 
        testFile = sltest.testmanager.load('AutonomousEmergencyBrakingTests.mldatx'); \ 
        testSuite = getTestSuiteByName(testFile,'Test Scenarios'); \ 
        testCase = getTestCaseByName(testSuite,'scenario_25_AEB_PedestrianTurning_Nearside_10kph'); \ 
        resultObj = run(testCase); \ 
        sltest.testmanager.report(resultObj,'Report.pdf', Title='Autonomous Emergency Braking', IncludeMATLABFigures=true, IncludeErrorMessages=true, IncludeTestResults=0, LaunchReport=false);"

Running a GitHub CI job that includes the above actions will trigger the initiation of a new virtual machine in the cloud based on the previously created AMI and use it as a node to execute the MATLAB testing script (Figure 4).

Option B: Configuring GitLab to Autoscale Using Containers

To use GitLab with serverless containers, we configured GitLab Runner to work with the AWS Fargate compute engine. This was done in three steps:

  1. We edited our runner config.toml file as follows:

     [[runners]] 
       name = "fargate-test" 
       url = "https://gitlab.com/" 
       token = "__REDACTED__" 
       executor = "custom" 
       builds_dir = "/opt/gitlab-runner/builds" 
       cache_dir = "/opt/gitlab-runner/cache" 
       [runners.custom] 
         volumes = ["/cache", "/path/to-ca-cert-dir/ca.crt:/etc/gitlab-runner/certs/ca.crt:ro"] 
         config_exec = "/opt/gitlab-runner/fargate" 
         config_args = ["--config", "/etc/gitlab-runner/fargate.toml", "custom", "config"] 
         prepare_exec = "/opt/gitlab-runner/fargate" 
         prepare_args = ["--config", "/etc/gitlab-runner/fargate.toml", "custom", "prepare"] 
         run_exec = "/opt/gitlab-runner/fargate" 
         run_args = ["--config", "/etc/gitlab-runner/fargate.toml", "custom", "run"] 
         cleanup_exec = "/opt/gitlab-runner/fargate" 
         cleanup_args = ["--config", "/etc/gitlab-runner/fargate.toml", "custom", "cleanup"]

  2. We installed the AWS Fargate custom executor driver:

     sudo curl -Lo /opt/gitlab-runner/fargate "https://gitlab-runner-custom-fargate-downloads.s3.amazonaws.com/latest/fargate-linux-amd64" 
     sudo chmod +x /opt/gitlab-runner/fargate

  3. We configured the driver’s fargate.toml file with the required AWS parameters:
[Fargate] 
  Cluster = "test-cluster" # Cluster name 
  Region = "us-east-2" # AWS region to deploy in 
  Subnet = "subnet-xxxxxx" # VPC subnet 
  SecurityGroup = "sg-xxxxxxxxxxxxx" # Container networking 
  TaskDefinition = "test-task:1" # Task definition that will be executed in AWS Fargate 
  EnablePublicIP = true # Allow external connections by assigning a public IP address to the container 

[TaskMetadata] 
  Directory = "/opt/gitlab-runner/metadata" 

[SSH] 
  Username = "root" # User that will be used in the container 
  Port = 22 # SSH port

After configuring the Fargate executor and connecting the runner to the project repository, any GitLab CI job will automatically initiate a container in the cloud and use it to execute the job (Figure 5).


Figure 5. CI workflow for GitLab and containers.

The following CI stage was used to execute the MATLAB testing script:

Run MATLAB:  
  script:  
    - export MLM_LICENSE_FILE=27000@ec2-35-156-32-190.eu-central-1.compute.amazonaws.com  
    - > 
      matlab -batch "addpath(genpath(cd));  
      testFile = sltest.testmanager.load('AutonomousEmergencyBrakingTests.mldatx'); 
      testSuite = getTestSuiteByName(testFile,'Test Scenarios'); 
      testCase = getTestCaseByName(testSuite,'scenario_25_AEB_PedestrianTurning_Nearside_10kph'); 
      resultObj = run(testCase); 
      sltest.testmanager.report(resultObj,'Report.pdf', Title='Autonomous Emergency Braking', IncludeMATLABFigures=true, IncludeErrorMessages=true, IncludeTestResults=0, LaunchReport=false);"

Option C: Configuring Jenkins to Autoscale Using VMs

The Amazon EC2 Jenkins plugin enables the integration of AWS EC2 instances into Jenkins CI pipelines, with support for automated instance creation, management, and termination. To configure the EC2 Plugin, we followed these steps:

  1. Install the EC2 plugin through the Jenkins plugin manager.
  2. Navigate to Manage Jenkins > Configure System > Cloud > Add a new cloud > Amazon EC2.
  3. Enter the AWS credentials, select a region, and specify an AMI ID to use for the instances (Figure 6).
  4. Configure the instance type, security group, and other settings as required.
  5. Configure the autoscaling options such as usage and idle termination time.
  6. Save the configuration.

Figure 6. Configuring the Amazon EC2 plugin for Jenkins.

With this setup, we configured our pipelines to use the newly created cloud node, which automatically creates EC2 instances to execute queued jobs (Figure 7).
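In the Jenkinsfile, this assignment is made through the agent label. A minimal sketch, assuming a hypothetical label ec2-matlab that matches the label configured for the Amazon EC2 cloud in Jenkins:

```groovy
pipeline {
    // 'ec2-matlab' is a hypothetical label; it must match the label
    // defined in the EC2 plugin cloud configuration in Manage Jenkins
    agent { label 'ec2-matlab' }
    stages {
        stage('Checkout') {
            steps {
                checkout scm // fetch the project from the configured repository
            }
        }
    }
}
```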


Figure 7. CI workflow for Jenkins and VMs. (Logo courtesy of Jenkins.)

For the Jenkinsfile that defines our Jenkins pipeline, we specified the working environment as follows:

environment { 
    MLM_LICENSE_FILE= '27000@ec2-35-156-32-190.eu-central-1.compute.amazonaws.com' 
} 

Additionally, we used the following stage code to initiate the MATLAB test script:

stage('Build and Test') {  
    steps {  
        sh ''' 
matlab -batch "addpath(genpath(cd));\ 
testFile = sltest.testmanager.load('AutonomousEmergencyBrakingTests.mldatx');\ 
testSuite = getTestSuiteByName(testFile,'Test Scenarios');\ 
testCase = getTestCaseByName(testSuite,'scenario_25_AEB_PedestrianTurning_Nearside_10kph');\ 
resultObj = run(testCase);\ 
sltest.testmanager.report(resultObj,'Report.pdf', Title='Autonomous Emergency Braking', IncludeMATLABFigures=true, IncludeErrorMessages=true, IncludeTestResults=0, LaunchReport=false);" 
        ''' 
    }  
} 

Comparing Alternatives: Key Takeaways

When choosing between VMs or containers for cloud-based CI, it’s helpful to consider multiple dimensions:

  • Setup. Creating an Amazon Machine Image (AMI) typically involves launching an instance, connecting to it, installing the necessary software, and saving the result as an AMI, while containers are created from a Dockerfile, which contains all the commands required to assemble an image. It is possible to create AMIs from Python® or Terraform scripts as well, so both alternatives support versioned infrastructure as code. For a small project requiring a single image, an AMI can be more convenient because it does not demand extensive knowledge of scripting installation processes or configuring machines. An AMI is also the only option when certain tools not provided by MathWorks do not support containers and must be installed manually.
  • Maintenance. Both container images and AMIs can be updated and managed by modifying the creation script. Additionally, you can connect to and modify AMIs or container images and save them as new images.
  • Cost. Container images have an advantage in terms of cost, as they are generally more lightweight.
  • Portability. Virtual machine images, including AMIs, cannot be transferred to other cloud providers like Azure or Google. In contrast, container images are platform independent and can be shared across multiple platforms.
  • Operating System Support. Virtual machine images and container images both support various operating systems, including Linux and Windows Server®. However, Windows containers do not offer GUI support, so any applications executed on them must be able to run without a GUI.

Taking these factors into account, we generally prefer container images due to their cost-effectiveness, portability, and flexibility. However, when a workload cannot run in a container (for example, when it depends on tools that must be installed manually or requires Windows desktop support), virtual machine images such as AMIs may be the more suitable solution.

Choosing among GitHub, GitLab, and Jenkins is less clear-cut, as each CI platform has strengths and weaknesses.

  • GitHub offers a marketplace with numerous workflow actions, but what sets it apart is the ability to consolidate all instance configurations and test stages within a single YAML file. This centralized approach makes GitHub a good solution for comprehensive control and management of individual test cases from one location. However, we found it lacks scalability control options, such as managing idle instances and idle time.
  • GitLab also provides a variety of executors, comparable to actions and plugins, which facilitate integration with a range of cloud services. It provides scalability control, such as idle time and the number of instances available at specific times of the day. However, it has a complex setup and configuration process, and teams must access the runner manager instance to modify certain hardcoded configurations.
  • Jenkins, for us, stood out as easy to set up and manage. It can connect with repositories hosted on GitLab, GitHub, or other web-based Git™ repositories using webhooks, and automatically trigger pipelines when code is pushed. Jenkins offers an extensive collection of plugins, along with a notification and plugin manager system.

In conclusion, we recommend using containers when possible and basing your decision on which CI platform to use on the specific requirements of the project or factors such as a team’s existing knowledge or use of a particular platform.

Published 2024
