Using the Cloud to Enhance Model-Based Design Workflows
Overview
Cloud computing has become an essential technology for organizations of all sizes, providing a variety of benefits that can help businesses run more efficiently and cost-effectively. One of the primary advantages of cloud computing is that it allows organizations to access computing resources on-demand, without having to invest in expensive hardware upfront.
In this webinar, you will learn how to easily run MathWorks tools in the cloud to enhance various workflows related to model-based design, and how to create cloud-based applications from your Simulink models.
Highlights
- Using Simulink in the cloud for interactive model development
- Speeding up co-simulation workflows by leveraging cloud resources
- Executing CI/CD pipelines in the cloud
- Deploying Simulink-based digital twins in the cloud as microservices
About the Presenters
Antti Löytynoja is a senior application engineer at MathWorks, focusing on data analytics and AI, including predictive maintenance. In addition, Antti spends a lot of time helping customers integrate their MATLAB and Simulink applications with other environments, such as the cloud. Antti has worked for MathWorks for over 12 years; prior to joining MathWorks, he worked as a researcher at Tampere University of Technology, focusing on signal processing and AI.
Håkan Pettersson is a senior application engineer at MathWorks, focusing on ASIC/FPGA implementation and verification workflows as well as CI/CD workflows using the cloud. Håkan has been with MathWorks for 5 years; before that he worked at one of the major EDA tool vendors, focusing on synthesis and verification for ASIC/FPGA.
Olof Larsson is an application engineer at MathWorks, supporting customers with cloud integration, application deployment and large-scale computing. Olof has been at MathWorks for 6 years.
Recorded: 15 Jun 2023
Hello, and welcome to this webinar, Using Cloud to Enhance Model-Based Design Workflows. Before we get into the material, we as presenters would like to introduce ourselves. My name is Olof Larsson, and I work as an application engineer in Stockholm, in Sweden, and I support customers with integrating their MATLAB and Simulink workflows to the cloud. I also work with computational scaling and application deployment. I'm now going to hand it over to Hakan.
Thank you, Olof. So yes, I'm Hakan Pettersson, also working here in the MathWorks Stockholm office. My main areas are ASIC and FPGA workflows, but I also do a lot of things around Docker containers and continuous integration, so that will be my section later on in the presentation. And let's hear from Antti.
Hello, everyone. My name is Antti Loytynoja, and I'm also an application engineer at MathWorks. I've been working for MathWorks for more than 12 years. And at MathWorks, I focus on MATLAB applications, things like data analytics, AI, and predictive maintenance, and also the deployment of MATLAB and Simulink algorithms.
Thank you, Hakan and Antti. And with that, let's get into the material. I'm assuming many of you know or have an idea of what the cloud is. But for those who don't, you can think of it like a collection of servers that run somewhere and that users can connect to and work with. And they provide services such as compute, data storage, and web applications.
More interesting, I think, is what we can use these services for in our model-based design workflows. And we see a lot of interest, for example, from people who want to run MATLAB and Simulink in the cloud, accessing it through the browser, or who want to utilize the computational power of the cloud by scaling up their Simulink simulations in cloud clusters.
We also see a lot of interest in automating testing and integrating with continuous integration services that are available in the cloud. And we also see a lot of interest when it comes to building digital twins, so a model of your asset that you can run in the cloud and monitoring it, for example, for predictive maintenance applications.
And these are all workflows that we're going to take a look at in this webinar. But before that, I want to discuss the different personas that might be interested, and maybe you identify with one of them. So maybe you're an engineer, or maybe you're a cloud admin or architect.
And these personas have somewhat different needs. So on the engineering side, you're typically the MATLAB/Simulink user, and you're not very interested in the underlying architecture of the cloud services, but rather in how you can easily access them and work with your tools, running your model-based design workflows.
On the cloud admin side, by contrast, you're typically interested in the underlying infrastructure, how you can integrate it into your company's infrastructure and comply with your security requirements. Now, from the MathWorks side, we try to provide services that are applicable to both, so we have high-level services like Cloud Center, for example. And on the more detailed side, we have the reference architectures, so we can provide infrastructure as code as well.
So looking at how these map to the services that we provide, you can see the more high-level ones to the left, with the MathWorks Cloud. And then to the right, we have some different services that you integrate into, for example, a private cloud or a public cloud. And the services that we're going to go through in this presentation are the ones that I have marked here in blue.
So with that, we're going to get into the first part of the agenda, where we're going to take a look at how we can integrate our model-based design workflows in the cloud. The easiest way to do this is through the use of MATLAB Online, and this is available to all our users with an individual license or an academic license. And if you have that, you can just go to matlab.mathworks.com, log in, and, for example, start Simulink and start working with your models.
You don't need to install MATLAB locally; everything is taken care of through the browser. As we said, collaborative workflows are a big reason for going to the cloud. We provide a storage solution, MATLAB Drive, so you can store and share your models with your colleagues or friends.
For customers that require more customization, we have the MathWorks Cloud Center, which is what we would call a platform as a service. It enables you, without needing to be a cloud expert, to set up virtual machines running on Amazon Web Services. You can think of a virtual machine like a regular workstation: it comes with the operating system, libraries, and applications like MATLAB and Simulink, but it all runs virtually in the cloud.
You can customize what kind of machine you want to run, and we're going to take a look at that in just a minute. One thing that we see as an important driver for going to the cloud is reducing costs. To support that, we provide a service where the machine will automatically terminate if it's idle for too long, which helps you avoid spending unnecessary resources.
So I go to Cloud Center. I log into my MathWorks account. And then I can create my virtual machine. I use this web application that we provide. I can make some customization. For example, what operating system, which MATLAB version, and also where the machine is going to run.
Most importantly, I think, is selecting what type of machine we're going to run. You can pick a compute-optimized machine, a memory-optimized one, or even a machine with a GPU. I will set some login credentials for my machine. And if you have a license server, you can connect your virtual machine to it, so the machine will automatically be able to access it and check out your license.
It takes a few minutes to start, and then I can connect to it either through a web interface directly in my browser, as I'm going to show you here, or through remote desktop, if you prefer that. And then my machine is running and connected. I can start MATLAB and begin doing my model-based design workflows in the cloud.
For customers that require even more customization, we have the cloud reference architectures, which is what we would call infrastructure as a service. The cloud reference architectures consist of a few different parts: machine images that contain your software, so MATLAB and Simulink; customizable templates; and documentation to make it easy for you to get started.
These are all available on GitHub for free, so you can just go there, clone or download the templates, run them, and then you're up and running. The templates are all runnable as-is, so you don't need to know much about them. You can start one, you'll get your virtual machine running directly, and then you can use MATLAB and Simulink in the cloud.
But typically, we see these being of interest to cloud admins who want to customize them or integrate them into their infrastructure. So we have written them in a cloud-native way, meaning that if you have some cloud experience, you should be able to understand how they work and then customize them to your needs, for example, to comply with your security requirements.
So these are some ways of getting MATLAB and Simulink up and running in the cloud. You can, of course, do it all yourself: create a virtual machine, install MATLAB and Simulink, and run there. That's also an option. Some customers also need to customize their virtual machine images, and for that, on Amazon Web Services for example, we do provide infrastructure as code that you can use.
And this is what's used to build these reference architectures. So you get this infrastructure as code, build your own machine images, and then set up your virtual machines there. So there are a few options. And the reference architectures are available on Amazon Web Services and on Microsoft Azure.
Those were a few examples of how to get MATLAB and Simulink running in the cloud. And why would you want to do it, you might think. Well, we talked about it already. Maybe you want to collaborate, for example, with your colleagues, work with models that are already stored in the cloud.
Say we have a lot of data or large models in the cloud. Instead of sending it back and forth to our development workstation locally, we set up a development workstation in the cloud, so we minimize unnecessary sending of data. It's a way of saving resources.
There's also the possibility of accessing hardware that we don't have available locally, for example, large multi-core CPUs or GPUs. And this also provides us with a way to scale up our models by utilizing parallel computing: we use the one virtual machine to prototype and then scale up.
And this would bring me into the next part of our agenda, and this is how we can scale our simulations in the cloud. And using the cloud for scaling simulations is very powerful as the cloud provides us with an environment that very easily lets us scale up and build large compute clusters.
So we're going to take a look at a few different options on how we can access those, build those, and run them from MATLAB and Simulink. And it all starts with the Parallel Computing Toolbox, and this gives you the possibility to run, for example, parallel simulations on a machine that has multiple CPUs, or you can also utilize some powerful compute on a GPU. And this is available both on your local workstation and on virtual machines in the cloud.
Many times, you do want to scale up further in order to run much larger simulations, or maybe get your results much faster. So maybe you're going to run on 100 or 1,000 cores. Then what you use is MATLAB Parallel Server.
And this allows you to run MATLAB and Simulink in parallel on multiple machines or virtual machines, so typically a cloud cluster. It uses the same functions as the Parallel Computing Toolbox, which means that you can prototype, build your models, and test them in parallel on a virtual machine, and then scale up without needing to change anything other than where they run, the target, you could call it.
You access it directly from MATLAB. This means that you, as a Simulink user, don't need to know much about the cloud infrastructure. You just say, I want to run my simulations here. So we try to make it as easy as possible for you, as the user, to access this.
And one example where you might want to use this: let's say you've built a vehicle model, and you want to sweep some parameters to see how they affect the fuel consumption. You built the model in Simulink, and you also integrated an engine model built in GT-Power, which is a third-party tool.
And this provides a very high fidelity engine model. However, it takes a long time to run; even in parallel, each simulation will take some time. So what we're going to do is build an approximate model. We're going to run a lot of simulations to build an approximate engine model, and then, when we have it, we can scale up and do our large-scale simulations to estimate our fuel consumption.
We prototype this locally, and then we send it to a computational environment in the cloud. All the computations are run in the cloud, and we can then access them locally. This is our Simulink model that we built. The engine model looks like this. I'm just going to start and run this now in parallel.
I'm sending models to the cloud and starting to run the simulations in parallel. What you see here is the Simulation Manager. It allows us to monitor our parallel simulations, so we see how the simulations start and how they finish, shown by the green squares and dots there.
As the simulations are progressing, you can see how we can build up the engine maps that we're going to build, our approximate engine models. As the results come in in parallel, we continue to get them, and we can monitor this.
You can also see on the left side that we can plot some other variables of interest as well. If there were an error in one of the simulations, we would see it in the Simulation Manager, so we can afterwards go in and look at what went wrong. But the other simulations will still run, of course.
And if you look to the lower part of the screen, you can see that in this case, we ran on 32 parallel workers. We could have scaled up further, but we saw that for this example, the speed up we got from 32 workers was sufficient.
We're just going to wait a few more seconds for all of the simulations to finish, and then it's going to automatically clean up and clear our cloud cluster. Now here's what we got: about a 20-times speedup, which is quite good, I think. It's not perfectly linear, but you will always get some overhead running in parallel, of course, and also in sending everything to the cloud.
How do we do this? Well, we have built-in support in many of our toolboxes, for example, for your Simulink design work or testing. This built-in support means you can very easily get access to the power of parallel computing without any kind of coding or being a parallel computing expert. You maybe just need to press a button or check a checkbox.
In many cases, we see that you have a model, and you want to run parallel simulations on it, for example, as a parameter sweep. We have a graphical support tool for this that allows you, with just a few button clicks, to run your simulations in parallel.
In the Simulation tab, you select Multiple Simulations. You check the checkbox to run this in parallel, and then you select where you want to run it, so either on your local machine, your virtual machine, or maybe on a cloud cluster that you have available.
We also have a programmatic interface, if you prefer that, with the parsim command. This is typically, again, for a parameter sweep, where you can run many simulations in parallel. The simulations are independent; they don't depend on each other or on the order you run them in, so you can run them all at the same time or a few at a time, as you prefer. You build your vector of simulation inputs, defining which parameters you want to sweep over, for example. Then you call parsim, and it will run in parallel locally or on your cluster.
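As a rough illustration of the programmatic workflow just described, a parsim-based sweep might look like the following sketch; the model name `vehicleModel` and the swept variable `engineGain` are placeholders, not names from the actual demo.

```matlab
% Hypothetical parameter sweep with parsim; model and variable names are
% placeholders for your own project.
mdl   = "vehicleModel";
gains = linspace(0.5, 2.0, 32);          % parameter values to sweep over

% Build a vector of SimulationInput objects, one per simulation
simIn(1:numel(gains)) = Simulink.SimulationInput(mdl);
for i = 1:numel(gains)
    simIn(i) = simIn(i).setVariable("engineGain", gains(i));
end

% Run all simulations in parallel, on the local pool or the selected
% cluster profile, with the Simulation Manager for monitoring
simOut = parsim(simIn, "ShowProgress", "on", ...
                "ShowSimulationManager", "on");
```

Because the simulations are independent, the same script runs unchanged whether the pool is local or a cloud cluster; only the selected cluster profile changes.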
How can we then set it up? Well, similarly for virtual machines, we have the MathWorks Cloud Center that allows you to set up cloud clusters on Amazon Web Services machines. And we provide, again, this web application, so it should be easy for you, even if you're not a cloud expert, to set it up. You'll select a few things like MATLAB version, what type of machine you want to run, how many machines, of course, because you can build this on many machines to scale up further.
You can find and access it directly through MATLAB, so you don't need to worry much about the underlying infrastructure. And we also provide these services to make it easier for you to control costs by both auto resizing the cluster and automatically terminating it if it runs idle for too long.
And we're seeing interest in combining these two Cloud Center services that we saw. Maybe you want to run all of your development and scaling in the cloud. You first create a virtual machine using Cloud Center, and then create a cloud cluster in Cloud Center that you can scale up through.
And this means that you don't need to install MATLAB locally; you can just access it through the web apps that are available. The benefit is that if you move all of your workflows to the cloud, you don't need to send data and models back and forth to and from the cloud; you keep everything in the same environment. That can reduce both transfer costs and time, so it's an efficient way of working with these services together.
We also have the reference architectures for Parallel Server available on Amazon Web Services and Microsoft Azure. And similarly to the ones you saw from virtual machines, they consist of the machine images, the runable templates, and the documentation.
And it helps you set up this infrastructure with the compute machines. You have a scheduler that takes care of the resourcing in the cluster, and you're also connected to your license server. We also provide reference architectures for the license servers, both running in Amazon Web Services and Microsoft Azure.
Typical use cases where customers want to run parallel computing with Simulink in the cloud are, for example, the parameter sweeps we saw, Monte Carlo runs of Simscape models, or large-scale co-simulations like the one we showed.
All right, so that was it on the computational scaling side. I'm now going to hand it over to Hakan.
OK, so let's now take the step into continuous integration. What is it? Well, continuous integration is an agile methodology best practice in which developers regularly submit and merge their changes into a central repository. These change sets are then automatically built, qualified, and released.
We see that many of our customers are increasingly starting to use Docker containers for either running the build process and/or delivering the finished result, the application itself, into Docker containers as microservices or something else.
The good thing with that is that you can easily use, for instance, Kubernetes to scale the application in the cloud. Looking at continuous integration platforms, you can see them running either locally, on premise, or we can see that customers are using the cloud-hosted platforms, as well.
On premises, we commonly see Jenkins, GitLab CI/CD, Bamboo, TeamCity, or similar systems. And in the cloud, we see GitHub Actions, Azure Pipelines, and such things. If you don't see your CI system here, don't worry.
We work with other systems as well; these are just a few of them. Contact us if you need help getting started with your system. Otherwise, look into the documentation; we have more support there.
I spoke about Docker containers. What is a Docker container? Well, a Docker container is an executable package containing an application and all its dependencies. In our case, it would be MATLAB and Simulink and all the relevant toolboxes needed to run or develop the application.
The benefits of using Docker containers are that they are portable: if it runs locally, it runs in the cloud as well, because it's the same container. They are also scalable: if you have one container, you can easily run two, 10, 15, and so on, just using the same container and starting it multiple times.
We have several different options for creating a container with MATLAB. For instance, you can start with the example Dockerfile that we have released on the MathWorks reference architectures GitHub. It includes step-by-step instructions on how to tailor it. As prepared, it installs only MATLAB.
But the step-by-step instructions show you how to install Simulink, how to install different toolboxes, and so on, so you get a customized, tailored Docker container for your particular project. For every release now, going back many releases, we also publish to Docker Hub, so you can go to Docker Hub and download a container that runs MATLAB. And this one can be used either locally or in the cloud as well.
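A tailored image of the kind described here might be sketched as follows. This is a hedged sketch, not the exact Dockerfile from the reference architecture: the release tag, install destination, and product list are assumptions to adapt to your project and license setup.

```dockerfile
# Sketch: start from the MATLAB image published on Docker Hub and add
# products with mpm (MATLAB Package Manager). The destination path is an
# assumption; check the install location used by your base image.
FROM mathworks/matlab:r2023a

USER root
RUN wget -q https://www.mathworks.com/mpm/glnxa64/mpm && \
    chmod +x mpm && \
    ./mpm install --release=R2023a \
        --destination=/opt/matlab/R2023a \
        --products Simulink Simulink_Check && \
    rm -f mpm
USER matlab
```

The same image can then be used both by developers locally and by the CI system in the cloud, which is what makes the builds reproducible.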
And for those who are using NVIDIA GPUs, for instance for deep learning workflows, we also release Docker containers to the NVIDIA NGC registry that you can download. There are several benefits of using continuous integration, which makes it very popular. Let's take a look at some of them.
CI provides a consistent, repeatable, automated process, which saves time and allows developers to focus on what's important. It's always repeatable; it runs the same automated process every time. We also see that manual testing is great, but it's often based on days-old snapshots and lacks repeatability.
Continuous integration always runs the tests against the latest source base. You don't end up having tested something that someone else changed later on. That increases the quality of the model itself.
We also see that continuous integration enables developers to deliver higher quality faster: repeatable processes with built-in quality assurance lead to higher quality and faster development. Also, a very important part is that collaboration increases. Developers using continuous integration have a defined process for how to manage changes and how to merge their code into the production line.
And finally, a very important point is that it provides an extensive audit trail. For every change that makes its way into the system, one can identify who made the change, who reviewed it, the nature of the change, any number of related test results, and also the artifacts, the documentation, and so on for that change. Everything is traceable.
OK, so let's see what it looks like, using a continuous integration system. We start by using MATLAB projects. The MATLAB project itself we connect to a source control system. And when we have built our model, and we want to check it into the source control system, we press the Commit button. And that one goes into the source control system.
The continuous integration server reacts to that and starts a pipeline. A pipeline, in the CI world, is a series of tasks that is always run in the same sequence of stages. In this case, it would run a build stage, then a test stage, then package and deployment stages, let's say.
In the background, it runs MATLAB in batch mode. It could run a script, or, since we released a new build tool in the previous release, R2022b, it could run that. It's up to you, depending on what system you are using, but it runs a script in some way.
We also commonly see that these steps are run in a Docker container, as I previously mentioned. In that case, the continuous integration server runs a Docker container that executes the build steps.
And the build steps are customizable, so if you're using the build tool, it's designed to be enhanced or extended with different stages. In this case, we added a static model analysis stage before the model testing. Then we generate code, and that generated code is analyzed statically, maybe using Polyspace or something like that.
And then you can run back-to-back testing or hardware-in-the-loop testing. It's all defined in this run script or build file, I would say, but it's run by the continuous integration server.
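One possible sketch of such a build file, using the MATLAB build tool mentioned above, is shown below. The folder names `src` and `tests` are assumptions, and the `codeIssues` static-analysis call requires a recent MATLAB release; treat this as an illustration of the staged-pipeline idea, not the exact file from the demo.

```matlab
function plan = buildfile
% Sketch of a build file for the MATLAB build tool (R2022b+).
plan = buildplan(localfunctions);

% Run the static analysis ("check") task before "test",
% and make "test" the default task
plan("test").Dependencies = "check";
plan.DefaultTasks = "test";
end

function checkTask(~)
% Static analysis stage: fail the build on Code Analyzer findings
issues = codeIssues("src");            % requires a recent release
assert(isempty(issues.Issues), "Code Analyzer found issues.")
end

function testTask(~)
% Test stage: fail the build if any test fails
results = runtests("tests", "IncludeSubfolders", true);
assert(~any([results.Failed]), "Some tests failed.")
end
```

The CI server would then invoke something like `matlab -batch "buildtool"` so the same staged process runs identically on every commit.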
And as the continuous integration server runs, you have access to the logs. These are accessible as it runs, but also after it has finished; they remain accessible for all the runs you have done. So for all the builds that have been run on your project, you can go back in time and see what happened. In this case, we have a failing test case.
Looking at it more concretely, it works like this. We have a MATLAB project. We have a model that I've made a change to, so we go to the Commit button over here. We write a good commit message and submit it to the source control system.
We have a look at the branching and see that we are ahead of the main branch, but we need to see if this change is suitable to be merged later on. We push it to the source repository, and this triggers the pipeline.
And in this case, we are connected to GitLab, so we can go in to GitLab and have a look at our project. And we can see that it's now actually starting our pipeline. It's running. And I can click on that one and go into the Run process and take a look at what's going on.
And here we can see exactly what's going on for my project and this change: what is running, how many tests are passing, and so on. And this is stored for as long as you keep it in the continuous integration server.
We also integrate with the continuous integration server in a more detailed way, so that we can produce artifacts for the CI server itself; it can be good to have a quick overview of what happened and how many tests passed. So here we have JUnit support, and we can generate TAP results reports or Cobertura code coverage reports that can be easily accessed and viewed on the continuous integration server itself.
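A test run producing these CI-friendly artifacts could be sketched like this; the file names and the `src`/`tests` folders are examples, not from the demo project.

```matlab
% Sketch: run the test suite and emit JUnit results plus a Cobertura
% coverage report for the CI server to display.
import matlab.unittest.TestRunner
import matlab.unittest.plugins.XMLPlugin
import matlab.unittest.plugins.CodeCoveragePlugin
import matlab.unittest.plugins.codecoverage.CoberturaFormat

suite  = testsuite("tests", "IncludeSubfolders", true);
runner = TestRunner.withTextOutput;

% JUnit-format results, understood by most CI dashboards
runner.addPlugin(XMLPlugin.producingJUnitFormat("test-results.xml"));

% Cobertura-format code coverage for the source folder
runner.addPlugin(CodeCoveragePlugin.forFolder("src", ...
    "Producing", CoberturaFormat("coverage.xml")));

results = runner.run(suite);
assertSuccess(results);   % error out so the CI job is marked failed
```

The CI job then just needs to publish `test-results.xml` and `coverage.xml` as artifacts to get the pass/fail trends shown in the demo.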
Here we can see a project where we had a number of tests passing, three tests, and then someone added a test, but it fails. So we now have four test cases: three are OK, the green ones, and the red one is failing.
And this is traced for the project over all the builds so you can see what's going on. Are more and more test cases passing, or do we have a negative trend that more and more tests are failing? We need to go back into the project and see what's going on in that case.
But also very important is that all the artifacts generated by the build process are accessible. You can either download them to your local machine and look through the test reports, or you can access certain artifacts directly, so you can go into the web view and see, for example, what your code coverage report looks like.
And these things are commonly used in safety-critical workflows, where you need this traceability, because the artifacts are stored for every build. So you can follow the trend of the test reports: did a certain change go in, and did it actually affect the number of tests failing? And you can see who checked in the test case or the model that broke the test.
And as I mentioned, if something does go wrong, say we have those three tests that were passing and then all of a sudden a fourth test case is failing, we can go into the continuous integration tool and download the artifacts. In this case, that would be the failing test report or test result file.
We can download it to our local machine and debug it locally to figure out what is causing the new test case to fail. Is there something wrong in the model that we need to fix, some tolerances, or something like that?
Another thing that we have done at MathWorks: last year, we released the CI/CD Automation for Simulink Check support package. This package comes with a customizable process modeling system to define your build and verification process, and it uses a build system to generate and optimally execute that process in your CI system.
It comes with a Process Advisor app to deploy and automate your prequalification process, which connects directly to your Simulink environment. And it also has integrations with common CI systems, such as Jenkins and GitLab: it generates YAML files for GitLab and also the Jenkins declarative pipeline syntax.
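A minimal GitLab pipeline definition along the lines discussed in this section might look like the following sketch. The image tag, license-server variable, stage name, and artifact file names are all assumptions to adapt to your setup.

```yaml
# Sketch of a .gitlab-ci.yml job running a MATLAB build in a container.
stages:
  - test

matlab-tests:
  stage: test
  image: mathworks/matlab:r2023a          # example image tag
  variables:
    MLM_LICENSE_FILE: "27000@license-server.example.com"  # example server
  script:
    - matlab -batch "buildtool"           # run the default build tasks
  artifacts:
    when: always                          # keep artifacts even on failure
    paths:
      - test-results.xml
      - coverage.xml
    reports:
      junit: test-results.xml             # shown in GitLab's test view
```

With `when: always`, the failing-test artifacts described earlier remain downloadable for local debugging even when the job fails.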
And with that, let's move over to the deployment phase.
Thanks, Hakan. So the last part of this presentation discusses how you can deploy Simulink models and integrate them into different kinds of applications. Typically in model-based design workflows, deployment means automatically generating C or C++ code for the control software, which is then deployed on an embedded system.
However, you can also repurpose your existing Simulink models and use them, for example, as digital twins, which are tuned to match the behavior of a specific machine or system. Digital twins follow the behavior of the physical system, which during its lifetime might undergo aging and degradation as it's being operated.
We also see many of our customers develop business or sales applications where Simulink plays a role, such as applications to configure and size the solution. For example, salespeople can enter requirements from their customers into the application, and by leveraging simulations in the background, an optimal configuration can be found.
To integrate Simulink models into these types of applications, generating C code for the model is usually not the best solution. A modern and flexible way of integrating Simulink or MATLAB models into various applications is to use something called microservices.
Microservices are small, independent software components that are scalable, reusable, and easy to maintain. By using microservices, you take a modular approach instead of a monolithic architecture. You communicate with microservices through well-defined APIs, such as a REST API, and basically any application capable of communicating through that API can use the microservice.
Let's have a look at how we can deploy Simulink models as a microservice. In this demo, the goal is to develop a digital twin based on a Simulink model that was originally built to help design the system. As you know, digital twins are an up-to-date representation of a real, physical system, and operational data from the asset is used to periodically tune the digital twins so that they stay up to date.
And this needs to be done online. That is, typically in the cloud, as part of some kind of IoT pipeline. This can be achieved by first designing a parameter tuning routine in MATLAB and then packaging that tuning algorithm along with a Simulink model into a microservice image that can be run at scale in the cloud and used by any application. Once the model is updated, asset specific simulations can be run, and the operation of the asset can be optimized, for example.
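As a hedged sketch of what calling such a microservice could look like from a client application: the host, port, archive name, and function name below are hypothetical, and the request shape follows the general MATLAB Production Server-style REST convention of posting JSON inputs and reading JSON outputs.

```matlab
% Sketch: invoke the deployed tuning function over REST from MATLAB.
% URL components and the response shape are assumptions for illustration.
url  = "http://twin-service.example.com:9910/pumpTwin/estimatePumpParameters";

% JSON body: number of outputs requested and the input arguments (asset ID)
body = struct("nargout", 1, "rhs", {{"asset-42"}});

opts = weboptions("MediaType", "application/json", "Timeout", 60);
response = webwrite(url, body, opts);

% Tuned model parameters returned by the service; the exact structure
% depends on what the deployed function returns
tunedParams = response.lhs;
```

Because the interface is plain HTTP and JSON, the same endpoint can equally be called from Python, a dashboard, or an IoT pipeline component.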
Now, I was talking about parameter tuning. What do I actually mean by that? Consider this scenario where we have a system consisting of pumps, valves, and motors, and a Simulink model that was used to design the system.
If the physical system has been used for years in the field, chances are that its behavior doesn't exactly match the behavior of the Simulink model, which may have been developed to model an ideal system. This difference is shown by comparing the pressure output of the pump system and the Simulink model.
To tune the Simulink model and match the behavior, we can vary the parameters of the Simulink model to change the pressure output so that the responses match and the Simulink model becomes a digital twin of a physical asset.
Matching the responses is essentially an optimization problem: we're trying to minimize the difference between the responses. To set up and solve this problem, you can use Simulink Design Optimization and the Parameter Estimation app that ships with it.
With the app, you can choose the parameters to tune, select the tuning data, and the signals to match. Then you can solve the optimization problem, examine the convergence, and visualize the results. Finally, you can automatically generate a MATLAB function that repeats the tuning routine without the interactive app.
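To give an idea of what such a generated routine contains, here is a minimal sketch using the programmatic Simulink Design Optimization API. The model name `pumpModel`, the parameter `Rp`, the sensor block path, and the local cost function `pumpCost` are all illustrative placeholders, not the exact names from the demo.

```matlab
% Select a model parameter to tune and constrain it to physical values
p = sdo.getParameterFromModel('pumpModel', 'Rp');
p.Minimum = 0;

% Define the experiment: the measured pressure signal we want to match
exp = sdo.Experiment('pumpModel');
sig = Simulink.SimulationData.Signal;
sig.BlockPath  = 'pumpModel/Pressure Sensor';   % logged output block
sig.PortIndex  = 1;
sig.Values     = timeseries(measuredPressure, t); % tuning data from the asset
exp.OutputData = sig;

% pumpCost is a hypothetical local function that simulates the model with
% candidate parameter values v and returns the tracking error
estFcn = @(v) pumpCost(v, exp);

% Solve the optimization problem with a least-squares solver
opts = sdo.OptimizeOptions;
opts.Method = 'lsqnonlin';
[pOpt, optInfo] = sdo.optimize(estFcn, p, opts);
```

The app generates a more complete version of this, but the structure is the same: pick parameters, define the experiment, and hand a cost function to the optimizer.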
This function supports deployment to the cloud and can be easily modified to access operational data from the assets in cloud-based data storage, such as databases, data warehouses, and file storage. This is the function we want to deploy in the cloud so that it can be integrated into IoT systems.
Let's see what this looks like. What we're seeing here is a MATLAB function that we want to deploy. It's called Estimated Pump Parameters. As input arguments, it takes the ID of the asset because we want to tune the Simulink model to a specific asset. And then it takes some variables associated with the optimization.
As tuning data, we are using data that is stored on Amazon Web Services, specifically on S3 buckets. Here I'm setting up some environment variables that I can use to access the data. And then I can read the data into this function from the S3 bucket.
Then I have another function, which actually performs the optimization with the data on S3 buckets. This is the function that was automatically generated from the Parameter Estimation app. Here, for example, we have the optimization method. And eventually, this function returns the tuned parameters for the Simulink model.
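A sketch of what this deployable function might look like is shown below. All names here, the function name, the input arguments, the S3 bucket, and the file layout, are illustrative placeholders rather than the exact code from the demo.

```matlab
function tunedParams = estimatePumpParameters(assetID, maxIterations)
% Tune the Simulink pump model to match a specific physical asset.

% Credentials for the S3 bucket; in production these would be injected
% into the container environment rather than hard-coded here.
setenv('AWS_ACCESS_KEY_ID',     '...');
setenv('AWS_SECRET_ACCESS_KEY', '...');
setenv('AWS_DEFAULT_REGION',    'eu-north-1');

% Read the operational (tuning) data for this asset from the S3 bucket.
% MATLAB can read s3:// paths directly with readtable and datastores.
dataFile   = "s3://pump-telemetry/asset-" + assetID + "/pressure.csv";
tuningData = readtable(dataFile);

% Run the tuning routine generated by the Parameter Estimation app
% (parameterEstimation is a placeholder name for that generated function)
tunedParams = parameterEstimation(tuningData, maxIterations);
end
```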
Let's see how we can use this function. Let's try it out from the MATLAB command window. Just type the name of the function. Give an asset ID and then the optimization variables. And we see that this starts an optimization routine.
Now, this is an iterative process where we repeatedly run the Simulink model, trying to match its response to the tuning data. And here, we see the tuned parameter values. If we want to package this into a microservice container, we need to do two things.
First, we need to package this function into an archive with a single command. And once this is finished, we can then create the container image from this archive. Along with the archive, everything that is needed, such as MATLAB runtime, is packaged into the Docker image.
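These two steps map to two MATLAB Compiler SDK commands, sketched below. The function, archive, and image names are illustrative; the commands themselves are the standard packaging API.

```matlab
% Step 1: package the tuning function into a deployable archive
res = compiler.build.productionServerArchive('estimatePumpParameters.m', ...
    'ArchiveName', 'pumpEstimator');

% Step 2: build a microservice Docker image from the archive. Everything
% needed at run time, including MATLAB Runtime, goes into the image.
compiler.package.microserviceDockerImage(res, ...
    'ImageName', 'pump-estimator');
```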
And this might take a few minutes to run. But in this case, it actually runs quite fast because I've already done this, and some of the parts are already available for me. Once the packaging is finished, along with the Docker file, I get a Getting Started file, which includes useful information about the microservice.
Here, I'm copying the Docker run command, which I can execute on the command prompt like this. And after some seconds, I have the microservice running and listening for connections from client applications.
Here I'm using Postman to call the running microservice. I can call it using a URL where the function name is part of the URL, specify the input arguments as a JSON string, and then send the request. After the MATLAB function has executed, I see the results, the tuned parameter values, also as a JSON string.
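Instead of Postman, you could also call the microservice directly from MATLAB. The sketch below assumes the illustrative archive and function names from earlier and the default microservice port; the JSON body follows the RESTful API convention of passing the number of outputs (`nargout`) and the input arguments (`rhs`).

```matlab
% Call the running microservice over its REST API
url  = 'http://localhost:9900/pumpEstimator/estimatePumpParameters';
body = struct('nargout', 1, ...
              'rhs', {{'asset-42', 100}});  % asset ID, optimization setting
opts = weboptions('MediaType', 'application/json', 'Timeout', 120);

% The tuned parameter values come back as JSON and are decoded to a struct
response = webwrite(url, body, opts);
```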
Now, there are actually two ways to create microservices from MATLAB. First, there is MATLAB Production Server, which is server software licensed from MathWorks. It is a turnkey solution to expose your MATLAB and Simulink applications as a scalable service.
It's easy to set up in the cloud using a reference architecture, and you can use an interactive dashboard to manage and monitor your applications. Requests have very low latency, so MATLAB Production Server is an ideal solution for applications that use streaming data.
Then there's the Docker microservice that I used in the example. Docker containers can be run almost anywhere and at massive scale, so it's a very flexible solution. To run the containers, you don't need any licenses; you only need MATLAB Compiler and MATLAB Compiler SDK to create the container image. And as you saw, it's very easy to create the container image, just two MATLAB commands.
So with that, let me summarize what we discussed today. Cloud services can be used in many ways in workflows related to model-based design. You can do your design work on powerful cloud machines and speed up the work.
You can further scale simulations on compute clusters and, for example, run large parameter sweeps. You can run your continuous integration and continuous deployment jobs in the cloud, and MATLAB and Simulink integrate with those workflows.
You can integrate Simulink and MATLAB applications with other applications by deploying them as a microservice that runs in the cloud. Resources can be easily set up using reference architectures and MathWorks Cloud Center so that you don't have to manually install software on cloud platforms.
Finally, if you'd like to use our tools in the cloud but don't quite know how to do it, let us know. We can help you. This concludes our webinar. Thanks for watching.