Access Factory Floor Data from MATLAB to Build and Deploy Anomaly Detection Algorithms
Overview
Access your factory data to build advanced analytics like predictive maintenance and anomaly detection algorithms. With MATLAB, you can design custom smart manufacturing solutions that can be deployed into your production environment. You can access plant data directly from a variety of sources such as OPC UA servers, AVEVA PI data historians, raw sensors and logged files. You can use AI to detect anomalies in data and predict future performance. We’ll demonstrate how to access your plant data, build predictive maintenance algorithms, and move your MATLAB algorithm into your production environment where it will run on data as it streams in.
In this webinar, you will learn how to:
- Access plant data from a variety of sources
- Develop AI algorithms for predictive maintenance using low-code tools
- Deploy algorithms in a production environment for online anomaly detection
About the Presenters
Arvind Hosagrahara is the technical lead of a team that helps organizations deploy MATLAB algorithms in critical engineering applications, with a focus on integrating MATLAB into enterprise IT/OT systems. Arvind has extensive hands-on experience developing MATLAB and Simulink applications and integrating them with external technologies. He has helped design the software and workflow for a variety of production applications focusing on robustness, security, scalability, maintainability, usability, and forward compatibility across the automotive, energy and production, finance, and other industries.
Eric Wetjen is the Product Marketing Team Lead for Test and Measurement. He leads the product marketing and strategic planning efforts for the Test and Measurement products which include Industrial Communication Toolbox, Vehicle Network Toolbox, Instrument Control Toolbox and Data Acquisition Toolbox. Eric enjoys working with customers to get input on requirements and product direction with the end goal of making it easier for MATLAB and Simulink users to access and work with data from industrial plants, automotive vehicle networks, benchtop test equipment, and data acquisition systems.
Adarsh Narasimhamurthy leads the Predictive Maintenance Toolbox software development team. He focuses on workflow and AI algorithm development and deployment for data-driven predictive maintenance applications. Prior to that, he worked on enabling analytics on cloud-based IoT platforms. Adarsh earned his M.S. and Ph.D. in Electrical Engineering from Arizona State University.
Recorded: 10 Dec 2024
Hi, everybody. Welcome to this webinar entitled Access Factory Floor Data from MATLAB to Build and Deploy Anomaly Detection Algorithms. My name is Eric and I'll be joined later by my colleagues, Adarsh and Arvind, as we illustrate how MATLAB can help you harness your OPC UA data to develop predictive maintenance applications.
Let's briefly go over what we will cover in this webinar. We'll start by describing why anomaly detection is important. Next, we will show a common workflow for developing and deploying an anomaly detection algorithm. This workflow will cover accessing the data, building the predictive model using machine learning, and ways to deploy the model in a production setting. At the conclusion of the webinar, we will summarize the key points and address any questions you may have.
Before we get started, let's talk a bit about what is motivating this topic. You hear a lot about smart manufacturing and Industry 4.0, but these general terms don't help us build a smart, connected factory. We find that, in many cases, anomaly detection is what people are trying to do. Anomaly detection can be useful in a number of different scenarios. For example, if you're monitoring a process and you want to improve that process, detecting anomalies can be the first step.
Another common application is in quality control, where detecting and addressing anomalies can directly improve yield on the production line. Anomaly detection is also frequently used for predictive maintenance of plant equipment. In this case, detecting anomalies can help you predict faults in advance or in real time. We will dig more into this type of anomaly detection application in this webinar. Finally, in the area of test data analysis, we also see anomaly detection being used to remove bad sensor readings and to help understand performance issues.
Let's look at the algorithm development workflow to understand how to actually create these algorithms and how the MathWorks tools can support the creation process. Building an anomaly detection algorithm requires three main steps. The first step is to acquire the data. The data often comes from sensors you may have available on your plant, or may have been collected with a plant historian. Vibration data, flow data, or pressure data are common examples. The data can also be generated from simulations in situations where anomalous data is rare. In any case, in order to build your predictive model, the first step is to bring the data into MATLAB.
The second step is to actually develop the predictive model. This step includes identifying features, training, and testing the model. Finally, the end goal is often to deploy and integrate the anomaly detection algorithm into another system. This could be a SCADA system or some other system that may be running in the cloud. The deploy and integrate step allows the algorithm to run in a production setting as data streams in from your plant. We will now dive into each of these steps in more detail.
Let's look more closely at the data acquisition step. Without data, we can't build the anomaly detection algorithm. MathWorks provides a number of tools to help you access data wherever it may reside. The data you use for your anomaly detection algorithm may come from a variety of places. You may have the data stored in files in formats like Excel or text. You may even have image or audio files that you need access to. In some cases, you may have dedicated databases that store the data that you want to use to build your anomaly detection algorithm. These databases may be accessible with SQL queries, or they may be NoSQL type databases.
Or, and this is the case we're going to talk most about today, you may need to access raw sensor and image data directly, or you may need to bring in industrial data that's being stored in OPC UA or AVEVA PI servers. MathWorks provides a variety of toolboxes and capabilities to access data no matter where it may be collected.
Let's talk a little bit about raw sensor data access. MATLAB provides tools to access data directly from industrial cameras that support the GigE Vision and GenTL standards. You can also access raw sensor data directly from data acquisition devices that may be connected to temperature or strain sensors, for example.
These tools provide apps to get started collecting your data and can generate MATLAB code. Direct access to raw sensor and image data is often an early step in collecting data for building anomaly detection algorithms. However, if you already have your sensor data aggregated in a plant historian or OPC UA server, it will make more sense to directly access the data from there.
If you're watching this webinar, there's a good chance that you're collecting your data using OPC UA. OPC UA is growing in popularity for the collection and aggregation of industrial data, and there's good reason for that. The OPC UA standard provides a common set of information models to describe the data. It also has robust security built in, and the OPC UA servers themselves are very versatile in that they can be installed on PLCs and on the cloud.
As you can see in the diagram here provided by the OPC Foundation, OPC UA can be used to communicate data to the cloud, it can be used to communicate data from field devices to field controllers, and it can even be used to communicate between field devices. If you're using MATLAB, the MATLAB-based OPC UA client typically resides at the level in the diagram where you see the MES or SCADA systems. Let's talk in some more detail about how MATLAB can help you access your data collected using OPC UA.
MATLAB provides the Industrial Communication Toolbox to enable you to easily exchange data with your OPC UA server. This includes both read and write operations. It's important to note that the MATLAB OPC UA client allows you to connect securely to your OPC UA server. Once connected, you can graphically browse the data on your OPC UA server and decide what data you want to bring into MATLAB.
With support for OPC UA subscriptions, you can receive OPC UA data only when certain predefined conditions have occurred. With OPC UA methods, you can trigger actions like starting and stopping a machine directly from MATLAB. If you are working in Simulink, you can use the Simulink blocks to bring OPC UA data into a Simulink model. This is especially useful if you are developing or testing an algorithm that will be deployed on a PLC.
If you're not using OPC UA, it is also possible to access data from plant historians such as AVEVA PI. Plant historians like AVEVA PI are often used in process industries and by utilities companies. The large amount of data stored by these process historians can be a really great source for developing anomaly detection algorithms.
With MATLAB and Industrial Communication Toolbox, you can bring data directly from PI servers into MATLAB. You can view this data using apps provided by the Toolbox, and you can write data back after it's been processed in MATLAB.
OK, let's summarize how you can access your industrial data from MATLAB. The basic function of the Industrial Communication Toolbox is to provide an OPC UA client to let the user connect MATLAB or Simulink securely to an OPC UA server to exchange data. You can also connect to an OPC HDA server if your data resides there. Apps and MATLAB functions are available for connecting to plant historians like AVEVA PI.
Support for exchanging data over MQTT and Modbus is also provided by the Industrial Communication Toolbox. With this broad range of support for industrial data, Industrial Communication Toolbox plays a key role in the workflow for developing anomaly detection algorithms.
Next, I'd like to show a demo of how to find and access industrial data on a connected OPC UA server. In this demo, we will show how to access raw vibration data and process vibration data from the OPC UA server. Let's jump into MATLAB to see how we would do this.
OK, we have a script here that will allow you to retrieve raw vibration data as well as some computed feature data directly from the OPC UA server. So let's walk through the script and then we'll run it. The first section of code here establishes the connection with the OPC UA server and allows you to connect using the security protocols that are set up for your server. In our case, we're using username and password.
The next section of code here allows you to find the data that you're looking for using findNodeByName. So here, we're looking for the anomaly detection training data; in particular, we want to grab three channels of data as well as the label associated with that data. Finally, the third section of code retrieves that vibration data by providing a start and a stop time to let the server know which time range we're looking for.
And the last section of code here just reshapes the data a bit so that it appears as a table. And you can see the end result here will be a table where we have three channels of data as well as their labels.
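For reference, a minimal sketch of this kind of script might look like the following; the server address, credentials, node names, and time range below are all illustrative and will differ for your own OPC UA server:

```matlab
% Connect securely to the OPC UA server (hypothetical host, port, and credentials)
uaClient = opcua('plant-server', 4840);
connect(uaClient, 'myUser', 'myPassword');

% Locate the training-data nodes by name (node names are illustrative)
ch1     = findNodeByName(uaClient.Namespace, 'Channel1', '-once');
ch2     = findNodeByName(uaClient.Namespace, 'Channel2', '-once');
ch3     = findNodeByName(uaClient.Namespace, 'Channel3', '-once');
labelNd = findNodeByName(uaClient.Namespace, 'Label',    '-once');

% Retrieve historical data between a start and a stop time
startTime = datetime('now') - hours(1);
stopTime  = datetime('now');
histData  = readHistory(uaClient, [ch1 ch2 ch3 labelNd], startTime, stopTime);

% Reshape the result into a table of three channels plus their labels
trainData = table(histData(1).Value, histData(2).Value, histData(3).Value, ...
    histData(4).Value, 'VariableNames', {'Channel1','Channel2','Channel3','Label'});
```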
And then, for the second part of the script, what we're going to do is we're going to retrieve feature data from the OPC UA server. And to do that, we're going to use the namespace browser, which allows you to graphically select the nodes. So I'm going to show that live in a second. And then, once we've got the node selected, the nodes that we're going to choose are here, we're going to bring all this data into MATLAB. And then, we're going to display that as a table. So the end result here will be the labels and the 12 features that we just showed.
So now, let's run the script. The namespace browser has appeared. So you remember, I'd like to get the feature data. So my feature data is located in this node here. And you can see those are the 13 different pieces of data that I'd like to bring in. So what I can do is I can select anomaly detection feature data and add child nodes. Now, this gives us all the nodes that we want to bring into MATLAB. And of course, everything that's on the OPC UA server is available here. But for this demo, we're just going to show these 13 nodes. So we hit OK.
And now we can see that the script has executed and the data is in MATLAB. So the feature data here, all of that data has been brought into MATLAB. And if we go look at the vibration data, the vibration data is in this train data table. And if we wanted to look more closely at the vibration data, we could open up the train data and look at, say, channel two, and we could plot that data. And there you go. So that's the raw vibration data, and we have the feature data now in MATLAB.
So that's how it's done. Once the data is in MATLAB, you can easily move on to the next step and start building your anomaly detection model. Now that we've summarized and demonstrated how MATLAB can help you access your industrial data, I am going to hand it over to my colleague Adarsh who will describe the second step in the workflow, how to build the predictive model.
Thanks, Eric. My name is Adarsh Narasimhamurthy. I'm an engineering manager at MathWorks. Today, we're going to be talking about time series anomaly detection for predictive maintenance. So anomaly detection is one type of predictive maintenance algorithm, and it's often the most common one that users get started with when building out a predictive maintenance system. And all it's really about is being able to tell whether something is working as expected or not.
Now, this can be helpful. But oftentimes, we want to go beyond just being able to say a simple yes or no, and that's where we start moving into deeper predictive maintenance algorithms like fault detection and remaining useful life estimation. As you can imagine, being able to answer why something's failing, or even being able to tell when something's going to fail before it's failed is quite useful. But in order to be able to get to that point, we need a lot more data. And more specifically, we need labeled data that has information about where these failure events have occurred.
Let's dig in a little deeper into anomaly detection. There are different types of anomalies out there. Some are relatively straightforward to pick up even without any specialized techniques or tools. Things like point anomalies, or outliers, as they're often called, can be detected pretty easily either through visual inspection or by using statistical methods. But often, there are more complex anomalies.
Things like collective anomalies where the anomaly isn't a single point, but rather, a series of points, or maybe even anomalies across multiple time series. And sometimes, these anomalies will only manifest themselves when we're looking at these time series in conjunction with each other.
In order to be able to recognize those, you either need a lot of expertise and experience, sometimes even decades of experience just staring at the data, or you need to build out some models, some anomaly detection algorithms, to help enable that sort of exploration. But at the end of the day, we're really just looking to answer the question, is this system working as expected?
Now that we've seen different ways that anomalies can manifest themselves, let's talk about two common use cases for detecting anomalies. The first is just data exploration. The possible goals for exploration are to retroactively gain insights about system behavior in the past and label any deviations in the behavior for further investigation.
For this use case, we will start with a large amount of data that we've collected over time. We know there are probably anomalies in this somewhere, but we don't have either the time or the expertise to sift through all of the data to find them. So what we want to be able to do is explore these patterns quickly and efficiently so that we can identify these anomalies with good confidence.
Now, once we have that, we can start building up a small but, hopefully, growing data set where we have actually labeled the anomalies. And even when we've just got a few of them, that's when we can start moving into the AI algorithm development option on the right side of the slide, where we're going to train a model in order to be able to make predictions on new data as it comes in. So the common goal for training an AI model is to quickly detect anomalies in new data as it arrives.
Over the next few slides, we will focus on only the AI algorithm development option. We had a dedicated webinar a couple of weeks ago that dives into a lot more depth into the exploration use case, and I recommend that you watch that if you're interested in learning more about that.
So the next thing we're going to do is look at the workflow of developing an AI algorithm for anomaly detection. It's pretty standard where we start by acquiring data. In this case, we're fetching the data from an OPC UA server. Once the data is accessible in MATLAB, we might want to do some pre-processing or cleaning up of the data. And then, we get into the actual modeling part of it where we first need to do some feature engineering. This is where we take our standard time series data and then convert it into features that can be used to train an AI model.
Now, you might be thinking, why don't we just feed the raw time series in? Why do we need features? Well, a lot of these machine learning models aren't necessarily designed to take just raw time series data. And we are much better served by doing a bit of feature engineering to extract the important information in the time series so that the models can learn effectively.
Further, feature engineering will also reduce the dimensionality, that is, the size of the data that the model needs to learn from, so training is more efficient. I'll go through how exactly we do that and how we start exploring what features to keep as we get into the example. But once we have a trained model, that's when we can actually deploy it, use it in the real world, and start to build out things like a dashboard that Arvind will show in his part of the presentation.
So before going into MATLAB, let's learn more about the example. The example will focus on anomaly detection in an industrial machine where we're using three axis vibration measurements. So we're working with three time series.
We've broken this into a bunch of different chunks. And as you can see, we've buffered the data into 70,000-sample sections for each of the three axes. And what we want to do is label each of those 70,000-point sections as an anomaly or not. So the entire subsequence is going to be classified as an anomaly or not.
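As a small illustration of that buffering step, assuming a single long recording per axis (the signal and segment count below are made up), it amounts to slicing the signal into fixed-length columns:

```matlab
% Hypothetical long recording for one vibration axis
segLen = 70000;                       % samples per section
x = randn(segLen * 250, 1);           % e.g. 250 complete sections of data

% Split into non-overlapping 70,000-sample sections, one per column
nSections = floor(numel(x) / segLen);
sections  = reshape(x(1:nSections*segLen), segLen, nSections);

% Each column (one section of one axis) then receives a single label:
% anomalous or normal
```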
We'll start by bringing in a small amount of raw time series data from the OPC UA server for feature design. And once the features have been designed, we will then bring in features computed on our significantly larger time series data set. And then, we use that for training the model. The trained AI model is then validated against some test data to measure its performance.
So switching over to the MATLAB example here, the first thing that we're going to do is load the time series data in from the OPC UA server, as Eric showed us earlier. Now, this data is basically a bunch of 70,000-sample-long time series, as we discussed, and each one has a label. Either it was gathered before maintenance occurred, so generally, it's anomalous, or after maintenance has occurred, in which case, it's operating under normal conditions.
Here's a quick look at some of the data. So the blue here is the anomalous data and the orange is the normal data. Before and after, in this case, refer to before and after maintenance. So we're assuming that after maintenance everything is working as it should. And you can see that there are some visual distinctions. There's a little bit of shift between them here. You can also see some spikes in the orange data that don't seem to exist in the blue data.
While we can visually see some things, the question really is, how can I represent these in the model that we want to train. Well, that's where we go into the feature engineering workflow. So to do this, I'm going to use an application in MATLAB called Diagnostic Feature Designer. This can be found up here in the Apps tab. I actually have this already open, so let's go start a new session and select the current data set.
We can see here that it's our three time series, one for each axis of vibration. Let's start with a quick visualization of the signal trace, and this is what we already saw in MATLAB. The Diagnostic Feature Designer enables you to extract features from several different domains such as time, frequency, or time frequency. You can also leverage time series models to identify impactful features.
If you're working with signals from rotating machinery, there are specialized data processing and feature extraction algorithms for this domain as well. Generally, feature design is an iterative workflow where you need to try features from these different domains and find the ones that have the most impact on the anomaly detection or classification workflow.
This can take a lot of experiments, and therefore, a lot of time. So to simplify this workflow, the app provides an auto feature button that computes a large batch of features across all of these domains that I mentioned. Once the large number of features have been extracted, ranking algorithms within the app can be used to identify the most impactful or most suitable features.
Through this workflow, we've identified 12 specific features that were the most impactful for this anomaly detection task. So now, instead of a bunch of 70,000-sample-long time series, each one of these time series has been, essentially, compressed into a 12-value vector of features. So that's a huge amount of data compression for when we actually get into training.
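To give a flavor of what such features look like, here are a few common time-domain statistics; these are not the exact 12 selected in the app, and the sketch assumes the Signal Processing and Statistics and Machine Learning Toolboxes are available:

```matlab
% One hypothetical 70,000-sample vibration section for a single axis
x = randn(70000, 1);

% A few common time-domain statistics of the kind such features are built from
featRMS      = rms(x);                 % overall vibration energy
featKurtosis = kurtosis(x);            % impulsiveness / spikiness
featCrest    = max(abs(x)) / rms(x);   % crest factor
featP2P      = peak2peak(x);           % peak-to-peak amplitude

featureVector = [featRMS, featKurtosis, featCrest, featP2P];
```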
So far, we've been working with raw time series data that we downloaded from the OPC UA server. To train the model, we'll bring in a much larger data set. However, instead of downloading all of the raw time series data from the OPC UA server, we're assuming that the 12 features that we're interested in have already been computed, let's say, on a gateway, and that these features are now stored on the OPC UA server. Here you can see that we read in a feature table with 18,000 rows. As you can imagine, the size of that data in the original time series format would have been significantly larger.
Once we have loaded this feature data into MATLAB, we need to prep them for actually training a model, and a big step for that is separating the data out into training and test sets. In this case, we'll be working with a one class model, and therefore, the training set will only be normal data. The test data, though, will be composed of a mixture of normal and anomalous data to help assess the performance of the AI model.
Once we've done that, we can actually go into training the model. In this example, we're using what's called a one-class isolation forest model. For the isolation forest model, we only feed it normal data for training. Once it's trained, we're going to bring in the test data. And this test data has both anomalous and normal data in it, with labels. And we can see a measure of its performance. In this case, it's doing pretty well; we're seeing about 95% accuracy for the normal detection case.
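A minimal sketch of this training and evaluation step might look like the following, assuming a feature table named featureTable with 12 feature columns and a Label column holding "normal" or "anomaly"; all names and the split strategy are illustrative:

```matlab
% Feature column names (everything except the label)
featNames = setdiff(featureTable.Properties.VariableNames, {'Label'});

% Hold out a portion of rows (normal and anomalous) for testing
n       = height(featureTable);
testIdx = false(n, 1);
testIdx(randperm(n, round(0.2 * n))) = true;

trainTbl = featureTable(~testIdx & featureTable.Label == "normal", :);  % normal only
testTbl  = featureTable(testIdx, :);                                    % normal + anomalous

% Train the one-class isolation forest on normal data only
forestMdl = iforest(trainTbl(:, featNames));

% Flag anomalies in the test set and compare against the known labels
[isAnom, scores] = isanomaly(forestMdl, testTbl(:, featNames));
accuracy = mean(isAnom == (testTbl.Label == "anomaly"));
```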
In this particular data set, the anomalous and normal data separates quite nicely, at least in the features that we have extracted. So that means that we've done a fairly good job in the feature engineering to let this happen.
Now, there's more that we can do to improve the model accuracy. We could experiment with hyperparameter optimization in order to fine tune the performance of the model. Or we could even try different models. But at least on first pass, these are doing fairly well in terms of determining what's anomalous versus what isn't. If you're interested in learning more about these techniques or other optimizations for anomaly detection, please watch the other webinar that we had a few weeks ago.
Now, we have a trained model, and our next step in the workflow is to take this trained model out into the world where it can be run against online data. In the next section, Arvind will cover the topic of taking this trained model and using it for detection against online data and developing a dashboard visualization for visualizing the detection results.
Thank you, Adarsh. Hi, I am Arvind Hosagrahara. I'm the technical lead of a team of engineers that focuses on putting our products into production. Now, Eric described the different data access options available to our users through our products, and Adarsh described how to build these anomaly detection models. I am here today to describe how these MATLAB models are put into production.
Looking at the workflow, I focus on the last bit of taking our models all the way out to deploying them and integrating them with the rest of your systems. The challenges in this space: where do we run these anomaly detection algorithms? Do we run them on the edge, do we run them as part of the OT infrastructure, or do we move them over to IT-managed resources, like either plant-specific or central data centers?
Do we run this on premise? Are there cloud resources that we can leverage? Are there any hybrid cloud configurations that we should consider? In this space, since we are supporting manufacturing operations, high uptime, a strategy for handling failover, building these resources to have high availability and disaster recovery ability, all of these factor into these decisions, as do how this system will scale and how much it will cost to actually execute at scale.
So far in the webinar, we have talked about the algorithm development workflow: how to connect up to OPC UA infrastructure, pull the data into MATLAB, develop the model, and if necessary, test the predictive maintenance algorithm. The next step involves a couple of support packages that let us connect up both MATLAB interactively as well as our MATLAB Production Server product, which is designed to run 24/7, 365 in these production environments, to deploy the algorithm on streaming plant data.
We provide several interfaces as part of support packages for supporting these production workflows. For example, the ability to connect up to PI and PI resources like the AVEVA PI Server, and interfaces for us to actually connect up to OPC UA tags and to OPC UA hardware infrastructure in the plant systems.
With these, you can easily move your algorithm into the production environment. By leveraging these interfaces that connect up to MATLAB Production Server on one side and either AVEVA PI servers or OPC UA infrastructure on the other end, these interfaces are services that run completely outside MATLAB and pass data into the MATLAB Production Server.
The Production Server itself is a server-like product that can be installed in your data center. We do publish recipes for running it on Kubernetes, so irrespective of what kind of production environment you have, we should be able to fit this product to run at scale on your production infrastructure.
The first support package that I referenced is available off our website. It allows you to stream OPC UA traffic into the MATLAB Production Server and keep it running continuously. It also supports streaming of these analytics for near real-time insights into your data. In this case, we're going to take the model that Adarsh built, take it out into one such production environment, and show how it could run against a stream of continuously available data on OPC UA, easily moving the MATLAB model from algorithm experimentation and development into a full production environment. We also publish support packages showing how, if you already have PI infrastructure, you can connect up to AVEVA PI systems and push data.
In this space, a quick demo is worth a thousand words. Let me show you the last mile of taking your MATLAB model into production. For starters, let's take the model that Adarsh built. This is a classification SVM model. This model can be saved locally, and this model will move into the production environment. The simplest way is to just save it as a MAT-file, which is what I do here. I can pick the appropriate model, and now I have a representation, a piece of baggage that contains my model, that can go into the production system.
To expose MATLAB as a service, I built a service entry point; in this case, I called it detect anomaly. And in this entry point, what I am doing is taking the input data, casting it as a table, and calling the model's predict function to compute a score. Now, there are several ways of building a dashboard from the predicted score. You've got a wide variety of options for dashboarding technologies, too.
But in this case, all I'm doing is using our support for telemetry to set that SVM score as a metric that I publish to a Prometheus server. Once I have my entry point, it becomes possible for me to test it first locally in MATLAB. I can call my entry point, passing it some sample data, and I can see the predicted score.
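A minimal sketch of what such an entry point might look like, assuming the trained model was saved to model.mat; the function name, the variable name inside the MAT-file, and the feature names are illustrative, and the telemetry call to Prometheus is omitted here:

```matlab
function score = detectAnomaly(featureValues)
% detectAnomaly  Service entry point to be packaged for MATLAB Production Server.
% Loads the trained model, casts the incoming feature values to a table,
% and returns the model's predicted score.

persistent mdl
if isempty(mdl)
    s   = load('model.mat');   % MAT-file included in the deployable archive
    mdl = s.mdl;               % hypothetical variable name inside the MAT-file
end

% Cast the numeric input into a table whose variable names match training
% (the names below are illustrative)
varNames = "Feature" + (1:numel(featureValues));
X = array2table(featureValues(:)', 'VariableNames', cellstr(varNames));

% For a classification SVM, the second output of predict holds the scores
[~, scores] = predict(mdl, X);
score = scores(:, end);
end
```

Calling detectAnomaly with a sample 1-by-12 feature vector in MATLAB exercises the same code path the server will later run.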
One step beyond this, we want to test it as a deployment artifact. And that, again, can be done in MATLAB. For example, I can start my deployment tool: if I run deploytool, I have the ability to bring up the Production Server Compiler tool from MATLAB Compiler SDK. I can add my exported function, in this case, the entry point that I just built, detect anomaly. Adding this gives me the ability to test this almost as a single worker locally using the Compiler SDK.
I give it a name for the archive that produces a single piece of baggage. I call this the vibration CTF, or Component Technology File. It automatically detects that I need model.mat for this archive to run; if not, you can add additional files for your archive to run, and test the service endpoint in MATLAB itself. In this case, I open up a service on port 9910, enabling cross-origin requests if required so that I can call it from other computers.
It is possible to debug and test how your application works, too. You can add breakpoints so that you can call this and see how the features pass through your computation. Once you actually start this, you have an endpoint at localhost running on port 9910, and this is callable from anywhere.
As an example, I'm going to exercise this function with just a simple curl command, in this case posting a JSON request that contains a set of features to the endpoint that I just described. And every time I call this endpoint, the MATLAB service, you can see the request land in MATLAB and complete, if necessary working with breakpoints, but essentially computing a predicted score.
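For those who would rather stay in MATLAB, a rough equivalent of that curl call might look like this; the archive name, function name, port, sample feature values, and response field are all assumptions based on the Production Server RESTful API's JSON request format:

```matlab
% Hypothetical endpoint: archive "vibration", function "detectAnomaly", port 9910
url = 'http://localhost:9910/vibration/detectAnomaly';

% JSON body: nargout plus the right-hand-side input arguments
% (here, one 12-element feature vector)
sampleFeatures = [0.12 0.85 1.30 2.40 0.70 0.90 1.10 0.40 0.20 1.80 0.60 0.30];
body = struct('nargout', 1, 'rhs', {{sampleFeatures}});

opts = weboptions('MediaType', 'application/json');
resp = webwrite(url, body, opts);   % resp.lhs should contain the predicted score
```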
So at this point, we know that our endpoint code is correct, and I can test it multiple times just to make sure that it's robust and add the necessary error checking. At the push of a button, I can stop this test and then produce my compiled artifact. In this case, when I ask it to package, it goes through the process of building either one or multiple entry points into a single CTF file and giving you an output, a single piece of baggage that you can take out to the MATLAB Production Server.
Moving this model to the MATLAB Production Server is as simple as a copy. Typically, what you'd want to do is use a modern DevOps process. So this process that I'm doing here manually, just for a demonstration, can be completely automated so that MATLAB and Simulink modelers are pushing to their source control module and the automation kicks in, packages it up, compiles it, moves it over to the production environment, if necessary, through development gates.
Going to my Production Server, I can look at the status. I have a running Production Server here. And dropping the CTF into the auto deploy folder provisions it as a MATLAB-based service. In this case, I'm going to just copy my CTF file from my output of my Compiler SDK. And I can see this appear in the log file. I can see that I've actually deployed a new CTF to my Production Server. At this point, I can shut down MATLAB, and I can call Production Server directly. The same curl command now goes through the process of being executed on MATLAB Production Server.
The first call to Production Server is when it warms up: it deploys and the worker needs to warm up. But subsequent calls are extremely fast. This completes very quickly. You can actually call the MATLAB model and see the prediction. This becomes the service endpoint for our MATLAB model.
From here, all we need to do is start the connector. The connector connects up to the MATLAB Production Server endpoint on one end and to the OPC UA channel on the other. It essentially subscribes to the OPC UA channel, so any feature that's published to the OPC UA channel gets delivered to MATLAB Production Server for the prediction.
In this case, I give it the discovery API. I can configure this connector interactively, like I'm doing here, or, in a more production-like environment, configure it using either environment variables or a configuration script. So it is possible to do this interactively, log into it, and play around by hooking it up to the right channels, or you could use a script like I show here.
At the end of it you have the second service, the MATLAB interface for OPC UA, that connects up the OPC UA channel on one end and MATLAB Production Server on the other end and runs as a standalone executable. The executable itself is very slim. It runs outside MATLAB. It is possible to dockerize it and run it completely standalone. So at this point, I have actually pointed it at my anomaly detection endpoint.
And once I'm done with that and start the connector, there are a few more configuration steps to set up certificates and so on. I set up my last two services. In this case, I'm using Prometheus and Grafana. Prometheus gives me the collector for the OpenTelemetry data that I'm publishing, and Grafana looks into Prometheus and gives me the ability to very quickly build dashboards.
It doesn't have to be Grafana. We have support packages for Power BI, Spotfire, Tableau, and your dashboard technology of choice that is intended to run 24/7 in your manufacturing environment. This looks as follows. You actually have a dashboard, and I threw a couple of widgets in there, one that shows how many features I'm processing. And then, it is possible to go back and take a look at how this has been running historically across time. It is backed by a time series database. It is possible to refresh these dashboards, picking, say, the last five minutes and, if necessary, asking it to refresh every minute or every five seconds. That gives a continuous live view of what is happening on your OPC UA channel as predicted by the MATLAB model and shown on the dashboard.
That completes the last mile: you have taken a MATLAB model from data access through building the model, compiling it, deploying it, and standing it up with a dashboard like the one you see here. That concludes the demonstration.
To wrap it all up, Eric described how to acquire data, how to look at generating data, and how to connect up to sensor data sources to pull data into MATLAB for analysis. We talked a little bit about leveraging toolboxes to pre-process the data and make it available for analysis. Adarsh talked about developing the detection or prediction models. And the last demo here talked about how to deploy that model, to take it out into a real production environment to support operations running continuously and with high availability and integrity.
So to summarize, that concludes how you would take anomaly detection out into production environments. To help you with this journey, MathWorks offers consulting services. You can amplify your return on investment by connecting with highly qualified consultants at MathWorks who have done this many, many times over: experts with, on average, 20 years of experience and thousands of projects under their belt, who can help you get it right the first time around, build this, integrate it into your systems, and have it operational as quickly as possible.
So the benefits are quicker development cycles, higher quality, and, of course, enhanced collaboration. We try to teach you how to use our tools to their best potential. With that, we conclude the webinar and open it up for Q&A.
Thank you.