Medical Image Analysis and AI Workflows in MATLAB
Overview
Medical images come from multiple sources such as MRI, CT, X-ray, ultrasound, and PET/SPECT. The challenge is to visualize and analyze this multi-domain image data to extract clinically meaningful information and conduct other tasks such as training AI models.
MATLAB provides tools and algorithms for end-to-end medical image analysis and AI workflows – I/O, 3D visualization, segmentation, labeling, and analysis of medical image data. This webinar shows the complete medical image analysis workflow for AI applications. You will learn how to import, visualize, segment, and label medical image data and use that data in AI model training.
Highlights
- Importing and visualizing multi-domain DICOM medical images
- Segmenting and labeling 2D and 3D radiology images
- Designing and training AI and deep learning models
About the Presenter
Renee Qian is an Application Engineer supporting the Medical Devices Industry in Data Analytics and Technical Computing applications. She works closely with engineers and researchers in the biomedical community to understand and address the unique challenges and needs in this industry. Renee graduated from Northwestern University with an M.S. in Biomedical Engineering. Her research was in medical imaging, focusing on quantitative cerebrovascular perfusion MRI of the brain for stroke prevention. She joined MathWorks in 2012, helping customers with MATLAB, analysis, and graphics challenges, and later transferred to Application Engineering, where she specialized in Test and Measurement applications before transitioning to her current role.
Recorded: 29 Sep 2022
Hello, and welcome to today's webinar on medical image analysis and AI Workflows. My name is Renee. I am a senior application engineer at MathWorks, specializing in the medical devices industry.
So our agenda for today will primarily consist of following the AI workflow: preparing some data, our images, and performing some ground truth labeling; taking a look at how to turn those labels into a fully functional AI model and performing the training for that; and then a brief discussion at the very end, looking at some of the additional steps that go beyond those two basic steps.
So tuning, verifying, and deploying that final model into perhaps a medical device or a medical device-based workflow. Today I also have the pleasure of introducing you to the Medical Imaging Toolbox, new with this most recent release, R2022b. Basically, it's a toolbox that specializes in tasks around medical images, primarily visualizing, registering, segmenting, and labeling medical images, and handling medical image metadata.
And when I say medical images in this case, I very specifically mean radiology images, so MRIs, CT scans, X-rays, ultrasounds, and PET scans. To take a closer look at the AI workflows portion of our agenda here, we will be covering, like the agenda said, data preparation (preparing our data for training an AI model) and the actual designing and training of the model. We won't really have time to go into an example of this, but we'll have a brief discussion about what we do after the model has been trained.
So some deployment perhaps to embedded devices or enterprise systems, verifying and validating your model for eventual use in say an FDA regulated device, et cetera. And then the thing to remember in this process is that even though we've presented it in a linear fashion, this is not a linear process. You will be going back and forth between the different steps.
Data collection seems to be an ever-present and continuous need, and you're always going to be tweaking and improving your model, especially as new data comes in. All right, so the demo highlights for today: we're going to look at some of the really neat new features in the Medical Imaging Toolbox, especially around automated and semi-automated labeling, before moving into our case study for the day, which will be designing a segmentation model for cardiac MRI images.
And then finally, I will show the example for generating code from the trained model, but we won't actually go into the demo itself. So you'll see a teaser for that. So let's get started with that first task. Like I said, we will be centralizing our discussion on a segmentation algorithm of the left ventricle of cardiac MRI images. And so that's the case study for the day.
Now before we jump into the example and before we jump into MATLAB, I do want to have a quick discussion on why we need to spend time on data preparation in the first place. If you're coming from academia (Andrej actually presented this back in 2018, in a discussion around the discoveries made in transitioning from research to industry), you tend to find yourself with an enormous amount of publicly available data sets.
But when you enter industry, that tends not to be the case, because your data is your gold mine. It is what can give you the edge over the competition. And so moving from research into industry, you'll find that folks tend to spend a lot more time preparing and labeling their data sets. That is a challenge we are hoping to address in some shape or form today.
So as many of you know, we have a lot of very sophisticated and efficient ways to perform pre-processing of signal and image data in MATLAB. But specifically what I would like to call out today is the ground truth labeling that we have available as part of our AI workflow support. So we have a number of apps shown here: the Ground Truth Labeler for automotive, the Audio Labeler and Signal Labeler for signals, the Image Labeler, and most recently the Medical Image Labeler.
And that's what we're going to show right now, actually, before we even jump into the case study for the day. So what I've just pulled up here is our new Medical Image Labeler app, with an existing data set from our cardiac patients. This particular patient was captured at systole and has, I think, heart failure due to an infarct, and we will use this patient as our example for a quick tour of what the Medical Image Labeler app looks like.
So you see we have three different views. Traditionally, for a lot of medical images, these three views would be the sagittal, coronal, and transverse planes of the body. But because the heart actually sits at an angle within the body, the slices traditionally taken through the heart are at an angle that doesn't adhere to the standard body planes. That's why we currently have these oblique angles, just labeled direction one, two, and three.
And then of course in the corner, we also can see our volumetric view of the images as well. So this app is a labeler app. And what we want to do first, is we want to define a label. Now this particular example, we want to specifically identify and segment out the left ventricle volumes. So I'll create a label that I will name left ventricle volume, LV volume. And I personally find direction three, the slices through the heart to be the easiest direction to work off of.
So that's what we're going to play with today. So I have my labels defined. I want to go into the Draw tab and start drawing my volume. So the heart's kind of small, so let's zoom in a little bit here, and we will go ahead and start defining my heart volume. So we can imagine this could take a while, especially if you want to be perfectly accurate to the contours of the left ventricle, you might give yourself carpal tunnel trying to finish this.
And so one of the things that I really like to do is use some of these semi-automated or assisted labeling features that exist in the labeler app. One of my favorites is called Paint by Superpixels, which basically defines these meta-pixels, if you will, inside of my heart image. Superpixels are part of the Image Processing Toolbox, so if you're already familiar with what happens inside the superpixels algorithm, this is essentially just a really neat way to draw those pixels over the image that you're trying to label.
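Outside the app, the superpixel idea underlying Paint by Superpixels can be sketched with Image Processing Toolbox functions on a single slice. The file name below is hypothetical, and the superpixel count is illustrative.

```matlab
% Oversegment one MR slice into superpixels and show their boundaries.
I = dicomread("cardiacSlice.dcm");     % read one MR slice (hypothetical file)
I = mat2gray(I);                       % scale intensities to [0,1]
[L, numLabels] = superpixels(I, 300);  % oversegment into roughly 300 regions
BW = boundarymask(L);                  % mask of superpixel boundaries
imshow(imoverlay(I, BW, "cyan"))       % draw boundaries over the slice
```

In the app, clicking a superpixel adds all of its member pixels to the label at once, which is what makes this so much faster than freehand drawing.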
And you can select which pixels you want to work with. So pretty standard stuff. Maybe I'll move on to another slice somewhere and say, all right that looks good. Select a different Superpixel relative to the heart region that I'd like to label. And so I might decide this looks good. This doesn't look so good.
I might want to fine-tune it a little bit. And when you go to the Automate tab, you actually have a few fine-tuning-based algorithms that you can play with. And some of the things that I like to do is take advantage of some of these. So maybe for direction three, this specific slice, I want to fine-tune some of the boundaries. I will use active contours to attempt to better segment out the left ventricle volume that I'm looking for.
So I'm not sure if you noticed, but it did extend the label region just a touch, to fill in some of the pixels that I missed with the superpixels. I can also try to automate the labeling from one slice to another inside of a volume. So let's see, this is slice 10, and I want to travel back to slice seven. So I can say start at slice 10, travel back to slice seven, and perform that same technique, but this time across multiple slices.
And you get a similar result here, attempting to get some of that volume information based off of just two segmented slices. And then again, I can come back here. Maybe this little extra bit off to the side is not necessary; it's not correct. And I'll go and clean that up. There are a couple of other methods that you can utilize.
One of my personal favorites, new in the Medical Imaging Toolbox, is something called Level Tracing. Basically, I can maneuver around in the image to find a pixel that best describes the intensity level for the entire region I want to segment out. I'll be able to use that trace boundary from level tracing to segment out the region I'm looking for, and I'm also able to interpolate between slices using a different algorithm.
So I can select a region that I'm interested in to interpolate, and between gaps in my labels, I can ask MATLAB to auto-interpolate for me and fill in the rest of the left ventricle. And again, go back through and clean it up as needed. So this is just a brief tour of some of the features that are available for labeling in the Medical Image Labeler app. Again, I really, really enjoy the semi-automated and automated labeling features that are there. They help avoid carpal tunnel, and they help you get through your labeling tasks a bit faster.
One last thing I'll call out is that I can also add my own algorithm. So if I have a semi-trained neural network, or an algorithm that is pretty good but not all the way there at performing the segmentation I'm looking for, I can load that up here and use that algorithm instead of one of the default ones to label my entire volume or image stack. Then, when you're done, you can export your files, which we will see shortly when we look at our fully labeled data set from our full patient population.
So now let's actually get into the real demo here, the cardiac MRI demo. Let's switch over to our cardiac demo set. If you're brand new to MATLAB, here's a quick tour of what you're looking at. This is the MATLAB desktop. What I have pulled up here are two live scripts. They're basically MATLAB scripts, so MATLAB code and MATLAB functions, code that can be run, but put together in a nice notebook-style format so it's easier for humans to read.
We have a few additional panels like the command window, or I can say Hello to MATLAB and MATLAB will talk back to me. A workspace for just showing me what variables I happen to have available to me on my Workbench. What have I pulled out and what have I thrown away. And this tool strip which allows me to navigate through whatever things I need to do in MATLAB. So importing data, cleaning up data, Plots tab for visualizing data, Apps tab for doing various things through the apps, and the Live Editor editing tab for all of my Live Editor editing needs.
So we'll take a look at the actual demo for this session. If you're not familiar, I am using the cardiac data set from Sunnybrook Hospital. It's a really nice pre-labeled data set, so I didn't have to do all of the labeling myself for the 45 different MR patients that we have, with scans throughout an entire heart cycle. So it's a nice, full data set.
And so for data preparation, among the features available here we have essentially just really quick and easy ways to load up and visualize the DICOMs. I'm not sure about you, but one of my personal favorite things about working with medical images is the fact that I can look at them. I'm a very visual person. I really need to orient myself when I'm looking at a new data set for the first time, and the DICOM Browser and the apps in MATLAB make that really, really easy.
So let's pull up a single stack of MR images. In this case, we're looking at patient number one, who I believe has heart failure due to an infarct, at systole, the systolic portion of the heart cycle. And we'll pull it up in the DICOM Browser. This is a really great place to start in general for most of your imaging applications, either the DICOM Browser or the Image Browser. It's just a really good starting point to get a sense of what your images look like.
Maybe export to view it in a different place, the Volume Viewer for example, to very quickly visualize and see what's going on inside of my data set. Our goal right now is to actually get to the labeling part of the workflow, but I did want to introduce the DICOM Browser first. A couple of other things are new this release: we also have really convenient new objects for importing data into MATLAB and for storing, handling, and incorporating the metadata around the DICOM, NIfTI, and NRRD medical image formats.
So just a very quick snapshot of an example: being able to load in your medical volume from a DICOM file, use the metadata for volumetric geometry to create a visualization of the data quickly, and retrieve the metadata quickly as well. So here's that nice quick visualization of a data set.
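That snapshot can be sketched with the new medicalVolume object from the Medical Imaging Toolbox (R2022b). The folder name below is hypothetical.

```matlab
% Import a DICOM series as a medicalVolume and inspect its metadata.
medVol = medicalVolume("dicomVolumeFolder");  % hypothetical DICOM series folder
medVol.VoxelSpacing     % voxel dimensions pulled from the DICOM header
medVol.VolumeGeometry   % mapping between voxel and patient coordinates
volshow(medVol.Voxels)  % quick 3-D visualization of the intensity data
```

The point of the object is that the pixel data and the spatial metadata travel together, so downstream visualization and registration steps don't have to re-parse DICOM headers.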
On to the Medical Image Labeler app. This time I'm going to pull it up programmatically; these are the commands you would use to pull it up. That way I can load the data, and then we'll also look at the fully labeled data set, so we can see what that looks like.
So before I pull up a new session, actually, let me go back and highlight this. You see that we have two different types of sessions we can create: a volume session, and, for 2-D images such as X-rays and ultrasounds, an image session, which has an inherent understanding of the fact that we're working with 2-D image data.
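The programmatic launch for either session type can be sketched as follows; the data paths are hypothetical.

```matlab
% Open a volume session for 3-D data such as this cardiac MR series:
medicalImageLabeler("Volume", "patient1SystoleFolder")

% ...or an image session for 2-D modalities such as X-ray or ultrasound:
medicalImageLabeler("Image", "chestXray.dcm")
```

Calling `medicalImageLabeler` with no arguments just opens the app, and you pick the session type and data interactively instead.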
So like I mentioned, this data set actually came pre-labeled, so thank goodness I don't need to label all of them myself. I should also mention that the pre-labeled images are actually not a perfect volume; specific slices have been selected out of the data set.
OK, so our data set has loaded. Let me open up the Data Browser so we can see how many images there are. We have a whole bunch of labeled DICOMs, which is part of the reason why it might be a little bit laggy right now. Also notice that there are actually two labels going on: we have the left ventricle volume, which is obviously the left ventricle that's been expertly highlighted, and we also have the background. That's just so that we have a little bit more than one class to segment here.
So let's pretend that you, the experts, the cardiologists, have gone through and performed segmentations across all of these images using the wonderful tools that are available in this Medical Image Labeler app. At the end of the day, what you need to do is export a ground truth file. This file is what we take into training with us. So we save that groundTruthMet.mat file, and then we don't need this app anymore; we can go ahead and close it down.
And with that our data preparation is done. OK so let's move on to the model design and training portion of AI Workflows. This is where I think a lot of people want to spend a lot more of their time doing the training, seeing what kind of accuracies you're getting, seeing what kind of changes you can make to your neural network to achieve better and better results. So what's really nice about some of the tools that are available in MATLAB, is the fact that you're not starting from scratch.
I don't know if you're like me, but I certainly didn't graduate with much background, maybe besides linear least-squares regression for fitting and modeling, in what we have come to think of these days as machine learning and deep learning. But you don't have to start from scratch. We have all of these algorithms and pre-built models that you can take and make use of as you want, as well as examples, specific examples even for the medical imaging space, that are available to take advantage of.
I actually want to very quickly jump into the Medical Imaging Toolbox to also highlight some of the available documentation examples there. So let's jump there. The easiest way to get into the doc is to type doc into your command window, then Medical Imaging Toolbox. If you're just getting started in any toolbox in MATLAB and you want to find an example to learn from, go to Examples; we have plenty of examples that specialize in whatever topic you're looking for.
And so this is a really great place to go, particularly for medical imaging, going from labeling different types of 2-D and 3-D medical images all the way through to brain segmentation, labeling and segmenting breast tumors, et cetera. So it's a good place to check out, a good place to spend a little bit of time in.
Additionally, not only do we have apps to increase our productivity with data preparation, image processing, and segmentation, but we also have apps that are intended to really help streamline your work for deep learning, especially if you are not a black-belt-level expert in developing your own layers for your own neural networks, et cetera. So we also have the Deep Network Designer app, which I am showing here right now.
It's a really convenient way to access your pre-trained models, or build a blank network from scratch and explore the models. I like to call it performing neural network surgery: you can be an artificial neural surgeon for a day, put together a network, connect it all up, and do some pretty neat analysis. Additionally, you can not only design your neural network in this app, but also take it through the entire task of training.
You can point to a data folder, have all of that data loaded up as you can see here, and perform the training, including sending the training to a cloud cluster of workers or GPUs. So the other app I do really, really want to highlight, which we will unfortunately not have enough time to go over today, is the Experiment Manager app. Like I said at the beginning, this process for training up your neural network and deploying it, is not linear.
You will spend a lot of time fine-tuning the model, getting more data, and figuring out what the correct starting values are for your hyperparameters. The Experiment Manager app helps streamline and automate that for you. It's also a really great way to keep track of your trail of breadcrumbs, as it's really important, especially if you are working under FDA regulation, to be able to show all the exact steps you had taken in the process of designing your model. Being able to reference back to that record can be very important.
So two extremely useful apps that I will unfortunately not have enough time to show today, but are very important to call out nonetheless. So for this next section, we're going to take a look at actually designing and implementing the model for this particular cardiac data set. Let's double check that.
So let's jump into our live script here. It's going to start off very similarly to our previous live scripts, where we do a little bit of configuration work. One thing I do want to highlight: this particular training takes at least six hours (I've done it many times already), so we are not going to do the training live today. But you will get the code and instructions for how to get the data set, so if you are interested in performing the training yourself, have at it. I strongly recommend using a GPU.
So where did we leave off? We left off with producing a ground truth .mat file from our labeler app. Now we're going to see how we can take that ground truth file, that ground truth data, and actually extract all of the information from it to put into the training of our algorithm. So first we just need to find that groundtruth.mat file. When we load it up into MATLAB, we can take a quick look at it.
It is a unique object inside of MATLAB, and it contains basically all you really need: the data source, the labels, and the label definitions. Once all of that is established, it's very, very easy to just have at it. So we're going to tease apart the data source, label data, and label definitions. And we can see here that the label definitions are exactly as they were defined in the app: our background pixels, as well as our left ventricle volume pixels.
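Loading the exported object and teasing it apart can be sketched as below. The .mat file name and the variable name `gTruth` are assumptions; the property names follow the ground truth object's data source, label data, and label definitions described here.

```matlab
% Load the exported ground truth and pull out its three parts.
load("groundTruthMed.mat")            % hypothetical file; loads gTruth
dataSource = gTruth.DataSource;       % where the image files live
labelData  = gTruth.LabelData;        % paths to the label files
labelDefs  = gTruth.LabelDefinitions  % label names, IDs, and colors
```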
So for those of you who may not be familiar with working with images in MATLAB, if this is your first exposure to any kind of image processing or deep learning on images in MATLAB, a really important phrase to learn is datastores. This is personally my favorite way to work with lots of different files in MATLAB, because all you've got to do is point.
So I'm pointing and saying: hey, the ground truth file says all of the data source files are here, so I want you to get all of them and read them all in a particular way. I'm specifying that these are DICOMs and how I want them to be read, and I suddenly have access to all 805 files and information about them all.
I have all of the things that I need for reading and managing my data set. You can also use datastores with your Excel spreadsheets and just about any other data file; if you're working with more than a single file, which most of the time we are, you want to use datastores. For our labels, we're going to use a specific type of datastore, the pixel label datastore. Again, essentially all the same things: we tell it where to find the label data, and we give it a little bit more information. We tell it what the label names are, what the label definitions are, and how to read them. Once we do that, we literally have all the information that we need to train. OK, another good reason why I really like working with datastores: they actually have a collection of functions, or methods, unique to them that you can use to extract information about the datastore.
For one example, you can ask: hey, how much of each label is there in this pixel label datastore? And I can see right away that I have a lot of background pixels and not quite as many left ventricle volume pixels. This could be important later on in our training: we want to make sure that we don't overtrain on background pixels, and we want to make sure that we correctly penalize incorrectly labeled left ventricle volumes.
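The two datastores and the label tally can be sketched as follows. The input variables (`imageFiles`, `labelFiles`, `labelDefs`) are assumed to come from the loaded ground truth object.

```matlab
% Image datastore pointing at the DICOM source files.
imds = imageDatastore(imageFiles, ...
    FileExtensions=".dcm", ReadFcn=@dicomread);

% Pixel label datastore carrying the label names and pixel IDs.
pxds = pixelLabelDatastore(labelFiles, ...
    labelDefs.Name, labelDefs.PixelLabelID);

% Tally pixels per class; expect far more background than LV volume.
tbl = countEachLabel(pxds)
</imports>
```

`countEachLabel` returns a table of per-class pixel counts, which is exactly the class-imbalance evidence used later to justify the Dice loss.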
Just a couple other things here again, I'm an extremely visual person. I just have to see the image, and I have to see what I'm doing before I feel comfortable with the code and with the process. So I wrote this quick little function as just a way to very quickly visualize the expertly labeled masks against the raw images. So we can look at a few random slices here to get that information.
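A quick overlay check like the one described can be sketched with `labeloverlay`, assuming the `imds` and `pxds` datastores from this section (hypothetical variable names).

```matlab
% Overlay an expert label mask on its raw slice for a random sample.
idx = randi(numel(imds.Files));           % pick a random slice
I   = mat2gray(readimage(imds, idx));     % raw image, scaled to [0,1]
C   = readimage(pxds, idx);               % categorical label mask
imshow(labeloverlay(im2uint8(I), C, Transparency=0.6))
title("Expert label over raw image")
```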
OK, so the magic function that we're working around today is called trainNetwork. It's very, very easy to use: you just tell it what the data source is, what the neural network or layer graph is, and what hyperparameters you want to use. So all we need to do is figure out what each of these three inputs is. After that, it's extremely easy.
So let's prepare the data source. You'll notice that there isn't a separate pair of data and label inputs that you send in; we're actually going to combine the two. We'll do that a little bit further down, at line 30 here. But before we even do that, I also want to implement a data augmenter.
So data augmentation can be really useful for a few different reasons. One is to supplement a small data set with more data. Another reason you might use a data augmenter is to add a little bit of noise to your data set to avoid overfitting. So I'm going to set up the data augmenter, and then I'm going to combine my pixel label and image datastores into a single data source. And now I have my data source for the trainNetwork function.
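The augmenter and the combined data source can be sketched like this; the augmentation ranges are illustrative, not the values from the demo.

```matlab
% Random geometric augmentation applied on the fly during training.
augmenter = imageDataAugmenter( ...
    RandRotation=[-10 10], ...        % small random rotations (degrees)
    RandXTranslation=[-5 5], ...      % small random shifts (pixels)
    RandYTranslation=[-5 5]);

% Pair images with their pixel labels into one training data source.
dsTrain = pixelLabelImageDatastore(imds, pxds, DataAugmentation=augmenter);
```

The key point is that the augmented image and its label mask are transformed together, so the ground truth stays aligned with the data.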
One final step I'm going to take, because my 45 patients are literally my entire data set: to follow best practices with our machine learning, we will be splitting up our data set into training, validation, and test sets. So we'll go ahead and do that, and we'll see our partitioned data set here, using a majority of the images for training and setting aside a few for validation.
And then a smaller set for testing. Finally, we'll take a look at preparing the network, input number two to our trainNetwork function. There are a couple of different options; I've commented one of them out for you so you can play with both when you get the code for this demo. But we're going to use SegNet, because I have personally had the best luck with it in this particular case, getting the best results.
And we're going to pre-load it with weights from the VGG-16 neural network. So I run that, get my layer graph, and now we have input number two. At this point, actually, if you're so inclined, I recommend taking a look at the Deep Network Designer app: load it up, and then load your layer graph into it. You can explore the neural network and even make a few tweaks and changes if you want to.
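Building the SegNet layer graph with VGG-16 weights can be sketched as below. The image size is an assumption; the two classes follow the demo's background-plus-LV setup.

```matlab
% SegNet layer graph initialized with pre-trained VGG-16 encoder weights.
imageSize  = [256 256];   % assumed slice size; match your data
numClasses = 2;           % background + left ventricle volume
lgraph = segnetLayers(imageSize, numClasses, "vgg16");

% The commented-out alternative mentioned above could be, for example:
% lgraph = unetLayers(imageSize, numClasses);
```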
Maybe you don't want the segmentation layer output; maybe you want to add to it to give a different result. That's something you can use to tweak, augment, and embellish your neural network there. One important thing to consider, too, is using the Analyze Network function. It will give you a quick overview, a static analysis of your model, and let you know if there are any concerns or errors that might prevent you from being able to successfully train your neural network.
It's also a nice way to visualize the neural network if you would like that 5,000-foot view. OK. Now, I mentioned that there is a labeling imbalance in this particular demo, and we're going to visualize that really quickly here. You can see it is actually a very significant difference between the background pixels and the left ventricle volume pixels, especially across these very large medical image data sets.
And so what we're going to do to handle that particular issue is replace the last layer of the neural network, which is currently just a simple pixel classification layer, with what's called a Dice pixel classification layer. What it does is use the Dice metric to assess accuracy; it calculates a value that helps weigh the classes against each other by the size of the expected region, and that helps balance out our label imbalance here.
So using the Dice pixel classification layer, I get to perform my neural network surgery to insert a new layer at the end of my neural network. A final quick check, everything hopefully looks good, no warnings, no error messages, and I am ready to train. Perfect. We'll take a quick look at how I set up this neural network for training, including setting up the input hyperparameters.
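The layer surgery itself can be sketched as a single replacement at the end of the layer graph. The existing layer name below is an assumption; check `lgraph.Layers` for the actual name in your network.

```matlab
% Swap the plain pixel classification layer for a Dice-based one
% to counteract the background/LV class imbalance.
diceLayer = dicePixelClassificationLayer(Name="diceOut");
lgraph = replaceLayer(lgraph, "pixelLabels", diceLayer);

% Final static check before training: flags disconnected or
% incompatible layers that would prevent training.
analyzeNetwork(lgraph)
```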
But again, I will not actually perform the training, so you can do this on your own time. On to the training options. There wasn't necessarily a need to specify all of these separate training options, but just to highlight a couple: we are setting an initial learning rate. For some reason, that always seems to be one of the first things that will dramatically impact either the training time or the training accuracy of my model, so I often change this particular hyperparameter.
We are asking to use validation accuracy as an indicator for when we can end training: after five validations, if the validation accuracy doesn't change a whole lot, we'll assume that the training has stabilized at a particular accuracy level and further training is no longer needed. And then, down here is where you can change the value to GPU, multi-GPU, or whatever it is that you are looking for.
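The training call with the options called out above can be sketched as follows. The specific values and the validation datastore name `dsVal` are illustrative assumptions.

```matlab
% Training options highlighting the knobs discussed above.
options = trainingOptions("sgdm", ...
    InitialLearnRate=1e-3, ...        % often the first hyperparameter to tune
    MaxEpochs=30, ...
    ValidationData=dsVal, ...         % assumed validation datastore
    ValidationPatience=5, ...         % stop after 5 stagnant validations
    ExecutionEnvironment="gpu", ...   % or "multi-gpu", "parallel", "cpu"
    Plots="training-progress");

% The single call that ties data, layer graph, and options together.
net = trainNetwork(dsTrain, lgraph, options);
```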
So I'm going to run this and just show what the results look like. Two years ago, when I last ran the training, it took a significantly long time; I went out to lunch and dinner before I was able to come back and get the results. But we see that gradual progression towards higher and higher accuracy. We didn't do as well with our loss function, but it was on a downward trend.
So that could be something to look at for improvement in the future: how do I improve my loss in this training? And then finally, with our trained neural network, let's grab that test data set that we wanted to use and perform some semantic segmentation on it. I believe it's 40 images ultimately in my test data set, and we will get our results of the semantic segmentation. 41 images.
And again, we're going to reuse my overlay contour plotting function to get my plot at the end. So we do it a few times: some segmentations look pretty good, some not so good. I think we had a validation accuracy of maybe 80-some percent, if I recall correctly. So we're not expecting hyper-accurate results, but all in all, not a terrible job.
OK, and then visually we can gauge how well it performed, but ultimately we want a numeric, objective answer. We can see accuracy values and IoU values for these results. OK. And that is how we train and do a little bit of validation on an AI model for a medical imaging example.
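Testing and scoring can be sketched as below, assuming the trained `net` and hypothetical test datastores `dsTestImages` and `pxdsTestLabels`.

```matlab
% Run the trained network over the held-out test images.
pxdsResults = semanticseg(dsTestImages, net, ...
    WriteLocation=tempdir, Verbose=false);

% Compare predictions against the expert labels.
metrics = evaluateSemanticSegmentation(pxdsResults, pxdsTestLabels);
metrics.ClassMetrics   % per-class accuracy and IoU, the numbers shown here
```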
So we're going to jump back into the slides and finish out the rest of this presentation here. So we just finished labeling and training a data set for semantic segmentation of a set of cardiac images. And now I want to close out this discussion just by going over a few common challenges, and extraneous details beyond just training a labeled data set.
And so one of those top things is hardware acceleration. How do I increase the speed of my training? And there are a lot of different ways to take your training to the cloud with MATLAB. So I've just thrown out a handful of the different options you have. You can use GPUs, you can go on to aws, you can take advantage of all sorts of different resources.
My personal path of choice is I have an aws account. And so I'm oftentimes checking out aws resources, getting the GPUs off of aws to perform my training. So that's typically my particular Workflow.
I'm not so familiar with Azure, Google Cloud, or those other platforms. But if you have questions on AWS or cloud things in general, I'm happy to answer those after this session. Now, MATLAB also encounters scenarios where we have to play friendly with other frameworks, Python being one of the most common ones. And so we do have a lot of interoperability tools for using and training models in MATLAB while playing with the other friends on our block.
So being able to import from and export to the ONNX format allows us to share models across these other frameworks. And we also have dedicated TensorFlow and Caffe importers for those specific frameworks. Now, we definitely don't have time to talk about this specific example today, but there is basically a fourth step in the AI design workflow, and that is simulation and test.
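The interoperability calls look roughly like this. The function names are the real Deep Learning Toolbox converter functions, but the file and folder names are illustrative, and depending on the model you may need extra name-value options (for example an output layer type) that this sketch omits:

```matlab
% Sketch: moving models between MATLAB and other frameworks.
% File/folder names are illustrative placeholders.
exportONNXNetwork(net, 'segmentationModel.onnx');        % share a trained MATLAB model out
netFromONNX = importONNXNetwork('partnerModel.onnx');    % bring an ONNX model in
netFromTF   = importTensorFlowNetwork('tfModelFolder');  % import a TensorFlow SavedModel
```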
You don't go straight from training an AI model to sticking it on the robot or the device itself. There is a set of tests you need to perform, and you need to integrate your model with the larger environment it has to perform in. And so that's something you can explore further on your own. I can include some links to those particular tasks.
And then finally, I'll highlight that there is a section in this example that goes over code generation for embedded devices, though it's not something we'll have a whole lot of time to talk about right now. But basically, the question is: I've trained this thing, now what? And the answer is, now you have to put it somewhere so that it can do something useful.
And so we're looking at putting it onto a CPU, a GPU, or even an FPGA. We do have automated code generation tools that allow you to generate that source code automatically. And then there's also the idea of deploying it onto an enterprise system or server. So if you are curious, go take a look at the demo; you can take advantage of that last module.
Run it against MATLAB Coder. You will need MATLAB Coder for that, but you can actually generate the C/C++ code for this particular example automatically. And if not, I have saved the report and the final files in the demo folder, so you can explore that as well.
So basically, the two options are using the MATLAB Coder products to deploy to C, C++, HDL, PLC code, and CUDA, or using the MATLAB Compiler workflow for deploying and packaging it up as an executable, an Excel add-in, a Python package, et cetera. Again, not a whole lot of time to talk about them in depth today, unless you want to stick around for questions afterwards.
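A `codegen` invocation for a deep network typically goes through an entry-point function. A sketch under assumptions: the entry-point name `segmentHeart` and the 256-by-256 input size are hypothetical, while `coder.config` and `coder.DeepLearningConfig` are the real MATLAB Coder APIs:

```matlab
% Sketch: generating C++ library code for an entry-point function
% that calls the trained network. 'segmentHeart' and the input size
% are illustrative placeholders.
cfg = coder.config('lib');
cfg.TargetLang = 'C++';
cfg.DeepLearningConfig = coder.DeepLearningConfig('mkldnn');  % CPU target; 'cudnn' targets CUDA GPUs
codegen -config cfg segmentHeart -args {ones(256,256,1,'uint8')}
```

The `-args` example input is how Coder infers the input type and size; the generated code and report land in a `codegen` folder, much like the saved report mentioned above.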
So I'll go over this topic very quickly. But essentially, I wanted to spend a little bit of time talking about how to validate an AI model, because it's a question that has been coming up a lot among my medical device customers. It's becoming a very critical step in preparing for eventual submission for FDA approval, for example.
So there are a few different things you might want to consider when assessing the validity of a model. First and foremost is adherence to good machine learning practices. The FDA has produced several guidance documents on this, talking about what they think about it and their plans for how to address it. So you can certainly go there and read up on that material.
Another very common thing folks will do is simply test your validation data set against your model and look at the resulting accuracy. That's your basic bottom line. You pretty much must do this if you want any confidence that your model works as you think it might. But the really important emerging techniques are interpretability and explainability techniques.
So to talk through the ideas here very quickly: explainability techniques help us tease out what is actually going on inside these models. At one end of the spectrum, you have something like linear regression. It's a very, very simple model. You can look at the equation and you already have a sense of what it's doing.
And so it may not have a whole lot of predictive power; it's a limited model for predicting complex systems. But you can understand it right away. At the very other end of the spectrum, we have neural networks and deep learning, which are essentially black boxes. It's very difficult to understand what's going on inside them, but they are so powerful that you're getting accuracies that go beyond human accuracy.
And the unfortunate fact of the matter is that we don't really have the best set of recommendations right now; there's not really an industry standard. But we do have a collection of options you can choose from, and a few basic heuristics for choosing among them. So I definitely encourage you to check out our article on interpretability for machine learning and AI models. It's a really good read, a really good introduction to some of the options available, and worth keeping in your back pocket.
But again, there's just no standardized industry process at the moment. In fact, that's the case for pretty much all of these industries. We do have attempts at figuring out the best way to perform V&V on AI models: in automotive, we have a new white paper; in aerospace, we've got new standards; and for medical devices, we have the FDA action plan.
The medical devices space that we're all in can be a little slow sometimes in adopting these new regulations and guidances. So at the moment, the FDA just has an action plan; they haven't set forth a guidance that we can follow. But again, people and groups have been attempting to give us some type of rubric to follow. And if you do take a look out in the literature, it is out there; you can pick and choose and Frankenstein your own V&V workflow for this.
Not to put too fine a point on it, but there are a lot of different ways you can implement this. You can stay in MATLAB and do a lot of these V&V tasks right there. Something I do want to highlight, though I wish I had time to show it today, is that we do have an app, and I strongly encourage you to go check it out and download it.
It helps you explore deep network explainability in an app, and it works particularly well for images and image classification. I like to use it; it's just a really fun way to explore your images and your model, and to see what exactly your model is focusing on in an image that leads to a particular result.
So especially if you're doing image classification, take a quick look at it and peek under the hood to see how the code works. It's a really fun app to use. Now, I hope by now you have the inkling that everything I've been trying to cover, all the different topics, is very much the tip of the iceberg. There is so much more you can explore in the fields of medical imaging and image analysis, in AI, and in the two combined.
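One of the explainability techniques this kind of app builds on is Grad-CAM, which is available directly as the `gradCAM` function in Deep Learning Toolbox (R2021a and later). A sketch for an image classifier, where the image file and `net` are placeholders:

```matlab
% Sketch: visualizing what an image classifier focuses on with Grad-CAM.
% The image file and 'net' are illustrative placeholders.
I = imread('chestXray.png');            % hypothetical input image
[label, scores] = classify(net, I);     % network's predicted class
map = gradCAM(net, I, label);           % class-activation map for that prediction
imshow(I); hold on
imagesc(map, 'AlphaData', 0.5)          % overlay the heatmap on the image
colormap jet
title("Predicted: " + string(label))
```

Regions that light up in the heatmap are the parts of the image the network weighted most heavily for that prediction, which is exactly the "what is the model focusing on" question above.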
So I really hope this session gave you some impetus to go check it out for yourself, to try something new, and to take a look at some of the recent additions within our toolboxes. I strongly recommend, or actually ask, that you take a look at our Medical Imaging Toolbox and provide us your feedback. We're very excited to hear from you about that.
And then there are some additional new features, apps, and tools that have come out in recent years for AI in MATLAB. I'll close out by mentioning that we've been named a Leader in the Gartner Magic Quadrant for Data Science and Machine Learning Platforms. So that's been very exciting for us to announce.
And if you do not currently have MATLAB, or you are not currently up to speed with it, you can get started with AI in MATLAB for free. We have several free, approximately two-hour Onramp courses you can take advantage of on our website. So please go and take a look. And with that, that's all I have for you today. Thank you for dialing in and listening to this webinar. We can move on to questions now.