
Medical Image AI Analytics with NVIDIA Holoscan

Marc Edgar, NVIDIA
Mikael Brudfors, NVIDIA

Medical imaging manufacturers need to evolve their products fast enough to stay competitive. MATLAB® and NVIDIA® Holoscan combine to help them accelerate time to market while ensuring compliance with regulatory standards. In this webinar, you’ll see the complete workflow of developing, validating, and deploying medical imaging algorithms on livestreamed medical imaging data.

Published: 30 Oct 2024

So I'll kick it off. So today, we have this special topic, developing real-time medical image AI analytics with NVIDIA Holoscan. And I would like to introduce our guest speakers today. So we have Marc from NVIDIA and Mikael, also from NVIDIA. And just a quick introduction about them. Marc is a senior alliance manager for medical devices at NVIDIA. He works with health care and medical device developers to create next-generation solutions using AI, deep learning, large language models, and advanced visualization. And he has developed and commercialized numerous AI algorithms and holds 16 patents in the health care and industrial domains. So welcome, Marc.

And then very quickly, Mikael, your introduction. You are a senior solutions architect at NVIDIA and focus on medical imaging, medical devices, and AI in health care. And prior to your role at NVIDIA, you've worked both in industry and academia. You've got a PhD from UCL on the topic of generative models for pre-processing of hospital brain scans. So welcome, Marc, and welcome, Mikael. And with that, I'll hand it over to you guys to take it away. Thank you.

Thank you. Hi, I'm Marc Edgar, senior alliance manager for medical devices. And Mikael and I are really pleased to be talking with you today about how to leverage the domain expertise that you have in MATLAB, and potentially lots of legacy code in MATLAB, and be able to deploy it for real-time streaming applications with NVIDIA Holoscan. So you may know NVIDIA for lots of things. You may be a data scientist who knows us for our data center GPUs that are used to train models like ChatGPT, or you might be a gamer who knows our RTX gaming GPUs that run on laptops and workstations.

But at NVIDIA, we also build application frameworks like Modulus for physics-informed neural nets; MONAI, the world's most popular open-source medical imaging framework; or NeMo, an end-to-end platform for developing custom generative AI, including large language models, multimodal vision, speech AI, and retrieval-augmented generation (RAG), all with accelerated performance. Or you might know us for NVIDIA DRIVE for autonomous vehicles, or Isaac, NVIDIA's robotics development platform.

But today, we're going to talk about NVIDIA Holoscan, NVIDIA's platform for real-time sensor processing workloads, including video processing. And there's a lot here on this slide today. I just want to note that we're going to focus on a thin, thin slice of this stack. We're going to be focused on using Holoscan, that sensor-based processing platform, which leverages CUDA, NVIDIA's acceleration framework, running on IGX. IGX is a platform specifically built for health care, medical, and safety-critical applications. And we're leveraging our discrete GPUs, like an RTX A6000 or RTX 6000 Ada GPU. So we'll be focused on this today.

So let me introduce Holoscan real quick. It's a full-stack, software-defined, scalable platform. We think of it in three pillars: there's the software SDK, there's the hardware side, and there's long-term support. On the software stack, it's everything from the operating system on up through the drivers and the firmware, everything that's needed to run an application. And the SDK. The Holoscan SDK is an operator-based framework that you can write in Python, C++, and MATLAB. That's what we're going to be introducing here today.

And when it comes time to deploy those applications in a real-time clinical setting, for the hardware that you can run it on, we have an IGX developer kit for development and then an IGX production kit. And the IGX production kit is meant to be used at scale for deployment in a clinical setting. And there's also the Holoscan sensor bridge, which is used for connecting sensors to all of this.

And then last is long-term enterprise support. So it's designed for up to 10 years of support on both the hardware and software side. And it allows you to have a consistent software BOM, a consistent bill of materials. So this platform has been adopted by lots of companies, and I'm just going to highlight a few of them. So Medtronic, with the Cosmo GI Genius platform, is using this platform for their next-generation systems for colonoscopies.

It's been adopted by Moon Surgical, which received their CE mark in 18 months. And actually, this slide is just a couple days out of date. They also received their FDA clearance just earlier this week. And they're using this platform to provide the AI skills. It was used in the world's first demonstration of real-time surgical AI by the CAMMA lab and IHU in Strasbourg. And we've been partnered with the Orsi Academy to develop all sorts of things, including things like AI-enhanced surgical augmented reality.

So just to give you a sense of the sorts of things you can do with Holoscan, you may be seeing the tool in the foreground here that's coming in and out of view. So this is an example of the kind of thing you can build. There's a 3D CT scan, which is capturing the organ of interest, and that's used to help provide clinical guidance. But what do you have? You have tools that are in the surgical scene, but there's no depth information. This is just a mono camera.

And so the goal here is to make sure that you don't occlude the things that would really disrupt the surgery rather than helping it. So there's depth from mono, and it's able to provide real-time segmentation so that you get correct z-based buffering on the image. So Holoscan, as I mentioned, is meant for domain-agnostic, real-time, sensor-processing applications, with a focus on low code, which means high code reuse, but with high performance.

So behind the scenes, what Holoscan is doing for you is helping speed up the processing. One of the things that we noticed as teams were trying to build AI-accelerated, real-time applications is that they kept stubbing their toes on things like data transfer: being able to get data into the GPU, where it can be processed very efficiently, and then keep it there, so that it doesn't go from the GPU back through the CPU to RAM and then back to the GPU. So the idea is to get the data there, transfer it once, and then use it efficiently.

So what we're really here to talk about today is that you can access this performance with your MATLAB code and algorithms and be able to connect those in just like you would any other operator. This, as I mentioned, is built for streaming AI performance: efficient ingestion, transfer, and execution of those AI workloads. And it provides a set of sensor abstractions. We're going to be talking mostly about video here today, but it's a generic sensor-processing platform. And we'll show an example for ultrasound beamforming as well.

And one thing to note is that the SDK itself has a permissive, open-source Apache 2.0 license. So you can pick it up and start using it today. And if you want enterprise support, you can get enterprise, long-term support as well. So just to make sure that we're all on the same page in terms of the type of workflows that we're talking about, let's start at those sensor inputs. It might be 4K or stereo video coming in through a data capture card. It might be IP video that's coming from an IP camera over ethernet. You might be doing a replay offline from storage in some video archive, or you may have a sensor of your own, or ultrasound, that's coming in through a capture card.

The types of things that you might be doing then, after you acquire it, are the data and image processing. And that's where MATLAB comes in, where you might be doing pre-processing, normalization, denoising, that sort of thing. And you can then use it for processing SaMD-type functions, software-as-a-medical-device functions, that would be clinically regulated and that would inform the clinical procedure.

So this might be AI tasks like organ identification, tissue identification, or tool segmentation, providing the clinical guidance, then producing a rendering or overlay that gets sent back to the surgeons to display. Maybe it gets sent to the nurses' console, might be sent out for remote telesurgery, or used for robotic control. You might also, then, use it to archive and process the data afterwards.

And these might be some non-regulated functions like de-identification, storage, or running an AI skill, like phase-of-surgery detection, that informs some surgical analytics. That might be looking at, say, identifying critical views that were obtained during the surgery, doing summarization, or analyzing the efficiency of tool movement, that sort of thing. These could go out to the cloud or back into, say, a provider's EMR, or be used for operations and clinical workflows.

Awesome, cool. So yeah, the next slide is what relates to today's webinar, which is the use case of MATLAB plus Holoscan. And I'll be covering the details of how to exactly implement this. So it will get much more technical soon, but the overview is that we will show an image processing application, which is quite simple, and something a bit more complex, ultrasound beamforming. But there are also tons of other use cases you might be interested in. Robotics, perhaps, with Simulink. There are a lot of interesting features in MATLAB that could be used in Holoscan.

One thing that comes with Holoscan is a sample repository called HoloHub. I just wanted to mention this because I'll be talking a little bit more about it later. This is where you'll find the code that I'll be talking about. And this sample repository has sample applications for various use cases. A lot of imaging here, for example, ultrasound, endoscopy, et cetera. But there are also things like third-party capture card manufacturers' drivers and operators for interfacing with certain capture cards and for getting data onto devices and so on.

And the hardware part that Marc mentioned. So there's a software part, which is the SDK and, sort of, the operating system of the device. The hardware part is called NVIDIA IGX. This is quite a neat device. It has a Jetson SoC with a 12-core Arm processor. It has a ConnectX SmartNIC, a network interface card, that can take in data at up to 200 gigabits per second. There are two PCIe slots where you can install an optional GPU. So this is sort of like a Rolls-Royce of Jetson devices that also supports a discrete GPU.

And this is the hardware platform of Holoscan. There's also a smaller offering that only has the AGX Orin, which is the Jetson device. And this, then, can come with NVIDIA AI Enterprise support. And there are a ton of nice features with the IGX, but I'm not going to spend too much time on this, so I'm going to continue. How it normally works, the idea is that there are different versions of the IGX. There's a development kit. So this is a ready-made box that has input and output, it has cooling, and you can prototype your software on this device.

Eventually, if you move to production, you would go with the so-called board kit. This is an industrial-grade main board. And from this, you would then build your own form factor, usually through an OEM like Onyx or Dedicated Computing. And that's the idea. Keep in mind that NVIDIA provides documentation to do IEC 60601 certification, but it's the OEM that would carry out the certification. So we provide all the documentation, et cetera, to make this as simple as possible. And it has been shown together with partners Medtronic and Moon Surgical that this is quite an efficient way of doing things.

And yeah, this slide, once again, just reiterates the main concept of Holoscan and IGX. So you'd have some sensor, could be imaging, could be something else, some data coming in real-time, ideally at very high bandwidth. That's why the SmartNIC is there with 200 gigabits per second. So it can easily deal with 4K at 100-plus frames per second, for example, for video.

And then you would translate your model onto Holoscan. There is inference acceleration software like TensorRT already in there, and there's a bunch of sample applications. And this just shows you that you can get latencies as low as 10 milliseconds. And there's also the long-term support. So this is the part that I originally planned to present, and it's about how to actually build these MATLAB-enabled Holoscan applications.

So like Marc said, there are C++ and Python APIs to the Holoscan SDK. And then we can get MATLAB in there too, in fact. It's not an API, per se, of Holoscan, but you will see that it's just a way of integrating efficient CUDA code generated with MATLAB into Holoscan. A very neat feature of MATLAB. The overall workflow is, and I'll go into detail, that you create a MATLAB function, and it's up to you how you want to implement it, what functionality you want to use.

You then use GPU Coder, which is a MATLAB toolbox, to create CUDA code. This is then wrapped into Holoscan, and you can then run the MATLAB Holoscan application. And this schematic shows how it's done. So as I said, the IGX and Jetson, that's actually Arm architecture. It's not x86, and MATLAB doesn't run on Arm. So the workflow is that you can deploy Holoscan on either Arm or x86, but you would run MATLAB on the x86 system. You have your MATLAB code, you can then generate the CUDA code with GPU Coder, link it into Holoscan, and then deploy it on either platform. And in the end, you might have an application doing something like basic image processing of a surgical video.

And dependencies. So there's Holoscan, open source, available on GitHub. And we'll share the slides afterwards. And there's also a recording that will be made available, by the way. So I'll be covering quite a lot of material, but the recording might be quite helpful for you to go back and have a look in more detail. So anyway, on this HoloHub repository, that's another GitHub repository, under sample applications, you'll find the MATLAB GPU Coder sample application. And that's what we'll be going through today.

You also need MATLAB, of course. For Jetson, you need a new version of MATLAB, but on x86, it's not necessary. And you need the GPU Coder toolbox. Hardware-wise, either the NVIDIA IGX that I talked about, one of our Jetson devices, or an x86 machine with an NVIDIA discrete GPU running Ubuntu.

So quickly, CUDA. You might have heard about it. CUDA is not only a way of writing GPU applications, it's also a way of profiling them with Nsight Systems. It's more or less a platform for creating parallel, accelerated applications. And it has a couple of key components: the NVCC compiler, various CUDA libraries, and the CUDA runtime. It supports C++ and Fortran, but there are also extensions for many other languages. As I said, there's Nsight Systems for optimizing and debugging the code. And it supports multi-GPU, multi-node, Windows, Linux, and macOS. And there's a bunch of nice documentation and tutorials.

So, HoloHub. If you go to the HoloHub landing page on GitHub, you will see that there are a bunch of different applications there. And as I said, this is where we're basing our material for today's presentation. And there is also a nice interface. If you press the link that says visit the HoloHub landing page, you can see a collection of applications. And right now, there are maybe around 50 sample applications, but it is growing every month.

So here you see, for example, the two MATLAB sample applications, one for ultrasound beamforming and one for image processing. But in the popular tags, you can see there's everything from CV-CUDA to augmented reality, networking, WebRTC, and so on. In fact, if you search for LLMs, there are also applications using large language models. For example, deploying Llama 2, doing automatic speech recognition with the NVIDIA Riva SDK, and so on. So there are a bunch of different sample applications on HoloHub.

But for this particular webinar, what we want to do is focus on one sample application, the MATLAB one. So the first thing we would do is just clone the HoloHub sample repository, and let's jump into, as you can see, the HoloHub applications folder and the MATLAB GPU Coder sample application. So in this particular folder, you will find the Holoscan application that interfaces with the MATLAB-generated CUDA code. There are a couple of different folders. Under matlab_utils, there's just some CUDA code that you will need.

There's, for example, the fact that there is row-major versus column-major ordering in C versus MATLAB, and there's some CUDA code there to deal with things like that. So these are just utility functions that help out with creating these MATLAB Holoscan applications. Then, there's the more advanced ultrasound beamforming application and the more basic image processing application.

So as I said, let's go into a very simple image processing sample application. A Holoscan application generally has a few files. Let's focus on the MATLAB side of things first. So in the MATLAB folder there are a couple of scripts. There are two generate_image_processing scripts, one for Jetson and one for x86. These are the scripts for generating the CUDA code, and you execute them in MATLAB.

There's also the actual function, matlab_image_processing, the MATLAB function that we want to convert to CUDA. And there's also a test script where users can test this function. Other than that, Holoscan uses CMake as its build system. So there's a CMakeLists.txt that I'll cover in a bit more detail later. There's the main .cpp file, which is the actual application that gets compiled. And there is a YAML file with a bunch of config options.

So I'm going to next go into how to create CUDA code using MATLAB. So looking at the chevrons from the start, we're looking at the first section here. So this is MATLAB running, a screenshot of MATLAB. And you can see the four files from the GitHub repository in the folder view. And the file that's open is a very simple MATLAB function that we want to convert into CUDA. So it's very basic, but I think it gives you a good idea of how to do it. It's basically a Gaussian filter with a standard deviation of sigma that's just going to blur an image. There's nothing else.
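For reference, a function like this can be just a few lines. Here is a minimal sketch of what it might look like; the exact filtering call and any argument checks in the HoloHub version may differ, and it assumes imgaussfilt is supported for GPU code generation:

```matlab
function out = matlab_image_processing(in, sigma) %#codegen
% Gaussian blur of an input image. `in` is a single-precision image and
% `sigma` is the standard deviation of the Gaussian kernel.
% A minimal sketch -- not necessarily the exact HoloHub implementation.
out = imgaussfilt(in, sigma);
end
```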

One thing to keep in mind is that not every single function in MATLAB supports GPU Coder, but a lot of them do. So that's one thing to keep in mind. And then we go into the test_image_processing script. This basically just calls the function. And you can see, using some sample images from MATLAB, that you have an image, you run the function, and you get a blurred image. Very, very basic. But what we want to do then is create CUDA code from the matlab_image_processing function. There are also the generate scripts. These basically do automatically what I'm going to show you can do with the toolbox user interface of GPU Coder.
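A test script along these lines only needs a sample image and a call to the function. A sketch; the image name and normalization here are chosen for illustration, not taken from the actual HoloHub script:

```matlab
% Quick functional test of the blur function before any code generation.
in    = single(imread('peppers.png')) / 255;  % sample image that ships with MATLAB
sigma = single(5);
out   = matlab_image_processing(in, sigma);
imshowpair(in, out, 'montage');               % original next to blurred result
```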

So let's see, how do we convert this function to CUDA? If you have GPU Coder installed, and you press on Apps, then under the Code Generation toolboxes you'll find GPU Coder. So we press GPU Coder, and then we need to insert the function name. So we press the three dots and we select the image processing function, and it automatically detects that there are two inputs needed: in, which is the image, and sigma, which is the standard deviation of the smoothing kernel. And we have to specify the size of the image here and the data type. We also have to specify the GPU array, and we define sigma simply as a scalar value.

We then press Next, and we can, here, choose a bunch of different things: whether we want to create DLLs or .so files, or even a MEX file for running in MATLAB. But here, we decide to create a dynamic library. And there's not much else to it, really. We press Generate and wait a little bit whilst the MATLAB code is being built into CUDA. Eventually, we have a success message here that the dynamic library was built. And in the same folder, and this is a folder you can specify yourself, under codegen/dll/matlab_image_processing, we now have CUDA source code and shared libraries.

So this has all been generated. And now, it's available to link into some other code, some other libraries, some other application. That's what we want to do with this code that's been generated. So one thing I want to double-click on is the code conversion. You can use the GUI of GPU Coder, but you can, in fact, also simply use a script, which is very handy. And as you can see, there are the two scripts, one for x86 and one for Jetson. I'll cover Jetson, which is the Arm64 IGX, in a bit more detail soon.

For the x86 case, you basically just open this script. The nice thing about using the script is you just run it, or press F5 in MATLAB, and everything builds in the correct folder. You don't have to move anything around, because the MATLAB code is in the same folder as the sample application itself. So relative paths are set correctly and you don't have to do anything with the GUI.
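In script form, the GUI steps boil down to a configuration object plus one codegen call. A sketch of what such a script can look like; the image size and the exact settings here are illustrative assumptions, not necessarily what the HoloHub script uses:

```matlab
% Scripted equivalent of the GPU Coder app flow for the x86 target.
cfg = coder.gpuConfig('dll');        % generate a dynamic library
cfg.GenerateReport = true;           % optional: produce a code generation report

% Input types: a single-precision gpuArray image plus a scalar sigma.
% The 1080x1920x3 size is an assumption for illustration.
imType    = coder.typeof(gpuArray(single(0)), [1080 1920 3]);
sigmaType = coder.typeof(single(0));

codegen -config cfg matlab_image_processing -args {imType, sigmaType}
```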

For the _jetson script, it's a little bit different. Here, we have to use the Embedded Coder toolbox, and we have to specify the IP and some details of the Jetson. The Jetson device needs to be on the local network. And then MATLAB takes care of actually compiling the code on the Jetson device. So that means you can run MATLAB on x86 and then deploy your CUDA code on a Jetson device.
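The remote build is driven by a hardware configuration on top of the same codegen call. A sketch, assuming the MATLAB Coder Support Package for NVIDIA Jetson is installed; the address and credentials below are placeholders:

```matlab
% Cross-build for an Arm64 Jetson/IGX target; compilation happens on the device.
cfg = coder.gpuConfig('dll');
cfg.Hardware = coder.hardware('NVIDIA Jetson');
cfg.Hardware.DeviceAddress = '192.0.2.10';       % placeholder IP of the Jetson/IGX
cfg.Hardware.Username      = 'nvidia';           % placeholder credentials
cfg.Hardware.Password      = 'nvidia';
cfg.Hardware.BuildDir      = '~/remoteBuildDir'; % remote build directory

imType    = coder.typeof(single(zeros(1080, 1920, 3)));  % illustrative size
sigmaType = coder.typeof(single(0));
codegen -config cfg matlab_image_processing -args {imType, sigmaType}
```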

And yeah. So what we do next is just run the _x86 script. It will say code generation was successful. And this now ends up, not in the same folder as the MATLAB code, but in this particular folder that I've specified. And I'm going to now show, once we have the CUDA code and the CUDA libraries in the right folder, how to wrap those into Holoscan.

So now, what you're seeing is no longer the MATLAB window. This is a VS Code window running on an x86 workstation. It's the clone of the HoloHub sample repository. It has an applications folder, and you can see there are a bunch of applications here. There's an advanced networking benchmarking one, and there's an endoscopy tool tracking one. And there's also the MATLAB GPU Coder folder. So under matlab_image_processing, as I covered a bit earlier, you have the main.cpp, the CMake file, and so on. In the readme file of each application, there are very detailed instructions. So it shouldn't be too hard to understand how to do everything I told you about just from reading the readme.

So do you see how the codegen folder that I talked about earlier is now in the same folder as the MATLAB image processing application? That's the code we will be linking against. And in the readme, there are also details on how to configure Holoscan for MATLAB. There are just a few things that are needed. So Holoscan runs in a Docker container, because there are dependencies like CUDA and TensorRT.

So when we clone HoloHub, we need to run a script called dev_container, and we need to build this Docker image. So that's the first thing we do. When it has been built, we have this HoloHub Docker image available. For the x86 version, there are two environment variables we need to set. We want to set where the MATLAB install is located, so we set the MATLAB root, and we also set the version. Those are then given to the launch command of the HoloHub container. And now, when we run that launch command, we jump into the HoloHub container, where we have all the dependencies that are needed.

The next step is then to just build. So this particular image processing example uses endoscopy sample data. So to get that sample data downloaded, so you don't have to get any data from anywhere else, we just build the endoscopy tool tracking application, and that, essentially, downloads endoscopy video data we can use for our sample application. And then, if you go to the build part of the readme, you see that there are two different sample applications: the image processing one and the beamforming one. We set the CMake option to ON for the image processing application, and basically just configure and build with CMake.

That should not take very long at all. And once the application has been built, it will have linked with the CUDA libraries generated by MATLAB GPU Coder. So the last step here is to simply run the MATLAB Holoscan application. The details are in the readme. And what you'll see is the endoscopy video having been smoothed, which is quite nice because then it's not as gory, maybe. And this is now running at a high frame rate, reading data from disk. This could also just be, obviously, a sensor connected to your computer, your IGX or x86 workstation.

But the neat feature here that I talked about earlier is the support for the Arm architecture. The difference is that you still run MATLAB on your x86 machine, GPU Coder on your x86 machine, but using the Embedded Coder toolbox. So there are now two additional dependencies: the Embedded Coder toolbox and a support package for NVIDIA Jetson. You can run the script that I showed you earlier and deploy on, for example, the NVIDIA IGX. So you basically run the script and the code ends up on the Jetson. And then you can run the exact same sample application on the IGX.

I'm not going to spend too much time here, because there's still a bit of material left. But as I said, you have to specify the location on the network and the particular hardware that is used by the Embedded Coder toolbox. And then, you have to do a few small tweaks. It's all detailed in the readme. Basically, it's changing one path, because of how the folder structure is mirrored from the x86 workstation to the Arm deployment machine. And then you can run this on Arm as well.

So let's jump quickly into how to wrap this with Holoscan. So in the MATLAB image processing folder, there's the CMake file. This is actually quite simple. You have to point to a particular include folder of MATLAB when you set your include directories. And then you link to the MATLAB image processing library and the utils, and then you're good to go. You can write your C++ code.

And the main .cpp. I'm not going to go into details on how to implement all of these applications. The idea is quite simple: you build a graph, and each node in the graph is an operator. They talk to each other by sending messages over ports. The compute method here is the tick; at each tick of each operator, the compute method is called. But there's a good example, in our documentation, of how to create a simple hello world application. So let's not spend too much time here.

So how does this relate to the MATLAB code? You have to include the relevant MATLAB headers that were generated with MATLAB GPU Coder. And then, inside of the compute method, you can see how, at line 148, matlab_image_processing is being called. So there are a few utilities that are called before and after, sort of transposing the row- versus column-based ordering.

But this is really as simple as just calling the MATLAB function. It takes the buffers in CUDA, on device memory, and very, very efficiently does the image processing. In this example it was smoothing, but it could be anything else you would want to do in MATLAB, basically. And I'll show you, soon, a more complicated example.

So yeah, these are the steps. You create the MATLAB function, you convert it to CUDA, and then you wrap it into Holoscan and you run it. So now, let's just go over some of the steps for something more advanced, which is ultrasound beamforming. So beamforming is, sort of, the image reconstruction step of ultrasound imaging. In the MATLAB GPU Coder folder on HoloHub, you have a matlab_beamform folder, and there's a detailed readme in that folder. So let's not spend too much time on that.

But there's a different MATLAB function, now, that's going to be converted. And this is matlab_beamform.m, and if we look at this function, it's more complicated. This is more of an algorithm, not a single function that already exists in MATLAB. So you can see there are nested for loops, and there's a bunch of other things going on here. If you run it as plain MATLAB, as you might have guessed, nested for loops are going to be very, very slow. Generally, you would want to vectorize in MATLAB.
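To give a feel for the shape of such code: a classic beamformer is a delay-and-sum over elements for every image pixel, which is exactly the kind of nested-loop structure GPU Coder can parallelize. This is a simplified illustrative sketch, not the HoloHub implementation; the signal model and the argument list are assumptions:

```matlab
function img = beamform_das(rf, t0, fs, c, elemPos, xGrid, zGrid) %#codegen
% Delay-and-sum beamforming sketch (illustrative only).
%   rf      : [nSamples x nElements] received RF data (single precision)
%   t0, fs  : start time (s) and sampling rate (Hz) of the RF traces
%   c       : speed of sound (m/s); elemPos: element x-positions (m)
%   xGrid, zGrid : pixel coordinates (m) of the output image
nX = numel(xGrid); nZ = numel(zGrid); nEl = size(rf, 2);
img = zeros(nZ, nX, 'like', rf);
for ix = 1:nX                        % pixel loops: GPU Coder maps these
    for iz = 1:nZ                    % onto CUDA threads
        acc = single(0);
        for e = 1:nEl
            % two-way path: plane wave down to the pixel, back to element e
            d = zGrid(iz) + sqrt(zGrid(iz)^2 + (xGrid(ix) - elemPos(e))^2);
            s = int32(round((d / c - t0) * fs)) + 1;  % nearest RF sample index
            if s >= 1 && s <= size(rf, 1)
                acc = acc + rf(s, e);
            end
        end
        img(iz, ix) = acc;
    end
end
end
```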

But another option is to use GPU Coder and convert this to CUDA code. So with the generate_beamform scripts, for Jetson or x86, we can convert this to CUDA and then run it, following the exact same steps as I showed you before. And this is simulated data, so you're not supposed to see anything too interesting. But in fact, running this is accelerated by over 3000x. So it's very, very efficient to use MATLAB GPU Coder and then deploy on Holoscan. And Holoscan also has a bunch of neat features with very efficient memory handling, with zero-copy transfers and so on.

So I'm going to finish up soon. I just wanted to mention a few common pitfalls. So you need to have the necessary MATLAB toolboxes. That's important. The bare minimum is GPU Coder, but for running on Arm64 Jetson, there are a few more dependencies that are needed. You also need to use MATLAB functions that are supported by GPU Coder, or write your own from the more basic building blocks instead of using ready-made functions in MATLAB, as in the sketch below. Generally, on many NVIDIA GPUs, you get a much better speedup with float32 than with float64. So that's one thing to keep in mind: use single precision in MATLAB.
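As an example of building from basic blocks: if a ready-made function isn't supported for code generation, the same operation can often be written with plain loops plus the coder.gpu.kernelfun pragma. A hypothetical sketch in single precision, not taken from the webinar:

```matlab
function out = box_blur3(in) %#codegen
% 3x3 box filter written from basic building blocks (hypothetical example).
% coder.gpu.kernelfun asks GPU Coder to map the loops below onto CUDA kernels.
coder.gpu.kernelfun;
[h, w] = size(in);
out = in;                            % borders keep their original values
for i = 2:h-1
    for j = 2:w-1
        out(i, j) = sum(sum(in(i-1:i+1, j-1:j+1))) / 9;
    end
end
end
```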

For Jetson, you have to ensure that you have SSH access and CUDA on the path of the Jetson machine, to be able to do this remote compilation. You need to be precise in ensuring that the data types and the number of inputs and outputs to the CUDA function generated by MATLAB are all correct. But as I said, I would advise you to follow the readme on GitHub to avoid any of these errors. I'm also more than happy to answer any questions you might have. My email is at the end of this webinar.

So this sort of encapsulates the steps to quickly get started. So if you have an x86 machine running Ubuntu and you have an NVIDIA GPU, you can simply follow these steps and run the MATLAB image processing example. So there are not that many steps to run it, and you could do it on the device as shown at the bottom here. So with that, I think I've gone through quite a lot, and there is, luckily, a recording. So if you want, and if you're interested, you can go back and have a look in more detail.

But the main idea here is to show you that you can use MATLAB, which is a great, great tool for prototyping algorithms and has a lot of nice features, and you can easily convert that code to CUDA and, for a real-time processing application, use that code in the Holoscan SDK. So I'm going to now hand over to Marc. You can just talk over this slide while you're showing it, perhaps. Yeah.

Yeah. You show it this time. Sounds great. So thank you, Mikael. So for those of you that are looking to develop with this, there's a wealth of resources, ranging from the GitHub repos to the videos that are on YouTube, as well as support forums and technical blogs. We'll include these links in the presentations that get shared out. NVIDIA's annual conference is called GTC. At GTC '24, which was a couple of months ago, we had a special developer day along with lots of special events. And if you want to see those, they were recorded and are available for offline viewing as well.

So there are lots of links there for GitHub, both for the Holoscan SDK as well as the HoloHub repo. And it is supported: we support the Ubuntu operating system, but for those of you that are developing streamlined, hardened systems, we also support Yocto for building a custom kernel at the OS layer. So there's information about doing that as well. And if you want to try it out, either on your own hardware or on AWS, we've got links for that as well.
