Medical Devices Speaker Series 2023: Achieving IEC 62304 Certification for Neuromodulation Devices - MATLAB & Simulink
      Video length is 27:35

      Medical Devices Speaker Series 2023: Achieving IEC 62304 Certification for Neuromodulation Devices

      From the series: Medical Devices Speaker Series 2023

      Adarsh Jayakumar, Boston Scientific

      Developing neuromodulation devices that interface with the nervous system presents unique challenges, especially when it comes to achieving IEC 62304 certification. This presentation by Adarsh Jayakumar of Boston Scientific explores how MathWorks tools can help you navigate the complexities of the standard and accelerate your path to compliance.

      Discover how MathWorks solutions can support your neuromodulation device development by:

      • Optimizing software development lifecycle processes: Implement efficient and compliant processes tailored to IEC 62304 requirements.
      • Facilitating risk management: Identify, analyze, and mitigate risks associated with your device software using industry-standard methodologies.
      • Simplifying verification and validation: Leverage powerful tools for comprehensive testing and documentation to ensure software quality and compliance.
      • Streamlining documentation and evidence generation: Generate the necessary documentation to demonstrate compliance with IEC 62304 efficiently and effectively.

      This presentation was given at the 2023 MathWorks Medical Devices Speaker Series, which is a forum for researchers and industry practitioners using MATLAB® and Simulink® to showcase and discuss their state-of-the-art innovations in the areas of medical device research, prototyping, and compliance with FDA/MDR regulations for device certification.

      About the presenter:

      Adarsh Jayakumar is a manager of embedded systems in the neuromodulation division at Boston Scientific. His current focus is on scoping and mitigating the technical risk of embedded software projects such as IEC 62304 certification of Class-II and Class-III medical device software. Adarsh received his master’s degree from Cornell University, and his graduate work focused on designing a novel technical tool to study cardiac neurophysiology.

      Published: 18 May 2023

      So the next talk is from this gentleman from Boston Scientific. He is the manager of embedded systems in the neuromodulation division at Boston Scientific. And he'll be talking today about working on a project for IEC 62304 certification for Class II and Class III medical devices. So welcome, Adarsh. Thanks for being here, again. And the stage is all yours. You need the speakers on?

      No. Just do that. We're good. Thanks. All right. Hey, folks. So today I'll be talking about my favorite topic, which is embedded software. Basically, how we at Boston Scientific Neuromodulation go about designing embedded software, some of our frameworks around verification and validation, and then how we've been working with Akhilesh and his team to leverage Model-Based Design to kind of make that process even better.

      Everything here is basically exploratory research, so kind of off label, but kind of interesting stuff that we've been working on and thought we could share. So agenda-wise, we'll do some introductions, you know, who am I, what am I about, what does Boston Scientific Neuromodulation do, what is embedded software--

      I think it's probably bread and butter for a lot of you folks, but quick intros --how do we go about creating embedded software, some of the regulatory standards around that, the development process that we have in place, as well as how do we design algorithms classically.

      And then we'll dive into the meat of using Model-Based Design, Simulink Embedded Coder, the stuff that Maeve and Kevin were talking about, verification and validation, in particular, and then go through a bunch of toolboxes that we've found particularly useful from MathWorks.

      I think there should be time for questions in the middle. So if you have any, just raise your hand, and I'll try to get to them. Otherwise, we can do it at the end.

      So yeah, who am I? My name is Adarsh. As I was saying, I'm a manager on the embedded software team at Neuromod. My background's primarily electrical and computer engineering, so a lot of transistor-level design, microelectronics, embedded software, specifically toward neural implants. Very passionate about that field. I love neuroscience, digital signal processing, electronics.

      Ultimately, it's about our patients. I think we're all here to go into clinic, go in the OR, and see that difference actually happen. And so that's what I'm here for.

      Boston Scientific Neuromodulation-- pain and brain, that's our tagline. So we generally work throughout hacking the central nervous system, working through chronic neuropathic pain, and then trying to treat different types of neurodegenerative disorders, so stuff like Parkinson's, essential tremor, dystonia, et cetera.

      So over here on the far left, we have-- these are three core platforms. We have our Interspinous Spacer Vertiflex. So this is for folks with lumbar spinal stenosis. Basically, they get their spinal cord kind of compressed and narrowed, their nerves pinched together. The spacer comes in and then helps them get long lasting pain relief.

      Over in the middle, we have RF ablation. This is our G4 system, where we're ablating nerves-- just putting heat on nerve tissue-- working in a similar way to, if I smack the side of my hand and rub it, stopping those nerve signals from traversing up to the brain and helping with pain relief in that manner.

      And then kind of the more bread and butter for me is spinal cord and deep brain stimulators. So these are these palm-sized implants for SCS. We place it close to the hip, traverse leads up into the epidural space of the spinal cord, and then shoot electrical current directly onto it to stop more chronic neuropathic pain.

      DBS-- very similar technology. We take a similar implant, put it up in the chest, route it up the neck, into the head, and target different types of brain structures, like subthalamic nucleus for Parkinson's-- shoot electrical current in there and stop their motor cognitive disorders, like tremors for example.

      For the context of today's talk, we're really talking about RF ablation, SCS and DBS. Vertiflex-- super cool, very effective, but a purely mechanical component. No embedded software, so a little out of scope for today.

      Embedded software-- all right. So I think everybody here seems to be pretty familiar. But a quick run through of what that is. Embedded software is basically a type of software. It's compiled onto specialized hardware, often called firmware. It's basically C/C++ residing on a microcontroller.

      It can do all sorts of stuff. It can communicate with your printed circuit board, your ASICs, your FPGAs. It can also control different types of microelectronics. So for our implants, for example, we might need Bluetooth communicating with an external remote control. It's the one that actually programs the actual therapy, delivering current into the body, as well as what we're talking about more today, which is the algorithms.

      How are we actually designing those algorithms that facilitate the patient therapy, do stuff around battery technology, do stuff around lead placement, all that sort of stuff. And it comes with its own slew of design complexities. For any kind of implant, you're going to have limitations on power.

      You have something that's-- if it's a primary cell, it's just going to bleed out and die. If it's rechargeable, you don't want your patient to continuously have to recharge. So you really have to think about computational complexity. How many instructions are you really running? How much stuff are you doing before this thing actually dies?

      Because of space and area, you want to keep surgical ease. You want to have as little memory usage as possible, so you're not taking up too much space on your printed circuit board. And then our real focus for today is safety and regulation.

      We're in a highly regulated industry. And how do we approach things, from implementing software to making sure that we're getting FDA certification and all that sort of stuff? So quick run-throughs-- I'll stop real quick. Those were my introduction slides. Any questions before I dive into the fun stuff? Awesome.

      So I know this is dry, but it's important. This is a whole field in itself. I'm no expert. But I wanted to point out just a couple of the regulatory standards that we really focus on. This is not a holistic list. But the first is IEC 62304, which defines the lifecycle requirements for how we develop and test medical software.

      And what's really important here is that it doesn't say how that's accomplished. It's just kind of the process, as Layne was kind of describing, of what you need to do. And that's what allows us to leverage some of the tools from MathWorks, because you can kind of do this in a different type of manner.

      And the other one that's quite important, we've referenced a couple of times is ISO 14971, which is risk hazards. There's always some probability of risk. And how do we take that risk and mitigate for it and know the probability of that actually happening.

      And finally, with every company, there's some form of work instructions. How do you do that work? What are the safety classifications per product? How do you do testing, verification, code reviews, SOUP-- software of unknown provenance, third-party drivers, compilers, that sort of deal. And then what is the documentation that gets you to the finish line? How do you actually get to FDA submission?

      And that will be the focus of this talk. Happy to talk about the actual design, but the focus will be primarily on the regulatory side.

      So this is just me spitballing now, kind of like how I see IEC 62304, and particularly Section 5, which is around medical device software. And the way I split it up is basically around these five different sections, which is requirements-- what are you trying to do-- and your architecture, which is how you're doing that at a high level.

      Your development process of what your unit integration testing is. What are the actual units that are part of that architecture? Your generated code that lives on your computer-- what are you actually producing when you write it out in C and C++? And finally, your binary image that gets loaded into your medical device implant.

      And what's really interesting to note is, if you think about it this way-- obviously waterfall doesn't always work. But the dream is, you make your requirements, you make your architecture, and you kind of filter it down until you finally have a firmware image. And that follows the development process, ideally, for embedded software design.

      And when you think about this from a regulatory standpoint, there has to be traceability. So you have to really focus on: how is your architecture traced to those requirements? How is your development process taking each unit and putting it as modules under your architecture? How are you taking your generated code and providing integration testing between these units, as well as testing the units themselves?

      And then, how does your firmware image do static and dynamic code analysis? Static code analysis is basically, just looking at it without running it, is it following all the industry standards that you expect? Dynamic code analysis is checking every if/else statement for your MC/DC-- modified condition/decision coverage. How are you going through the execution and making sure your testing covers every single path?

      And then finally, you have your firmware image system testing, which is, you've got your binary image, how is that tied back all the way to the requirements. And all of this stuff is tied to actual documentation. This is not a holistic list. There's going to be command protocols. There's memory maps. There's formal readiness.

      But these are some of the core documentation that we have, and all regulated industries have, for embedded software: the requirement specification, architecture design, module design, integration test plan, and then the design failure modes and effects analysis, which is very important-- kind of looking at your risk hazards and how you mitigate them.

      OK. So this is the kind of workflow and the documentation that's related. And now we're going to get to, how do we design algorithms. This is something that I think a few of the other presenters brought up, which is the classical ways that you have two different teams. You have a research team and you have an embedded software team.

      And it doesn't matter the language. But let's say this research team is working in MATLAB kind of designing these algorithms. And they find something that works. Great. Then they have to design tests, probably in that same language. They figure out their whole thing. That's great.

      And now they have to work with a separate team, embedded software, to figure out how to get that into the actual implant. And that is fine. And it works. But it leads to this classic problem, which is that the algorithm development and the algorithm testing are decoupled from the actual device development and testing.

      And when you actually have to do that, you have to verify the functionality of the algorithm twice, effectively. You're doing double work the entire time. And as other presenters have brought up, that is kind of the point of Simulink Model-Based Design and code generation-- to avoid this iterative process of designing your algorithm, recoding it, and iterating through.

      And the dream here is that you're saving long term resources for development and testing, reducing those manual code errors from doing that transfer of knowledge, providing flexibility for rapid deployment of new algorithms and parameters.

      I do want to focus on the V&V stuff, but I'll kind of quickly run through some of the key things that we really liked about Simulink and Embedded Coder before getting to V&V. So first of all, I mean, for any controls theory, it just looks great, right? I mean, that's kind of important.

      If you have a controls theory background, you have a set point, you have a process variable, you have some error, and you're trying to minimize that error. And you take a look at Simulink and it just kind of looks about the same. And so there's just some advantage of having something visual that you can show someone who doesn't have experience that hey, this is what we're trying to do.

      There's a lot of debugging tools you can use, like little step sizes, a lot of time dynamics that are really nice and kind of get really nitty-gritty. So this is, I forget, one of our PID algorithms, where you can really see some of the oscillations that happen in different time periods and see low-level dynamics.

      Easy transition from MATLAB-- so this is kind of-- I asked this question about why choose MATLAB Coder versus Simulink. And I know there is capability for just taking a MATLAB block and putting it directly into Simulink. We haven't explored it that much. But I think it is kind of an interesting thought of how much you can leverage just MATLAB functions themselves through MATLAB Coder versus just doing Simulink and Model-Based Design in isolation.

      And then, depending on what you're doing-- nobody's taking just the standard algorithm and plopping it in a medical device, but there is something to kind of get you off the ground. There's native support for classical algorithms. So, for example, this PID is not something that we had to create-- you kind of put a block in, and that gives you this.

      And then, specifically around embedded code generation, this was really important for us, because our team, they're firmware engineers. They come from a background of hardware and bits and bytes. And so if you can click on C/C++ and then see exactly where that's highlighted in Simulink, that's a huge value add. Just seeing that kind of bidirectional code traceability has been very valuable for us.

      A lot of options for code generation and optimization. This kind of gets back to that trade off between computational complexity and memory, which is the classical problem in embedded software design. You can kind of choose here, hey, do I want this thing to be a little bit faster. Do I want this thing to take up a little bit more memory? What do I really care about within the context of my application?

      And then diagnostics-- checking for any issues with the model, checking for issues with the code generation. We come from a background of, as I said, kind of embedded software. So having these diagnostics that just proofread the Simulink model itself helps us, so that we don't have to become Simulink or MATLAB experts. We can stay on the ground and work from there.

      And most importantly, it actually works. That kind of matters. So this is just a quick comparison of memory and computational time. We had one of our engineers, basically, take about a half-week-- and hey, go build a PID model. Just cook it up however you like to do it. And then do the same thing in Simulink. And obviously there can be performance improvements on both these sides. But what does it look like when you're doing this side by side?

      We can kind of ignore, for the context here, floating point versus fixed point-- floating point being decimals, basically. Fixed point being you specify how many bits are the integer part versus the fractional part.

      And we found that Simulink was effectively equivalent or, in this case, better in all cases, in terms of memory usage and computational efficiency, which is really surprising, because-- I don't know if it's our big egos, but we really thought like, OK, Simulink is going to be worse.

      But how much worse is it going to be? And it showed that, head to head with a person, it can actually perform better, which is very encouraging and kind of put us down the path of Simulink. Great. So I'm going to take a quick pause, look at the room, see if there's any questions. I'm going to take a left turn into V&V. But are there any thoughts before I move on? Yeah.

      Quick question-- so can you go back one page where you show-- one more page. Yeah, that's it. The options for generating code here-- is anything there particular to medical device safety or the IEC 62304 standard? The kind of options that can give you a baseline that is optimized particularly for medical device software or safety-critical software?

      Yeah, that's a great question. So I'll kind of have that in one of the later slides of how it ties in directly to IEC 62304. I think these options are really specific to the target in question. So you might have an ARM Cortex and blank, right? And so based on that particular cortex, you can change the efficiency.

      I do see MISRA C in some of the guidelines here, but in our experience, we've been just using this for optimizations based on the target. And then for the regulatory stuff, we use a different toolbox, which I think was Simulink Check.

      Any other thoughts? Cool beans. All right. So coming back to this guy. So we talked a little bit about traceability, having this architecture, the requirements, the development, the generated code, and the image, and how this is kind of an irritating process to go from the researchers and then the embedded software team.

      And the real value add, when we pitched this to upper management, is really this image, which is that MathWorks has these different toolsets that take us through the flow of getting from requirements to images. And I'll go through some of the specific toolboxes that we've leveraged in the past.

      But the real takeaway here is that the verification of the requirements, the architecture, the development, and the testing are synchronized. I know I'm hammering that point home. And that's really important, because, in the past, our teams from research, which are typically PhDs in neuroscience, have been highly decoupled from the embedded software developers. And these tool sets basically allow us to bridge that gap.

      So here's some of the toolboxes that we've used. System Composer-- and I'm sure some of you have used some of these before-- provides a centralized view of those system architecture pieces. So you get these high-level inputs and outputs of the models. You attach requirements to each portion of the system architecture, show that traceability, and make sure that those generated reports go into the firmware documentation.

      And we literally just take it and plop it in. So our architecture design and module design descriptions basically take little snippets of this and just throw it in. Great. Simulink Test. This one I think I have a little less experience with versus the low-level stuff. But there's test harnesses that have access to a lot of the signals within the model.

      And for those of us who work in embedded software, doing time dynamics is a real pain. If you're doing averaging, if you're doing filtering, whatever you're trying to do, if you're trying to get a very specific subset of testing, it can be very tricky to kind of get that.

      And using these test harnesses allows us to access a piece of the model that would be very difficult in C and C++. Not impossible, but tricky. And this kind of ties into SIL and PIL testing-- software-in-the-loop, processor-in-the-loop-- which can be added for verification with biological models. Right, for all of us, whether that's a brain or a spine or whatever, there is always some biological model that is involved.

      And you can kind of put your control algorithm along with that plant model and just kind of fit it nicely within the Model-Based Design. Visualization-- you've got reports and all your good stuff, at a high level and at finer levels of detail.

      And then, I know Akhilesh was kind of mentioning continuous integration. DevOps is always a key thing when you're trying to do iterations on your design. Jenkins is just one particular type of CI/CD platform. But you want to have something that fits nicely into a CI/CD platform, so you can make a change in the actual design and then just run the same regression test suite without having to change a bunch of stuff. So that's Simulink Test.

      Simulink Coverage and Design Verifier-- the same sort of thing around MC/DC-- all coverage traceable to lines of generated C++ code. So it's nice to kind of see that. This is something we were very worried about. We've always had a situation where our generated reports, through other vendors, have had traceability to the actual lines.

      And it's nice to know that the reports that we get basically have the same kind of look and feel, where you can see true/false statements and how it actually looks. We haven't used any of the test generation for missing coverage, but I do know it's there. And you can press the button, and it generates a bunch of tests for your missing coverage.

      It spooks me a little. I'd rather write the tests myself, personally. But it's nice to know that it's there if you want to use it. So this, I think, gets to your question around regulatory standards. So Simulink Check does have checks for compliance of the Simulink model to particular standards. So we're talking about the inputs of the models, outputs of the models, MISRA C.

      And then specifically, there's a little thing here-- modeling standards for IEC 62304. You click that bad boy, press Run Selected Checks, and it kind of goes through a bunch of checks that are specific to those standards. Yeah. And I actually haven't checked exactly what those checks are-- I haven't clicked the dropdown here. But I do know that it runs, and it's nice to have that in the background. It's great.

      Polyspace-- so this is static code analysis-- very important as part of a regulated industry-- detecting complex bugs, unreachable code, illegal pointers. I think I stole this from the MathWorks website. But here's a bunch of different types of bugs that these kinds of tools can find. And we've definitely leveraged them when we're trying to run our code.

      And this is the one, actually, that I've appreciated the most. And this is, from a pitching-to-upper-management perspective, probably the most useful, which is the IEC Certification Kit. With any product, if you're using a third-party vendor, you have to have tool qualification.

      Whether it's Python, whether it's a compiler, whatever you're doing, there has to be basically validation for that tool to make sure-- what happens if the tool doesn't actually do what it says it's doing? What if it's not actually generating the code?

      And using this kit has been kind of nice, because, A, it organizes your stuff. You have all that traceability that I was talking about. But most importantly, it comes with the certification PDF. And this is not MathWorks saying you're certified. But it does have a test suite, written in MATLAB, that you can run that basically proves that the tool itself is working as expected.

      And we kind of tie that into our DFMEA, putting some of the risk on you, Akhilesh-- to basically say, hey, MathWorks is taking some of the blame. Here is some evidence from these toolsets that this actual Embedded Coder stuff works. Yeah, so pretty quick.

      The main takeaways here are that Model-Based Design is leading to transparency for our development and our verification. The algorithm and the code design are kind of done together, so that this workflow of embedded software is not decoupled from research. There's a lot of time-saving that we found around testing documentation.

      You can change requirements and model-- change your design, iteratively, without having a bunch of manual back and forth, which has been really valuable during product development. We are not at the launch yet, but theoretically, let's say that as well. And we can do a lot of testing on the corner cases.

      So a lot of the stuff-- whether it's preclinical, clinical, or done in the lab in a saline bath, it can be really irritating to get some of these corner cases for some of these control algorithms. So being able to use Simulink tools has been pretty useful.

      Yeah, that's all I have. Any questions or thoughts? Quick question. The-- excellent presentation, by the way. The slide where you have the architecture as one of the blocks-- can you maybe comment a little bit on RTOSes?

      Yeah, this is a really interesting point. So for our system, Simulink is not the full piece of the puzzle. So we have multiple microcontrollers. And some of those have an OS. And some of those-- in this case, the ones that are kind of related to Simulink-- are almost standalone. So we kind of have-- we'll call it the neural ARM-- a piece that is doing the algorithms, and the rest is kind of managing the rest.

      So when we think of the architecture, we have to take that into account. So in our context, we haven't had to worry too much about an OS, in terms of interfacing with Simulink.

      All right.

      Regarding your coverage test-- can you go back to your slide where you show the coverage test that you mentioned that--

      Yeah, so this one.

      What's your approach to-- do you have a goal to achieve 100% test coverage? Or, if that's the case, what kind of approach do you take to get to 100% coverage?

      Yeah. So the goal is always 100% coverage. It is not required to always get 100% coverage. You can always write addendums-- then you have to tell the submission office, basically, why you weren't able to achieve 100% coverage. One of the things we've found is that, classically, using other tools, it's really difficult to get that 100% coverage, because you're writing a lot of stuff manually.

      When you're doing stuff in Simulink, it's a lot easier to get to that 100%. But regardless, it becomes a trade-off of how much of this is trying to get to a magic number versus how much of it is actually providing more safety for your system. So yeah, I guess that's my take.

      All right. Well, thank you very much, Adarsh. Very exciting stuff.

      Yeah.