Model-Based Design for Space Startups Video - MATLAB & Simulink

    Model-Based Design for Space Startups Video

    Lewis McCluskey, Southern Launch

    Overview

    In this webinar you will learn how MathWorks® Startup and Accelerator programs can help your organisation during the early stages of your journey and beyond. You’ll hear how these programs equip your team with the necessary tools for design and simulation, test, and code generation within a Model-Based Design workflow. Application Engineers, technical support, and training are also available to provide knowledge and guidance so your team can get the work done.

    Highlights

    • Learn how the MathWorks Startup and Accelerator programs can help your company through:
        • Favorable software license fees, including MATLAB and Simulink suites
        • Access to complimentary and discounted training
        • Support from Application Engineers
        • Access to advanced services to jump-start your development
    • Discover how Model-Based Design accelerates your development:
        • Simulate and prove designs and avoid expensive physical prototypes
        • Present proven concepts to investors, partners, and customers quickly and efficiently
        • Collaborate within and across teams using model components for parallel development
    • Hear how others, like Southern Launch, have benefited from the Startup and Accelerator programs and Model-Based Design.

    About the Presenters

    Ruth-Anne Marchant is a Senior Application Engineer specializing in Simulink and Model-Based Design. Since joining MathWorks in 2015, she has focused on helping customers adopt Model-Based Design with Simulink. Prior to joining MathWorks, Ruth-Anne worked in the Canadian aerospace industry as a control systems engineer. Ruth-Anne holds a BASc in computer engineering and an MASc in electrical and computer engineering, both from the University of Waterloo, Canada, specializing in control systems.

    Lewis McCluskey is a senior launch engineer at Southern Launch and has led the development of Southern Launch’s rocket range safety analysis process.  Lewis works alongside some of the largest rocket manufacturers in the world to understand how their rockets could be safely launched in Australia. He has developed a strong understanding of rocket technologies, and how safe rocket launches can be carried out. Lewis studied Aerospace Engineering at the University of Adelaide, graduating in 2019.

    Recorded: 16 Mar 2022

    Lewis McCluskey here. Thank you for the intro, Ian. So as Ian said, I'm a Senior Launch Engineer for Southern Launch, and I've been working here with the company since 2018. I currently lead the work and the team for mission development, modeling and simulation, flight safety, and operational software. Today I'm going to talk about how we've worked within the MathWorks startup program throughout the years and how we've benefited from the program, applying not just to the model based design software that we've developed using MathWorks products, but also to the software that we've developed for other applications as well.

    But before we jump into that, I'll talk a bit about the space market that we're trying to reach as a company, our unique offering as a result of that, and quickly touch on our two launch sites at Whalers Way and Koonibba test range to provide the context for the problems that we're trying to solve using the MathWorks products-- which then leads into the product development.

    So first of all, the market. So we're seeing a change in the different sorts of satellites that want to be launched around the Earth. Traditionally, you're looking at geostationary satellites-- very heavy satellites, weighing 4 to 6.5 tons, costing half a billion US dollars just to launch, and roughly as much again to manufacture. Because they're so far away, they can stay in orbit for 15 years. They don't experience a great amount of atmospheric drag, or much decay, so they'll just stay there and move into a graveyard orbit afterwards. And that's it. Once that technology is up there, the technology can't change.

    However, what we're seeing now is that with the modernization and miniaturization of technologies, we can achieve similar things, but at a much, much closer distance relative to the Earth and at a much lower cost. Moving away from those 4 to 6.5 ton satellites, some of these now weigh just 3 to 12 kilograms. Costs are orders of magnitude lower. But the problem is that the rockets designed for those large, heavier satellites aren't cost effective for these smaller satellites that need to go around the Earth, and so this is a new market. Lots of constellations are going to come up-- obviously, with more and more satellites now-- that we'll be looking to tap into with a rapid, cost-effective solution for launching from the Earth-- in particular for us, obviously, launching from Australia.

    So one of the things that we value is obviously the location of the launch site itself. What we did as a company before selecting was to look at a host of different launch locations around the Earth and assess them, and then we looked to find what we thought was the optimal location to launch from. And for us, we found this location to be here in South Australia. One benefit of launching here is that it's closer to the poles, so you don't have as much rotational velocity to counteract when you're looking to go into a polar orbit. It's also in an area of low shipping density. Similarly, it's in an area of low aircraft density as well, which you can see in the two maps on the right. And so the impact on those industries is quite minimal. And that's one of the justifications, anyway, for launching from South Australia.

    So our offering. Our five key service pillars are orbital launch-- that's from our launch site at Whalers Way, which we'll discuss a bit later on. Suborbital launch as well, and that's to test the technology that can eventually go into those orbital systems. Mission and campaign design, which includes propulsion testing and safety analysis services, and that's partly what I'll be talking about today in terms of what the MathWorks products will be supporting. Similarly, our rocket design and avionics hardware consulting is another key pillar, which links into the discussion today that we'll be going through. And lastly, securing technology transfer, launch licensing, and launch insurance for our customers.

    So a very quick overview. We have two launch sites. We have our orbital launch site in South Australia located at Whalers Way, which is close to the township of Port Lincoln. It's marked with the W on the map that's centered there. For us, it's a good location. In addition to what was mentioned earlier, it has good year-round weather and, obviously, unhindered over-the-ocean access, so there are no populations at risk for a launch out this way-- in addition to the established industry around the area to support our rocket activities.

    And on the left-hand side of the map is the Koonibba test range, which is where we'll be doing our overland launches for a couple of different contexts, actually. This particular launch site was the first ever licensed launch facility in Australia, and the first Australian space launch permit was also granted for the Whalers Way launch site last year. The Koonibba test range is an overland range with 145 kilometers of access. You can see it's marked with the swath of azimuths in orange in the center there. And it's used for the recovery of technologies after launch: launch, recover, and then inspect, analyze, and obviously go again, building up the technologies.

    So those are our core pillars, and those are the problems that we're aiming to solve. And the very first one that we looked to investigate was just flight safety development in general. So flight safety-- in the early days when we were investigating it, there wasn't an Australian Space Agency at the time, and there weren't the space rules that we have now, which we use in our flight safety work. So very early on it was about exploring other regulatory frameworks, such as the FAA's, which is what CASA made use of. And then we were using some complicated range safety equations and tools, just developing scripts to solve these problems. The key thing for us was that we were focused on the actual capability development at this point and not on developing the back-end software to do that. And that's where the MathWorks tool chain came in handy for us early on: we could use toolboxes that already had solutions implemented.

    As an example, what you're seeing right there is a probability density function measured in two dimensions. And we don't have to code the back end to that, because we can already make use of built-in MathWorks functions which can develop that for us. And so early on that's how we were developing it. It was developing capability, really focusing on that, and that's what we were able to leverage based on MathWorks' involvement with us in the startup program.
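
    As a rough illustration of the kind of built-in function being described, here is a minimal MATLAB sketch that evaluates and plots a two-dimensional Gaussian probability density over a grid using mvnpdf from the Statistics and Machine Learning Toolbox; the mean and covariance values are illustrative placeholders, not Southern Launch's actual impact distribution.

        % Evaluate a 2-D Gaussian probability density over a grid.
        mu    = [0 0];                    % mean impact point (downrange, crossrange) [km] -- placeholder
        sigma = [4 1.5; 1.5 2];           % covariance of the impact distribution [km^2] -- placeholder

        [x, y] = meshgrid(-10:0.1:10, -10:0.1:10);
        p = mvnpdf([x(:) y(:)], mu, sigma);   % built-in multivariate normal density
        p = reshape(p, size(x));

        surf(x, y, p, 'EdgeColor', 'none');   % visualize the density surface
        xlabel('Downrange [km]'); ylabel('Crossrange [km]'); zlabel('Probability density');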

    Similarly for us, we were also looking at another aspect too. So not just flight safety, but also range operations. For example, one of the Koonibba test range launches that we did in 2020 needed a range operation to support it. As a result, one key area for us was situational awareness. What you're seeing on the right there is an example of an integrated system using an ADS-B SDR solution, which was integrated using the Communications Toolbox within MATLAB. We then used the Mapping Toolbox to create a map of the different exclusion zones, including the aircraft located nearby, and plot them live along with an ETA to intercept our boundaries.
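
    A minimal sketch of the mapping side of such a system is shown below, assuming the Mapping Toolbox is available; the exclusion-zone corners, aircraft position, and ground speed are placeholder values standing in for the decoded ADS-B data, not the actual range geometry.

        % Plot an exclusion zone and an aircraft position, then estimate a rough ETA.
        zoneLat = [-32.0 -32.0 -31.6 -31.6 -32.0];   % exclusion zone corners [deg] -- placeholders
        zoneLon = [133.2 133.8 133.8 133.2 133.2];
        acLat = -31.3;  acLon = 133.5;               % aircraft position decoded from ADS-B -- placeholder
        acSpeed_kmh = 450;                           % decoded ground speed [km/h] -- placeholder

        geoplot(zoneLat, zoneLon, 'r-', 'LineWidth', 2); hold on
        geoplot(acLat, acLon, 'b^', 'MarkerSize', 8)
        geobasemap('streets')

        % Rough ETA: great-circle distance to the nearest zone corner divided by speed.
        distToZone_km = min(deg2km(distance(acLat, acLon, zoneLat, zoneLon)));
        eta_min = distToZone_km / acSpeed_kmh * 60;
        title(sprintf('Aircraft ETA to boundary: %.1f min', eta_min))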

    In addition to this, we also created, using App Designer, a GUI to operate and monitor our weather conditions in real time. And similarly we used the analysis tool sets to ingest, analyze, and display the data measured from radiosondes that we launched on our weather balloons. And again, the key point here that I'll hammer across is that it's developing capability using proven concepts efficiently, which is exactly what we were able to do.
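
    For the radiosonde part of that workflow, a minimal sketch using only base MATLAB might look like the following; the file name and column names are assumptions about the logged format, not the actual data files.

        % Ingest a radiosonde sounding and plot simple wind and temperature profiles.
        T = readtable('sounding.csv');         % assumed columns: Altitude_m, WindSpeed_ms, Temp_C
        subplot(1,2,1)
        plot(T.WindSpeed_ms, T.Altitude_m); grid on
        xlabel('Wind speed [m/s]'); ylabel('Altitude [m]')
        subplot(1,2,2)
        plot(T.Temp_C, T.Altitude_m); grid on
        xlabel('Temperature [degC]'); ylabel('Altitude [m]')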

    So moving back to the topic on flight safety. So now as we're progressing through the company's journey, going from the early years, the launch service as a key strategic pillar is becoming more and more frequently used, obviously. It's a core strategy for us. And so there's a need for us now to then take what before was a complex analysis that was developed over many, many scripts into a form where we could efficiently and quickly assess various flights from Australia-- or anywhere really-- such that we could quickly perform a flight safety assessment, which typically takes quite a bit of time. For this sort of tool, there isn't really a publicly available tool at the moment, and so this is a fairly unique concept to us and a unique selling point for our company too.

    So taking the scripts as they were, it was time to, as we call it, "OOP-ify" the code and create a series of objects under a software product lifecycle that we could then develop and rapidly generate different objects from to fit our different flight safety analyses. The diagram there just shows an example product workflow of how the user may have some data, prepare a vehicle model, and then start to interact with what we've called MAGIC, our flight safety analysis tool-- which is a wrapper around the base software-- as well as implementing the range safety equations to actually calculate and assess the risks.
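
    To make the "OOP-ify" idea concrete, here is a minimal sketch of the sort of vehicle object such a framework might define (saved as LaunchVehicle.m); the class, property, and method names are purely illustrative and are not the actual MAGIC code.

        % LaunchVehicle.m -- illustrative vehicle object for a flight safety analysis.
        classdef LaunchVehicle
            properties
                Name        (1,1) string
                DryMass_kg  (1,1) double
                PropMass_kg (1,1) double
                DragTable   table        % e.g., Mach number vs. drag coefficient
            end
            methods
                function obj = LaunchVehicle(name, dryMass, propMass, dragTable)
                    obj.Name        = name;
                    obj.DryMass_kg  = dryMass;
                    obj.PropMass_kg = propMass;
                    obj.DragTable   = dragTable;
                end
                function m = grossMass(obj)
                    % Total lift-off mass used by downstream risk calculations.
                    m = obj.DryMass_kg + obj.PropMass_kg;
                end
            end
        end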

    So moving into the Range Application Suite again, which is an extension of our range application software. The next level up, so to speak, was to look at the increasing complexity and scope of our launch projects. In the Koonibba test range example I gave, that was a small sounding rocket launched from Koonibba to 85 kilometers, and now we're looking at scaling up to larger launches of greater complexity and scale, and as such the supporting software needed to be scaled up too. So we developed other tools for maritime surveillance and advanced weather forecasting-- which you're seeing on this slide here now-- trajectory visualization, attitude visualization, as well as software assurance to ensure that how we operate-- our software, obviously, is critical-- is assured to a particular standard.

    Now, as the topic is obviously model based design, I've picked one of these particular applications to dig into a little bit, to show you our use of Simulink as one of our examples here. So this is the attitude viewer. One of the interests for us for a launch is to be able to check the intended pitch angle-- or yaw angle, for example-- of a vehicle and compare it to a measured angle, which is communicated via a telemetry link-- for example, through a ground station-- and ingested into some data back end. From there we take that data, make use of it, and update an orientation animation, such as the one you're seeing on the screen here, to get an understanding of how it's trending-- how it's performing versus how it was expected to-- which is what you're seeing, along with an example block diagram there showing the different inputs that led towards that.
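
    As a simple stand-in for the data-comparison step described here (the animation itself is driven from a Simulink model), the sketch below compares a planned pitch profile with the pitch angle decoded from telemetry; the file name, column names, and profile values are assumptions.

        % Compare an intended pitch profile against the pitch angle from telemetry.
        tlm = readtable('telemetry_log.csv');    % assumed columns: Time_s, PitchMeas_deg
        plannedT     = [0 10 30 60];             % planned profile breakpoints [s] -- placeholder
        plannedPitch = [90 85 60 40];            % planned pitch at those times [deg] -- placeholder
        intendedPitch = interp1(plannedT, plannedPitch, tlm.Time_s);

        plot(tlm.Time_s, intendedPitch, 'k--', tlm.Time_s, tlm.PitchMeas_deg, 'b-')
        legend('Intended pitch', 'Measured pitch'); grid on
        xlabel('Time [s]'); ylabel('Pitch angle [deg]')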

    For us, the use of model based design made that actually very quick to develop, which is the key reason why we look to go to model based design, especially when we're visualizing and handling lots of data. For states in particular too, it's very, very quick and efficient to use an MBD workspace environment for that.

    So hopping onto MAGIC now. Once again, our interactions and our development as a company bring the software up to another level. Whereas before we started off with this development, understanding the complexities of range safety and flight safety equations, as well as modeling and simulation, we then moved into an object-oriented framework to try and speed up our analyses. The next step after that was to go through the software product lifecycle and apply to it test cases, or a test plan and test procedures, and link that through. And so we were able to make use of the MathWorks tool chain here for that as well. The one in particular that I'm showcasing here as an example is the Unit Test Framework, which we were able to use to generate code coverage reports, unit test reports, and test cases, and it made it very easy to save, for example, images for demonstration purposes too.
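
    A minimal sketch of that Unit Test Framework usage is shown below; the 'tests' and 'src' folder names and the report folder are placeholders, not the actual MAGIC project layout.

        % Run a test suite and produce an HTML code coverage report.
        import matlab.unittest.TestRunner
        import matlab.unittest.plugins.CodeCoveragePlugin
        import matlab.unittest.plugins.codecoverage.CoverageReport

        suite  = testsuite('tests');                       % discover tests in the ./tests folder
        runner = TestRunner.withTextOutput;
        runner.addPlugin(CodeCoveragePlugin.forFolder('src', ...
            'Producing', CoverageReport('coverageReport')));
        results = runner.run(suite);
        table(results)                                     % summary of passed/failed tests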

    Similarly, we also looked into Simulink Requirements for traceability. At the time we didn't have a need for it in MAGIC, it being built in the back end, but this is something which we could use in the future for other MAGIC projects that also require that software assurance for any of our model based design work.

    So our last slide here is on our electronic system capability development, which is a very heavy user of model based design. And I would say this is where we've benefited the most from the startup program that Arushi outlined earlier: having access to the core set of MathWorks products and being able to rapidly test different parts of different toolboxes to quickly and effectively generate particular solutions to the problems that we're trying to solve.

    So one of the examples on here is the development of hardware and software systems, which will support our mission systems and embedded electronics. And so what we've been able to do is design multi-layered systems-- at different particular levels, whether it's a low-level interface with hardware or a higher-level software design-- to capture and process data, and generate user interfaces so we can interact with these particular systems of interest. So for us, we leverage existing hardware support packages from the MathWorks team, and we're able to rapidly prototype and evaluate communication protocols, sensor data acquisition and capture, as well as actual control of hardware components.
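
    As one concrete, hedged example of that rapid prototyping, the sketch below uses the MATLAB Support Package for Arduino Hardware to capture an analog sensor voltage; the board, pin, and timing are placeholders, not Southern Launch's actual embedded hardware.

        % Capture and plot a sensor voltage from an attached Arduino board.
        a = arduino();                         % connect to the first detected board
        v = zeros(100, 1);
        for k = 1:100
            v(k) = readVoltage(a, 'A0');       % read the sensor on analog pin A0
            pause(0.05);                       % ~20 Hz sampling -- placeholder rate
        end
        plot((0:99) * 0.05, v)
        xlabel('Time [s]'); ylabel('Sensor voltage [V]')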

    That's it. So thanks, everyone, for listening.

    Hello, and welcome to this next session in today's space webinar. My name is Ruth-Anne Marchant, Senior Application Engineer at MathWorks Australia. Over the next 20 minutes or so, you'll hear about how model based design can help you develop your space systems.

    To begin, I thought it would be worth sharing my own personal experience working in the space industry. My first job out of uni was at MDA, a Canadian aerospace company, which some of you may recognize as the company that designed and built the Canadarm, the robotic arm on the International Space Station. And one of my first tasks as a new grad was to run hundreds of simulations of the arm, reaching for, capturing, and docking a space capsule. The point of these simulations was to test in advance all possible scenarios and analyze the forces arising during the maneuver to ensure nothing would break when executing the maneuver in real life.

    This experience highlighted to me some very important lessons. One, simulate everything. Simulation based testing is critical to ensuring a successful outcome when it matters most, executing in real life. And two, modeling the system and environment are key enablers to support simulation based testing. When I went on to work for another company developing control systems for a new commercial aircraft program, these two key lessons stayed by my side, as much of the work designing the control system algorithms required models and used simulation based testing. The control algorithms were then reused in the software via automatic code generation. This helped us quickly iterate and test new control designs without having to rely on hand coding.

    These three pieces-- modeling and simulation, automatic code generation, and testing-- are common themes you'll see during this session, along with the benefits of using model based design for developing your space systems. Here is the system we'll be using for most of the session-- a satellite. This is a relatively complex system. It has multiple components, such as a power management component; a guidance, navigation, and control component; and a communication systems component. And complex systems are challenging to develop. Here are three examples of complex systems across a range of application areas and what can happen when things go wrong.

    In some applications and industries, the cost of failure is high. To call out a specific example from the space industry, we'll take a look at the Ariane 5. The Ariane 5 is a European heavy lift launch vehicle that is part of the Ariane rocket family-- an expendable launch system. The Ariane 5 succeeded the Ariane 4. And you can see here that the Ariane 5 has a significantly larger payload.

    As many of you already know, the maiden flight of Ariane 5 was a failure. Pre-flight tests had never been performed on the inertial platform under simulated Ariane 5 flight conditions, so the error was not discovered before launch. The crash was attributed to an overflow in the software. The control software on Ariane 5 was recycled from the smaller Ariane 4. The Ariane 5 rocket motor actuators had a larger range of motion. Physical signal inputs, which were no longer constrained to the bounds that applied to the smaller rocket, caused an overflow on the controller.

    The Ariane 5 inherited requirements from its predecessor. The design process followed the traditional path of requirements, model, test, integrate, build. Pre-flight tests had never been performed on the inertial platform under simulated Ariane 5 flight conditions, so the error wasn't discovered before launch.

    Now, following this event, an investigation took place, and here you can see the final report, which calls out some interesting findings. The last point in the recommendations specifically calls out things like: perform complete, closed-loop system testing; complete simulations must take place before any mission; high test coverage must be obtained. With model based design, this is possible. And not only is it possible, but it is the approach that many companies in the space industry use today.

    For example, Lockheed Martin used modeling, simulation, and automatic code generation to develop the GN&C system for the IRIS satellite. Lockheed Martin faced the prospect of missing tight deadlines for their IRIS project due to their development process. In the past, Lockheed Martin engineers produced extensive algorithm design documents, some of them more than 1,000 pages long. Programmers wrote the code by hand based on their interpretation of these documents. The entire process was slow, and defects were sometimes introduced during the hand coding. Facing a 23 month deadline, a team of four engineers developed the models of the system, verified control algorithms using closed loop simulation tests, and generated production quality code automatically. As a result, this small team was able to double their development efficiency, generate efficient defect-free code, and update designs in a single day. Ultimately, they met their 23 month deadline. So in this example, Lockheed Martin used models throughout the development process, which is the core of model based design.

    Before diving into the specifics of model based design, I'd like to call out some challenges that teams face when developing complex systems. You're likely familiar with some of these challenges. First, when designing a complex system, you'll likely need to perform tasks such as design trade-off studies, what if analysis, and component sizing, and this can often be a time consuming process. Secondly, designing these complex, multi-domain, physical systems often involves testing on physical prototypes, and this can be time consuming, expensive and sometimes impractical-- especially for space applications. And the third challenge I want to highlight is that hand coding the algorithm software is time consuming and often prone to errors.

    Model based design helps you address these challenges, and here's how. In model based design, a system model is at the center of the development process, from requirements development through design, implementation, and testing. Rather than relying on physical prototypes and textual specifications, model based design uses a system model as an executable specification throughout development. It supports system- and component-level design and simulation, automatic code generation, and continuous test and verification. The system model enables you to simulate and test your system in a single, integrated simulation environment. When you have your system model in a single, integrated simulation environment, you can quickly perform performance and trade-off studies to help you do things like select components and design the system architecture.

    With this model, you can also automatically generate code which can run in, say, your embedded system, and this helps you eliminate errors introduced by hand coding your algorithms and saves you valuable time because it's automatic. It doesn't take someone weeks or months to hand code these algorithms. And with the system model living at the center of your development process, you can test early and test often throughout the full process. So this continuous verification and test helps you detect errors earlier and more often than you would otherwise through traditional development methods. It also means you can reduce testing on physical prototypes and test across a wider range of use cases, including ones that are impractical or too dangerous.

    So now that you've heard an introduction to what model based design is, and how it addresses common challenges in developing complex systems, we're going to bring back our satellite example and briefly look at how model based design can be applied to this specific example-- focusing on modeling, implementation, and test and verification.

    So we'll start today here with modeling and simulation focusing on the physical system. In model based design, you use a multi-domain environment to simulate how all parts of the system behave. You include models of your environment, so for example, wind, gravity, solar radiance. You also include physical components, like electrical systems, mechanical systems, hydraulic systems, communication systems. You also include models of your algorithms, like low level motor controls, or your guidance, navigation, and control algorithms. You then run system level simulations to better understand how the full system will behave.

    And this allows you to do things like explore system architecture options and identify the ones that are most likely to meet system requirements. Let's take a battery cooling system as an example. For this system, some common design tasks include designing and verifying the architecture of the heating and cooling systems, or sizing components. With model based design, you build a model of a battery cooling system by integrating electrical, thermal, mechanical, fluid, and control components. You can run simulations of the system and visualize the results to assess whether the architecture is likely to meet system requirements. So with model based design, your system-level model means you can quickly perform performance and trade-off studies to help you do things like select components and design the system architecture.

    Let's take a quick look now at implementing the algorithms on hardware. For this example I'll focus on the GN&C system, or guidance, navigation, and control system. Through code generation, you automatically convert designs into production quality code in one of these supported languages, which shows the same behavior as the simulation model. Doing this helps at both the prototyping and production stages of your development process. For prototyping, automatic code generation helps to answer questions like, does my algorithm perform well on an actual device with true latencies? With production code generation, you can optimize the generated code for your production hardware.

    When you generate code, you can use the bidirectional linking between the generated code and the Simulink model to trace how your model is coded, or to see where in the Simulink model a block of code is generated from. Once you have generated this code, you can integrate it with other software code that's handwritten. Through automatic code generation, you eliminate costly, error-prone manual steps as you refine your executable specification onto target hardware, and this, in turn, reduces your development time. So that briefly covers how you can reduce development time by automatically generating code.
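
    For the MATLAB side of this workflow, a minimal code generation sketch might look like the following; the function name gncUpdate and its argument sizes are hypothetical.

        % Generate C code for a MATLAB algorithm with MATLAB Coder.
        cfg = coder.config('lib');             % target a static library
        cfg.GenerateReport = true;             % produce a report with code/model traceability
        codegen gncUpdate -args {zeros(3,1), zeros(3,1)} -config cfg
        % For a Simulink model, slbuild('ModelName') builds the model with the configured coder.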

    So now we're going to look at the final piece, testing and verification. For this section, I'll focus on the communication systems. For satellite communication systems, the communication link needs line of sight with the ground station location on Earth. It's often not practical to use a physical prototype to test the line-of-sight algorithms used to determine where the satellite is with respect to these ground stations. You're unlikely to send a physical prototype up into space to test these algorithms, because it's expensive, and reconfiguration of your system-- what's up in space-- is either really hard or impossible. If you lose that communication link, it's pretty tough to communicate, to reconfigure. Therefore it's important to have a simulation-based environment where you can simulate your satellite moving around the Earth, determine where there is line of sight, and test your line-of-sight algorithms. And you can perform these simulation-based tests in Simulink with these dynamic models.
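
    A minimal sketch of such a simulation-based line-of-sight check, assuming the Satellite Communications Toolbox, is shown below; the orbital elements and ground station coordinates are illustrative values only.

        % Compute line-of-sight access between a satellite and a ground station.
        startTime = datetime(2022, 3, 16, 0, 0, 0);
        stopTime  = startTime + hours(6);
        sc  = satelliteScenario(startTime, stopTime, 60);    % 60 s sample time

        sat = satellite(sc, 7000e3, 0, 98, 0, 0, 0);          % semi-major axis [m], e, i, RAAN, argP, nu [deg]
        gs  = groundStation(sc, -34.9, 138.6);                % latitude, longitude [deg] -- placeholder site

        ac  = access(sat, gs);                                % line-of-sight analysis
        intervals = accessIntervals(ac)                       % table of contact windows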

    But more generally, in software development projects, the cost to fix an error depends on when the error is detected. For example, the graph you see here is from data gathered by the Hewlett-Packard company. According to this data, the cost of correcting a bug detected at the last stage of the project lifecycle-- deployment and maintenance-- can be 30 to 100 times higher than the cost of the same bug detected at the first stage-- requirements. If the bug is never found, it's pretty hard to predict the expenses for the company. So model based design can help you identify and fix errors earlier in the lifecycle-- think of it as left-shifting the verification process-- and helping you gain confidence in your design.

    So what does that look like? You start by testing your model and checking that it meets requirements. To do this, you review your requirements and determine the required test cases. To separate your test artifacts from your design artifacts in Simulink, you can create a test harness to isolate the component under test. Then you can create test inputs through MAT files, Signal Builder, or maybe Test Sequence blocks-- which allow you to create complex test scenarios. To assess the results, you compare against baseline outputs. You can write custom criteria with the MATLAB Unit Test Framework or use Test Assessment blocks to define complex pass or fail conditions. Then you can reuse your tests throughout the development process. And this is another time-saving tactic. That covers how you can reduce reliance on physical prototype testing and detect errors earlier through continuous test and verification.
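
    A minimal sketch of creating such a test harness programmatically with Simulink Test is shown below; the model and component names are hypothetical.

        % Create and open a test harness around a component under test.
        load_system('SatelliteACS')
        sltest.harness.create('SatelliteACS/Controller', 'Name', 'Controller_Harness')
        sltest.harness.open('SatelliteACS/Controller', 'Controller_Harness')
        % Inside the harness, Signal Builder or Test Sequence blocks drive the component,
        % and a Test Assessment block captures the pass/fail criteria.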

    So bringing it all together, there are three main takeaways for you today. First, you can optimize your system level performance through modeling and simulation. Second, you can reduce your development time using automatic code generation. And third, you can save money by testing and verifying your design through the full design lifecycle.

    So how can you get started? I have a few suggestions for you today. One is to engage with MathWorkers and others in the user community. You can do this by reaching out to MathWorks directly, or through MATLAB Answers. To ramp up your skills in using the tools which you've seen here today, we have free Onramps as well as full training courses. And for specific help with your development workflows, please reach out and use our consulting services to leverage their expertise in this domain. I'll close here by thanking you very much for sharing your time.