Digital Engineering for Systems of Systems | Next Generation Aerospace Series - MATLAB & Simulink

    From the series: Next Generation Aerospace Series

    Overview

    The added complexity in new aerospace programs challenges traditional ways of design and collaboration. One of the main drivers of this complexity increase is the amount of data generated, not only during operation, but also at design time.

    Being able to guarantee a digital continuity among the different design phases is crucial to cope with this data complexity while enabling collaboration amongst teams and companies. A good way to enable digital continuity is working and communicating through models, using different fidelity levels when appropriate.

    Having a unified digital environment becomes crucial to deliver high-quality systems quickly, and cost-effectively.

    Have you ever wondered what you could accomplish in a digital environment that offers a simulated, representative setting in which to:

    • define operational and mission-oriented requirements
    • define and allocate resources to achieve different missions
    • define and manage relationships with supplier organizations
    • anticipate integration challenges focusing on the definition of interfaces
    • ensure a continuous evaluation of the compliance with the operational requirements along the development process

    Highlights

    In this presentation, you will learn how MathWorks solutions can support your digital transformation and enable various ways of collaboration.

    • Identify mission objectives and the required assets
    • Anticipate objectives, contributions and performance needed
    • Validate assumptions and demonstrate early through regular MVPs
    • Accelerate the readiness of your requirements, models and testing environment

    About the Presenter

    Alexandra Beaudouin is the Aerospace and Defence Industry Manager for the EMEA region at MathWorks. Her technical background is in software development for certified critical systems in aerospace and defense.
    Prior to MathWorks, Alexandra was a Senior Engineering Manager in Embedded Software at Thales in France, and an Embedded Software Department Manager focused on DO-178B/C development projects and certification activities at SOLENT, a French company specializing in system and software engineering in aeronautics.

    Juan Valverde is the Aerospace and Defence Industry Manager for the EMEA region at MathWorks. His technical background is in the design of dependable embedded computing solutions for aerospace.
    Prior to MathWorks, Juan was a Principal Investigator for Embedded Computing at the Advanced Technology Centre of Collins Aerospace - Raytheon Technologies in Ireland. Juan has a PhD in Microelectronics and Computing Architectures from the Technical University of Madrid (Spain).

    Recorded: 6 Jul 2022

    The idea for today will be to provide an overview of how model-based design and model-based system engineering offer a digital continuity crucial for the design of systems of systems, from missions to individual systems and components. But first, let me introduce myself. My name is Juan Valverde. I'm the Aerospace and Defense Industry Manager for the EMEA region at MathWorks. And today, I'm here with my colleague Alexandra Beaudouin, who is my counterpart based in France. Hello, Alexandra.

    Hi, Juan. It's a pleasure to be here with you this morning, and I welcome everyone to the session. Today, we'll talk about modeling and simulation at mission and system-of-systems levels and discuss how programs can benefit from this approach.

    Great. Thank you, Alexandra. So let me start with an introduction. Whether you're working on a land system, a space system, or civil or military aircraft, manned or unmanned, next generation aerospace programs will bring you some common challenges. New programs go beyond single-system design to focus on systems of systems that require higher levels of collaboration among teams and partner companies. The creation of common working environments that facilitate artifact exchange and the compatibility of methods and tools is crucial for the success of these programs.

    In these environments, the possibility of providing early proofs of concept and enabling fast iteration saves an enormous amount of time and budget. Other technical challenges that are gaining importance are the collaboration between humans and machines, covering different levels of autonomy, or the increasing amount of data used, which is converting every design and design methodology into a data-centric challenge.

    And here, security is crucial nowadays. All of this drives an increase in software and hardware content and the need to manage complexity in more efficient ways. Today, we will focus on the upper layers of the design lifecycle, including support for mission definition, model-based system engineering, and how simulation can enrich your high-level analysis.

    When dealing with systems of systems, it is crucial to understand and differentiate the objectives of the different design levels and their relations among the different stakeholders, both per level and across levels in the development lifecycle. This can be seen in the mission engineering pyramid, which goes from mission down to component design.

    When we look at this pyramid from the top, we can see the different development lifecycles at the different levels. We will see how traditional methodologies follow a largely serial process with little room for iteration. Model-based systems engineering and model-based design enable a digital continuity based on model exchange and full traceability of artifacts, as well as automatic report generation and fast iterative loops.

    This way, it is possible to establish and manage a connection among different levels and perspectives, to identify conflicts, and to assess the behavior of the system of systems in a variety of scenarios. For that, digital continuity becomes the enabler of collaboration in several dimensions. This is what we will address today.

    Now, if we focus on the different stages of the design of a system, the main purpose of enabling connectivity among the different phases and creating a digital thread is to avoid working in silos. It is crucial that the exchange mechanisms across phases are agile and allow several iterative loops, as well as telling an accurate story of your design choices that can be traced to requirements and use cases. Working with models will enable this digital thread at different levels. But, Alexandra, could you please tell us a bit more about the benefits of using this approach?

    Sure. Let's see how this thread can enable concrete advantages over the development cycle. We will see how taking a higher-level perspective to establish a specific system definition can be helpful in very different ways. Here are some objectives of digital engineering and how programs can manage the invisible before it becomes visible.

    The idea is mainly to detect early any functional issue or conflicting aspect of the mission, using simulation for functional assessment and performance allocation; to set up the engineering framework to enable high-quality production and a clear contractual decomposition through requirement-driven development; and to enable multi-dimensional collaboration that reduces the risk of delivery delays at acceptance through incremental and regular integration capabilities.

    We would like to start from this well-known workflow, processing and refining the requirements to define the composition, architecture, scope, and dependencies between the items that constitute the considered systems. This engineering workflow is usually based on textual requirements. Teams have established working groups for clarification reviews to reduce the textual ambiguities within the requirements level by level.

    This requires a lot of time and many experts to get everyone to agree on what is required, requested, feasible, and ultimately performant enough for the mission objectives. But at that stage, no resulting assessment is visible. Everything is on paper or statically defined.

    Then the time comes to evaluate whether the implementations match the objectives and the requirements that were defined and clarified. Most of all, the systems of the system of systems are integrated together to evaluate their compliance through the mission scenarios, based on the mission requirements, from the bottom to the top, assessing that each system is compliant, communicates with the others, and matches the mission purpose, doing the right thing.

    And this is usually where discrepancies start to appear, as integration does not go that well and the mission scenarios are only partially met. This is also the time when the teams realize that the textual requirements still carried some risks. Some requirements did not appear ambiguous but implied interpretations that were never raised during the discussions, because the teams agreed that there was no ambiguity. They probably did not check that they understood the same thing.

    If we rethink that workflow by removing the team boundaries that are usually associated with each level of definition, we will reduce the risks and improve communication. Models combined with simulation are the enabler of that communication, making everything that is invisible in textual requirements and static descriptions visible to the stakeholders and the teams from the very early phases of mission design. Indeed, models and their capability to be executed in a virtual environment become a keystone and a powerful item for clarification between the teams at each and every level of the lifecycle.

    Having models at each step of the way is an enabler for collaboration and communication. What we mean by that is, for example, with the stakeholders, models and their execution capability allow the actors of the phase to demonstrate, confirm, or validate that the system objectives are achieved. The domain experts can demonstrate the functional purpose and the precision ranges. The system engineers can confirm or evaluate assumptions, run functional validation tests, and assess coverage. All of this can be shown regularly to the customers, or even to the operators, to assess whether the system will match expectations or still needs some clarification.

    At the level of the managers, when looking at the system activities to pave the way for an efficient implementation, discussions with the architects become a real must-have. The system engineers can confirm some of the design choices or allocations. The domain experts are kept in the loop to prevent any side effects and assess the performance in terms of precision or reliability using simulation or even analysis capabilities, like Monte Carlo simulation or trade-off analysis. Models also provide appropriate progress indicators, so the teams know how they are doing against the plan, along with what is of interest to the program managers.

    Then, during the implementation phases, teams can become larger and do not necessarily have domain knowledge. This can raise functional or technical drawbacks, and this is where models can be very beneficial to the teams. The system engineers provide an executable functional reference for the software engineers and architects who need to design the most efficient architecture and implementation. The software engineers can highlight or clarify the software or technical constraints they encounter, particularly when these may have a system impact. Models also provide appropriate progress indicators, so the teams know how they are progressing and what the coverage of the implementation is.

    There is one last audience to highlight: the collaborators. Models and their ability to be executed in a simulated environment also give access to a much larger virtual environment. Each model acts as a digital twin within the system of systems, and this can expand the possibilities of integration in the virtual world. This is how models are the digital keystone of a collaborative and more efficient development. This is even more beneficial when the teams follow an agile methodology and need to perform regular demonstrations, or need to deliver a valuable intermediate version for contractual reasons or for early integration.

    We've seen some of the challenges and how using models can be very beneficial to the teams' development processes and to the organization itself. That was a bit conceptual, I must admit. But now, Juan, can you give us a concrete example of what users would be able to perform?

    Yes, of course. Thanks a lot, Alexandra. So while it is not the intention of this session to go very deep into specific applications, we would like to mention some possible examples and how they fit into the overall design lifecycle at each level. Of course, there will be many iterations and intermediate steps that we'll not mention today, but I hope this is still illustrative.

    Let's imagine that we have a search and rescue mission where the main goal is identifying and rescuing certain targets in a given area. In this case, we assume all targets are on the ground. Then we'll have to define different conditions and assumptions. This is, for instance, being able to identify the target given certain features. We will have certain time constraints for the mission to be successful, or we might need both air and ground support to be able to cover a certain amount of terrain depending on time limitations.

    Then we can start capturing mission requirements: the area to be covered, time constraints, the number of assets dedicated to the mission, the need to overcome certain types of obstacles due to terrain characteristics. What features of the target need to be known so that it can be identified? Where is the decision-making done? Is it centralized? Et cetera.

    Then we can start capturing information about the environment, and we can create 3D environments and process them. In this case, we can see an artificial 3D environment where it is possible to add trees, buildings, and other parts of the terrain. You can also process these environments and generate 2D sectorized maps, modify them, add weights, et cetera, to then use them for the definition of your mission as well as for later simulation. These scenarios can be used to refine your requirements, as I will mention in the next slides.
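
    To make this step more tangible, here is a minimal MATLAB sketch, assuming UAV Toolbox and Navigation Toolbox are available, that builds a small artificial 3D scene with a couple of obstacles and derives a simple 2D occupancy map from it. All locations, sizes, and the map resolution are illustrative placeholders, not values from the presentation.

    % Minimal sketch: artificial 3D scene plus a 2D map for mission planning.
    % Assumes UAV Toolbox (uavScenario) and Navigation Toolbox (binaryOccupancyMap).
    scene = uavScenario("UpdateRate", 10, "ReferenceLocation", [45 0 0]);

    % Two "buildings" as extruded polygons: {x-y vertices, [zmin zmax]} and an RGB color.
    addMesh(scene, "polygon", {[0 0; 20 0; 20 15; 0 15], [0 12]}, [0.5 0.5 0.5]);
    addMesh(scene, "polygon", {[40 30; 55 30; 55 45; 40 45], [0 8]}, [0.5 0.5 0.5]);
    show3D(scene);

    % Simple 2D sectorized view of the same area (100 m x 100 m, 1 cell per meter).
    map = binaryOccupancyMap(100, 100, 1);
    [bx, by] = meshgrid(0:20, 0:15);            % footprint of the first building
    setOccupancy(map, [bx(:) by(:)], 1);
    [bx, by] = meshgrid(40:55, 30:45);          % footprint of the second building
    setOccupancy(map, [bx(:) by(:)], 1);
    figure; show(map);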

    Then you can also start creating use cases based on requirements and information from the previous scenarios. You can define use cases and see how these use cases contribute to the different missions. You can define goals, pre-conditions, post-conditions, and information about what needs to be done in each scenario, and start refining your use cases. Then you can get a traceability diagram for your mission to observe how the different use cases contribute to the overall missions, how one use case extends others, et cetera.

    Then imagine if we focus on one use case. You can then start identifying which assets you will need to fulfill your mission with. In this case, for instance, we have our main aircraft, land vehicle, and several drones. Then since we already had the graph linking the use case to the overall mission, now we can see how the different assets contribute to this use case.

    Then we can also start defining functions, and we can start allocating these functions to our assets. You can still see that in the relationship map that we saw before, where you can see the different functions and assets. But you also can see that in the allocation matrix to define the different configurations, like we see in the slide now.
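
    As an illustration of this allocation step, the following sketch, assuming System Composer, creates an allocation set between a hypothetical functional architecture and a hypothetical asset architecture and allocates one function to one asset. All model, component, and set names are made up for the example.

    % Minimal sketch (System Composer assumed): allocate functions to assets.
    % "SearchFunctions", "MissionAssets", and the component names are hypothetical.
    allocSet = systemcomposer.allocation.createAllocationSet( ...
        "SearchAndRescueAllocation", "SearchFunctions", "MissionAssets");
    scenario = allocSet.Scenarios(1);

    funcModel  = systemcomposer.loadModel("SearchFunctions");
    assetModel = systemcomposer.loadModel("MissionAssets");

    % Allocate the "DetectTarget" function to the "SurveillanceDrone" asset.
    detect = lookup(funcModel,  "Path", "SearchFunctions/DetectTarget");
    drone  = lookup(assetModel, "Path", "MissionAssets/SurveillanceDrone");
    allocate(scenario, detect, drone);

    save(allocSet);
    systemcomposer.allocation.editor   % review the result in the allocation matrix editor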

    By now, you know what the mission needs to do. You have divided that into different use cases. You know the assets required to fulfill them and what functions these assets will perform. However, it is important to validate these assumptions, refine these requirements, et cetera. And for that, there are different types of analysis you can do at this level. A useful thing to do is merging the information you already have about functions, assets, and scenarios to start refining mission requirements. This can be the number of assets required, the land/air distribution, et cetera.

    The idea will also be to define requirements for the different individual systems, like, for instance, battery autonomy in the case of electric drones used for surveillance, the latency needed for communication links, the communication bandwidth to share images, et cetera. This can be done by, for instance, performing some mission planning examples and some architecture trade-off analysis. In order to perform an architecture analysis, it is possible to capture your architecture and add attributes in the form of stereotypes to your blocks, interfaces, and so on. Then, using the available APIs, it is possible to add different types of analysis using MATLAB scripting.
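
    As a rough sketch of what such scripting can look like, assuming System Composer, the code below defines a profile with one stereotype property, applies it to a hypothetical component, and then sums the property over the architecture as a toy analysis. Profile, property, and model names are invented for the example.

    % Minimal sketch (System Composer assumed): stereotype attributes plus a toy analysis.
    profile = systemcomposer.createProfile("MissionProfile");
    stereo  = addStereotype(profile, "AssetSpec");
    addProperty(stereo, "CoverageRate", "Type", "double", "DefaultValue", "0");  % m^2/s

    model = systemcomposer.loadModel("MissionAssets");     % hypothetical model
    applyProfile(model, "MissionProfile");

    drone = lookup(model, "Path", "MissionAssets/SurveillanceDrone");
    applyStereotype(drone, "MissionProfile.AssetSpec");
    setProperty(drone, "MissionProfile.AssetSpec.CoverageRate", "450");

    % Toy analysis: total coverage rate over all stereotyped components.
    total = 0;
    comps = model.Architecture.Components;
    for k = 1:numel(comps)
        if any(strcmp(getStereotypes(comps(k)), "MissionProfile.AssetSpec"))
            total = total + str2double( ...
                getProperty(comps(k), "MissionProfile.AssetSpec.CoverageRate"));
        end
    end
    fprintf("Total coverage rate: %.1f m^2/s\n", total);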

    The result of this analysis could be, in this case, for example, that you need another drone to be able to cover the whole area that you are studying. Now, if we start using some mission planning examples to support the architecture and requirements refinement, at this stage you don't need high-fidelity models of your systems, but rather enough to understand the contracts among them, using state-of-the-art data for feasibility studies, et cetera. Of course, if you or your partners already have behavioral models, it is very easy to use simulation results to enhance the analysis at this level. This way, it is possible to identify early potential conflicts in requirements among systems and evaluate the feasibility of your missions.

    For instance, imagine you need to bound the number of aerial assets you would use for surveillance, like we mentioned before. Then you will be able to use scenario information created and use some path planning algorithms. In this case, the example shows a genetic algorithm to identify typical routes and distances, time response, et cetera. Of course, you can use higher-fidelity models later to refine and validate this analysis.
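
    As one possible illustration, and not the exact study from the presentation, the sketch below uses the ga genetic algorithm from Global Optimization Toolbox for a simplified sizing question: how many drones, and at what cruise speed, are needed to cover an area within a time budget. The area, swath, and time values are placeholders.

    % Minimal sketch (Global Optimization Toolbox assumed): size the aerial fleet with ga.
    areaToCover    = 12e6;    % m^2, assumed search area
    sensorSwath    = 80;      % m, assumed sensor footprint width
    maxMissionTime = 1800;    % s, assumed time budget

    % x(1) = number of drones (integer), x(2) = cruise speed in m/s.
    cost     = @(x) 100*x(1) + x(2)^2;                                   % crude cost proxy
    coverage = @(x) areaToCover - x(1)*x(2)*sensorSwath*maxMissionTime;  % must be <= 0
    nonlcon  = @(x) deal(coverage(x), []);                               % inequality only

    lb = [1 5]; ub = [8 25];
    opts  = optimoptions("ga", "Display", "off");
    xBest = ga(cost, 2, [], [], [], [], lb, ub, nonlcon, 1, opts);       % 1st var is integer
    fprintf("Drones needed: %d, cruise speed: %.1f m/s\n", xBest(1), xBest(2));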

    This is actually one of the great advantages of using models in a highly-connected and traceable environment: it is possible to run hundreds of scenarios. So by now, you know what the mission needs to do. You have divided that into these different use cases. You know the assets required to fulfill them and the functions these assets need to perform. You have also validated these requirements by running use cases, and you have identified requirements for your individual assets.

    Then you can continue capturing requirements for your individual systems. These requirements can be linked to your system architecture models as well as to your behavioral models. It is important to highlight that, apart from authoring requirements in text, it is also possible to model your requirements. This way, it is possible to formalize them so that they are mathematically rigorous and can be used as assertions in simulation, as shown in the table, or even for formal property proving using behavioral models.

    One way of formalizing requirements is using the tabular format that we are showing in this slide. In this way, you can specify pre-conditions, post-conditions, comments, and assumptions. And this way, it is possible to look for inconsistencies, like, for instance, data type mismatches, conflicting conditions, et cetera. You can, of course, add new requirements, work on your assumptions, and, for instance, link them to the textual version of the requirement that you were working on. So these are all new possibilities for working with your requirements.
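
    Alongside the formal table, requirements can also be authored and linked programmatically. Here is a minimal sketch, assuming Requirements Toolbox, with hypothetical requirement IDs, text, and model path.

    % Minimal sketch (Requirements Toolbox assumed): author requirements and link one.
    reqSet = slreq.new("DroneSystemReqs");

    r1 = add(reqSet, "Id", "DR-001", "Summary", "Battery endurance", ...
        "Description", "The surveillance drone shall provide at least 40 minutes of flight time.");
    r2 = add(reqSet, "Id", "DR-002", "Summary", "Video link latency", ...
        "Description", "End-to-end video latency shall not exceed 200 ms.");

    % Link the endurance requirement to a (hypothetical) block in the behavioral model.
    slreq.createLink(r1, "droneModel/BatteryManagement");
    save(reqSet);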

    With these requirements, you can start capturing the architectural model of your individual systems. Once you have your system architecture done, functions mapped, requirements clear, you can continue running different types of analysis. This is similar to what we already did at a system of system level but focusing on one individual system.

    In this case, just to give an example, we are focusing on one of the drones we saw before to look at the types of propulsion it could have. Here we are going to see a performance analysis of its battery. You can add different variants to your models and compare results based on physical characteristics. You can trade off different architectures and configurations. And as you can see, in this case, we have the battery discharge curves under different configurations.
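
    Here is a toolbox-free sketch of the kind of comparison shown on the slide: idealized discharge curves for two hypothetical battery configurations. Capacities and current draws are placeholders, and a real analysis would use physical battery models instead of this linear approximation.

    % Minimal sketch: compare idealized discharge of two hypothetical configurations.
    capacity_Ah = [8 12];      % candidate pack capacities
    current_A   = [14 18];     % corresponding average current draw
    t = linspace(0, 1.0, 200); % hours

    figure; hold on
    for k = 1:2
        soc = max(0, 1 - current_A(k)*t/capacity_Ah(k));   % state of charge, 0..1
        plot(t*60, 100*soc, "DisplayName", sprintf("Variant %d", k));
    end
    xlabel("Time (min)"); ylabel("State of charge (%)");
    legend("show"); grid on
    title("Illustrative discharge comparison of two variants");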

    Then we can create reports with the different results captured from the different variants to have a history of design choices. This will prove very helpful for other programs, decision justification, et cetera. We can create a report that includes snapshots of all components in the architecture, the definition of interfaces and allocations, et cetera.
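
    Report generation can also be scripted. The sketch below, assuming MATLAB Report Generator, assembles a short PDF with a placeholder results table; the numbers and section titles are purely illustrative.

    % Minimal sketch (MATLAB Report Generator assumed): capture trade-off results in a PDF.
    import mlreportgen.report.*
    import mlreportgen.dom.*

    rpt = Report("DroneTradeOffHistory", "pdf");
    append(rpt, TitlePage("Title", "Propulsion Trade-Off Results", "Author", "Systems Team"));
    append(rpt, TableOfContents);

    ch = Chapter("Title", "Battery Variants");
    results = table(["Variant 1"; "Variant 2"], [34; 41], ...
        'VariableNames', {'Configuration', 'Endurance_min'});   % placeholder data
    append(ch, BaseTable(results));
    append(rpt, ch);

    close(rpt); rptview(rpt);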

    While having high-fidelity models of your assets is not necessary for the initial trade-off analysis, bringing models into the picture can, of course, be very beneficial. The availability of models of the different parts of the system depends a lot on the phase of the project we are in. However, it is important to mention that the use of legacy models, even if they are not final, or of simple models like state machines that define behavior and contracts among different parts, can make a difference for the analysis you are performing.

    It is also important to highlight that it is possible to share models in a protected way so that you can protect your IP in case models are exchanged among different companies or teams. This protection can be applied to different features.

    I can share a model, for example, that can be visualized but not simulated. Or I can simulate it, but you cannot generate code out of it, et cetera. So I hope these examples were illustrative. A question would be now, how can we continuously evaluate the system under design? And for that, I think Alexandra can give us some hints about that. Thank you.
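
    For instance, a protected model can be created programmatically. Here is a minimal sketch, assuming the protected-model capability is licensed; the model name is hypothetical and the exact options available depend on your release.

    % Minimal sketch: share "uavController" (hypothetical) with restricted rights.
    % Option names may vary with the MathWorks release and installed products.
    Simulink.ModelReference.protect("uavController", ...
        "Mode", "Accelerator", ...   % allow simulation of the protected model
        "Webview", true, ...         % allow read-only viewing of its contents
        "Report", true);             % document what is exposed to the recipient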

    Thanks, Juan. It really feels like modeling and simulation make things easier as complexity grows. But still, that represents a lot of activities and so, probably, a lot of human effort. We should eventually be able to automate some part of those activities. Juan went through the different phases and levels of definition required for a system definition involved in a mission definition. We shared with you how simulation and different analyses can be executed and add value to the system development. Let's see now how we can think about validation and integration.

    We'll see how automation can support those activities using the concept of DevOps for the system of systems, and our intention is to enlarge the scope of the continuous integration process by extending the continuity between the development and the operations paths. Let's dig in and take the perspective of the system team in charge of the UAV design.

    Here we are at the UAV design level, needing to validate the functional requirements and the interfaces of the system. On the left, you can see the phases of development using the model-based design process, including the different tools with which MATLAB and Simulink can integrate to manage version management or the continuous integration process.
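
    A common way to wire model validation into such a continuous integration process is to run the test suite headlessly and publish machine-readable results. Here is a minimal sketch using the MATLAB unit test framework; the tests folder and its content are assumed to exist.

    % Minimal sketch: run validation tests in a CI job and export JUnit-style results.
    import matlab.unittest.TestRunner
    import matlab.unittest.plugins.XMLPlugin

    suite  = testsuite("tests", "IncludeSubfolders", true);   % hypothetical test folder
    runner = TestRunner.withTextOutput;
    runner.addPlugin(XMLPlugin.producingJUnitFormat("test-results.xml"));

    results = runner.run(suite);
    assertSuccess(results);   % make the CI job fail if any test failed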

    The validation of the UAV is automated and continuously evaluated based on the model, but also based on what is integrated and executed directly on the processor. What if we could keep our system under test at this level but validate it from a larger perspective and check it with other systems from a more operational, scenario-based perspective?

    We will take a new hypothesis in this case: that the test benches are remotely accessible. You can then evaluate your UAV design at a higher level and integrate the control part of the UAV with other parts of the design. That enables software-software integration, but also integration of different parts of software and hardware with other computers or even already available equipment.

    At the system level, going up in the perspectives, the environment would probably be partially virtualized with environment models and partially made of real equipment, like communication devices or any dedicated computer, representing the redundancy of the systems in real conditions. Note that the virtualized part of the system environment can run locally on the bench, on a computer or in a container, or even remotely.

    As long as the representation of your environment is sufficient for your system validation, that is totally accessible and feasible. Then you can run the tests that you have at the system level and still evaluate your UAV design from a larger perspective. Similarly, you can expand this perspective up to the system-of-systems perspective, or even to the mission scenarios if that makes sense for your systems.
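
    One way to exercise this larger perspective is to sweep operational scenarios over the system-level model programmatically. Here is a minimal sketch with a hypothetical model name and scenario variable; with Parallel Computing Toolbox the runs can be distributed locally or on a cluster, and without it parsim falls back to running them serially.

    % Minimal sketch: sweep operational scenarios over a system-level model.
    scenarios = ["nominal" "degradedComms" "nightSearch"];   % hypothetical scenario names
    in(1:numel(scenarios)) = Simulink.SimulationInput("soSystemModel");

    for k = 1:numel(scenarios)
        in(k) = setVariable(in(k), "scenarioName", scenarios(k));
        in(k) = setVariable(in(k), "useVirtualEnvironment", true);
    end

    out = parsim(in, "ShowProgress", "on");   % runs locally or on a cluster if available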

    Systems are defined following an incremental development process, so having a way to perform incremental evaluation, incremental integration, and even demonstrations can become crucial and really valuable. This enables you to demonstrate to any other collaborator where you are in the development.

    But more importantly, you can check with them whether the system is still on track functionally and regarding the interfaces, so that, in the end, the mission can be achieved. We would just like to share these kinds of ideas and this larger way of thinking about the DevOps cycle. We like to think of the operations side, the right-hand side of that cycle, as being not only the end user or operator side but also all the other systems that will interact with the system under development.

    In conclusion, we'd like to share with you, again, the different ideas we covered: extending model usage to operational engineering, using models also for the architecture of your system of systems, and how models really form a digital thread that enables tremendous collaborative work. And globally, model-based design development also reduces risks.

    Thanks a lot, Alexandra. Yeah. I hope the presentation was useful and that you found the topic interesting. This is all for today. Now, we're going to go into the Q&A session. But before that, please let me remind you that this is not a standalone presentation. It is part of the Next Generation Aerospace Series that we are doing. This episode was Digital Engineering for Systems of Systems, and the next episode will be on the data journey. So please follow us in the next episode. And thanks a lot, and let's go to the Q&A.
