
    How Data Processing Can Enhance Your System Over Its Lifecycle | Next Generation Aerospace Series

    From the series: Next Generation Aerospace Series

    Overview

    The next generation of aerospace systems of systems is characterized by an increasing amount of data generation and sharing. The timely availability of data impacts how fast the OODA loop can be executed, defines where the decision-making process takes place (centralized vs. distributed), and conditions the different levels of autonomy of the system.

    Engineers need to design, simulate, test, and deploy algorithms that perceive the environment, keep track of moving objects, and plan a course of movement for the system.

    In this session, you will learn how MATLAB and Simulink can be used to develop perception, sensor fusion, localization, multi-object tracking, and motion planning algorithms, among others.  

    About the Presenter

    Juan Valverde
    Aerospace and Defence Industry Manager for EMEA, MathWorks, with expertise in the design of dependable embedded computing solutions for aerospace. Juan holds a PhD in Microelectronics from the Technical University of Madrid, Spain.

    Alexandra Beaudouin
    Aerospace and Defence Industry Manager for EMEA, MathWorks, with expertise in systems engineering and embedded software design.

    Jose Barriga
    Application Engineering, MathWorks, with expertise in Artificial Intelligence, Signal Processing, and Image Processing. Before MathWorks, Jose was a Lead Technical Specialist at Indra.

    Recorded: 21 Jul 2022

    This is the new episode of our Next Generation Aerospace series. My name is Juan Valverde, and I'm the aerospace and defense industry manager for the EMEA region at MathWorks. Today, I'm here with two of my colleagues from application engineering. The idea for today's session is to go through the data journey. What you're going to see today can be viewed from two main perspectives: on one side, how data is used and exchanged during the design time of your missions and systems; and on the other, how data flows from execution to enrich mission assessment, as well as system design. So let's start.

    New programs go beyond single-system design to focus on systems of systems, and these systems of systems require higher levels of collaboration among teams and partner companies. The creation of common work environments that facilitate the exchange of artifacts and the compatibility of methods and tools is crucial for the success of such programs. In this environment, the possibility of providing early proofs of concept and enabling fast iterations saves an enormous amount of time and budget.

    Other technical challenges that are gaining importance are the collaboration between humans and machines, covering different levels of autonomy, and, of course, the topic for today: the increase in the amount of data used during design time and operation, which turns every design and design methodology into a data-centric challenge. Security here is, of course, very important.

    All of this causes an increase in software and digital hardware content to manage complexity in more efficient ways. Today, we will focus on the journey this data goes through across the project lifecycle and operation.

    When dealing with these systems of systems, there will be different stakeholders at different levels, active at different moments of the lifecycle. On one side, enabling efficient ways for data to travel across organizations, design phases, et cetera, is crucial. It is also very important to ensure that data flows in both directions, so that we can enable fast iterations. This is the digital continuity that will enable collaboration at the different levels.

    Now, data is not only important during design, but also in operation, of course. We will see today how enabling faster iterations of the OODA loop is very important, as well as collecting data to assess missions and improve mission design and system design and configuration. Effectively, this is what we mean by the data journey: going through mission preparation, including system design, then executing these missions with more or less finished assets, sometimes during testing, sometimes in real operation, and, very importantly, how we assess and compare these phases. During the presentation today, Marco and Jose will provide more information about these phases, and we will start with the preparation phase. So, Marco, the floor is yours. Thank you.

    Thank you, Juan. Before we dive into the preparation phase, let me introduce the concept of operations, or CONOPS for short. The CONOPS is the vehicle by which the characteristics of a system are described. It usually comes in the form of a document and contains various information, such as the statement of the goals and objectives of the system, the performance requirements that the system has to meet, and much more.

    For the purposes of this webinar, we will focus on two specific characteristics: the mission that the system has to accomplish and the performance requirements that the system has to meet. When we talk about the preparation phase, we are essentially dealing with three main pillars. The first one is related to exploring the design space. Here, we ask ourselves the question, what if? How would our system behave under different scenarios, and how would we manage the system once it is in service, for example, how we would manage a fleet of commercial aircraft?

    The second pillar is related to optimization. Here, we try to answer the question, is it better? We are essentially trying to optimize the system for something. Optimization covers, of course, a wide spectrum. For example, we may try to optimize the system so that we can cover the maximum amount of area with the minimum number of assets, if we are talking about a surveillance system. Or we can try to maximize the in-service time, if we are talking about a fleet of commercial aircraft.

    The last pillar is related to the planning phase, or the planning topic. Here, we try to answer the question, how can our system successfully execute a mission? For example, how can our system identify and rescue a target in a given area within a certain amount of time? All those aspects are usually conflicting. Trying to optimize for in-service time or mission readiness might contradict the performance requirements, in terms of how long or how fast the mission should be executed.

    Therefore, the goal here is not to find the best solution for each of those pillars, but rather to strike a balance among them. To do so, modeling and simulation is key. Here is an example from one of our customer stories. In this case, Lockheed Martin was trying to minimize the in-service life cost while, at the same time, maximizing the mission readiness of the aircraft fleet. To do so, they created a very complex simulation environment with Simulink and SimEvents models, and they ran thousands of simulations to predict fleet performance. You can read the quote on the slide yourself, but essentially, they managed to maximize fleet performance while minimizing the development effort, all of this thanks to modeling and simulation.

    So now we have seen what the pillars are, and we have seen an example of a user story. Let's build an example and follow it through to see how this could work in practice. In the spirit of continuity with the previous webinar on systems of systems, let's again assume that we have a search and rescue mission where the main goal is to identify and rescue certain targets in a given area. In this case, we assume that the targets are on the ground, and we have, from the CONOPS, specified conditions and assumptions, such as how big the area we need to cover is and what the timing constraints are.

    In the previous webinar, we also learned how, from the mission, we can derive use cases, identify the assets to fulfill the use cases and therefore the mission, and define our architecture. In this example, the architecture is composed of a main aircraft, a semi-autonomous land vehicle, and several fully autonomous drones. For this case, we will concentrate on only one constraint, time: the average mission length, which, let's say, is 26 minutes.

    Now that we have identified the mission and the performance constraints, how do we start? Well, we could start, for example, by creating a high-fidelity representation of the mission profile and constraints using scenario design. We can model 3D environments and process them. Here is an example where we can add trees, buildings, et cetera. Those environments can also be processed to generate a 2D sectorized map, which can then be modified as needed.

    That information can then be used to simulate families of mission profiles and constraints. You can then execute genetic and other optimization algorithms to understand how these families of profiles would interact and to find the optimal apportionment, in terms of timing, for the assets composing the architecture over the lifecycle. The idea here is to find out what the mission timing would look like for each and every asset.
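
    To make that apportionment idea concrete, here is a minimal MATLAB sketch of a genetic-algorithm search over per-asset time budgets. It assumes the Global Optimization Toolbox, hypothetical bounds, and a hypothetical evaluateMissionTime function that wraps the mission simulation; none of these are defined in the webinar itself.

        % Minimal sketch: apportion the mission time budget across three assets
        % (aircraft, land vehicle, drone fleet) with a genetic algorithm.
        % Requires Global Optimization Toolbox; evaluateMissionTime is a
        % hypothetical wrapper around the mission simulation.
        nAssets = 3;                        % aircraft, land vehicle, drone fleet
        lb = [5 5 2];                       % minimum time per asset, minutes (assumed)
        ub = [20 20 15];                    % maximum time per asset, minutes (assumed)
        A  = ones(1, nAssets);  b = 26;     % total budget must stay within 26 minutes

        costFcn = @(x) evaluateMissionTime(x);   % hypothetical simulation-based cost

        opts = optimoptions('ga', 'PopulationSize', 50, 'MaxGenerations', 100);
        budget = ga(costFcn, nAssets, A, b, [], [], lb, ub, [], opts);
        disp(budget)                        % optimal per-asset time apportionment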

    This is, of course, an iterative process, which needs to be repeated until we reach the balance we were discussing before. We have now executed some simulations and generated families of mission profiles and constraints, and we are confident that we have found that balance. Here, we have apportioned the time budget for the assets composing the architecture. As you can see in the picture, the x-axis shows the hours in service, while the y-axis shows the average mission time per asset.

    Starting from there, we can also plot the average mission length over the service hours. Again, on the x-axis you have the hours in service, and on the y-axis you have the mission time. The blue line represents the average mission time from the optimization we just ran, and the pink one represents the performance requirement. There is some high scatter, which is most likely due to the algorithm optimization and tuning that we need to account for during the first phase of in-service operation.

    Apart from that scatter, we can see that our architecture has a good margin compared with the performance requirement. Right. Let's now switch gears and enter the real-time phase of the data journey. My colleague, Jose, will show you how we handle and process data in a real-time environment following the OODA loop. Over to you, Jose.

    Thank you, Marco. Now, let's see what happens inside an autonomous system: the data journey in operation. Here is an example of a mission executed by one of the drones that Marco mentioned. You can see multiple waypoints and the routes between them. This is an outcome of the planning phase. In this case, there is human-machine collaboration. The drone is sending information about the mission execution, and the control panel provides information to supervise how the mission is being executed in near real time.

    Note that the drone is not able to follow the right angles in the planned trajectory; the system dynamics make it impossible to trace right angles. The drone takes ownership of generating a realistic trajectory that is feasible to follow. A human operator can introduce last-minute changes in the global path, which the drone will receive in near real time. The drone can then follow the new instructions without needing to be reprogrammed.

    In this example, you can see cooperation between the human and the machine. Emulating human senses, the drone can listen to what the operator says by receiving information through communication channels. But it can also see the environment by analyzing information coming from onboard sensors. Using all this information, the drone will execute the mission.

    Today, I'm going to talk only about how the drone can see the environment by processing different types of data sequentially in a loop, the OODA loop, which was introduced in the first episode of this webinar series. The level of autonomy will depend on the requirements for a given vehicle, but even for low-autonomy systems, you may need a partial implementation of the loop.

    The loop has four phases that are executed sequentially and repeated continuously. First, the system gets the information provided by multiple sensors. This phase is where the system observes the environment. Second, the information coming from the sensors is processed, enabling the system to perceive. This phase is where the system orients itself with respect to the environment. Third, with the previous knowledge, the system plans the next action to be taken, in real time. This phase is where the system decides. And finally, the system needs to execute the plan: it controls the actuators in such a way that it can follow the calculated path. This phase is where the system acts.
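
    To make that structure concrete, here is a minimal MATLAB sketch of how the four phases could be chained in a loop. All four functions are hypothetical placeholders for the algorithms discussed in the rest of this session.

        % Minimal sketch of the OODA loop structure; the four functions below
        % are hypothetical placeholders, not toolbox functions.
        running = true;
        state = struct('pose', [], 'map', [], 'tracks', [], 'path', []);
        while running
            sensorData = observeSensors();                  % Observe: camera, radar, LiDAR, IMU
            state      = orientFromData(state, sensorData); % Orient: SLAM, fusion, tracking
            nextPlan   = decideNextAction(state);           % Decide: local replanning
            running    = actOnPlan(nextPlan);               % Act: drive the low-level controller
        end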

    So let's talk in more detail about the data journey across the different phases. This data journey goes from left to right, from the first to the last block. Let's start with the first two blocks, Observe and Orient; they are somewhat tightly coupled. Consider what I'm going to say just as hints. You may find multiple combinations and processing algorithms that are not referred to here.

    Here, we see three common sensors we may find on any autonomous system. Let's say that these sensors are typically used to understand the environment. I'm not referring here to the sensors that the system can use to understand its own orientation; I will talk about those later. The main mission of a sensor is to convert real-world data into digital information: in the case of a camera, into a matrix of pixels; in the case of a radar, into an electromagnetic signal; in the case of LiDAR, well, LiDAR needs some preprocessing before generating digital data. Once the information is in digital format, we can apply processing algorithms and get valuable information. Those algorithms can run either in the sensor itself, if it has computing capabilities, or centralized in the main processing unit of the system.

    Factors such as bandwidth or compute capabilities could condition that decision, but that is another discussion. Image enhancement, filtering, or AI-based preprocessing algorithms, such as semantic segmentation, can be applied to images coming from cameras. Reflections and ranges are then extracted from raw electromagnetic signals. And LiDAR generates a point cloud with the reflections of light pulses. But can a system do anything with only this information? The answer is yes.
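
    As an illustration of the camera preprocessing step, here is a minimal, hedged MATLAB sketch. It assumes the Computer Vision Toolbox, a hypothetical camera frame on disk, and a pretrained semantic segmentation network net obtained elsewhere; it is not the pipeline used in the demo.

        % Minimal sketch: AI-based preprocessing of a camera frame.
        % Assumes Computer Vision Toolbox and a pretrained semantic
        % segmentation network 'net' (loaded elsewhere, e.g. from a MAT-file).
        I = imread('frame001.png');          % hypothetical camera frame
        I = imresize(I, [360 640]);          % match the network's input size (assumed)
        C = semanticseg(I, net);             % per-pixel class labels (sky, building, ...)
        B = labeloverlay(I, C);              % overlay the segmentation on the frame
        imshow(B)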

    Have a look at the video on the right. It represents the point cloud generated by a LiDAR on each loop iteration. Now have a look at the video on the left. It represents an incremental point cloud generated by combining the point clouds from each iteration. This combination is made with a technique named point cloud registration. And how is it done? Well, the good news is that the algorithms to perform the registration are already implemented; you can just use them.
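
    For reference, a minimal sketch of that registration step, assuming the Computer Vision Toolbox and two consecutive LiDAR scans already available as pointCloud objects named prevCloud and currCloud (both hypothetical here):

        % Minimal sketch: incremental map building by point cloud registration.
        moving = pcdownsample(currCloud, 'gridAverage', 0.5);   % speed up ICP
        fixed  = pcdownsample(prevCloud, 'gridAverage', 0.5);

        tform    = pcregistericp(moving, fixed);      % rigid transform between scans
        aligned  = pctransform(currCloud, tform);     % bring the new scan into the map frame
        mapCloud = pcmerge(prevCloud, aligned, 0.1);  % accumulate the incremental map

    In a real loop, the transforms would be composed across iterations so that every new scan lands in a common map frame.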

    The video on the right represents, in a way, the first block in the OODA loop: the sensors are just gathering information from the environment. The video on the left is related to the second block of the OODA loop: by processing the data coming from the sensor, the system is understanding the environment. This is known as SLAM, Simultaneous Localization and Mapping. But here, everything is static. The system can now understand the environment surrounding it and where it is with respect to that environment.

    But what if something moving appears in the scene? Well, we need to perform further processing on the digital data coming from the sensors. But before that, let's talk about ego positioning. It's really important to know accurately what my orientation is with respect to the environment. Don't forget that we are mainly talking about flying vehicles, so we move in 3D coordinates. An IMU contains multiple sensors generating different kinds of data. All this data is fused to estimate the orientation of the aerial vehicle. Adding a GPS, we can reference the aerial vehicle with respect to a global geographic coordinate system.
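
    As a hedged illustration of that fusion step, here is a minimal MATLAB sketch assuming the Sensor Fusion and Tracking Toolbox and IMU readings already available as N-by-3 arrays (accel in m/s^2, gyro in rad/s, both hypothetical):

        % Minimal sketch: estimating orientation from IMU data sampled at 100 Hz.
        fuse = imufilter('SampleRate', 100);
        [orientation, angularVelocity] = fuse(accel, gyro);  % quaternion per sample

        eulerAngles = eulerd(orientation, 'ZYX', 'frame');   % yaw/pitch/roll in degrees
        % A GPS receiver can then georeference this local pose, as shown next.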

    But why is this so important? The knowledge of this information is a key factor for accurately detecting objects in the environment. An accurate georeferenced estimation of the detected objects enables the possibility of sharing object positions across multiple platforms in a collaborative way. Sharing a position by saying, hey, I can see a moving object 20 meters in front of me, is not the same as saying, hey, I can see a moving object at these specific geographic coordinates and altitude. Did you get the idea? But how can a single autonomous system detect and track objects? Now we need to move forward in the OODA loop, or maybe not move forward, but at least leave the first block behind.
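
    A minimal sketch of that georeferencing idea, assuming the Mapping Toolbox, a hypothetical GPS fix for the drone, and a detection expressed as a local east-north-up offset from it:

        % Minimal sketch: turning a local detection into shareable geocoordinates.
        lat0 = 40.4168; lon0 = -3.7038; h0 = 120;      % hypothetical drone GPS position
        xEast = 0; yNorth = 20; zUp = -5;              % detection offset in meters (assumed)

        [latObj, lonObj, hObj] = enu2geodetic(xEast, yNorth, zUp, ...
                                              lat0, lon0, h0, wgs84Ellipsoid);
        fprintf('Object at %.6f deg lat, %.6f deg lon, %.1f m\n', latObj, lonObj, hObj);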

    Let's talk about what happens next with the data, still in the second block of the OODA loop, the Orient phase. We need more processing algorithms on the data coming from the sensors to detect objects in the scene, or to be more accurate, moving objects in the scene. You can apply algorithms to preprocess the data of each sensor independently to identify objects per sensor, or you can fuse the information coming from the different sensors in some way.

    Either way, the system can now see moving objects in the scene. AI plays a key role here. In my opinion, AI unlocked this data processing and made the idea of autonomy possible. Detections can be noisy, and the system may lose detected objects in some loop iterations. But moving objects tend to keep their momentum, so applying motion models to the detected objects gives the autonomous system the ability to track moving objects across loop iterations. This not only improves object detection, but also helps the system estimate trajectories and anticipate where the moving objects are going to be over time.

    But can we assume that tracking a single object is the same as tracking multiple objects? The answer is no. Objects can appear and disappear from the scene, and you must handle these situations. But again, that's for another discussion. Let's assume that we have tools and algorithms to make that possible. Or even better, don't assume it: it is a fact.
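
    As one concrete, hedged example of such tools, here is a minimal multi-object tracking sketch assuming the Sensor Fusion and Tracking Toolbox, with detections already expressed as hypothetical 3D positions:

        % Minimal sketch: multi-object tracking with a GNN tracker and a
        % constant-velocity motion model (initcvekf).
        tracker = trackerGNN('FilterInitializationFcn', @initcvekf);

        t = 0.1;                                           % current loop time, seconds
        detections = {objectDetection(t, [10; 5; 0]), ...  % object A position (m), assumed
                      objectDetection(t, [-3; 8; 1])};     % object B position (m), assumed

        confirmedTracks = tracker(detections, t);   % call once per OODA iteration;
        disp([confirmedTracks.TrackID])             % tracks confirm after a few updates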

    In this image, you can see the drone at the top. It is provided with two sensors, a LiDAR and a radar. The map, the green buildings you can see, is generated with SLAM. The drone identifies three moving objects, and a multi-object tracker is following the motion of each detected object. Here, LiDAR and radar tracks are fused to generate a single track per object.

    But, of course, there are other ways to combine this information: sensor fusion versus track fusion. Remember that these are just hints. In a human-machine collaboration case, I would trust the machine to accomplish this part of the loop; I don't think I could track more than a single object by myself.

    But we can still do more things with the data across the OODA loop. It's time to make decisions, so let's move to the Plan block. Remember that there is a master plan defined in the previous phase; I am referring to the planning phase that Marco introduced before. It includes a global path plan, the ideal trajectory that the drone should follow to accomplish the mission. But should I assume that the drone is not going to encounter unknown obstacles in the middle of the route? Are my maps up to date? Moving objects may be there when I try to follow the global path.

    Under ideal conditions, the drone could execute the mission almost blind, with a single GPS, for instance. That would mean no sensors are needed to perceive the environment, and none of the previous data processing would make sense: no data journey at all. But that is not really an autonomous system; it is a programmed system. The ability to adapt to the existing conditions surrounding the drone is what makes the difference.

    Now, everything we've mentioned so far becomes really relevant for a successful mission execution. There may be multiple reasons for replanning: a pillar in the middle of the global path or a map that is not accurate, for instance. But what if a moving object suddenly appears in front of you? Most likely, the machine is faster at detecting the situation and acting to avoid a collision. Here, it is very important how fast the data moves across the OODA loop. But let's take stock of where we stand. We've converted real-world data into digital data. We can understand the world surrounding us and where we are with respect to it. We can detect and track moving objects. And we can replan the path if needed. So now, we must follow that path.
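
    To give a flavor of that replanning step, here is a minimal sketch assuming the Navigation Toolbox, a small hypothetical 2D occupancy map, and hypothetical start and goal poses; a real drone would, of course, plan in 3D with up-to-date sensor data.

        % Minimal sketch: local replanning around a newly detected obstacle.
        map = occupancyMap(50, 50, 1);               % 50 m x 50 m, 1 cell per meter (assumed)
        setOccupancy(map, [25 20; 25 21; 25 22], 1)  % obstacle detected on the old path

        ss = stateSpaceSE2;
        ss.StateBounds = [map.XWorldLimits; map.YWorldLimits; [-pi pi]];
        sv = validatorOccupancyMap(ss);
        sv.Map = map;
        sv.ValidationDistance = 0.5;

        planner = plannerRRTStar(ss, sv);
        planner.MaxConnectionDistance = 2;

        start = [5 20 0];  goal = [45 22 0];         % hypothetical waypoints [x y theta]
        pthObj = plan(planner, start, goal);         % replanned, collision-free path
        plot(pthObj.States(:,1), pthObj.States(:,2), '-o')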

    It's time to move. Here comes into action the last part of the OODA loop, the Act block. The drone is equipped with multiple motors and multiple sensors. A control step can be expressed as: I want to move to the next waypoint in the path at a given speed, and for that, I have multiple propellers driven by multiple motors. Robust control can deal with uncertainty in a MIMO architecture.

    In this case, there is a control loop that is sensing propeller speeds, currents, and voltages, and it is driving actuators such as the motors. Just to clarify, this loop executes in parallel with the OODA loop; let's say that this control loop has a life of its own. But is there a better way than robust control? Well, I don't really mean better, I mean different, a new concept.

    AI was the key to unlocking object detection in a robust and safe way. Now, AI is growing in control design. Reinforcement learning is a machine learning training method that can either complement a classical controller or replace it altogether. Handling uncertainties and controlling nonlinear systems are two of the most challenging tasks that engineers face when designing a controller. By introducing this new methodology, the control design relies on simulations that can emulate a high number of situations. An AI model can learn how to interact with the environment to get the best results in terms of performance, accuracy, and optimization, among others. But again, these are just hints.
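
    To illustrate the underlying idea of learning from simulated interaction, here is a toy, plain-MATLAB tabular Q-learning sketch. It is a stand-in for the simulation-based training described above, not a flight controller, and the simplified environment and reward are assumptions.

        % Minimal sketch: a tabular Q-learning agent learns which of two corrective
        % actions to apply for a coarsely discretized tracking error.
        nStates = 10; nActions = 2;             % error bins, actions {decrease, increase}
        Q = zeros(nStates, nActions);
        alpha = 0.1; gamma = 0.9; epsilon = 0.1;

        for episode = 1:500
            s = randi(nStates);                           % random initial error bin
            for step = 1:50
                if rand < epsilon, a = randi(nActions);   % explore
                else, [~, a] = max(Q(s, :)); end          % exploit current policy
                % Hypothetical environment: action 1 pushes the error down a bin,
                % action 2 pushes it up; the reward favors small errors.
                sNext = min(max(s + (2*a - 3), 1), nStates);
                r = -abs(sNext - 1);                      % best reward at bin 1 (zero error)
                Q(s, a) = Q(s, a) + alpha*(r + gamma*max(Q(sNext, :)) - Q(s, a));
                s = sNext;
            end
        end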

    So, in summary, the OODA loop enables the autonomous part of the system by converting real-world data into digital data with sensors, understanding the environment by analyzing that digital data, replanning and adapting the path to the environmental conditions using advanced data analytics, and following the path by controlling the actuators. It is really important to accelerate the iterations between the blocks in the OODA loop so the system can react faster to what is happening in the real world. With that, the system can operate with autonomy and execute the mission seamlessly.

    So thank you very much for listening. Marco will cover the last part of the session. Over to you again, Marco.

    Thank you, Jose. The assessment part is where we compare the data gathered during the in-service experience, that is, during the execution phase we just talked about, with the data elaborated during the preparation phase. The aim here is to understand whether our predictions are aligned with what we are seeing in service and were therefore accurate, and, if not, to first understand what went wrong and then try to correct it.

    The first thing we can do, because we have recorded data in service, is plot it against the prediction. So we can plot the mission duration recorded during the in-service experience against the one we calculated during the preparation phase.

    Here is the result. The blue curve is the estimation from the preparation phase, and the red one comes from the in-service data. As we can see, the two curves line up pretty well, up to a certain point in time where they sensibly diverge. As highlighted in the figure, these deviations are significant, causing the real mission duration to be higher than the desired one over a large part of the envelope. This means that not only are our predictions way off compared to the in-service data, but we are also no longer compliant with the performance requirements in terms of mission duration, because, as we can see, the red curve is above the pink line, which represents the performance requirement.
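
    A minimal sketch of that comparison in MATLAB; the vectors below are hypothetical placeholders for the recorded and predicted datasets, not the webinar's data.

        % Minimal sketch: compare predicted vs. in-service mission duration
        % against the performance requirement. All data below is synthetic.
        hoursInService   = 0:10:1000;                            % x-axis: hours in service
        predictedMission = 22 + 2*rand(size(hoursInService));    % blue curve (preparation)
        inServiceMission = 23 + 4*rand(size(hoursInService));    % red curve (recorded)
        requirement      = 26;                                   % pink line: 26-minute limit

        plot(hoursInService, predictedMission, 'b', hoursInService, inServiceMission, 'r')
        yline(requirement, 'm--', 'Requirement')
        xlabel('Hours in service'), ylabel('Mission duration (min)')

        violations = hoursInService(inServiceMission > requirement);
        fprintf('%d samples exceed the requirement\n', numel(violations));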

    The question now is, what went wrong? Well, there might be many different reasons, but essentially we can trace all of them back to two main classes of events. The first one is that the mission profile or constraints were not properly assessed. This is the case where, for example, we did not account for obstacles, so-called unforeseen obstacles, or we did not account for specific atmospheric conditions present in the region, for example, heavy fog, during our mission planning.

    In this case, we need to go back to the preparation phase and rerun all our analyses to account for those new constraints. This, of course, has a big impact, for two reasons. The first is that we might need to go back to the stakeholders and negotiate the constraints in terms of performance, if we really cannot achieve them. Or we might need to heavily change the architecture.

    The other case is where some of the sensors composing the architecture have a performance degradation over time, and this degradation is different from what we originally estimated. It could be that a sensor all of a sudden became noisier than expected, or there is a sudden accuracy drop due to a failure that was not expected to happen at this point of the lifecycle. In this case, we don't need to go back to the preparation phase; it might be enough to go back to the algorithm design and change the algorithm to deal with the new situation, for example, to filter out the extra noise.

    Right. Now we know what the root cause could be, but how do we find out where it is? Well, if you remember, at the beginning, during our preparation phase, we plotted the time budget for each of the assets composing the architecture. So what we can do now is compare the data recorded in service for each and every asset with our estimation.

    What we can see is that although the in-service data for some assets is not as close to the estimation as we would have expected, as shown on the slide for the land vehicle asset, we can clearly identify that the issue is with drone 2. At some point, we see a sensible deviation between the real data and the estimation. Great. Now we know that drone 2 is what is causing the issue.

    But the question is, how do we know what the issue is? Is it that we didn't account for some obstacles or atmospheric conditions that drone 2 is encountering and that we hadn't foreseen, or is there a problem with one of the sensors of drone 2, or is it something else? To understand what the problem essentially is, we can again leverage simulation.

    Specifically, what we can do is model what the response of the drone would be when introducing so-called anomalies. For example, we can model what the response would be in the case of an unforeseen obstacle, an unforeseen weather condition, a sensor failure, or an unexpected increase of noise. We can then plot these against the original estimated behavior without the anomaly, which is the black curve in this picture, and against the in-service data, the red curve.

    If we zoom in a bit, we see clearly what the problem is. We see that when we simulate anomalies such as unplanned obstacles, unplanned weather conditions, and sensor failures, the system copes well, because the curves predicted for those anomalies stay very close to the prediction without any anomaly.

    This tells us two things. The first is that none of those is the cause of the problem. It also tells us that we did a very good job in modeling and designing the architecture, because it is able, for example, to cope very well with a sensor failure. However, if we look more closely at the simulation with the increase of noise, then we see that this is clearly the problem, because it is quite close to the behavior that we see in service.
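
    A minimal sketch of that "closest anomaly" comparison; the file names and curves are hypothetical placeholders, and the anomaly whose simulated mission-duration curve has the smallest RMS deviation from the in-service curve is flagged as the likely root cause.

        % Minimal sketch: rank candidate anomalies by how closely their simulated
        % mission-duration curves match the in-service one. File names are assumed.
        S = load('inServiceDuration.mat');                   % hypothetical recorded curve
        inService = S.duration;
        anomalies = {'obstacle', 'weather', 'sensorFailure', 'extraNoise'};
        rmse = zeros(1, numel(anomalies));

        for k = 1:numel(anomalies)
            A = load([anomalies{k} 'Duration.mat']);         % hypothetical simulated curve
            rmse(k) = sqrt(mean((A.duration - inService).^2));
        end

        [~, idx] = min(rmse);
        fprintf('Closest anomaly to in-service behavior: %s\n', anomalies{idx});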

    This means three things. The first is that the issue is with drone 2, because one of its sensors became noisier than expected. The second is that the architecture was not designed to cope with this higher level of noise. The third, which is probably the good news, is that we don't need to go back to the preparation scenario and rerun all the simulations; it might be enough to go back to the algorithm design and try to make the algorithm more robust to a higher level of noise.

    Let's try to do that. We've now gone back to the algorithm design and corrected the implementation to take care of the additional noise. Now we can compare the estimated average mission duration from the CONOPS, the blue curve, with the in-service mission duration without the correction, which is the red curve, and the in-service one with the correction, which is the black curve. For readability purposes, I've offset the red curve by adding one hour so that the red and black curves do not overlap.

    What we can see is that although there are still some areas where the in-service data does not match the estimation as well as we would like, the in-service data is now back within the performance requirements, and it is much closer to the blue curve, which is our estimation. To get an even better result, we might need to go back to the previous plot and analyze in more detail those minor deviations that we ignored at the beginning, because the big issue was related to drone 2. In that case, we could probably get a better match.

    Right, thanks for that. Now, I will hand over to Juan, who will close the session and mention some key takeaways.

    Thanks a lot, Marco. So I just wanted to close the session with a couple of takeaways. First, I would like to mention that having a proper connection among these three phases, that is, preparation, execution, and assessment, is very important. And this applies both at design time and during operation, so that you can refine your missions.

    This is enabled by having a high-quality assessment of results. It means you need a good connection between the phases and the ability to iterate faster. This way, you can not only improve the design of your systems by having more data, but also refine your missions, and vice versa. Overall, this is data-driven development. A lot of emphasis is put on data, both for the design of systems, as I mentioned, and on the role data plays afterwards, during execution, and how that data can be used. This is what we wanted to show, and where we wanted to show that we have a lot of capabilities.

    Of course, an important part of this data analysis is based on AI. Not everything is based on AI; there are definitely other, traditional methods. And AI is not only for image processing, tracking algorithms, et cetera; it can also be used for enhancing controls, for example. This is something I wanted to emphasize, because people tend to leave it to one side. I hope the examples we showed were illustrative. And that's all for today. Thanks a lot.