Human Machine Collaboration in Aircraft Systems | Next Generation Aerospace Series
From the series: Next Generation Aerospace Series
Matt Jackson, PRESAGIS
Overview
This one-hour webinar follows the format of a panel discussion. Within the discussion, we present and review the challenges of human-machine collaboration in next-generation aircraft systems, and how technology has evolved to address those challenges: ingesting a vast amount of data from multiple sources, processing the information, and enabling better and faster decision making.
Highlights
In the context of new aircraft programs, there have been developments to reduce pilot workload and improve reaction time and the decision-making process, while applying the highest standards of safety and security. Among others, we have seen the following relevant changes:
- Aircraft used to have multiple crew members performing dedicated tasks. Crew sizes have been reduced, leaving more information for the pilot to process.
- Some years ago, data was mainly local to the aircraft; there was limited exposure to systems of systems, and communications were slow.
- Even though most information was local, pilots had only a limited number of sensors to rely on.
- Onboard processing and real-time capabilities were very limited. For example, a weather radar was available, but there were no models looking into the future.
- Systems had less stringent safety cases, whereas nowadays certification and compliance with the standards are very often mandatory.
About the Presenters
Expert Panelists:
Satish Thokala is the Aerospace and Defence industry manager at MathWorks for the APAC region. His area of expertise is avionics systems for both military and civil aircraft using Model-Based Design, and he has over 20 years’ experience. He led large engineering groups responsible for developing, verifying, and certifying cockpit display systems at Collins Aerospace.
Pablo Romero is an Application Engineer at MathWorks specializing in real-time simulation and testing. Prior to MathWorks, Pablo worked at BMW and Airbus Defence and Space, focusing on model-based design techniques and on driving and flight simulators, among other topics.
Matt Jackson is currently the Technical Product Manager for HMI and Embedded Systems at PRESAGIS. Over the past 25 years, he has worked on multiple HMI systems for fast jet aircraft, from initial HMI concept to deployment, with companies such as BAE Systems, Airbus, Lockheed Martin, and Boeing.
Moderator:
Juan Valverde is the Aerospace and Defence Industry Manager for EMEA at MathWorks. His area of expertise is dependable embedded systems design. Prior to MathWorks, Juan worked at the Research Centre for Collins Aerospace as a technical lead for embedded computing technologies.
Recorded: 15 Mar 2022
Let me first introduce myself. My name is Juan Valverde. I'm the Aerospace and Defense Industry Manager for MathWorks in the EMEA region. The idea for the session today is to have an open discussion with our experts from Presagis and MathWorks about the new challenges appearing in human-machine collaboration for the next generation of aerospace systems.
For that, I'm very happy to have here with me a group of colleagues who have a lot of experience in the aerospace industry. This experience comes either from their previous positions in other companies or from their daily work supporting aerospace companies in their development processes. So let me introduce them.
First I would like to introduce my colleague, Satish Thokala. Hello Satish. Good morning.
Hello, everyone.
Satish is the Aerospace and Defense Industry Manager for MathWorks in the APAC region. His area of expertise is avionics systems for both military and civil aircraft. He has a lot of experience using model-based design techniques, and over 20 years' experience leading large engineering teams responsible for the development, verification, and certification of cockpit displays. He previously worked at Collins Aerospace. Thanks again and welcome.
Then we also have here with us Matt Jackson from Presagis. Hello Matt. Good morning.
Good morning, Juan.
Matt is currently the technical product manager for HMI and embedded systems at Presagis. Matt has worked over the past 25 years on multiple HMI systems for fast jet aircraft, from initial HMI concepts all the way to deployment. Matt has worked with companies such as BAE Systems, Airbus, Lockheed Martin, and Boeing. So again, thanks Matt and welcome to the session.
And finally, we have here with us Pablo Romero. Hello Pablo. Good morning.
Hi. Good morning, Juan. Good morning, everyone.
Pablo is an application engineer at MathWorks. He's specialized in real time simulation and testing. Prior to MathWorks, Pablo worked at BMW and Airbus Defense and Space. And he was focusing on model based design techniques for driving and flight simulators among other topics. So hello again and welcome this morning.
Cool. So before we start with the core of the session, I would like to introduce the big topic and set the scene for my colleagues today. Why are we talking about this today? Well, the reason is that interesting things are happening in a lot of new aerospace programs. We see all these programs around sixth-generation fighters happening, like FCAS, the Future Combat Air System. We also see an increasing trend in commercial aviation to reduce pilot workload and move towards more autonomous operations. But what is changing?
So as I already mentioned, one of the first aspects is pilot workload. Aircraft used to have multiple people performing unique tasks. This is now changing, and we'll show some of this in the conversation today. Another important revolution coming is about data, of course. Data was traditionally mainly local to the aircraft; there was very limited exposure to other systems. And this is, of course, changing, with new communication systems and new sensors.
Of course, with this data revolution, we are starting to hit limitations in terms of onboard processing and real-time capabilities. Pilots could use a weather radar before, but there were no models looking into the future. So there was less automatic interpretation of the information, and it relied on the pilot.
Then of course, we have the safety standards. The safety standards are becoming more stringent. And this is also imposing a lot of limitations and challenges. And last but not least, we all know how security is now a big topic. In previous programs, security was achieved more by obscurity than by design. Now we have new standards and new analytical methods. And this is a big one.
Perfect. So now that we have thought a bit about what has happened, the complexity of the technology used has increased significantly, as I mentioned. We work on multi-company programs, on multi-role systems design, where faster responses are required with a lot more data, and fewer decisions are taken by humans. This brings a lot of challenges. So let's hear the opinion of the experts. I would like to start with Satish. Satish, what is in your opinion the main driver or drivers for this change?
Thank you. Thank you, Juan, for the wonderful question, and the complexity part, so we'll start with it. With the latest technological advances, programs that were once merely ambitions are becoming a reality these days. These state-of-the-art programs are extremely large and complex and demand joint development by multiple organizations, spread across nations and across continents.
And we can clearly see regions joining hands to materialize these complex programs, with the expectation of meeting the demands of multiple nations. Eventually, what is happening is that these programs are becoming huge with all the technology that is supposed to go into them: big to handle, and, in a way, too big to fail.
Now, let me give you a few examples to explain what exactly I'm talking about here. These days, whenever we talk about any state-of-the-art aircraft, it's actually not a single aircraft. It's a system of systems. In other words, if I can call it that, it's a flying network. The data capture and the data exchange are not limited to within the aircraft, but extend to the nearby aircraft as well.
For example, consider passing weather information to a trailing aircraft: the lead aircraft acts like a flying radar, passing its weather picture to the trailing or nearby aircraft. What is happening? How is this increasing complexity? These kinds of requirements lead to a rising number of sensors on board and even more data.
To take a few examples: any jumbo jet, a large passenger aircraft, easily carries more than 25,000 sensors. And on some of these planes, the sensors on the engines alone create more than one TB of data in 24 hours. So then the next question is, how are we going to store and how are we going to process this data? We need to be cautious about the aftermath. One part is putting sensors on board; the second is what we are doing with the data that we are capturing from those sensors.
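As a rough back-of-the-envelope illustration of what the figure quoted above implies (a sketch using only the numbers mentioned in the discussion; the decimal-terabyte convention is an assumption):

```python
# Rough numbers based on the figure quoted above:
# ~1 TB of engine data per 24-hour period on some large passenger aircraft.
TB = 1e12                       # bytes in a terabyte (decimal convention assumed)
seconds_per_day = 24 * 3600

engine_data_per_day = 1 * TB
avg_rate_bytes_s = engine_data_per_day / seconds_per_day

print(f"Average sustained rate: {avg_rate_bytes_s / 1e6:.1f} MB/s")      # ~11.6 MB/s
print(f"Engine data per year: {engine_data_per_day * 365 / 1e15:.2f} PB")
```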
I think one more example before I stop. We often see governments talking about manned and unmanned systems flying together. These kinds of systems are supposed to aid better decision making by pilots, automating if not most of the decisions then at least some of them, supported by this data. One major benefit is that this is going to reduce the load on the pilot, for sure. But it also introduces new challenges, including how to present information to the pilot and, more importantly, who should do what when dividing the task of flying between the pilot and the machine. So Juan, I would stop here for now and then continue based on this.
Yeah. Thanks a lot. I think this was great. I think these are indeed very important challenges and exactly what you're saying. So this brings an extraordinary load to the onboard processing system. So more processing needs, complexity, longer design and certification times. So for that, actually, I really would like to ask Pablo about this question because of his previous experience. So what do you think is the driver of this change?
Thank you, Juan, for the question. I think this has introduced some very interesting topics and some of the challenges we are facing right now as we try to apply to aerospace, to this next generation of aircraft, some of the technologies and advancements we have made in the last 20 years, not only in model-based design but in simulation, HMI tools, and so on. These are advancements and improvements that we have been applying in automotive, in other industries, even in home appliances, and it's something that we see all around in many customers and many partners that we are working with.
But in aerospace, we are talking about programs that are much longer. They are 10, 15, 20 years. So it takes a bit longer to apply all these new concepts, all these new technologies. And now is the time when we are starting to see an increased demand, from customers and also from the technology itself, to apply all these new tools, all these new methods. Satish has already mentioned, and you as well, some of these new improvements that are key for this next generation of aircraft.
So first of all, there is more computing power available. We are all talking about AI. There is much more data available thanks to data from the newest sensors. We want to have multi-function computers, multi-function HMIs. So this is a lot of data to process, but also a lot of data to transmit from one aircraft to another, to UAVs, to the ground station. We may even have aircraft that are fully autonomous. So there is more than ever to consider when talking about the next challenges in aerospace.
And it's also very important that all this information we have available, this information we are generating in real time from multiple sensors, weather, other forces flying in the same airspace, all these kinds of data, must first be processed in real time and then presented to the final decision maker in the best way possible.
To give you an example, let's talk about engine performance. We may no longer look just at metrics like, for example, temperature or revolutions per minute. We want to know more: is that engine performing well? What is the remaining life estimation? When does the aircraft, or the engine, or some part of it, require maintenance? So we want to go to a higher level of interpretation of all this data we are generating.
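A minimal sketch of the kind of higher-level interpretation described here: extrapolating a degradation trend in a health indicator to estimate remaining useful life. The indicator values, the threshold, and the linear-trend assumption are all illustrative; real engine-health monitoring is far more involved.

```python
import numpy as np

def remaining_life_hours(hours, health_indicator, failure_threshold):
    """Fit a linear trend to a degrading health indicator and extrapolate
    to the point where it would cross the failure threshold."""
    slope, intercept = np.polyfit(hours, health_indicator, 1)
    if slope >= 0:
        return float("inf")              # no degradation trend detected
    hours_at_threshold = (failure_threshold - intercept) / slope
    return max(hours_at_threshold - hours[-1], 0.0)

# Illustrative data: an indicator degrading from ~40 toward a threshold of 10
flight_hours = np.array([0, 100, 200, 300, 400])
indicator = np.array([40.0, 37.5, 35.2, 32.8, 30.1])
print(f"Estimated remaining life: "
      f"{remaining_life_hours(flight_hours, indicator, 10.0):.0f} flight hours")
```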
So this is something that, as I said, we are seeing more and more often. It's very important as well to present this data to the pilots in a way that is easy and quick to interpret. And sometimes the consumer could even be another platform. We may be talking to a UAV, or there may be other pilots or people monitoring the aircraft from the ground station. So we are adding more complexity in this new system of systems.
So how are we going to face the challenge of communicating so much information, processing this information in real time, and presenting it either to pilots or to other people who are going to decide whether the aircraft is going to fly or whether it is going to perform a certain mission? I think this is a challenge. But I believe the technology has also reached a point at which we are ready, and we have the tools and methods to go and pursue these new challenges.
Yeah, definitely. And of course, as you mentioned, this poses a big challenge to the embedded devices. All the new processing architectures need to catch up. And certification will be, or already is, a problem. So we have to make sure that we accelerate these design workflows. Thanks a lot, Pablo.
So we've seen the amount of data, how this amount of data is impacting these processing devices and the algorithms needed as well. So what I would like to ask Matt for the first time today is how does this impact the actual human machine interfaces?
Thanks for that one, Juan, and thanks for the background there, Pablo, as a good introduction to this data and these systems. Obviously, my colleagues here have been talking about complexity and information and technology enabling things. It's very interesting, because we have to remember what the purpose is of what we're trying to do here: data and technology are a facilitator to actually enable somebody to do a task. And that's the big question when we look at the human-machine interface: what is the task we're asking the person to do in these next generation aircraft?
One of the things technology does is enable a lot more automation of generic tasks, to allow the pilot or the user of the HMI to perform additional tasks. So actually what we're doing is asking them to do different tasks to what they used to. Traditionally, you had to learn all the aerodynamics of the aircraft and the flying.
And now you look at aircraft, you point it, it goes. Actually what you're doing is understanding what your objective is. I'm trying to fly from Frankfurt to Helsinki or to Stockholm. That's your objective. And in between, you've got to deal with the situation, be it weather, be it kind of climatic. Is it your fuel, your engines? What is it? You're managing your actual system. So in the commercial world or in any other type of flight environment, you actually have an objective you're trying to do.
So the technology is an enabler. And there's an expectation. The world has moved on in a number of years, and technology has exploded. The generation of users coming through expect a certain amount of information presented in a certain way. If we just dumped raw spreadsheets in front of a pilot, they'd never be able to understand them. The same goes for raw feeds from an engine, a sensor, a radar, or a LiDAR: no one understands them as they are.
What we've got to do in the human-machine interface is actually ask, what is the problem we're trying to solve? What is the objective? And then look at how we facilitate that with new technology, at how we present it. Is it sensor fusion? Are we merging a LiDAR picture of the ground alongside some data with maps? Are we overlaying the projected velocity for landing? How do we present that? So we actually need to look at new ways of presenting the information to minimize the pilot workload.
So technology facilitates this. Years ago, you wouldn't be able to fuse this in real time. You're now able to take information and present it via computer graphics, visual stimulation, or haptic feedback to keep the pilot from being overloaded. And this is the interesting part. Technology is enabling this data overload, but it's also providing the means to allow interaction and control. We're all used to touch screens and gesture controls on our phones. You point a child at a keyboard and they go, what's a keyboard? They don't understand. And the reality is it's quicker to flick your finger than it is to find a button and press it to change a page.
But this means, obviously, that the task somebody's doing is evolving. They can do more because of automation. But we still have to remember that the human body and mind can only do a limited amount. So closing the loop, even monitoring the user to change the information dynamically depending on their workload, becomes more important. We actually have to look at how much information is being presented. Do we need to declutter that information?
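A minimal sketch of that closed-loop decluttering idea: rank display elements by priority and hide the lowest-priority ones as an estimated workload measure rises. The workload scale, thresholds, and priorities here are invented purely for illustration.

```python
from dataclasses import dataclass

@dataclass
class DisplayElement:
    name: str
    priority: int            # 1 = safety-critical, higher numbers = less critical

def declutter(elements, workload):
    """Return the elements to keep on screen for a given workload estimate.

    workload is assumed to be normalised to 0.0 (idle) .. 1.0 (saturated);
    as it rises, progressively lower-priority elements are hidden."""
    if workload < 0.4:
        max_priority = 3      # calm: show everything
    elif workload < 0.7:
        max_priority = 2      # busy: hide advisory layers
    else:
        max_priority = 1      # saturated: keep only safety-critical symbology
    return [e for e in elements if e.priority <= max_priority]

hud = [DisplayElement("attitude", 1), DisplayElement("traffic", 2),
       DisplayElement("fuel trend", 3)]
print([e.name for e in declutter(hud, workload=0.8)])    # ['attitude']
```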
These are all things that technology can enable, but they become challenges as we build the next generation of aircraft that we have to consider. So obviously that kind of moves on then to the next level of discussion here. How do we build this and what does it do? But these are the kind of drivers here. We're facilitating a kind of communications network, a sensor data network, overload of information, and we've now got to build those systems. So I think we'll probably talk about that kind of later, Juan.
Yeah, definitely. This is very important. We frequently say that the important thing now is the data. But it's not just data, it's having the right data at the right place at the right time. And not only data, but secure data. So I think it's easy to say, not that easy to do right. And definitely, the way we actually realize that and the way we organize this is very important.
So we've seen examples of how these technologies are changing. And this is one of the most important challenges for the success of these new programs. The way this technology is used, like Matt is saying, how processes, how methods, how tools are used so that we can enable multi-domain and multi-company teams to work efficiently. These are huge systems. You guys mentioned system of systems. So how are we putting all these pieces together?
These systems go beyond worrying about one system into these systems of systems, like I mentioned. So different teams, frequently in different entities, need to agree on requirements, interfaces, artifacts, for instance for the integration of solutions or proofs of concept. When you're doing a project, you have different phases. So this integration will combine different items that are at different design stages and have different levels of fidelity. The integration of that is very heavy. So how can we do that?
In these programs, we talk about concepts like common working environments, where the interoperability of teams through common tool chains and methods accelerates the development and identifies issues early. You don't need to have the aircraft already built to start seeing that the displays are working, of course.
So actually, this is the second part that I would like to ask our experts. So how can we deal with this complexity, this scalability and collaboration challenges? So Satish, I can start with you.
Sure. Thanks. Thanks, Juan. In your talk, you just mentioned the common working environment. I'll elaborate a little bit more on that, because I strongly believe that having an effective common working environment in place is one way to solve the problem of complexity. Using a common working environment, we can bring multiple teams together.
In today's world, I would say working in silos is more of a tribal approach. The modern approach is enabling seamless collaboration between teams and eventually between organizations. At the end of the day, if these new large programs are to be successful, that has to start with establishing a more effective working environment for all the diversified teams contributing to these large programs. It's not a choice anymore. I think it needs to be part of our processes and our systems.
These program complexities can't be dealt with in the field when systems are deployed. They have to be addressed early in the lifecycle. The earlier you can identify and fix these issues, these bugs, the more successful the program is going to be.
So then the next question is, OK, how do I select my common working environment? If I have to make a choice and select a common working environment, I would probably consider a few parameters, related to the capabilities of this framework, what this framework can offer. We all know that nowadays nobody develops these programs from scratch. We all have something to build on top of.
So that's the first and foremost question I think we need to ask ourselves. Is our design framework scalable, and does it support incremental innovation, what I might call a build-on-top-of approach? Can we do early trade studies or build the architecture of the system? And then, without losing what we have done at the systems level, can we continue to build on top of what we did? Can we pass those architectural digital designs to the subsystem teams, who can leverage these abstract designs and then add more and more fidelity to them?
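One possible way to picture this build-on-top-of idea in code (a sketch, not any specific tool's API): subsystem teams implement a shared interface at increasing fidelity, so an architecture-level model can later be swapped for a detailed one without changing the integration. The engine interface and the first-order lag model below are illustrative assumptions.

```python
from abc import ABC, abstractmethod

class EngineModel(ABC):
    """Interface agreed at the architecture level; every fidelity level honours it."""
    @abstractmethod
    def thrust(self, throttle: float, dt: float) -> float: ...

class BehaviouralEngine(EngineModel):
    """Early, low-fidelity model: thrust is simply proportional to throttle."""
    def thrust(self, throttle, dt):
        return 100e3 * throttle                       # N, static map

class LaggedEngine(EngineModel):
    """Subsystem team adds fidelity later: first-order spool-up dynamics."""
    def __init__(self, time_constant=2.0):
        self.tau, self._thrust = time_constant, 0.0
    def thrust(self, throttle, dt):
        target = 100e3 * throttle
        self._thrust += (target - self._thrust) * dt / self.tau
        return self._thrust

def simulate(engine: EngineModel):
    # Integration code is identical regardless of which fidelity is plugged in.
    return [engine.thrust(throttle=1.0, dt=0.1) for _ in range(50)]

print(simulate(BehaviouralEngine())[-1], simulate(LaggedEngine())[-1])
```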
The second point I think we should consider is that in the aerospace industry, as we all know, safety comes first. These are the primary objectives of this industry. Now, with the given budget and time, we all need to rely on commercial solutions, but without compromising on safety. So the question is, is our framework flexible enough to plug in new technologies? For example, adaptive systems in the cockpit, as we are talking about HMI aspects here. Can we bring some kind of adaptive system into the cockpit within the existing design methodology we have developed over decades in our organizations?
We should also be able to move on: once the design part is done, how can we connect that piece to the verification framework? Can we move between the design and the verification framework seamlessly? Does it establish traceability across all the phases of the software development lifecycle? And moreover, the verification framework should be robust enough to catch and fix even corner cases.
So, Juan, I have been talking mostly about the process-related challenges so far. But one important aspect that we shouldn't miss here is the people-related challenges. Nowadays we are seeing that these teams are diversified and composed of team members coming from different cultures, different expertise levels, different backgrounds. So people collaboration is very important to make our common working environment effective.
So then the question is, how do we do that? How does our framework enable this kind of collaboration instead of teams working in silos? Enabling these teams through regular upskilling is equally important, in addition to selecting the right working environment for our teams.
So I would like to pose a question to the audience listening to this talk. In your current organization, or in the organizations where you have worked, what does this common working environment look like in the major programs you work on? Maybe we will extend this discussion as we hear thoughts from the other panelists as well, Juan. But I would like to stop here.
Yeah. Thanks a lot. I think this is a great question for the audience, actually. I hear very important words like reusability, traceability, collaboration, and of course safety. I think these are the key aspects. And with all this complexity, we have to make sure that we keep these as the pillars on which we build our system. So I think it's great.
So I would like to ask the next panelist. We've seen what Satish was commenting on. So in your opinion, Pablo, what's the way to deal with this complexity in this case?
Yeah, thank you, Juan. While Satish was talking, I was thinking about these common development and verification frameworks where we can collaborate easily and efficiently. And I think simulation plays a key role for that purpose. Many people, even myself in the past, think simulation must be as detailed as possible. A simulation must cover all the details. It must be as close to reality as possible. But actually, the fact is that simulation must, above all, fulfill our purpose.
In many cases, we don't need a very high level of detail to achieve our objective. It depends on the development phase or the lifecycle phase we are in. Maybe at the beginning, we have some requirements validation and trade-off studies, and a very simple simulation with behavioral models might be enough in this case.
Then of course, we can increase the level of detail of this simulation. We can go to detailed design. We can go to HMI studies, for sure. We can even go to flight simulators and do simulation all along the way. So simulation is not intended just for one very specific detailed design. And because all simulations are inherently wrong, but some of them are useful, we must find the simulation that is going to be useful for us.
So now we see a lot of ways of using simulation. We see digital twins. We can use simulation not only for development but for verification and validation, within the framework that was introduced earlier. We have started to see digital twins in other industries, and I think that's a great way of reducing costs and accelerating development and verification in the aerospace industry as well. Because with a digital aircraft, we can not only design and build our aircraft faster, but we can also build, for example, simulators.
We can perform HMI studies that are going to be useful not only for engineers like us doing development and validation, but also for pilots, so that they can train beforehand. They can provide feedback: is this information useful for me? Would I like to have this information presented in some other way? Is this too much workload for me? Is this something I can handle, as Matt was saying in the previous discussion?
So I think it's very important that we can use simulation for many different aspects, and that simulation is going to play a key role not only for engineers but for crews, for pilots, and for people involved, for example, in developing synthetic data to train the AI behind these future autonomous pilots. It's also important to consider that the battlefield of the future is changing, that we are going to see people in different aircraft, in different scenarios and missions, who might be thousands of kilometers apart from each other. And this is something that we can prepare for up front if we have a good simulation framework where we carry out development and validation studies, not only the traditional ones to develop, say, flight controls, but also HMI studies, data post-processing, and even AI.
Yeah. Thanks a lot, Pablo. I think this is very, very important. And this is becoming a necessity, because you have teams that are now working remotely, so it is very difficult to exchange things in the traditional way, let's say. So communicating through models and doing simulation, this capability of combining different levels of fidelity, as we were saying: give me your low-fidelity model and I will try to see how it behaves together with mine. All this is very, very important, and it's a great way to cope with this complexity. Thanks a lot.
So I would like to bounce the question to Matt now. He does a great job integrating things together, so I'm sure he will be able to give us very good conclusions.
Thank you for that one. And thank you, Satish and Pablo, because I think you've introduced two very important topics here: this development environment and these simulation concepts. The challenge we're facing building these systems moving forward is that traditionally you would have a small team of software developers building your HMI. They're very good at coding and that sort of thing, and they would take a list of requirements and hand-code an implementation. If they were lucky, they would have a great tool to work with to do some HMI design and drawing.
But the problem is, we're just expanding the scope here. We can no longer rely on one person doing the development. It doesn't work that way. It's impossible to know all the systems and actually put this together. You can't build everything. The number of lines of code running in there has exploded. So what we're looking for is steps and methods to actually improve that development, and to understand that these development teams are no longer co-located.
They will be distributed, because you have a specialist in building a radar system. They may be located in Germany. You may have a specialist in the aerodynamics side that's in Spain. These two things come together, but you're feeding that information. Your human factors person may be in France. And you've actually got to put all this together. So what we're looking at here is, how do you allow this to happen? And what is the development cycle that you're going through?
So one of the ways of actually allowing this is collaborating through models and working in a common environment, a model-based approach where you don't need to know assembler code or C code or something like that. We can exchange models that people can readily understand from a high level and connect them together. And this is where simulation comes in, which is very interesting, because I think Pablo very accurately described that simulation is often thought of as the high-fidelity, high-precision run against multiple data sets with plus or minus 0.01% tolerance. Actually, simulation runs through the whole lifecycle.
As we get closer to the end product, we need higher fidelity simulation models to validate against. But earlier in the lifecycle, we need models, as Pablo mentioned, to validate what the pilot is doing. Is this approach of presenting the data actually useful? Can they interact with it? Can we monitor it? We actually want to throw away ideas quickly. We don't want to be 12 months into a development lifecycle and realize we've spent hundreds and hundreds of man months developing something that's unusable.
What we need is maybe a simplistic flight model. It doesn't have to be accurate. It just has to pitch up and down and bank and roll left and right. But actually that's giving us enough for the actual user to realize, hey, as this data moves and changes, I can't understand it. The same with the LiDAR model. The same with the sensor fusion. We need models or simulation of the right fidelity.
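A sketch of the kind of deliberately simple flight model described here, just enough pitch and bank response to drive an HMI study or animate a display mockup. The gains, limits, and kinematics are placeholder assumptions, not real aerodynamics.

```python
import math

class ToyFlightModel:
    """Deliberately low-fidelity: pitch and bank follow stick inputs directly,
    heading and altitude integrate from them. Enough to animate an HMI mockup."""
    def __init__(self):
        self.pitch = self.bank = self.heading = 0.0    # degrees
        self.altitude, self.speed = 3000.0, 120.0      # m, m/s

    def step(self, stick_pitch, stick_roll, dt):
        # Stick inputs in [-1, 1]; rates and limits are arbitrary illustrative gains.
        self.pitch = max(-20, min(20, self.pitch + 10 * stick_pitch * dt))
        self.bank = max(-60, min(60, self.bank + 30 * stick_roll * dt))
        self.heading = (self.heading + 3 * math.sin(math.radians(self.bank)) * dt) % 360
        self.altitude += self.speed * math.sin(math.radians(self.pitch)) * dt
        return self.pitch, self.bank, self.heading, self.altitude

model = ToyFlightModel()
for _ in range(100):                        # 10 s of a gentle climbing right turn
    model.step(stick_pitch=0.2, stick_roll=0.1, dt=0.1)
print(model.step(0.0, 0.0, 0.1))
```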
So as we come to doing models, one of the great things here is that, typically with requirements-based development, you have a piece of paper. You'd hand it to another company or another team, and they would go and implement it. And that's one of the problems: instead of refining our development, we re-express it. Today we're using PowerPoint. Tomorrow you're using C#. The next day you're going to go and put it into C++ in the aircraft. You've gone through three re-expressions. Is that sensible?
Well, actually, what you're doing is spending a lot of time and money checking that what you've expressed met what the previous team thought. It's the old adage of, hey, I asked for this. I asked for a swing and I got a slide in the play park. Yes, they're both play equipment that children enjoy, but actually they're completely different pieces of equipment. And that's the problem with translating requirements, a problem you avoid by using a model-based approach.
And that's the beauty of how things have moved on. There are now open standards for the exchange of graphics between models, so you can actually share HMI designs. The industry has now recognized the questions: how do we exchange HMI models? What are the methods? What are the open standards? This is driving collaboration because it's being demanded. We have to do this; it's impossible otherwise.
The other beauty is that distributed simulation has come on a huge amount. Back in the early days of my video gaming as a child, you'd have a single computer in front of you. You'd play your video game, and it was great. The best you could expect was your friends coming to your house and all sharing, taking turns on a joystick, or maybe two joysticks. Now we can fire up the internet and you can actually play with somebody in India, in America, in Spain, all at the same time. These real-time, low-latency technologies have moved forward so that we can make use of them in the simulation world. We now know that we can have an accurate model of a radar running in one country, sending data to a team in another country.
So when we talk about simulation, we no longer talk about running it all on a local setup. On some of the original aircraft I worked on, all our simulation tools had to run on a big mainframe to connect the data, because it was all localized. Now we have the power to distribute that. And the specialist on the radar or the LiDAR can be the specialist providing us that information. And again, open standards come flooding in to make it easier to connect your models to this data, which then allows you to run at the right fidelity.
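A minimal sketch of the distributed-simulation idea: one site publishing simulated radar samples over the network and another receiving them. Real programs would typically use standards such as DDS or HLA rather than raw sockets; the packet format, port, and field names below are invented purely for illustration.

```python
import socket, struct, time

ADDR = ("127.0.0.1", 15000)                  # stand-in for a remote site
FMT = "!dff"                                 # timestamp, range (m), bearing (deg)

def publish_radar_sample(sock, rng, bearing):
    # "Radar" site: pack one sample and send it to the consumer.
    sock.sendto(struct.pack(FMT, time.time(), rng, bearing), ADDR)

def receive_radar_sample(sock):
    # "Cockpit display" site: unpack one sample as it arrives.
    data, _ = sock.recvfrom(struct.calcsize(FMT))
    return struct.unpack(FMT, data)

rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(ADDR)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

publish_radar_sample(tx, rng=12500.0, bearing=42.0)
print(receive_radar_sample(rx))
```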
So what we're doing is just trying to reduce mistakes, trying to stop re-expressing them time and time again by using paper. What we're trying to do is recognize that these teams are much bigger. So: model-based, common interchange. And this early validation is really, really important, because a mistake costs a lot more money than it used to. If I made a mistake a few years ago, it was only my time. If I've got a team of 200 engineers and we've made a mistake, I've wasted a month of 200 engineers' time. That's a lot of money. We can't afford to do that in modern times. So we need to be efficient and quick and get our results early.
So that kind of summarizes bringing these things together. We've got to do early validation with models and not assume that things will just work together. And we've got to look at how that collaboration works. So those are my thoughts, Juan. I'll leave it to you to comment on that.
Yeah, I completely agree. I think in the interest of time, because we have 20 minutes left and I really would like to have some Q&A, we can move on. So just as a quick summary: we've seen what is changing in these new programs, how technology is one of the triggers driving these changes, and how vital it is to properly address the processes and the collaboration that we were mentioning.
So before going into questions, I really would like to remind you, and for you to keep in mind, that this is not the end of the story. This is just the beginning; this is one of the episodes. So please follow us in our next episodes. The next one is about collaborative simulation and integration environments. Actually, Pablo will be there, so it'll be great.
And then maybe we can start with the Q&A part. So please remember to post your questions in the Q&A chat, and we will try our best to answer them. Good. We can give a couple of seconds for more people to do that.
Juan, while we are receiving the questions, let's give a few seconds for our audience to post them. Matt made an interesting comment: how do we control the propagation of errors? If we see an error coming from one model or one design, how do we control its propagation? That's an important characteristic of the common working environment that we should look for.
While Matt was talking, an example came to my mind. In one of the projects I worked on, we used to make big error propagation trees, as we called them at that time. It's a kind of fault analysis: if an error is made in one particular place, how could it propagate and impact the work of engineers when the teams are spread across the globe? So I think with an effective common working environment and model-based design, it becomes much easier to tackle these kinds of challenges.
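A small sketch of the error-propagation-tree idea mentioned here: model component dependencies as a directed graph and traverse it to see which downstream systems a single fault could reach. The components and their connections are illustrative, not taken from any real program.

```python
from collections import deque

# Directed dependency graph: component -> components that consume its output
# (illustrative avionics components, not from any real program)
feeds = {
    "air_data_sensor": ["flight_control", "autopilot"],
    "flight_control": ["primary_display"],
    "autopilot": ["primary_display", "flight_director"],
    "primary_display": [],
    "flight_director": [],
}

def propagate(fault_source):
    """Breadth-first traversal: everything reachable from the faulty component."""
    affected, queue = set(), deque([fault_source])
    while queue:
        node = queue.popleft()
        for downstream in feeds.get(node, []):
            if downstream not in affected:
                affected.add(downstream)
                queue.append(downstream)
    return affected

print(propagate("air_data_sensor"))
# affected: flight_control, autopilot, primary_display, flight_director
```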
Yep. Perfect. So, great. We are at the top of the hour already. So I really would like to use this time to personally thank the panelists. It was a great discussion and great to have you here. And of course the audience: please follow us in the next episodes. Thank you guys, it was a pleasure having you here with us today.