Applying Artificial Intelligence for Radar Applications
From the series: Radar Applications Webinar Series
Overview
Artificial intelligence technology is advancing fast, and the possibilities for engineers to apply AI to radar applications are virtually unlimited. We will look at some real-world radar applications where AI is already being used.
In this session, we will use detailed examples to showcase how recent developments in MATLAB and Simulink enable more efficient AI-for-radar workflows. Highlights include:
Deep learning and machine learning for radar applications
- Workflow for AI-driven system design
- AI for classification of SAR targets, waveforms, images, and micro-Doppler signatures
- AI for regression, including human health monitoring and maritime clutter removal
- AI application examples
Elevation estimation in 3D Surveillance radars using Artificial Intelligence
- Existing challenges in the elevation estimation of aerial targets in 3D surveillance radars
- Motivation for using AI in elevation estimation techniques
- Implementation & testing of the proposed elevation estimation techniques using MATLAB tools
- Performance improvement using AI techniques
About the Presenter
Sumit Garg | Sr. Application Engineer | MathWorks
Sumit Garg is a senior application engineer at MathWorks India specializing in the design, analysis, and implementation of radar signal processing and data processing applications. He works closely with customers across domains to help them use MATLAB® and Simulink® in their workflows. He has over ten years of industrial experience in the design and development of hardware and software applications in the radar domain and has been part of the complete lifecycle of projects pertaining to aerospace and defence applications. Prior to joining MathWorks, he worked for Bharat Electronics Limited (BEL) and the Electronics and Radar Development Establishment (LRDE) as a senior engineer. He holds a bachelor’s degree in electronics and telecommunication from Panjab University, Chandigarh.
Ram Pravesh | Deputy General Manager | Bharat Electronics Limited
Ram Pravesh received his B.Tech. degree in Electronics & Communication from the Institute of Engineering and Technology, Lucknow, India, in 2002 and has worked in the Radar SBU of Bharat Electronics Limited, Ghaziabad, India, since 2002. He completed his M.Tech. degree in Radar & Communication at the Defence Institute of Advanced Technology, Pune, India, in 2016. He is a radar system engineer and has been involved in the development of 3D surveillance radars (ground and shipborne) for the Indian military. He has been associated with the commissioning, installation, and evaluation trials of various types of radar systems. He is pursuing a Ph.D. in Electronics & Communication at NIT Patna, India. His areas of interest are radar antennas, radar signal processing, target tracking and recognition, SAR, machine learning, remote sensing, and imaging.
Recorded: 9 Feb 2023
Hello, everyone. Thank you for attending this webinar on applying artificial intelligence for radar applications. My name is Sumit Garg. I'll be walking you through the presentation today.
In this presentation, I will cover four main topics. I will start with an introduction to the workflows you can use when applying deep learning to radar systems and explain some of the challenges you might face along this workflow. Second, I will show you the solutions MathWorks provides and some classification problems we can use deep learning for. You can take advantage of these examples to overcome the challenges that we discuss in the introduction.
Third, I will discuss some regression examples that we can use artificial intelligence for. Finally, we will summarize the presentation with some AI application examples shipping in the toolboxes. So let's get started.
Starting with why we need artificial intelligence in radar systems: radar systems have many different applications. We can see multifunction radars that perform different functions, search and tracking, for example. We can also find radar systems in other industries. The automotive industry is an example, where we use radar for applications such as adaptive cruise control or, more generally, as a sensor modality for autonomous driving.
Another application that comes to mind is synthetic aperture radar for generating images. The advancements in technology that have enabled all these applications have also generated some challenges. One of these challenges is system design complexity, which is increasing rapidly. As a result, the requirements of these complex systems cannot be satisfied with traditional methods.
That is why we see new approaches based on artificial intelligence that offer more flexibility in system architecture, obtain better performance, or generate better results and outputs. Artificial intelligence, an ongoing megatrend, is the simulation of intelligent human behavior: techniques to perceive the environment, understand it, and take action.
Consider self-driving cars. AI-driven systems like these integrate AI algorithms, such as machine learning and deep learning, into complex environments that enable automation. In general, machine learning can be categorized into unsupervised learning and supervised learning. Unsupervised learning works with unlabeled data, and the output of the network is clusters with common features or characteristics.
Supervised learning, on the other hand, works with labeled data to predict a quantity of interest. You provide inputs that you think have an impact on the response, along with the responses, to the network for training. Then the network will predict the response based on its input.
Unlike the other two frameworks, which operate using a static data set, reinforcement learning works with data from a dynamic environment. And the goal is not to cluster data or label data, but to find the best sequence of actions that will generate the optimal outcome.
Deep learning is a subset of machine learning. While machine learning operates on extracted features, deep learning performs automatic feature extraction inside the network. You can expect two types of responses from deep learning networks. That is classification and regression.
When the response is discrete in nature, for example, classifying targets in a synthetic aperture radar image or predicting hand gesture movements, the problem is a classification problem. If your response is continuous in nature, for instance, if you are trying to monitor human vitals or remove clutter from maritime surveillance data, then it is a regression problem. I will focus on supervised learning examples today, and I'll walk you through some of them in the coming slides.
The workflow for deep learning starts with creating and accessing data sets. You can either work with recorded data from real hardware or synthesize data with the radar scenario workflow. To train a neural network, a large, high-quality data set is required. And this data needs to be labeled.
Now, labeling is a time-consuming and repetitive task, but it is an essential part of this workflow. With the Signal Labeler app, you can automate the labeling of synthesized or recorded data sets. Then you can preprocess and transform the data to make it more suitable for training the network for your application.
For example, if you are using networks that work with images, you can transform the IQ signal into time-frequency images and use those for network training. Also, radar- and wireless-specific expertise and tools play a key role when preparing and preprocessing the data to train networks, especially because you won't find quite as much published research in these application areas as you find in applications based on computer vision.
You might also be dealing with large amounts of recorded data. Besides labeling the data, you might need to perform some augmentation and transformation on all of it. Features such as Datastore or ImageDatastore come in handy in this situation and enable you to work with these large data sets.
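As a minimal sketch of this idea, the snippet below sets up an image datastore for labeled image chips and a signal datastore for recorded IQ files. The folder names here are hypothetical placeholders, not paths from the webinar.

```matlab
% Hypothetical folder layout: one subfolder of "sarChips" per class label.
imds = imageDatastore("sarChips", ...
    "IncludeSubfolders", true, ...
    "LabelSource", "foldernames");      % labels inferred from folder names

% For recorded IQ data stored as MAT files, a signalDatastore reads
% batches lazily, so the full set never has to fit in memory.
sds = signalDatastore("recordedIQ", "FileExtensions", ".mat");

% Read one item from each store to inspect the data.
img = read(imds);
sig = read(sds);
```

Both datastores can be passed directly to transformation and training functions, which is what makes them convenient for data sets that do not fit in memory.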
Next is to develop predictive models and train them with the data you have prepared. Whether you are designing a model from scratch or modifying an already developed model, you can use the Deep Network Designer app to get a good head start.
There are also tools available for managing and optimizing the training process. And when you are satisfied with the outcome of training, the performance of the network, you can deploy it to a desktop app, the cloud, or embedded devices. It is important to note that this is an iterative process.
Now, you can develop radar systems with MATLAB and Simulink. In our view, the building blocks for a radar system are the radar front end, antenna systems and phased arrays, signal processing and data processing blocks, and resource management and control for modeling a multifunction radar. Because it is important to predict and analyze the performance of a radar system in realistic environments, it is essential to consider the effects of the environment and targets by modeling scenes and scenarios.
Our goal is to make MATLAB and Simulink the preferred platforms to develop radar systems, whether for analysis of real data or for using simulation in the design, deployment, integration, and test of radars. For radar applications where we do not have enough real data, synthetic data generation, a growing area in artificial intelligence, relieves you of the burden of manual data acquisition, annotation, and cleaning. It solves the problem of acquiring data that would otherwise be impossible to obtain, especially for radar applications, and it can produce comparable training results to real-world data in a fraction of the time.
One of the key enablers to support this radar lifecycle is the availability of three different abstraction levels for analyzing and modeling radars: power level, measurement level, and waveform level. The power level is based on the radar equation and is used for link budget analysis. At the measurement level, the signal processing is abstracted out, and detections are generated based on target SNR and receiver operating characteristics. The waveform level is used to generate IQ signals.
At this abstraction level, all the components of the radar system, including antennas and transceivers, are modeled. As you can see in this chart, the simulation time increases as the fidelity of the models increases. So it is essential to have models with different levels of fidelity that can serve different stages of the radar system lifecycle for generating synthetic data sets.
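To give a flavor of the power-level abstraction, here is a minimal link-budget sketch using the radareqsnr function; the numeric values are illustrative assumptions, not parameters from the webinar.

```matlab
% Illustrative power-level (radar equation) link-budget calculation.
fc       = 10e9;                           % assumed operating frequency, 10 GHz
lambda   = physconst("LightSpeed")/fc;     % wavelength
tgtRange = 50e3;                           % target range, 50 km
Pt       = 5e3;                            % peak transmit power, 5 kW
tau      = 10e-6;                          % pulse width, 10 us
G        = 30;                             % antenna gain, dB
snr = radareqsnr(lambda, tgtRange, Pt, tau, "Gain", G)   % available SNR, dB
```

At this level there is no waveform or signal processing at all, which is why it runs in a fraction of the time of a full IQ-level simulation.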
Now, let's go through some examples and see the tools and capabilities that you can use to overcome the challenges that we discussed. Let me start with a couple of synthetic aperture radar examples that show how to classify SAR targets using deep learning and how to perform target recognition in large-scene SAR images.
Starting with the SAR target classification example: a major task for SAR-related algorithms has long been object detection and classification. Here, we use a simple convolutional neural network to train on and classify SAR targets using deep learning. The data we are using is the Moving and Stationary Target Acquisition and Recognition (MSTAR) Mixed Targets data set, which was publicly released by the Air Force Research Laboratory.
This data set contains 8,688 SAR images from seven ground vehicles and a calibration target. The data was collected using an X-band sensor in spotlight mode with a 1-foot resolution. The images were captured at two different depression angles, 15 degrees and 17 degrees. As you noticed right at the beginning, we are facing a similar issue, a large amount of data, but instead of signals, we are dealing with images in this example.
ImageDatastore enables you to store large image data, including data that does not fit in memory, and to efficiently read batches of images during training of a convolutional neural network. You can explore the datastore by randomly displaying some chip images here. And you can divide the data into training, validation, and test sets.
Here, we use 80% of the data set for training, 10% for model validation during training, and 10% for testing after training. And you can see a confusion matrix to study the model's classification behavior in greater detail here. We are able to get very good accuracy in terms of target classification for these images.
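A minimal sketch of that split and a small CNN is shown below; the folder name, image size, and layer sizes are illustrative assumptions and may differ from the shipped example.

```matlab
% Hypothetical folder "MSTAR" with one subfolder per target class.
imds = imageDatastore("MSTAR", "IncludeSubfolders", true, ...
    "LabelSource", "foldernames");

% 80% training, then split the remainder evenly into validation and test.
[imdsTrain, imdsRest] = splitEachLabel(imds, 0.8, "randomized");
[imdsVal, imdsTest]   = splitEachLabel(imdsRest, 0.5, "randomized");   % 10% / 10%

layers = [
    imageInputLayer([128 128 1])                  % assumed chip size
    convolution2dLayer(3, 16, "Padding", "same")
    batchNormalizationLayer
    reluLayer
    maxPooling2dLayer(2, "Stride", 2)
    convolution2dLayer(3, 32, "Padding", "same")
    batchNormalizationLayer
    reluLayer
    fullyConnectedLayer(8)                        % 7 vehicles + 1 calibration target
    softmaxLayer
    classificationLayer];

opts = trainingOptions("adam", "ValidationData", imdsVal, "MaxEpochs", 20);
net  = trainNetwork(imdsTrain, layers, opts);

% Evaluate on held-out chips and inspect the confusion matrix.
pred = classify(net, imdsTest);
confusionchart(imdsTest.Labels, pred);
```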
The automatic target recognition example shows training of region-based convolutional neural networks (R-CNNs) for target recognition in large-scene synthetic aperture radar images. In this example, the solution has been applied to the combined problem of target detection and recognition. The R-CNN employed here not only addresses the problem of integrating detection and recognition but also provides an effective and efficient solution that scales up to large synthetic aperture radar scenes.
In autonomous driving, recognizing pedestrians and, above all, cyclists is a well-known problem, and car radars come to the rescue; that is the idea. Radar micro-Doppler signature acquisitions are classified into combinations of cyclist and pedestrian targets, in principle to support radar-based detections. The problem is that getting labeled radar data for these types of scenarios can be incredibly complex.
The answer here is that the data can be simulated, providing free labeling in a virtually unlimited number of scenarios, using the Radar Toolbox functions backscatterPedestrian and backscatterBicyclist in this case. These functions simulate the radar backscattering of signals reflected from pedestrians and bicyclists. This example illustrates how you can transform the data so that it can be fed to a neural network, and it shows the compelling performance metrics that can be achieved reasonably easily to identify pedestrians and bicyclists based on their signatures.
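A minimal sketch of the two ingredients, creating a parametric pedestrian target and turning a slow-time return into a micro-Doppler image, is shown below. The object parameters, the variable x (a simulated IQ return), and the PRF value are illustrative assumptions; the shipped example uses its own simulation loop and helper functions.

```matlab
% Parametric pedestrian target for labeled data generation.
fc  = 24e9;                                   % assumed operating frequency
ped = backscatterPedestrian("OperatingFrequency", fc, ...
    "Height", 1.7, "WalkingSpeed", 1.4, "InitialPosition", [30; 0; 0]);
[pos, vel, ax] = move(ped, 1, 0);             % advance the walking motion by 1 s, heading 0 deg

% x: hypothetical slow-time IQ return collected from the simulated scene.
prf = 3e3;                                    % assumed pulse repetition frequency
[s, f, t] = stft(x, prf, "Window", kaiser(256, 10), ...
    "OverlapLength", 224, "FFTLength", 512);
microDoppler = 20*log10(abs(s) + eps);        % time-frequency image in dB
imagesc(t, f, microDoppler); axis xy;
xlabel("Time (s)"); ylabel("Doppler frequency (Hz)");
```

The resulting time-frequency image is what gets fed to the network, with the label coming for free from the simulation setup.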
Now, I will show you some of the regression problems that are common for radar applications. Here is an example where we want to remove clutter from maritime radar PPI images using neural networks. We use the radar scenario workflow to synthesize the data required for training and validation of the network.
The maritime radar scans the surface of the sea with a 360-degree rotation. The beam pattern of the radar is shown in the figure on the top left. The target shown in this picture is modeled by a number of point scatterers to represent an extended target, and the sea surface is modeled with the built-in Elfouhaily spectrum model.
Each training sample is a pair of synthetic radar PPI images: an input image, which has both sea clutter and extended-target returns, and a desired image, which includes only the target returns. Both surface parameters and target parameters are varied to generate 84 pairs of synthetic radar images. To evaluate the performance, we provide the network with a set of PPI images that it has not seen during training and validation.
In the output, as you can see in the picture on the bottom right-hand side, the sea clutter is removed from the input PPI image. The remaining high-power clutter near the center of the images could be removed by a spatially aware layer, such as a fully connected layer, or by preprocessing the original images to remove the range-dependent losses.
If you go to the AI for Radar section of Radar Toolbox, there is an example that also uses Signal Processing Toolbox and Deep Learning Toolbox. This example shows how to infer vital signs from a continuous-wave radar. More specifically, it builds an ECG from a radar return: it shows how to reconstruct electrocardiogram signals acquired via continuous-wave radar using deep learning networks.
The example demonstrates how to use radar for vital sign monitoring. Such systems are advantageous over wearables, especially in long-term settings. First, the example directs the user to download data. The original data has 30 participants with synchronized continuous-wave radar and ECG signals from a reference device. The example uses only a subset of that data.
Five participants are used to train and validate the network. The sixth is used to test the model. The downloaded data includes 24 GHz continuous-wave radar returns as well as the reference ECG signal, which is used for validation.
The data includes a wide range of scenarios; in this example, there is rest, that is, normal breathing, and apnea, that is, breathing that stops and starts during sleep. Second, the example shows users how to place that data into a signal datastore. All signals are downsampled to 200 Hz and divided into segments of 1,024 points. That downsampling takes advantage of the fact that the ECG's information is usually located in the frequency band below 100 Hz.
Then the data is normalized, and time is spent visualizing it. Next, the example moves on to show users how to use the data to train a hybrid convolutional autoencoder and bidirectional long short-term memory (BiLSTM) network as the model. It is shown how the inclusion of the maximal overlap discrete wavelet transform (MODWT) can be used to improve the performance.
The example demonstrates how to perform a multiresolution analysis using the MODWT. In this case, the first convolutional layer in the deep learning architecture is replaced with a MODWT layer. As can be seen in the figure, some levels contain what mostly looks like noise, and these levels can be discarded.
The example selects levels three to five of the multiresolution analysis. Both the training loss and the validation loss of the model with the MODWT layer drop much faster and more smoothly. A flatten layer is also inserted after the MODWT layer so that the subsequent convolutional layer convolves along the time dimension and the output is compatible with the subsequent BiLSTM layer.
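A minimal sketch of that multiresolution analysis outside the network is shown below; radarSegment is a hypothetical 1,024-sample, 200 Hz radar segment, and the wavelet and level choices are assumptions (the shipped example implements this as a network layer instead).

```matlab
% radarSegment: hypothetical 1024-sample, 200 Hz radar segment (vector).
wt  = modwt(radarSegment, "sym4", 8);    % maximal overlap DWT, 8 decomposition levels
mra = modwtmra(wt, "sym4");              % multiresolution analysis, (levels+1)-by-samples

% Keep only the levels that carry the cardiac information (levels 3 to 5 here);
% the other levels look mostly like noise and are discarded.
kept          = mra(3:5, :);
reconstructed = sum(kept, 1);            % partial reconstruction from the kept levels
```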
The example then compares the reconstructed signals. The model with the MODWT layer is able to represent the peak positions, magnitude, and overall shape of the ECG in its reconstruction, whereas the original model's reconstruction was not as good as we would expect. This is even more evident when looking at the error distributions: the model with the MODWT layer represents the ECG signals more accurately.
Finally, I will show you some of the examples that are available in Radar Toolbox to get you started. As I mentioned in the beginning, we currently support supervised learning examples with simulated data sets and recorded data sets, and we have a list of examples for a variety of applications. You can use these examples as a starting point.
So I'm just flashing all the examples on the screen; you can have a look to see if any are of interest to you. With this, I will summarize the presentation. I showed you the workflow that is typically employed and the challenges that we generally face in a deep learning workflow for radar applications.
We saw some classification examples on synthetic aperture radar. Then we looked at regression problems, including maritime clutter removal and human vital sign detection, and some of the application examples that are available in the toolboxes.
I'll pass it now to Mr. Ram Pravesh. Mr. Ram Pravesh is a senior deputy general manager, and he will share his rich radar experience on how you can apply artificial intelligence in the field of radar. Over to you.
Thank you, Sumit. Good afternoon, everyone. I am Ram Pravesh. I work at Bharat Electronics, and I have been working on 3D surveillance radars since I joined.
In this presentation, we are going to talk about one particular parameter in 3D surveillance radar: elevation estimation. First of all, we will have a look at the existing technology for elevation estimation and how this can be done with AI in a very effective and efficient manner.
So first, we are going to discuss the existing techniques and the challenges in those techniques. Then the motivation behind AI, why we are going for AI and the reasons behind that. Then the implementation and testing of the proposed model, and finally the performance improvement using AI techniques.
Let us first start with a very brief introduction to 3D surveillance radar. It is very familiar to electronics engineers and defense personnel: a radar basically measures three main parameters, range, azimuth, and elevation. Here, we are going to talk mainly about elevation.
It is very clear that if a target is there, I would like to measure its range and its azimuth. And if it is a 3D radar, then we are interested in the elevation as well.
Here are some basics of elevation angle estimation. How do we do elevation angle estimation? If a target is there, we have its range, and if we have the elevation angle, then we can compute the altitude of the target. That is the final outcome of measuring the elevation angle.
Here, I would like to tell you that if you have a single broad beam, it becomes a limitation to estimate the elevation angle. So mostly, in earlier days, we had a separate secondary radar, called RFS, and we tried to get the altitude of the target from that. Then we merged it with the 2D data so it could be treated as 3D data. But again, there are some limitations to that.
Here, I would just like to say that with a single broad beam, the target could be anywhere within the beam in elevation, so we cannot pin down its elevation angle. This is one limitation, and that is why we need to form multiple beams.
BEL has jointly developed many 3D surveillance radars with LRDE, and more than a hundred of them are deployed in the field. These are snapshots of the results.
Before going further, let us go through the history of elevation estimation. If you see here, there are two figures. In the first figure, what happened initially? Because of multipath, a number of lobes formed, and this was an advantage for measuring height.
Let us say you already have the information on how the multipath lobes have formed. Then, when a target is first detected, you can say it is because of this particular lobe, take the lobes on both sides, and estimate what the angle is.
That is how it was done in the very early days. And then, with this kind of arrangement, two antennas, an upper and a lower antenna, by switching between these two beams we can do an amplitude comparison using the multipath lobes.
So this is just a brief history. Here, the multipath region creates many lobes, and we are taking advantage of multipath. But slowly, in the further slides, you will see that multipath is not an advantage, and now we want to get rid of it.
Here we are just trying to show the traditional techniques. The first is elevation scanning with a pencil beam: if you have a pencil beam, we can go somewhat further toward an accurate elevation angle, as in the figure we have shown. Then there are stacked beams: if you want higher elevation coverage and you also want the elevation of all targets in that particular region, stacked beams are the best option.
Then there is sequential lobing, where we switch the beams one by one and try to find the target location. Similarly, with conical scanning, the beam basically rotates, the target responds within the beam again and again, and in that way also we can measure the angle.
And the last one is monopulse, which is a similar technique. Basically, all these techniques, other than elevation scanning with pencil beams, are more or less similar. In monopulse, we are using the difference in one dimension to find azimuth and in the other to find elevation.
So what are the challenges in elevation estimation with these techniques? Here, we see that the monopulse technique will definitely give very good accuracy in azimuth and elevation, but it is more or less suited to tracking applications where we want to track a single target.
But when it comes to a surveillance kind of application, where we want the elevation angle of all targets at a particular azimuth, it falls short. Here, I just want to clarify that we are talking about 3D surveillance radars, which are mostly rotating radars, not AESA technology. In AESA we use monopulse, setting up multiple beams again and again, and we get azimuth and elevation.
But still, there are a lot of radars already deployed in the field with this rotating mechanical antenna. That is an easy and less complex system, easy to maintain, and all those things, and it definitely has the advantage of lower cost as well.
So with multiple stacked elevation beams, what we use is that even if a target has multiple responses, that is to say the target responds in multiple beams, we still take only two adjacent beams to find the angle. Here, in this figure, we can see that the fundamental accuracy of elevation estimation is restricted by the ratio of the two overlapping beams.
If the overlap between the two beams is high, we will get good accuracy, but if the overlap is less, accuracy suffers. This is one of the restrictions that we will also see in other slides: optimal overlap is a requirement for getting good accuracy.
There are other factors that also affect elevation estimation: thermal noise, antenna pattern error, channel mismatch error, and platform orientation. Some factors do not depend on the radar equipment at all, like jamming and clutter, multipath reflection, and target fluctuation. These factors are always in the picture when we estimate with the traditional techniques, and this is one of the motivations to go for AI techniques.
Here, I would like to show how we generate those multiple stacked beams; this also helps in getting the data for the application. This is the array factor, and with the help of it, we simulate a number of beams at different angles, and we also control the overlap.
In MATLAB, the Sensor Array Analyzer app has very good capability to do all of this. Earlier, we did some coding ourselves and struggled a lot. But here, by just setting a few parameters, we can select the geometry, the size of the geometry, the element spacing, and the row and column tapering.
This is a very easy way to generate multiple beams; we can generate any number of beams, and we can even visualize the 3D picture as well. So this is one way to generate the multiple stacked beams in MATLAB.
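The same stacked-beam generation can also be scripted. Below is a minimal sketch with an assumed 16-element vertical uniform linear array and illustrative beam pointing angles; the Sensor Array Analyzer app does the equivalent interactively.

```matlab
% Assumed S-band vertical ULA; element count, spacing, and beam angles are illustrative.
fc    = 3e9;
c     = physconst("LightSpeed");
array = phased.ULA("NumElements", 16, "ElementSpacing", 0.5*c/fc, "ArrayAxis", "z");

elBeams = 0:4:20;                                    % stacked-beam pointing angles, deg
sv      = phased.SteeringVector("SensorArray", array, "PropagationSpeed", c);
scanEl  = -10:0.1:30;                                % elevation grid for plotting the patterns

patterns = zeros(numel(scanEl), numel(elBeams));
for k = 1:numel(elBeams)
    w = sv(fc, [0; elBeams(k)]);                     % steering weights for beam k
    patterns(:, k) = pattern(array, fc, 0, scanEl, "Weights", w, "Type", "powerdb");
end
plot(scanEl, patterns); grid on;
xlabel("Elevation angle (deg)"); ylabel("Beam pattern (dB)");
```

Adjusting the spacing of the pointing angles relative to the beamwidth is what controls the beam-to-beam overlap discussed above.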
So once we have the multiple stacked beams, how do we estimate the elevation angle? Out of the multiple beams, we take the two adjacent beams in which the target has the maximum amplitude or maximum response. Say Bk and Bk+1 are these two consecutive beams.
We take the ratio of the responses in these two beams. Then we take the angle along the x-axis and the amplitude ratio along the y-axis, and with curve fitting we find a polynomial. After that, once you have the polynomial, then, with the help of the measured response, we can get the elevation angle.
This is the technique that is already available, and it is a general technique for elevation estimation. Here, for the curve fitting, the MATLAB curve fitting tools are very helpful.
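A minimal sketch of that idea with polyfit and polyval is shown below. The calibration angles and the ratio curve are made-up stand-ins for real beam-pattern data, just to show the shape of the computation.

```matlab
% Hypothetical calibration data: known elevation angles and the corresponding
% amplitude ratio (in dB) of two adjacent beams Bk and Bk+1.
elTrue  = (2:0.25:8).';                                 % deg, around the beam crossover
ratioDb = 6*(elTrue - 5) + 0.2*randn(size(elTrue));     % stand-in monotonic ratio curve

% Fit a polynomial that maps measured ratio -> elevation angle.
p = polyfit(ratioDb, elTrue, 3);

% At run time, the measured responses in the two beams give the ratio,
% and the polynomial gives the elevation estimate.
measuredRatioDb = 4.8;
elEstimate = polyval(p, measuredRatioDb)
```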
Once we have the elevation angle, we can use the cosine rule to find the altitude, where the antenna height, target range, effective Earth radius, and refraction factor are all taken into consideration. That is one way. The other way is to use a MATLAB function; I tried it and checked that it works very well. It is helpful because we do not need to write all of that again: we simply use the MATLAB function and get the altitude.
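As a minimal sketch of the cosine-rule geometry under the standard 4/3 effective-Earth-radius assumption (the numeric inputs are illustrative, and the variable names are mine, not from the webinar):

```matlab
% Illustrative inputs.
R   = 60e3;             % slant range to target, m
el  = deg2rad(3);       % measured elevation angle
ha  = 20;               % antenna height above the surface, m
k   = 4/3;              % effective Earth radius (refraction) factor
Re  = 6371e3;           % mean Earth radius, m
ReEff = k*Re;

% Law of cosines in the Earth-center / radar / target triangle:
% (ReEff + ht)^2 = R^2 + (ReEff + ha)^2 + 2*R*(ReEff + ha)*sin(el)
ht = sqrt(R^2 + (ReEff + ha)^2 + 2*R*(ReEff + ha)*sin(el)) - ReEff;
fprintf("Estimated target altitude: %.1f m\n", ht);
```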
Once we have a target altitude, how do we calibrate it, that is, check whether the measured parameter is accurate or not? For calibration, a clear-weather condition is required. That does not mean the radar only works in clear weather; radar can work in any environment. But for calibration purposes, it is a precondition that the weather must be clear, so that we at least know the actual capability of the radar.
And then, definitely, equipment errors should not be there: variations in gain across the multiple beams, antenna tilt, and so on. All those equipment-related things can be taken care of before going for the calibration.
For calibration, we also need a reference target height. In that case, the aircraft measures its altitude with its own instruments and communicates it to the radar; that is one way. The other is the barometric height, which we can get from IFF and also use for calibration, or the dual-GPS method. These are the ways to get the reference height, and these are the preconditions before calibration.
Once we have the data, how do we measure the accuracy? There are two methods: one is percentage height accuracy, and the other is root mean square error. For percentage height accuracy, what do we actually do?
We take the target height scan by scan and compare it with the reference height, and from that we can find the percentage height accuracy. Right here, we can see the radar-estimated height and the true height.
Then we find the difference and set a threshold on how much deviation we can accept. Like in this case, with a 500-meter threshold, the accuracy is 85%; if the threshold is reduced, the percentage will be lower.
The second way is to take the root mean square error. In that case, there will be a threshold in meters; if the error is above that threshold, it is a failure, and if it is below the threshold, it is OK. So we can measure accuracy either as a percentage or as a root mean square error.
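A minimal sketch of both measures, assuming hypothetical vectors hRadar and hRef holding the scan-by-scan radar-estimated and reference heights:

```matlab
% hRadar, hRef: hypothetical scan-by-scan radar-estimated and reference heights, in meters.
err = hRadar - hRef;

% Percentage height accuracy: fraction of scans whose error is within a chosen threshold.
thresholdM  = 500;                               % e.g., a 500 m acceptance threshold
pctAccuracy = 100 * mean(abs(err) <= thresholdM);

% Root mean square height error, compared against a threshold in meters.
rmsErr  = sqrt(mean(err.^2));
passRms = rmsErr <= thresholdM;
```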
Earlier, we saw that lobes formed with the help of multipath, we took advantage of that, and we estimated the height. But now we will see that multipath also leads to poor height accuracy.
What I am trying to show here is this: let us say we are transmitting beam pulses and there is no reflection. Then we have very clean beams, and if we take the ratio of two overlapping beams, it is very clear and we can easily fit a polynomial.
But what happens when reflection is there and the beams are not so clean? In that case, it is very tough to get good accuracy, and we do not have any control over the radar's environment.
A radar can be deployed anywhere, and if there is a highly reflective surface nearby, it is tough to get good accuracy. So we started thinking about what techniques could get rid of all those problems.
Then it came to AI, the use of AI. AI is very popular these days, and many problems can be solved using AI. So here also we thought, since we have the data, the individual beam responses and the elevation angle, why not try AI?
We found that the Regression Learner app in MATLAB is very useful for this, with the help of this data. If the data is available, that is fine; if it is not available, we can again generate it with the Sensor Array Analyzer app.
We selected different regression methods to check whether the data is useful and what the results are. We used the Regression Learner app and tried different techniques, and finally we took the validation R-squared as the reference for selecting among the different trees and models. Here, you can see one set of data where we see how it has fit and what the other parameters are, like root mean square error, R-squared, prediction time, and training time.
So what did we do? We checked the different regression methods. If you see here, there is interactions linear and, again, robust linear, and we found which one is suitable. Similarly, for decision trees, we checked the coarse tree, medium tree, and fine tree, and we found which is the best.
Similarly, we used SVM, Gaussian process regression, and various others. That is the beauty of this app: all those algorithms are available, we can check them, and we can find out what is most suitable for our application.
With this exponential GPR, we can see that some models are very well suited and some have errors. Here, you can see that the Least Squares Regression Kernel model has not fitted well, but the Bilayered Neural Network has fitted very well.
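The same comparison can also be scripted outside the app. Below is a minimal sketch comparing two of the mentioned model families on held-out data; the table and variable names (trainTbl, testTbl, ElAngle) are hypothetical stand-ins for the beam-ratio features and elevation labels.

```matlab
% trainTbl, testTbl: hypothetical tables of beam-response features with an
% ElAngle response column, e.g. exported from the calibration data.
mdlTree = fitrtree(trainTbl, "ElAngle");                                  % decision tree family
mdlGpr  = fitrgp(trainTbl, "ElAngle", "KernelFunction", "exponential");   % exponential GPR

% Validation R-squared on held-out data, as used in the Regression Learner app.
rsq    = @(y, yhat) 1 - sum((y - yhat).^2) / sum((y - mean(y)).^2);
yTest  = testTbl.ElAngle;
r2Tree = rsq(yTest, predict(mdlTree, testTbl));
r2Gpr  = rsq(yTest, predict(mdlGpr,  testTbl));
```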
Taking all of this into account, we first thought, let us use just one model. Then we thought that if we go with a single model, it might not fit for different scenarios. So we thought about how we could club them together, and we combined all those models in a particular manner so that we can take advantage of all the regression methods.
Here we have taken all the suitable regression methods and, using the VotingRegressor from the Python scikit-learn library, clubbed them together with equal weighting for all models. There is also an option to use different weights. We think this data is coming out well, the predictions are matching, and it is overcoming the traditional limitations. We have checked this result as well.
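The same equal-weight voting idea can be sketched directly in MATLAB by averaging the predictions of the selected models; this mimics what a voting regressor does, and the model and table names below are the hypothetical ones from the previous sketch.

```matlab
% models: regression models already trained above (extend with the other
% selected SVM / neural network models as needed).
models  = {mdlTree, mdlGpr};
weights = ones(1, numel(models)) / numel(models);     % equal weighting, as in the webinar

preds = zeros(height(testTbl), numel(models));
for k = 1:numel(models)
    preds(:, k) = predict(models{k}, testTbl);
end
elEnsemble = preds * weights.';                       % weighted (here equal) average prediction
```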
Here is the prediction of this elevation regression ensemble model alongside the other regression models. We find that some models are very well suited and some just go out of range. So we thought, let us keep all those models together so that the combined model is more robust.
This is the final comparison with the ground truth, and this is our EREEM prediction. What we found is that it can be used, and it is a good answer to the limitations of the traditional techniques.
Finally, with all of this we found that, with the help of AI, a lot more can be done; here we have taken only one particular parameter. We also found that data can be generated with the help of MATLAB simulation if more real data is not available. Elevation estimation is difficult due to external factors, including radar variations, and this model overcomes those challenges.
Model testing and performance analysis over various data sets has been done, and we found that it is suitable and we can go for deployment. We used the radar, ML, and DL tools for data set generation and for analyzing radar data.
So we thought, let us go for deployment with this new model, try it and test it, and if it is successful in all kinds of scenarios, we can go for other techniques. We can find out how clustering and feature extraction can help further with AI.
Finally, I would like to thank Sumit Garg from MathWorks, who has offered very good technical support; we have learned a lot about how we can use these tools. I also thank my Senior Director Dheeraj Talwar and Hari Kumar, GM of Technology Planning, for their support and the opportunity.