Linearization of Upcoming High-Efficient RF Power Amplifiers, Part 3: Deriving PA Measurement Data from Existing Hardware for Behavioral Modeling - MATLAB & Simulink
    From the series: Linearization of Upcoming High-Efficient RF Power Amplifiers

    Florian Ramian, Rohde & Schwarz

    Combine state-of-the-art RF PA measurements with behavioral models and system prototypes to accelerate the design, optimization, and testing of linearization techniques before the entire system is available and productized.

    Published: 15 Oct 2024

    Thank you. So within the next 20 minutes or so, I'm going to show you the measurements that we did on this PA board that Salva just described. The target of the measurements was to figure out how well DPD works on the device, and also to extract data for behavioral modeling. So what are we going to look at? A quick word on the measurement setup. Then we'll have a quick look at the key metrics that we will evaluate to compare before and after DPD.

    A few notes on the data extraction itself, a comparative slide on measurement versus simulation, and then, I guess the most important thing, the measurement results around the DPD. I'll conclude with a few words on measurement uncertainty. So here's the measurement setup. It's a very traditional PA measurement setup. In the center, there is the DUT with its DC supply, plus two couplers with power sensors to get maximum-precision power measurements and, from those, the gain.

    For the DPD measurement, the key elements are on the left-hand side, the input side, where we have a vector signal generator, and on the right-hand side, where we have a vector signal analyzer. Both are really broadband, so you can cover whatever bandwidth is necessary. At the bottom of the slide, there's a photo of the setup with the Qorvo part in the center of the picture. You can see the couplers as well, and also the 10 W attenuator.

    So when doing these measurements, what are the key metrics? What do we want to look at? Power, certainly; it's a power amplifier, so we need to look at power. We need to look at efficiency; we've heard that before. But we also need to make sure the signal quality coming out of the device is good enough, and for that, we're going to look at EVM. We've already heard a little about the difference between a standard-compliant EVM and an RMS EVM, and I'll spend a few words on that difference.

    And certainly, we don't want to disturb the other users, so we need to look at adjacent channel leakage as well; that's the ACLR measurement. And since we're talking about non-linear devices, I'll also show the traditional AM/AM and AM/PM plots. Basically, I'm running all these key metrics on the same set of IQ data, and the goal of the results I'm going to show is to compare before and after DPD.
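
To make these metrics concrete, here is a minimal Python sketch of how a reference-based RMS EVM and the AM/AM characteristic can be computed from a pair of time-aligned IQ captures. The test signal, the cubic compression term, and all names are illustrative assumptions, not the actual measurement code.

```python
import numpy as np

def rms_evm_percent(ref, meas):
    """Reference-based RMS EVM: error power relative to reference power.
    Assumes ref and meas are time-aligned complex baseband vectors."""
    err = meas - ref
    return 100.0 * np.sqrt(np.mean(np.abs(err) ** 2) / np.mean(np.abs(ref) ** 2))

def am_am(ref, meas):
    """AM/AM characteristic: output magnitude versus input magnitude."""
    return np.abs(ref), np.abs(meas)

# Toy check: a mild cubic compression applied to a random baseband signal
rng = np.random.default_rng(0)
x = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)
y = x - 0.05 * x * np.abs(x) ** 2          # weak memoryless compression
print(round(rms_evm_percent(x, y), 2))
```

The AM/AM scatter of (input magnitude, output magnitude) pairs from the same capture is what the later plots show; spread in that cloud, rather than a single curve, indicates memory effects.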

    We hope to see a significant improvement in ACLR, of course. We hope to see a much more linear AM/AM curve, and also one with less spread, so we try to reduce the memory effects as well; the same goes for the phase. Before we go to the measurement itself, I also want to spend a few words on the data extraction process. Both things we want to do with the measurement data, the modeling as well as the DPD, are based on IQ data, that is, complex baseband data.

    And ideally, since we're talking about DPD and nonlinear effects, we try to minimize other effects from the system, for example, noise. Ideally, we don't want any noise contribution in the measured data, because it degrades the signal-to-noise ratio and makes our models worse. We also need to spend a few thoughts on the amount of bandwidth we need to invest in the measurement. On this screenshot, we can see the transmit channel. It has some bandwidth, maybe 100 megahertz.

    But obviously, the device under test also has some emissions outside this transmit channel, and we need to make sure we capture these as well, because they carry information about the non-linearity, and we need that in the measured data. Otherwise, we won't get a good model, and we won't get a good DPD.

    In the measurements I took, I used a method called IQ averaging to reduce the influence of noise. If you look at the horizontal lines, in blue the IQ-averaged trace and in yellow the raw trace, you can see a significant difference. And what you can hopefully also see is that when I reduce the noise, a lot more of what I call the shoulders, the non-horizontal lines outside the transmit channel, becomes visible.

    These are the out-of-band, non-linear emissions. A conclusion from that is: if I apply a noise-reduction method, I also need to increase my bandwidth, because I'm simply able to see more of the out-of-band emissions. The key point is basically to have a look at the spectrum and find the crossover point between the flat noise floor and the slopes coming from the non-linearities.
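
As a sketch of why IQ averaging helps: the snippet below coherently averages repeated captures of the same periodic waveform. Uncorrelated noise averages down by roughly the number of repetitions (about 18 dB for 64 averages), while the deterministic signal, including its distortion products, is preserved. The test tone and noise level are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N, reps = 2048, 64
signal = np.exp(1j * 2 * np.pi * 0.1 * np.arange(N))   # deterministic periodic waveform

# Each capture: the same waveform plus independent complex noise
captures = [signal + 0.5 * (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
            for _ in range(reps)]

avg = np.mean(captures, axis=0)     # coherent IQ average across captures

def noise_power(x):
    """Residual noise power relative to the known clean waveform."""
    return np.mean(np.abs(x - signal) ** 2)

# Noise suppression of the average versus a single raw capture, in dB
print(10 * np.log10(noise_power(captures[0]) / noise_power(avg)))
```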

    As I said, we're going to extract IQ data, complex baseband data. Depending on what we use it for, we can extract the original signal that was sent out by the generator together with the measured data, and use that pair for modeling. Or we can run a non-algorithmic DPD on the instrument and create a pre-distorted waveform, which is in a sense the ideal pre-distorted waveform. It's not algorithmic, I have to admit, so it's not usable in real life.

    But it gives you information about what the perfect pre-distorted signal would look like. I can then take the original reference signal and this perfect pre-distorted waveform and do a curve fitting, or parameter search, on that pair. So either of those two pairs can be used for further investigation.
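
One way to picture such a non-algorithmic, waveform-specific pre-distortion is a sample-wise iteration: nudge each input sample until the PA output matches the reference. The toy memoryless PA model and the step size below are assumptions for illustration; a real instrument iterates against the physical device instead.

```python
import numpy as np

def pa(x):
    """Toy memoryless PA model (assumption, for illustration only)."""
    return x - 0.2 * x * np.abs(x) ** 2

def direct_dpd(ref, iters=200, mu=0.5):
    """Sample-wise iterative pre-distortion: adjust each input sample
    until the PA output matches the reference. The result is specific
    to this one waveform, i.e. non-algorithmic."""
    u = ref.copy()
    for _ in range(iters):
        u = u + mu * (ref - pa(u))
    return u

rng = np.random.default_rng(5)
ref = 0.15 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
u = direct_dpd(ref)
err = np.max(np.abs(pa(u) - ref))
print(err)   # effectively zero: the ideal pre-distorted waveform for this signal
```

Because the result is only valid for this exact waveform, it is not usable as a real-time DPD, but it serves as the fitting target for a parametric model, as described above.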

    I promised a quick overview and comparison of measurement versus simulation. On the measurement side, we can state for sure that it's the most realistic thing you can do, and you get direct feedback: is the DPD going to work on my device? I think we also agree it requires a lot of resources: you need the instruments, you need the device under test, you need a lab, and you need people who can work with both the DUT and the instruments. And I would consider it relatively slow, because you have to do all the setup and everything.

    On the simulation side, I put fast in contrast, and I also said there are low resource requirements: you need a PC and some software. That's not the full truth, because you need the model as well, but I would say it's definitely less resource intensive. And I think simulation is ideal if, for example, you haven't settled on a DPD method yet; it's ideal for trying a lot of different DPD methods, different numbers of coefficients, and so on.

    And of course, with simulation you can test as early as possible in the design phase, always assuming you have a proper model. On the downside, we always have the uncertainty: is it really going to work? How good is my model? As always in life, it's about finding the right balance between when to use measurement and when to use simulation.

    Now, before we get to the measurement data, a quick overview of the process I used and the different steps I applied. On the left-hand side, we always start with a power servo to make sure we have a defined operating condition of the device under test, so the output power is constant. Then we measured the raw output of the device without any DPD applied. Then we applied the first round of DPD, a very basic polynomial DPD with no memory effects.

    As a next step, we added what we call the direct DPD, which is what I call the perfect, non-algorithmic DPD; I always have to say non-algorithmic, but it is the perfect DPD. And then finally, we tried different numbers of coefficients with memory polynomial and even with generalized memory polynomial models.
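
The memory polynomial and generalized memory polynomial fits mentioned here can be sketched as a linear least-squares problem over nonlinear basis functions. The basis construction below follows the usual MP/GMP form (odd orders, short memory); the synthetic PA, the orders, and all names are assumptions, not the actual settings used in the talk.

```python
import numpy as np

def gmp_basis(x, K=5, M=2, L=1):
    """Generalized memory polynomial basis. L = 0 gives the plain
    memory polynomial; L > 0 adds lagging-envelope cross terms."""
    N = len(x)
    def delay(v, d):
        return np.concatenate([np.zeros(d, dtype=complex), v[: N - d]])
    cols = []
    for m in range(M):
        for l in range(L + 1):
            for k in range(1, K + 1, 2):      # odd nonlinearity orders only
                cols.append(delay(x, m) * np.abs(delay(x, m + l)) ** (k - 1))
    return np.column_stack(cols)

def fit(x, y, **kw):
    """Least-squares coefficient search: y ≈ gmp_basis(x) @ c."""
    A = gmp_basis(x, **kw)
    c, *_ = np.linalg.lstsq(A, y, rcond=None)
    return c

rng = np.random.default_rng(2)
x = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(8)
y = x + 0.1 * x * np.abs(x) ** 2              # synthetic memoryless PA
c = fit(x, y, K=5, M=2, L=0)
resid = y - gmp_basis(x, K=5, M=2, L=0) @ c
print(10 * np.log10(np.mean(np.abs(resid) ** 2) / np.mean(np.abs(y) ** 2)))
```

Because the problem is linear in the coefficients, trying different numbers of coefficients, as described in the talk, is just a matter of changing K, M, and L and refitting.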

    So let's get to the measurements. The first round of measurements is on a single-carrier setup: a 3GPP downlink signal, 100 megahertz wide, with different test models used. In the next plots, we used signals with different PAPRs, because that's what makes the comparison interesting. Let's start with a fully standard-compliant signal. You can see that in the PAPR of 10.8 dB, so that's really a challenge.

    All these plots have EVM versus power on the left-hand side, in this case 3GPP-compliant EVM, and ACP, or ACLR, again versus input power, on the right-hand side. In blue, you can see the raw curves, and as we would expect, with increasing power, EVM degrades and ACLR degrades. In yellow, we can see the polynomial, memory-less DPD.

    We can see an improvement, so it works. However, it's not really where we want to get to. Just to give you an idea, ACLR-wise we want to get towards 55 to 60 dB. After the polynomial DPD, we used the direct DPD, the non-algorithmic approach, and we can see a significant improvement between the yellow and the red-orange line. On top of that, we added memory polynomial and generalized memory polynomial DPD.

    And we can see that for low powers, it's close to the non-algorithmic DPD, but there's still a little bit of a difference. What we can also see on the right-hand side is this huge increase, both for EVM and ACLR, and that basically tells you the DPD model can't handle the PA behavior anymore.

    That is simply because we've probably reached the maximum output power of the device. So we see the DPD works, but we're not where we want to be yet. I promised some efficiency numbers as well, so I put in two vertical lines here: on the left-hand side, around 25%; on the right-hand side, 42%. This is why we want to get as close as possible to the right-hand side in our measurements.

    So 10.8 dB is way more PAPR than the device can handle. What do we do? We try to get the PAPR down, so we apply crest-factor reduction, a little more than 2 dB here. And what we can see is that the ACLR significantly improves: for the lower power levels, we're close to 60 dB now, and it goes up to, well, 53 dB or so at nominal output power. We can also state that there is almost no difference between the memory polynomial and the generalized memory polynomial DPD, so there is no reason to use the more complex GMP here.

    The other thing worth noting is that the standard-compliant EVM goes up; we are at a little less than 3% now. That's the key trade-off behind crest-factor reduction: it buys you ACLR at the cost of EVM, because if you clip the standard-compliant signal, you introduce errors. That's what you see in this EVM limitation.
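
A minimal sketch of this trade-off, assuming the simplest crest-factor reduction, hard clipping of the magnitude to a target PAPR (real CFR implementations additionally filter to contain the clipping spectrum):

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio in dB."""
    return 10 * np.log10(np.max(np.abs(x) ** 2) / np.mean(np.abs(x) ** 2))

def clip_cfr(x, target_papr_db):
    """Hard-clipping CFR: limit the magnitude to hit a target PAPR, keep phase."""
    thresh = np.sqrt(np.mean(np.abs(x) ** 2) * 10 ** (target_papr_db / 10))
    mag = np.abs(x)
    return np.where(mag > thresh, thresh * x / mag, x)

rng = np.random.default_rng(3)
x = (rng.standard_normal(1 << 15) + 1j * rng.standard_normal(1 << 15)) / np.sqrt(2)
xc = clip_cfr(x, 8.0)

# EVM of the clipped signal against the original: the price paid for lower PAPR
evm = 100 * np.sqrt(np.mean(np.abs(xc - x) ** 2) / np.mean(np.abs(x) ** 2))
print(round(papr_db(x), 1), round(papr_db(xc), 1), round(evm, 2))
```

The clipped-minus-original difference is exactly the error a standard-compliant EVM measurement reports, while an RMS EVM referenced to the clipped waveform itself would not see it.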

    You can drive that a little further and clip some more, and you can get even better ACLR at the cost of even higher EVM. Worth mentioning: there is still no significant difference between the memory polynomial and the generalized memory polynomial. Let's take it a step further and look at the dual-channel setup. We now have two 100-megahertz-wide signals, spaced 200 megahertz apart for these measurements.

    I think the most interesting point here is that we finally do see a significant difference between the green and purple lines. The statement I want to make is that you have to choose your DPD method wisely: it doesn't only have to fit the device under test, it also has to fit the signal type you're applying to the DUT.

    To figure all these things out for a completely unknown device, you can run a lot of measurements, but at the same time, you can at least try to figure them out in simulation. Talking a little bit about measurement uncertainty, especially when you're changing your test signal, for example by applying crest-factor reduction: it makes a significant difference which EVM you're using, because with the RMS EVM, you compare against a given waveform, usually the clipped signal, and that typically gives you very good numbers.

    The 3GPP-compliant measurement, as we've seen, gives you a much higher number, because you have changed the signal compared to the standard-compliant signal. However, if you change your reference signal in the RMS EVM measurement accordingly, it gives you more or less exactly the same number. So basically, what I'm saying is, as long as you're comparing apples to apples, it doesn't really matter which method you use. You get a lot more flexibility with the RMS EVM, because you can define what you want to measure against.

    On the other hand, there are applications where you need to do standard-compliant measurements. One example where it makes a lot of sense to use RMS EVM is a low-SNR scenario. This is the good case: both measurements give you the same number. And this is the bad case: in the constellation diagram on the left-hand side, the standard-compliant measurement, we have lots of noise.

    On some symbols, we get wrong decisions, and that's what corrupts my reference signal for the EVM measurement: it's not valid, because there are wrong decisions in it. You can see that the number there is much lower than the number on the right-hand side, where I don't make any decisions at all. It's just a comparison between two signals, the output against the input, and we don't care what modulation or standard it is; we just compare output against input.
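
The effect can be sketched numerically: at low SNR, a decision-directed reference (nearest constellation point) partially follows the noise, so the reported EVM comes out optimistically low compared with an EVM against the known transmitted reference. QPSK, the noise level, and all names are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
const = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)   # QPSK
tx = const[rng.integers(0, 4, 20000)]
rx = tx + 0.6 * (rng.standard_normal(20000) + 1j * rng.standard_normal(20000)) / np.sqrt(2)

def evm(ref, meas):
    """RMS EVM in percent against a given reference signal."""
    return 100 * np.sqrt(np.mean(np.abs(meas - ref) ** 2) / np.mean(np.abs(ref) ** 2))

# Decision-directed reference: pick the nearest constellation point per symbol.
# At low SNR, some decisions are wrong, so the reference follows the errors
# and the reported EVM is lower than the true error.
decided = const[np.argmin(np.abs(rx[:, None] - const[None, :]), axis=1)]
print(round(evm(tx, rx), 1), round(evm(decided, rx), 1))
```

With a known reference waveform, no symbol decisions are needed at all, which is why the reference-based RMS EVM stays correct even in the bad case.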

    And that makes sure you always get a correct number. A low-SNR scenario is one of the scenarios where it really makes sense to use the RMS EVM measurement. As a summary for these measurements: measurements are resource intensive, but I'd say they are the ultimate verification of the design. And I think it's important to always set up the measurement as close to real-world conditions as possible.

    Of course, as always, know what you're doing. And nowadays, when we're reaching the limitations of the instruments in terms of bandwidth, signal-to-noise ratio, or whatever, a lot of measurement instruments have correction mechanisms that you can apply: be it DPD for the instrument itself (not the DUT), frequency response correction, or noise correction. These are all effects that you don't want in your measurements, and most instruments have methods to correct for them. All right, then let's hand over to Wissam.