
# LQI(): integrator response not as expected


I have a simple 2nd-order spring/mass/damper system in a closed-loop configuration that works as expected under LQR control; see the first image.

I'm trying to use lqi() to add integral control -- and also to implement my own discrete method -- but the integrator response, integral(e), is not as expected.

The 2nd image shows a ref input; I'm monitoring the integral state to see if it's working as expected, before augmenting the state-space with a disturbance input.

The 3rd image includes an augmented disturbance input + ref input, via [B F]. However, the integrator doesn't seem to accumulate when a disturbance is present at t = ~0.37. The integrator acts as though the r-y error didn't exist.

See the attached .m file.

Mathworks' lqi() page -- the documentation is very sparse -- says the state vector is x = [pos; vel; xi], with xi = integral(error).

So, xi_dot = -x1+r --> xi = integral(r-y). This seems correct.

Also, xi seems correctly added to x2_dot (ie at the velocity-derivative level) via a non-zero A(2,3) term, so the integral action should ultimately impact y as desired via x1_dot_dot = ... + A(2,3)*xi.
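For reference, here is my understanding of the augmentation lqi() performs, sketched from its doc page (my reading, not quoted from Mathworks):

```latex
% Plant: \dot{x} = A x + B u, \; y = C x; integrator state x_i with \dot{x}_i = r - y
\begin{aligned}
\frac{d}{dt}\begin{bmatrix} x \\ x_i \end{bmatrix}
&= \begin{bmatrix} A & 0 \\ -C & 0 \end{bmatrix}
   \begin{bmatrix} x \\ x_i \end{bmatrix}
 + \begin{bmatrix} B \\ 0 \end{bmatrix} u
 + \begin{bmatrix} 0 \\ 1 \end{bmatrix} r, \\
u &= -K \begin{bmatrix} x \\ x_i \end{bmatrix}
   = -K_1\,pos - K_2\,vel - K_3\,x_i .
\end{aligned}
```

Note that in the open-loop augmented A the (2,3) entry is zero; the coupling from xi into vel_dot only appears in the closed-loop matrix A_aug - B_aug*K, as -B(2)*K_3.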

3 questions:

(1) y isn't tracking the ref when a disturbance is present. Am I misunderstanding what this lqi() structure should do?

(2) x3 seems very small when the ref steps to 1. The instantaneous r-y = 1, and errors accumulate, so x3 should be > 1. Instead, it's 2.5e-3. What state-space structure is lqi() assuming? Is the integrator set up correctly?

(3) Why is the integral gain computed by lqi() so small, at -10? To get a step with roughly the same rise time as LQR (no integrator), I needed to multiply what lqi() comes up with by ~10,000. Otherwise, using the k_lqr(3) that lqi() came up with, the rise time was 130 seconds instead of 10 ms.

The latter is also confusing -- the way the integral term is included in the state-space, it shouldn't counteract the non-integral control response; it's only an addition. When starting from a zero state, wouldn't the integrator only be additive, getting to the ref faster? After that, the accumulator (int(e)) needs to wind down, so some lag seems expected.

Either way, this makes me think that I don't understand what the lqi() gains actually are. For example, maybe the integrator is not added in the standard way, and the ref signal must pass through the integrator before reaching the feedback K gains, so the plant response to a step will always be inherently slower than the non-integral version with a ref step.

### Accepted Answer

Paul
on 2 Mar 2023

Before getting to the LQI part of this, I'm curious about the approach for making an LQR regulator into a tracking system.

Here is the original code, with some changes I made to illustrate some points.

```matlab
%% Reference and Disturbance vectors
tf_s = 0.4;
ts_s = 1e-6;
t = (0 : ts_s : tf_s)'; % changed tout to t
u = ones(1,numel(t));
d = zeros(1,numel(t));
d(t > 0.25 * tf_s) = -1;
refStartTime = 0.1;
distPosTime = 0.25;
distVelTime = 0.5;
ref = zeros(numel(t), 1);
ref(t > refStartTime) = 1;
ref(t > 2*refStartTime) = 0;
ref(t > 3*refStartTime) = -1;
dPos = zeros(numel(t), 1);
% Force ramp for velocity disturbance
dVel = -10 * (t-distVelTime) .* (t > distVelTime);
% Both pos and vel dist (unused)
dPosAndVel = dPos + dVel;

%% (1) Make 2nd-order system
zeta = 0.3;
wn_rPs = 250;
ampl = 1;
m = 1e-5;
b = zeta * 2 * wn_rPs * m;
k = wn_rPs^2 * m;
```

Add the velocity state to the outputs.

```matlab
A = [0 1; -k/m -b/m];
B = [0; 1/m];
C = eye(2); % output both states
D = [0];
```

Form the second order model. Don't normalize by the dcgain to illustrate a point later.

```matlab
massSys = ss(A, B, C, D);
% massSys = massSys/dcgain(massSys); % got rid of dc gain adjustment
massSys.StateName = {'p', 'v'};
massSys.InputName = {'u'}; % changed to u
massSys.OutputName = {'pos','vel'};

%figure
%step(massSys);
%hold on;
```

Define the LQR weights. How were the values in Q and R selected?

```matlab
%% (2) Make CL LQR
Q = [1000 0 ; 0 0.001];
R = [ 0.01 ];
[K_lqr, S1, P1] = lqr(massSys, Q, R);

% Stability check
A_CL_lqr = (massSys.A - massSys.B * K_lqr);
[T, Dd] = eig(A_CL_lqr);
```

Here is the original approach to forming the closed loop system. This approach assumes a feedback control law of the form:

u = -Kx + r, where r is the reference command.

```matlab
sysClLqr = ss(A_CL_lqr, massSys.B, massSys.C, massSys.D);
```

However, for tracking problems it's probably better to form an error signal, in which case the control is

u = K1*(r - pos) - K2*vel = K*([1;0]*r - x) = K*[1;0]*r - K*x, where K = [K1 K2].

With this control we have

```matlab
sysClLqr1 = ss(A_CL_lqr, massSys.B*K_lqr*[1;0], massSys.C, massSys.D);
```

Compare the transfer functions of both systems from r to pos

```matlab
zpk(sysClLqr(1,1))
zpk(sysClLqr1(1,1))
```

We see that the poles are the same, as must be the case, but the dc gain is different. One pole is at s = -1000 and the other is way out in the LHP, suggesting there is a very high bandwidth loop in the system, which might not be desirable. Are these poles consistent with design objectives? These pole locations must be related to the selection of Q and R, however that selection was made.

Compare dc gains

```matlab
dcgain(sysClLqr(1,1))
dcgain(sysClLqr1(1,1))
```

The original approach has a dc gain on the order of 1e-3 and the latter is nearly unity, as I think would be desired.
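As a cross-check outside MATLAB, the two dc gains can be reproduced numerically; this sketch uses SciPy's CARE solver as a stand-in for lqr(), so treat the exact digits as approximate:

```python
# Cross-check of the two closed-loop dc gains. Assumption: SciPy's
# solve_continuous_are reproduces the same LQR gain as MATLAB's lqr().
import numpy as np
from scipy.linalg import solve_continuous_are

m, zeta, wn = 1e-5, 0.3, 250.0
b, k = zeta * 2 * wn * m, wn**2 * m
A = np.array([[0.0, 1.0], [-k / m, -b / m]])
B = np.array([[0.0], [1.0 / m]])
Q = np.diag([1000.0, 0.001])
R = np.array([[0.01]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)   # LQR gain, K = R^-1 B' P
Acl = A - B @ K                   # closed-loop A (same for both formulations)
C = np.array([[1.0, 0.0]])        # position output

# u = -K*x + r: input matrix is B
dc_orig = (C @ np.linalg.solve(-Acl, B)).item()
# u = K*[1;0]*r - K*x: input matrix is B*K1
dc_err = (C @ np.linalg.solve(-Acl, B * K[0, 0])).item()

print(dc_orig)  # on the order of 1e-3
print(dc_err)   # close to 1
```

The two systems share the same poles; only the input scaling (B vs B*K1) changes, which is exactly what moves the dc gain from ~1e-3 to ~1.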

Another way to form the (new) closed-loop system would be

```matlab
K = ss([K_lqr(1) -K_lqr(2)],'InputName',{'e' 'vel'},'OutputName',{'u'});
s = sumblk('e = r - pos');
sysClLqr2 = connect(massSys,K,s,'r','pos');
```

Verify it's the same as above.

```matlab
zpk(sysClLqr2)
% sysClLqrDc = ss(A_CL_lqr, massSys.B/dcgain(sysClLqr), massSys.C, massSys.D, ...
%     'StateName', {'p', 'v'}, ...
%     'InputName', {'r(ref)'}, ...
%     'OutputName', {'pos'});
```

Now plot the response to the reference command

```matlab
%% (3) Plot to verify
x0 = [0;0];
[y, tout, x] = lsim(sysClLqr2, ref, t, x0);

legStr = {'LQR', 'Pos Ref'};
subplot(3,1,1)
plot(tout, y(:,1), 'LineWidth', 1.5);
hold on;
plot(tout, ref, '--', 'LineWidth', 1.5)
grid on;
legend(legStr);
title('y');

subplot(3,1,2)
plot(tout, x(:,1), 'LineWidth', 1.5);
hold on;
title('pos');
ylabel('x1');
grid on;

subplot(3,1,3)
plot(tout, x(:,2), 'LineWidth', 1.5);
title('vel');
ylabel('x2');
grid on;

sgtitle('CL LQR: Ref response', 'FontSize', 14)
xlabel('Time [s]', 'FontSize', 14)
```

The response looks pretty good.

I guess the next step might be to look at the response of this system to the disturbance.

Now we could try LQI. But what is the motivation for trying LQI with this system? I'm not saying there isn't one, but I'm curious. Also probably need to get a better understanding how the Q and R matrices are selected for either LQR or LQI relative to actual design objectives.

##### 30 Comments

John
on 2 Mar 2023

Edited: John
on 3 Mar 2023

@Paul, thanks for the detailed insights (as always). I'll start with the shorter replies:

(1) "suggesting there is a very high bandwidth loop in the system, which might not be desirable. Are these poles consistent with design objectives?" and "How were the values in Q and R selected?"

Yes; the plant is fast, and a fast system is desired, with an emphasis on minimizing position error (and assuming control effort is low penalty). Hence selection of Q(1,1) >> 1, and Q(1,1) >> Q(2,2), and R<<1. Let me know if something seems off...

(2) "However, for tracking problems it's probably better to form an error signal" and "the latter is neary unity"

"u = K1*(r - pos) - K2*vel = K1*([1;0]*r - x) = K1*[1;0]*r - K*x."

Interesting and good point; thanks. Ah -- that makes sense on the gain; r is also multiplied by K1, so the system gain will be higher when ref changes. I artificially did that with the dcgain() correction, trying to recreate the correct way to set up the sys, without knowing why it's needed.

Related to this, LQR feedback (of course) assumes state feedback, but my C only includes x1. Why does sys = ss(...) work with a C that isn't eye(2), ie how is Matlab able to run lsim() successfully with C = [1 0] instead of C = eye(2)?

In the physical system, I'll use an observer to create x2 (vel) approximation since the plant doesn't natively output vel; but in lsim(), y is explicitly only x1.

(3) "I guess the next step might be too look at the response of this system to the disturbance."

I ran the following code. The plot below shows the LQR response not tracking the dist, although I would have thought it would, because the dist-induced error should be added to u due to (r-y) >> 0.

Maybe the scaling of the Dist input causes the (r-y) contribution to be ~0 relative to the dist magnitude?

But if so, it's curious that CL and OL both end up with the same response to any ref command, ie no apparent extra reduction in error for the CL response to a dist (even when zoomed in).

```matlab
F = [0; 1];
Dist = [ref distForce];

% Still need to correctly scale Dist input, otherwise Dist has ~0 impact on
% OL or CL system.

% OL system as reference
sysOlWithDist = ss(massSys.A, [massSys.B F], massSys.C, [0 0])
gains = dcgain(sysOlWithDist); % (1, 1.5831e-05)
sysOlWithDistDc = ss(massSys.A, [massSys.B F/gains(2)], massSys.C, [0 0])

% CL system we care about. Use B*K*[1;0], no scaling needed.
% Still need to scale F input.
sysClLqrWithDist = ss(A_CL_lqr, [massSys.B*K_lqr*[1;0] F], massSys.C, massSys.D)
gains = dcgain(sysClLqrWithDist); % (0.99684, 5.0063e-08). 1/gains(2) is very large.
sysClLqrWithDistDc = ss(A_CL_lqr, [massSys.B*K_lqr*[1;0] F/gains(2)], massSys.C, massSys.D)

y = {}; x = {};
[y{1}, t1, x{1}] = lsim(sysOlWithDistDc, Dist, t, x0);
[y{2}, t2, x{2}] = lsim(sysClLqrWithDistDc, Dist, t, x0);
```

Just to check, is it the case that your correctly-added K1*(r - pos) is equivalent to the manual gain I added to B and Disturbance (without really understanding why), or am I thinking about this wrong? Would you expect the responses between your update and my original version (ie manual B scaling) to differ? They both include the error of y relative to ref, so both should respond to Dist.

(4) "But what is the motivation for trying LQI with this system? I'm not saying there isn't one, but I'm curious."

Certainly happy to hear your opinions / advice on this. The motivation for LQI is that the plant will (a) change over time, so gains will not be as correct as they are in ideal sims, (b) vary from unit to unit if multiple physical systems are built (ie the same non-ideal gains vs sims as in (a)), and (c) benefit from reduced ramp-disturbance error with a Type 1 system.

An adaptive controller could also be used to vary the gains, but any integral action (eg LQI) seemed to have some built-in robustness of the time response to gain variations.

"Also probably need to get a better understanding how the Q and R matrices are selected for either LQR or LQI relative to actual design objectives."

This is dominantly a position regulator (higher BW) with random disturbance inputs, with a less critical (lower BW) function of reference tracking. At various operating points of the system, control effort and vel error will have varying priorities, hence using LQR/I.

Would you propose a more deliberate way to find Q and R based on design goals, instead of just running lqi() and checking the resulting TF or sim response against the design objective? I was under the impression that this is generally iterative. If a more rigorous response were needed (eg meeting a specific response spec), MPC could be used, but I think that's more complex than required for this system.

Paul
on 3 Mar 2023

Edited: Paul
on 4 Mar 2023

Let's build the system first.

```matlab
%% (1) Make 2nd-order system
zeta = 0.3;
wn_rPs = 250;
ampl = 1;
m = 1e-5;
b = zeta * 2 * wn_rPs * m;
k = wn_rPs^2 * m;
A = [0 1; -k/m -b/m];
```

Include the force disturbance at the input to the plant.

```matlab
B = [0; 1/m]; F = [0; 1/m]; % disturbance input, make sure to divide by m
C = eye(2); % output both states for use in connect
D = [0];

massSys = ss(A, [B F], C, 0);
% massSys = massSys/dcgain(massSys); % got rid of dc gain adjustment
massSys.StateName = {'p', 'v'};
massSys.InputName = {'u', 'd'}; % changed to u
massSys.OutputName = {'pos','vel'};
```

The plant has an open-loop time constant of roughly 0.0055 seconds.

```matlab
figure;
step(massSys('pos','u'));
yline(-massSys.C(1,:)/massSys.A*massSys.B(:,1)*0.63)
xline(0.0055)
```

Design the LQR system with the specified Q and R.

```matlab
Q = [1000 0 ; 0 0.001];
R = [ 0.01 ];
[K_lqr, S1, P1] = lqr(massSys({'pos' 'vel'},{'u'}), Q, R);
```

Form the closed-loop system, including an analysis point at the input to the plant.

```matlab
K = ss([K_lqr(1) -K_lqr(2)],'InputName',{'e' 'vel'},'OutputName',{'u'});
s = sumblk('e = r - pos');
sysClLqr2 = connect(massSys,K,s,{'r' 'd'},'pos','u'); % add analysis point
```

1) Bandwidth - As we saw above, the closed-loop system has a pole at s = -1000 and another at s = -36100. To realize this system in a computer will require a very high sampling frequency. Also, see the comment below on using an estimator.

The open-loop transfer function at the control input is

```matlab
figure
margin(getLoopTransfer(sysClLqr2,'u',-1));
```

Not surprisingly, the open-loop crossover is very high, but the stability margins are excellent. The LQR theory guarantees at least 6 dB of decreasing gain margin, infinite increasing gain margin, and (EDIT: had incorrect value of guaranteed phase margin) 60 deg of phase margin. However, as is also the case with LQR, it can yield a very high bandwidth design without careful selection of R (relative to Q). We'll show this below. Still, these LQR robustness properties make the LQR approach very appealing if we can find a Q and R that meet our design requirements (time constant, overshoot, disturbance rejection, etc.)

2) lsim works with C = [1 0] because you're plotting the system states computed by lsim, not the system output (if I correctly recall the original code posted with the Question).

As far as the observer goes, keep in mind that the observer poles will have to be faster than the state feedback poles, certainly faster than the dominant pole at s = -1000. Again, this will drive your implementation sampling frequency, sensor bandwidth, etc.

3) Disturbance rejection: I'm looking at disturbance rejection due to constant inputs as follows:

```matlab
[y,t] = step(sysClLqr2);
figure
subplot(211),plot(t,squeeze(y(:,1,1))); title('r-step')
subplot(212),plot(t,squeeze(y(:,1,2))); title('d-step')
```

By superposition, we see that the one unit of reference command has 1000 times the effect of 1 unit of disturbance, i.e., applying both steps simultaneously would yield the sum of these curves, which is essentially the top curve. The disturbance would have to be a few orders of magnitude higher before it will have an effect. Of course, d and r have different units, so keep that in mind in this type of assessment.
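The superposition argument can also be checked numerically at the dc gains; again this sketch uses SciPy's CARE solver as a stand-in for lqr(), so it's a rough cross-check rather than the thread's exact numbers:

```python
# Rough check that at this Q/R a unit disturbance step has a far smaller
# steady-state effect on pos than a unit reference step. Assumption:
# SciPy's solve_continuous_are reproduces MATLAB's lqr() gain.
import numpy as np
from scipy.linalg import solve_continuous_are

m, zeta, wn = 1e-5, 0.3, 250.0
b, k = zeta * 2 * wn * m, wn**2 * m
A = np.array([[0.0, 1.0], [-k / m, -b / m]])
B = np.array([[0.0], [1.0 / m]])   # control input
F = np.array([[0.0], [1.0 / m]])   # force disturbance enters like u
Q = np.diag([1000.0, 0.001])
R = np.array([[0.01]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
Acl = A - B @ K                    # error-feedback closed loop
C = np.array([[1.0, 0.0]])

dc_r = (C @ np.linalg.solve(-Acl, B * K[0, 0])).item()  # r -> pos
dc_d = (C @ np.linalg.solve(-Acl, F)).item()            # d -> pos
print(dc_r, dc_d)  # dc_r near 1; dc_d orders of magnitude smaller
```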

4) "This is dominantly a position regulator (higher BW) with random disturbance inputs, with less critical (lowerer BW) function of reference tracking." I'm not clear on this statement. It seems the goal is to achieve high bandwidth reference tracking and disturbance rejection.

As far as an LQ design method goes ... here's one for systematic selection of Q based on the "symmetric root locus" (search that term if not familiar).

You mentioned that you're ok with a cheap control solution. I'll assume you want the closed loop time constant to be about 0.001 seconds, i.e., reducing the time constant of the plant by about a factor of 5. One way to approach this systematically is as follows.

Define a row vector H

```matlab
H = [1000 1];
```

The transfer function formed with H is (we're going to use this later)

```matlab
G = ss(massSys.a,massSys.b(:,1),H,0);
zpk(G)
```

```
ans =

       1e05 (s+1000)
  ----------------------
  (s^2 + 150s + 6.25e04)

Continuous-time zero/pole/gain model.
```

Note that this transfer function has a zero at s = -1000, which is where we'd like to place the dominant pole.

Form Q from H, and define R

```matlab
Q = H.'*H, R = 100;
```

```
Q = 2×2

     1000000        1000
        1000           1
```

This Q (I think) is approximately the same as what we started with in terms of Q(1,1)/Q(2,2), and we have small off diagonal terms.

Design the system.

```matlab
[K_lqr, S1, P1] = lqr(massSys({'pos' 'vel'},{'u'}), Q, R);
K = ss([K_lqr(1) -K_lqr(2)],'InputName',{'e' 'vel'},'OutputName',{'u'});
s = sumblk('e = r - pos');
sysClLqr2 = connect(massSys,K,s,{'r' 'd'},'pos','u'); % add analysis point
```

Closed loop transfer function

```matlab
zpk(sysClLqr2('pos','r'))
```

```
ans =

  From input "r" to output "pos":
      9.9377e+06
  -----------------
  (s+1006) (s+9944)

Continuous-time zero/pole/gain model.
```

Note we have a closed loop pole very near to s = -1000 and another at s = -9944. So we've brought the open-loop bandwidth in a bit from 35000 to 10000. (EDIT: removed comment about phase margin)

```matlab
figure
margin(getLoopTransfer(sysClLqr2,'u',-1));
```

The position step response is nearly identical to that above, but the steady state response to the disturbance is increased by a factor of 10.

```matlab
[y,t] = step(sysClLqr2);
figure
subplot(211),plot(t,squeeze(y(:,1,1))); title('r-step')
subplot(212),plot(t,squeeze(y(:,1,2))); title('d-step')
```

We could increase the bandwidth to improve the disturbance rejection by decreasing R (cheaper control), which would also drive the "low frequency" pole ever closer to s = -1000. Or increase R to reduce the bandwidth and pay the price of not-as-good disturbance rejection and getting further away from our desired time constant.

For this design, or any design with Q formed from a row matrix H as above, we can use rlocus to plot the closed loop poles as the control cost R decreases. Form the "system reflection" of G and plot the root locus of its product with G

```matlab
Gr = ss(-G.a,-G.b,G.c,0);
figure
rlocus(G*Gr)
```

On this plot, the open-loop poles are the poles of the plant and their reflections about the jw axis, and the open-loop zeros are the zeros of G, which we determine through selection of H, and their reflections about the jw axis. The plot shows the migration of four closed-loop poles. The two that are stable are our closed-loop poles, because we know that the closed-loop system must be stable. For R large, the LHP root locus starts at the open-loop poles. As R decreases, the closed-loop poles in the LHP migrate to the real axis, with one headed to s = -1000 and the other heading left towards infinity. For one value of R, the closed-loop system will have a double pole at s = -1200 or so.

This plot shows why LQR with Q formed from a row vector H results in one real pole and another migrating toward infinity as R -> 0. For this second-order system, I don't think there's any avoiding this behavior, because the selection of H can only influence where the open-loop zero is located on the real axis. But with appropriate selection of H and R you might be able to get satisfactory 2nd-order closed-loop poles.

Other state feedback design techniques, such as garden-variety pole placement, may provide more flexibility in placing the poles in desired locations, but they won't come with the robustness guarantees provided by LQR. However, keep in mind those robustness guarantees don't apply if you introduce an estimator into the loop, but those large margins are usually a good starting point before introducing sensors, actuators, estimator, etc. into the loop.

This same approach can be used with LQI. In this case, H has three parameters and G has two zeros, so there might be more flexibility in shaping this root locus plot to achieve desired pole locations and disturbance rejection.

John
on 4 Mar 2023

Edited: John
on 4 Mar 2023

@Paul thanks - that's very helpful. I feel like I'm learning many Matlab command usages that I never thought existed... I apologize in advance for the length of my reply; your post had lots of great info to reply to.

Overall, aside from the last portions of your post, which seem insightful for deriving the max response analytically (I still need to study your reply more), the main thing confusing me is: why does LQR not seem to have any disturbance rejection (see (7)), at least as it was set up with ss() and run by lsim()?

1) "Form the closed-loop system, include an analysis point at the input to the plant"

Neat; I had no idea analysis points could be added.

2) "To realize this system in a computer will require a very sampling frequency."

Yes, agreed. The poles are in radians/s, so clock speed will be lower, and I'm expecting high rates.

3) "sysClLqr2 = connect(massSys,K,s,{'r' 'd'},'pos','u'); % add analysis point"

Ah, I see; you're using the analysis point both to create a CL system and to be able to make an OL TF from 'u', to analyze the dynamics of the control input relative to the plant (ie control-loop stability).

a) "margin(getLoopTransfer(sysClLqr2,'u',-1));"

Why is the -1 sign needed? If this is using sysClLqr2, isn't sysClLqr2 already designed with the sign assumed by connect()? connect() specified 'e = r - pos', a '-1' sign already, no?

Would getLoopTransfer(sysClLqr2,'u') be the same?

b) Just for my own knowledge, is there an advantage to using getLoopTransfer() vs making the u-loop TF from a standard block-diagram method, eg massSys/(1 + K*massSys)?

4) "The LQR theory guarantees at least 6 dB of gain margin and 90 deg of phase margin... (the Matlab computing of 88.5 deg must be not quite right)"

Interesting. I wonder if sampling rates matter in how bode is constructed, with fast poles.

5) "lsim works with C = [1 0] because you're plotting the system states computed by lsim, not the system output (if I correctly recall the original code posted with the Question."

In the original post I'd plotted y in addition to x, but I'll try to refine my question to be more crisp:

Since x_dot = A*x + B*(-K*x) in CL, all states x need to be available for FB: here, K (K_lqr) has 2 gains, one for x1 (Pos) and one for x2 (Vel).

To make all measurements available for feedback control (in this case, measurements match internal states), it seems C would need to be eye(2) to provide Pos and Vel for use with K. Ie, the SS would need a measurement of both states, or else K*x can't exist in an actual implementation. So how can lsim() correctly simulate a ss() system built using C = [1 0] and not C = eye(2)? How can lsim() have the correct measurements needed for feedback?

If the point is that lsim() uses the states (not measurements) for feedback, eg the states are a physical evolution once stimulated, then how can lsim() take into account a closed-loop system that includes manually programmed feedback gains K that are supposed to use measurements? I thought lsim() implementations would use y (vs states) the same way a real system would.

6) "applying both steps simultaneously would yield the sum of these curves, which is essentially the top curve. ... Of course, d and r have different units, so keep that in mind in this type of assessment."

Ah -- I see. Thanks. True; force maps through mass/etc.

7) "We could increase the bandwidth to improve the disturbance rejection by decreasing R "

Interesting. I'm unable to change the disturbance rejection at all. I didn't notice any dist-rej change when changing Q and R substantially. I made Q(1,1) very high, and Q(2,2) and R very low (1e-10). There is no observable change in dist rejection, only sharper pos-following (as expected). I thought this LQR formulation would produce an error signal, and thus a shift in response to a disturbance.

8) "Q = H.'*H ... small off-diagonal terms"

This seems a unique approach to constructing Q that I haven't seen before, and I'm curious to learn more.

What are the advantages of including off-diag coupling? I'm asking independent of the separate question of whether making G using H is best -- as opposed to first making Q with no coupling terms and making H from the diag of Q.

(9) "G = ss(massSys.a,massSys.b(:,1),H,0);"

"Note that this transfer function has a zero at s = -1000, which is where we'd like to place the dominant pole."

Do you mean the dominant CL pole should be placed there, ie in the high-gain scenario when poles move to that zero? How come a dominant pole should be at -1000? Isn't the -1000 a function of the H you put in?

By forming G, it seems you're saying that the Q diag terms in G.C somehow map to the CL behavior under LQR, given H --> Q --> K? The feedback gains K (Q,R --> A.R.E. --> K) would multiply measurements of the states, but it's unclear to me how G represents that without including the A.R.E. in some way.

" Form the "system reflection" of G ... Gr = ss(-G.a,-G.b,G.c,0) ... rlocus(G*Gr)"

"we can use rlocus to plot the closed loop poles as the control cost R decreases"

This is interesting. G is an OL system, and rlocus() closes the loop with FB gain K, but how does that gain map to R? Again, I'm unclear because K is found using the ARE (as I know you know :) ), and I thought there wasn't a straightforward mapping from Q and R.

Why is the reflection needed? Isn't the same behavior observed from LHP? As before, this question probably comes from me just not understanding the significance of G...

"For R large, the LHP root locus starts at the open loop poles"

How is this known? Wasn't G constructed without knowledge of R?

Also, I think you're saying that rlocus(G*Gr) assumes an arbitrary gain, and in some way R will map to a gain (given a fixed Q) as given by the ARE. So the specific mapping is less important than understanding the trend of increasing or decreasing R's impact.

"and the open loop zeros are the zeros of G, which we determine through selection of H"

This is a subtle and interesting point.

"As R decreases, the closed loop poles in the LHP migrate to the real axis with one headed to s = -1000 and the other heading to the left towards infinity. ... This plot shows why LQR with Q formed from row vector H results in a real pole and another migrating toward infinity as R -> 0.""

"I don't think there's any avoiding this behavior [<-- my note: "this" = referencing settling on a zero, i'm assuming] because the selection of H can only influence where the open loop zero is located on the imaginary axis."

I think you're saying that the dominant (slowest) CL system pole is set by Q (via your illustration of G, though I don't yet understand G). So, at high CL gain the system will settle at those zeros (one finite, one infinite). And beyond the gain (R) at which the poles move onto the real axis, reducing R can't adjust any properties of the system response beyond how fast it settles, so we can't choose an arbitrary 2nd-order response. Is this accurate?

I would have thought changing Q (ie H) might shift the zeros, and thus the gain point of R where the poles move onto the real axis. But I think your point is that overall, Q and R may still be coupled in a way such that together they may not allow an arbitrary response, or that it may be unclear when they do or don't.

10) "This same approach can be used with LQI. In this case, H has three parameters and G has two zeros, so there might be more flexibility in shaping this root locus plot to achieve desired pole locations and disturbance rejection."

Thanks, I think I'm starting to understand. Once I understand more about G's significance (and the physical reason Q (H) is needed for it), this will probably become clearer. But at a high level, this seems useful for understanding how control-effort limitations relate to the dynamic response (and how different structures impact what's achievable)...

Paul
on 4 Mar 2023

Edited: Paul
on 4 Mar 2023

3a. The -1 is needed because that's the way the open loop transfer function is defined for Nyquist stability analysis. Suppose we have a system with plant model Y(s)/U(s) = 1/(s+1) and unity, negative feedback. I think you'll agree that the open loop transfer function for Nyquist plot, or Bode plot is L(s) = 1/(s+1). In code

```matlab
P = tf(1,[1 1],'InputName','u','OutputName','y');
S = sumblk('u = r - y');
C = connect(P,S,'r','y','u');
```

Get the open loop transfer function

```matlab
tf(getLoopTransfer(C,'u',-1))
```

```
ans =

  From input "u" to output "u":
      1
    -----
    s + 1

Continuous-time transfer function.
```

Without the -1 the answer is incorrect

```matlab
tf(getLoopTransfer(C,'u'))
```

```
ans =

  From input "u" to output "u":
     -1
    -----
    s + 1

Continuous-time transfer function.
```

If we break the loop at u and go all the way around we get a tf of P*(-1), where the -1 comes from the negative feedback on the sum junction that forms the error signal. getLoopTransfer includes that -1 as it should, so we have to multiply by -1 to get the correct L(s).

3b) massSys/(1+K*massSys) is not the open loop transfer function (and that's not quite right, but I think I know what you meant). That's (sort of) the closed loop TF for Y(s)/D(s). If you're careful you could form the OL tf independently, but why do that? Just build one system that can do both closed-loop and open-loop analysis. Much easier IMO.

4) Edit: The guaranteed phase margin is 60 deg, not 90 deg as I had originally posted.

5) I'm confused here. lsim doesn't assume anything about what's available for feedback. It just simulates the model it's given in response to the input and initial conditions. All of the feedback was implemented a priori, using connect in my implementation or the matrix manipulations in yours. But that's all before lsim.

7) I'm not sure what to say here. In my example I showed how reducing R increased the steady-state gain in response to a disturbance step by a factor of 10. But it was still small. So if you simulate a step change in the reference and a simultaneous step in the disturbance, you'd have to make the disturbance a few orders of magnitude larger than the position step command before you can see its effect on the output.

8) I'm not sure there's an inherent advantage in using off-diagonal terms; that's just what happens when forming Q from the pseudo-output matrix H. I call it the "pseudo-output" because H*x is not an output we necessarily care about. If you want to learn more, search on "LQR pole placement" or "LQR symmetric root locus." It's not an exotic method and should be easily found. Note that if Q is diagonal with both Q(1,1) and Q(2,2) non-zero, then it can't have been formed from H'*H with H a row vector.

9) I chose the H to get a zero of G(s) at s = -1000, which becomes the dominant closed-loop pole in the cheap control design, because your design yielded a closed-loop response that was basically first-order with a time constant of 0.001. I just wanted to compare to what you already did.

The LQR theory tells us that the closed loop poles of our LQR system will satisfy the equation:

1 + (1/R)*G(s)*G(-s) = 0. (1)

with G(s) = H*inv(s*I - A)*B,

That's a theoretical result.

G(-s) is what I called the "reflection." The roots of (1) lie on the root locus of G(s)*G(-s), and the root locus "gain" is 1/R. Following standard root locus rules ... when R is large, i.e., expensive control, the locus gain is small and the roots start at the poles of G(s)*G(-s). As R gets small, i.e., cheap control, the locus gain is large and the roots migrate to the zeros of G(s)*G(-s). The closed-loop poles that we care about are the LHP roots on the locus. The beauty is that we get to pick the zero locations via selection of H, and therefore can influence how and where the closed-loop poles migrate as R gets small. Of course, the CARE is wrapped up in the derivation that gets to eqn (1).
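Equation (1) can be checked numerically for this design (H = [1000 1], R = 100): the stable roots of 1 + (1/R)*G(s)*G(-s) = 0 should match the LQR closed-loop poles. This sketch uses SciPy's CARE solver as a stand-in for lqr():

```python
# Numeric check of the symmetric root locus property. Assumption: SciPy's
# solve_continuous_are reproduces the gain MATLAB's lqr() would return.
import numpy as np
from scipy.linalg import solve_continuous_are

m, zeta, wn = 1e-5, 0.3, 250.0
b, k = zeta * 2 * wn * m, wn**2 * m
A = np.array([[0.0, 1.0], [-k / m, -b / m]])
B = np.array([[0.0], [1.0 / m]])
H = np.array([[1000.0, 1.0]])
R = 100.0

# LQR closed-loop poles via the CARE, with Q = H'H
P = solve_continuous_are(A, B, H.T @ H, np.array([[R]]))
K = (B.T @ P) / R
lqr_poles = np.sort(np.linalg.eigvals(A - B @ K).real)

# G(s) = H (sI - A)^-1 B = 1e5 (s + 1000) / (s^2 + 150 s + 62500)
num = 1e5 * np.array([1.0, 1000.0])
den = np.array([1.0, 150.0, 62500.0])
num_r = num * np.array([-1.0, 1.0])        # G(-s) numerator
den_r = den * np.array([1.0, -1.0, 1.0])   # G(-s) denominator
# 1 + (1/R) G(s)G(-s) = 0  ->  den*den_r + (1/R) num*num_r = 0
char = np.polyadd(np.polymul(den, den_r), np.polymul(num, num_r) / R)
roots = np.roots(char)
srl_poles = np.sort(roots[roots.real < 0].real)

print(lqr_poles, srl_poles)  # both near [-9944, -1006]
```

The stable SRL roots land at roughly s = -1006 and s = -9944, matching the zpk() output above.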

For this problem, it was pretty easy to pick an H that gave the desired zero of G(s) given massSys.A and massSys.B(:,1). That might not be so straightforward for a problem even moderately more complex. If that's the case, one solution is to transform the plant to a canonical realization, specify the H matrix for that realization by inspection, then transform back to the original state coordinates and use that transformed H matrix.

John
on 4 Mar 2023

Edited: John
on 4 Mar 2023

"7) ... In my example I showed how reducing R increased the steady state gain in response to a disturbance step by a factor of 10. ... you'd have to make the disturbance a few orders of magnitude larger than the position step command before you can see its effect on the output."

Yes -- and that was helpful :) But perhaps I'm missing something... the output I showed in the last post showed a disturbance that does have an effect on x1 and y, and R was very small; I made it as penalty-free as possible (e.g. 1e-10).

(BTW it was a large disturbance after I scaled it per your suggestion :) ). So the disturbance must have been at the correct scale for the plant to be impacted substantially, as in the plot.

What I'm seeing, though, is that the CL response to the disturbance is non-existent regardless of R. There seems to be no disturbance rejection even though r-y is large, and I would have expected some correction.

I know there wasn't any noticeable CL correction, since I also plotted the OL plant, and the steady-state responses seemed the same: CL was disturbed as much as OL, but CL had no extra ability to counteract that disturbance despite the large tracking error.

Paul
on 4 Mar 2023

Edited: Paul
on 4 Mar 2023

I went back to your original design for Q, varied R, and computed the step response to a disturbance input.

%% (1) Make 2nd-order system

zeta = 0.3;

wn_rPs = 250;

ampl = 1;

m = 1e-5;

b = zeta * 2 * wn_rPs * m;

k = wn_rPs^2 * m;

A = [0 1; -k/m -b/m];

B = [0; 1/m]; F =[0; 1/m];

C = eye(2); % output both states

D = [0];

massSys = ss(A, [B F], C, 0);

massSys.StateName = {'p', 'v'};

massSys.InputName = {'u' 'd'}; % changed to u

massSys.OutputName = {'pos','vel'};

Q = [1000 0 ; 0 0.001];

figure

hold on

for R = [1e-7 1e-5 1e-3 1e-1]

[K_lqr, S1, P1] = lqr(massSys({'pos' 'vel'},{'u'}), Q, R);

K = ss([K_lqr(1) -K_lqr(2)],'InputName',{'e' 'vel'},'OutputName',{'u'});

s = sumblk('e = r - pos');

sysClLqr = connect(massSys,K,s,{'r' 'd'},'pos','u'); % add analysis point

step(sysClLqr('pos','d'))

end

legend('R = 1e-7','R = 1e-5','R = 1e-3','R = 1e-1')

copyobj(gca,figure)

ylim([0 1e-4])

We see that the steady state output to a step disturbance is directly related to the value of R, at least for the range of R tested. Do you not observe a similar result?

John
on 4 Mar 2023

Edited: John
on 5 Mar 2023

"Do you not observe a similar result?"

I don't think I do. Not sure if you checked, but if you happened to: were you able to get a CL correction to the disturbance? As mentioned, I keep seeing the same response as OL (for the disturbance-correction part).

I attached a .m file to generate these plots that include the OL system as a reference.

From my last post, as context for figures:

"I know there wasn't any noticable CL correction, since I also plotted the OL plant, and the steady state responses seemed the same: CL was disturbed as much as OL, but CL had no extra ability to counter-act that disturbance despite the large tracking error."

So, the CL system (which has R) responds the same as the OL system (which has no R), so R makes no difference. Just to show explicitly, here are two plots, one with R = 0.001, the other R = 0.1.

If I change R = 100, for example, then none of the CL response is as expected (not tracking to ref), even though dist response is better. Though I'm not convinced an active rejection is what's occurring in this case.

It's almost like disturbance is being treated as a reference input.

But I think that's not the case here, because when I build LQI, the CL system does reject the disturbance (and the OL system does not reject the dist), as shown in some earlier posts.

Paul
on 5 Mar 2023

Hi John,

The code you posted, with two mods, shows that changing R in the cost function has an effect on closed-loop disturbance rejection. I made two changes: I got rid of all the dcgain adjustments because they were confusing to me, and I changed F so that F{dist} is a torque disturbance, not an angular acceleration.

tf_s = 0.75;

ts_s = 1e-6;

t = (0 : ts_s : tf_s)';

refStartTime = 0.1;

distAngleTime = 2*tf_s/3;

distAngVelTime = 4*tf_s/5;

ref = zeros(numel(t), 1);

ref(t > refStartTime) = 1;

ref(t > 2*refStartTime) = 0;

ref(t > 3*refStartTime) = -1;

distAng = zeros(numel(t), 1);

% Steady-state force (position disturbance)

distAng(t > distAngleTime * tf_s) = -1;

% force ramp for velocity disturbance

distAngvel = -1000 * (t-distAngVelTime) .* (t > distAngVelTime);

dAngPlusAngvel = distAng + distAngvel;

%% Criteria

overShootPec = 5;

zeta = abs(log(overShootPec/100)) / sqrt(pi^2 * log(overShootPec/100)^2);

wn_rPs = 2 * pi * 40;

%% OL system

ampl = 1;

J_kgm2 = 1e-5;

b = zeta * 2 * wn_rPs * J_kgm2;

k = wn_rPs^2 * J_kgm2;

A = [0 1; -k/J_kgm2 -b/J_kgm2];

B = [0; 1/J_kgm2];

C = [1 0];

D = [0];

massSys = ss(A, B, C, D, ...

'StateName', {'x', 'x dot'}, ...

'InputName', {'r(ref)'}, ...

'OutputName', {'theta'});

% no dcgain

%massSys = massSys / dcgain(massSys)

massSys.StateName = {'x', 'x dot'};

massSys.InputName = {'r(ref)'};

massSys.OutputName = {'theta'};

% define F so that disturbance is a torque

F = [0; 1/J_kgm2];

sysOlWithDist = ss(massSys.A, [massSys.B F], massSys.C, [0 0]);

% I haven't changed this scaling part yet per what you mentioned earlier

% (later code is changed)

%gains = dcgain(sysOlWithDist); % relate to ref --> y(output), and F --> y(output)

%sysOlWithDistDc = ss(massSys.A, [massSys.B F/gains(2)] , massSys.C, [0 0])

sysOlWithDistDc = ss(massSys.A, [massSys.B F] , massSys.C, [0 0]);

sysOlWithDistDc.StateName = {'x', 'x dotF'};

sysOlWithDistDc.InputName = {'r(ref)', 'F(dist)'};

sysOlWithDistDc.OutputName = {'theta'};

uOlSysWithDist = [ref dAngPlusAngvel];

% first LQR design with R = 0.01

%% LQR with dist rej

Q = [1000 0 ; 0 0.001];

R = [ 0.01 ];

[K_lqr, S1, P1] = lqr(massSys, Q, R);

A_CL_lqr = (massSys.A - massSys.B * K_lqr);

%eig(A_CL_lqr)

%[T, Dd] = eig(A_CL_lqr);

sysClLqrNoDist = ss(A_CL_lqr, massSys.B * K_lqr * [1;0], massSys.C, massSys.D);

% Add Disturbance. Dist is a reference force input to the acceleration (vel state) level.

%F = [0; 1/J_kgm2];

%sysClLqrWithDist = ss(A_CL_lqr, [massSys.B * K_lqr * [1;0], F], massSys.C, massSys.D);

%gains = dcgain(sysClLqrWithDist) % (#, #)

%sysClLqrWithDist = ss(A_CL_lqr, [massSys.B * K_lqr * [1;0], F / gains(2) ], massSys.C, massSys.D, ...

sysClLqrWithDist = ss(A_CL_lqr, [massSys.B * K_lqr * [1;0], F], massSys.C, massSys.D, ...

'StateName', {'p', 'p dot'}, ...

'InputName', {'r(ref)', 'F(dist)'}, ...

'OutputName', {'theta'});

%gains = dcgain(sysClLqrWithDist);

% Get and plot y and x, with disturbance

% Internal states are x, xdot. Also x = y in this example.

%uLqrWithDist = [ref dAngPlusAngvel];

uLqrWithDist = uOlSysWithDist;

x0 = [0 0];

y = {};

x = {};

% From prev section, as reference

% [y{1}, t1, x{1}] = lsim(sysOlWithDistDc, uOlSysWithDist, t, x0);

% [y{2}, t2, x{2}] = lsim(sysClLqrWithDist, uLqrWithDist, t, x0);

% plot the step response. The output due to the disturbance is about 3

% orders of magnitude smaller than the output due to the reference command

[y{1},t1] = step(sysClLqrWithDist);

figure

plot(t1,squeeze(y{1})),grid

% closed loop bode plot.

figure

w = logspace(2,6,1000);

bode(sysClLqrWithDist,w),grid

% second design with R = 100

%Q = [1000 0 ; 0 0.001];

R = [ 100 ];

[K_lqr, S1, P1] = lqr(massSys, Q, R);

A_CL_lqr = (massSys.A - massSys.B * K_lqr);

% Add Disturbance. Dist is a reference force input to the acceleration (vel state) level.

%F = [0; 1/J_kgm2];

sysClLqrWithDist = ss(A_CL_lqr, [massSys.B * K_lqr * [1;0], F], massSys.C, massSys.D, ...

'StateName', {'p', 'p dot'}, ...

'InputName', {'r(ref)', 'F(dist)'}, ...

'OutputName', {'theta'});

% From prev section, as reference

% [y{1}, t1, x{1}] = lsim(sysOlWithDistDc, uOlSysWithDist, t, x0);

% [y{2}, t2, x{2}] = lsim(sysClLqrWithDist, uLqrWithDist, t, x0);

[y{2},t2] = step(sysClLqrWithDist);

figure

plot(t2,squeeze(y{2}))

figure

bode(sysClLqrWithDist,w),grid

Increasing R slows down the step response from the reference and increases the gain (by about 40 dB at low frequency) of the closed-loop transfer function from the disturbance.
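The ~40 dB figure can be reproduced without the Bode plots; here's a sketch in Python/SciPy (standing in for lqr and dcgain) that compares the disturbance dc gains of the two designs:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Plant used in these posts (same zeta formula, wn = 2*pi*40 rad/s, J = 1e-5)
os_frac = 0.05
zeta = abs(np.log(os_frac)) / np.sqrt(np.pi**2 * np.log(os_frac)**2)
wn = 2*np.pi*40; J = 1e-5
A = np.array([[0.0, 1.0], [-wn**2, -2*zeta*wn]])
B = np.array([[0.0], [1.0/J]])
C = np.array([[1.0, 0.0]])
F = B.copy()                       # disturbance torque enters like the control torque
Q = np.diag([1000.0, 0.001])

def dist_dcgain(R):
    # LQR gain, then closed-loop dc gain from disturbance d to output theta
    S = solve_continuous_are(A, B, Q, np.array([[R]]))
    K = (B.T @ S) / R
    Acl = A - B @ K
    return (-C @ np.linalg.solve(Acl, F)).item()

g_lo, g_hi = dist_dcgain(0.01), dist_dcgain(100.0)
dB = 20*np.log10(g_hi/g_lo)
print(dB)                          # roughly 40 dB between the two designs
```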

John
on 5 Mar 2023

Edited: John
on 5 Mar 2023

"Increasing R slows down the step response from the reference and increases the gain"

Just to check, is your expectation that with large R, the system should still track the reference (but more slowly), and disturbance rejection gain is higher?

I'm probably misunderstanding, but with large R the controller doesn't seem to track the reference in steady state (per the plots I posted). It doesn't look like just a slowdown / settling-time (bandwidth) issue, since the response settles well before the reference changes, but stays at a non-reference-tracking level.

Paul
on 5 Mar 2023

Edited: Paul
on 5 Mar 2023

As R increases, we should expect the optimal control to approach "do nothing," because control is expensive and the plant is already stable. The feedback gains should get smaller.

As the LQR feedback gains approach zero:

we should expect that closed-loop poles get closer to the open-loop poles

the closed-loop response due to the disturbance torque approaches that of the plant alone because there is effectively no feedback to compensate the disturbance

because the steady-state gain in response to the step reference command is always K1*P(0)/(1 + K1*P(0)), where P(s) is the plant transfer function from u to y (applied torque to angle) and K1 is the position feedback gain. As K1 gets smaller, the steady-state response to the reference step approaches K1*P(0); in the limit, as K1 -> 0, so does the response.
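The K1*P(0)/(1 + K1*P(0)) relation is easy to verify numerically; here's a sketch in Python/SciPy using the same plant and Q as the code below (the printed values should line up with the table of results further down):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Same plant as the code below (zeta formula, wn = 2*pi*40 rad/s, J = 1e-5)
os_frac = 0.05
zeta = abs(np.log(os_frac)) / np.sqrt(np.pi**2 * np.log(os_frac)**2)
wn = 2*np.pi*40; J = 1e-5
A = np.array([[0.0, 1.0], [-wn**2, -2*zeta*wn]])
B = np.array([[0.0], [1.0/J]])
P0 = 1.0/(wn**2 * J)               # plant dc gain P(0) from torque u to angle y

Q = np.diag([1000.0, 0.001])
for R in [1e-3, 1e-1, 1e1]:
    S = solve_continuous_are(A, B, Q, np.array([[R]]))
    K = (B.T @ S) / R              # LQR gain [K1 K2]
    K1 = K[0, 0]
    # steady-state output for a unit reference step: K1*P0 / (1 + K1*P0)
    print(R, K1, K1*P0/(1 + K1*P0))
```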

%% Criteria

overShootPec = 5;

zeta = abs(log(overShootPec/100)) / sqrt(pi^2 * log(overShootPec/100)^2);

wn_rPs = 2 * pi * 40;

%% OL system

ampl = 1;

J_kgm2 = 1e-5;

b = zeta * 2 * wn_rPs * J_kgm2;

k = wn_rPs^2 * J_kgm2;

A = [0 1; -k/J_kgm2 -b/J_kgm2];

B = [0; 1/J_kgm2];

C = [1 0];

D = [0];

massSys = ss(A, B, C, D, ...

'StateName', {'x', 'x dot'}, ...

'InputName', {'r(ref)'}, ...

'OutputName', {'theta'});

massSys.StateName = {'x', 'x dot'};

massSys.InputName = {'r(ref)'};

massSys.OutputName = {'theta'};

% define F so that disturbance is a torque

F = [0; 1/J_kgm2];

sysOlWithDistDc = ss(massSys.A, [massSys.B F] , massSys.C, [0 0]);

sysOlWithDistDc.StateName = {'x', 'x dotF'};

sysOlWithDistDc.InputName = {'r(ref)', 'F(dist)'};

sysOlWithDistDc.OutputName = {'theta'};

P0 = dcgain(sysOlWithDistDc);

P0 = P0(1);

[y,t] = step(sysOlWithDistDc);

y = squeeze(y);

figure;

hy = subplot(211); hold on

plot(hy,t,y(:,1)),grid

hd = subplot(212); hold on

plot(hd,t,y(:,2)),grid

format short e

Q = [1000 0 ; 0 0.001];

for R = [1e-3 1e-1 1e1 1e3 1e4]

[K_lqr, S1, P1] = lqr(massSys, Q, R);

[R, K_lqr , K_lqr(1)*P0/(1 + K_lqr(1)*P0) , K_lqr(1)*P0]

A_CL_lqr = (massSys.A - massSys.B * K_lqr);

sysClLqrWithDist = ss(A_CL_lqr, [massSys.B * K_lqr * [1;0], F], massSys.C, massSys.D, ...

'StateName', {'p', 'p dot'}, ...

'InputName', {'r(ref)', 'F(dist)'}, ...

'OutputName', {'theta'});

y = step(sysClLqrWithDist,t);

y = squeeze(y);

plot(hy,t,y(:,1)),grid

plot(hd,t,y(:,2)),grid

end

ans = 1×5

1.0000e-03 9.9937e+02 1.0083e+00 9.9937e-01 1.5821e+03

ans = 1×5

1.0000e-01 9.9370e+01 1.0790e-01 9.9368e-01 1.5732e+02

ans = 1×5

1.0000e+01 9.3883e+00 1.5439e-02 9.3696e-01 1.4863e+01

ans = 1×5

1.0000e+03 5.5113e-01 2.2187e-03 4.6596e-01 8.7252e-01

ans = 1×5

1.0000e+04 7.4736e-02 4.3831e-04 1.0580e-01 1.1832e-01

John
on 5 Mar 2023

Edited: John
on 5 Mar 2023

I see, thanks. That does match what I was seeing. And I had no idea ss() could take a vector of matrices as inputs; very useful.

But then, to go back to your question on why I'm looking to use LQI: the above would be a reason for LQI vs LQR, no? I was looking to LQI for disturbance rejection; I think you were curious what the benefit of it is for my specific application, if LQR rejection gains can be increased.

It seems that LQR cannot do both disturbance rejection well and reference tracking accurately, since changing R will optimize for one or the other. Or am I misunderstanding the results from the last few posts?

(I still owe a reply on the other points from your earlier email, since I'd only replied to (7), regarding the dist rejection with [B F] not working as expected.)

Paul
on 6 Mar 2023

Edited: Paul
on 6 Mar 2023

Not sure what you're referring to about ss() taking a vector of matrices .....

".... LQR cannot do both ...." I guess that depends on the specifications for accurate tracking and disturbance rejection, which I don't believe have been defined in this thread. As shown above, using the Q you've defined, letting R get small does yield fairly accurate tracking and a small steady-state response to a disturbance step. So I don't know what it means to optimize one or the other. In general, high-gain feedback is good for both. But it's still a Type-0 system, no matter what.

If you go to an LQI-based approach and implement the system as in the block diagram on the lqi doc page, then you will get a zero steady-state response to a constant disturbance and zero error in the output in response to a step reference command, regardless of R. Maybe that's what you're looking for.

You don't owe anything.

Also, I played around a bit with lqi for this problem and kept running into problems until I realized I misunderstood how LQI works. The doc page states

"The control law u = –Kz = –K[x;xi] "

I checked lqi.m and verified that the plant is augmented with xi = -int(y), i.e., the cost function is defined for the augmented state vector z = [x; xi] = [x; -int(y)]. (I thought it would be augmented with xi = int(y) and then K(end) would be adjusted after the fact to match the feedback structure in the block diagram.)

That negative sign on the integral state in the augmented plant makes a big difference if using a Q matrix with non-zero, off-diagonal terms.

John
on 6 Mar 2023

Edited: John
on 7 Mar 2023

@Paul thanks.

1) "The doc page sates

"The control law u = –Kz = –K[x;xi] "

I checked lqi.m and verified that the plant is augmented with xi = -int(y)"

Ahh. I was confused about this too, and had to try both signs for the LQI gain (K_lqi(3)) because I couldn't reconcile the page -- before settling on the sign that caused eig(A) to be stable. Thanks. This is helpful.

2) "As shown above, using the Q you've defined, letting R get small does yield fairly accurate tracking and small steady response to a disturbance step. So I don't know what it means to optimize one or the other. In general, high gain feedback is good for both."

The gain "correction" that I was doing (which you've been suggesting I pull away from) was causing some of my confusion. Based on the analysis you elaborated on, and further testing, my question on LQR disturbance rejection is resolved once the gain confusion is untangled (as you've been doing). Thanks.

The result does point to needing LQI because of the steady-state error.

3) Then, I'm back to my question on LQI.

The integrator isn't responding as expected, when I use the gains from lqi(), and setting up CL matrices based on the matlab's LQI documentation.

See the attached .m, with fixes based on the above conversation.

If I make the integral-error gain K_lqi(3) small (~0), the system responds as normal LQR, and ref tracking works as expected.

If I make the integral-error gain nonzero (e.g. -1), i.e. trying to add any integral action, the system doesn't track the ref.

a) The system is unable to track a reference (not even including a disturbance). The integral error accumulator continuously accumulates regardless of the actual error. I'm attempting to set up e = int(r-y), so I'm augmenting B like so:

B_Cl_lqi = [massSys.B; 1];

which I thought would yield e = int(r-y).

If I flip A's e_dot sign ( -massSys.B * K_lqi(3) ) and instead do

A_Cl_lqi = [massSys.A - massSys.B * K_lqi(1:2), massSys.B * K_lqi(3)

-massSys.C, 0];

then the system is unstable, so I think what I have is correct for -B*K_lqi(3).

Are Matlab's LQI matrix equations correct?

b)

sysClLqi = ss(A_Cl_lqi, B_Cl_lqi * K_lqi * [1; 0; 0], C_CL_lqi, D);

step(sysClLqi) settles to ~1 if only B_Cl_lqi is included, or if K_lqi(3) --> 0.

The step settles to ~315 (the K_lqr(1) gain) if the K_lqi * [1; 0; 0] factor is included, or if K_lqi(3) != 0.

This high settling is coming from (I think)

B_Cl_lqi * K_lqi * [1; 0; 0] =

0

1.9702e+07

315.24

, so x1_dot automatically has x3 = 315 added to it, it seems via the "-massSys.B * K_lqi(3)" term that's in the A matrix.

I would have thought the LQI step should still settle at ~1, but doing e = int(r-y) instead of e = int(y) seems to break that. I'm guessing I'm setting this up wrong.

c) The integral error accumulator (state x3) accumulates in the opposite direction than I'd expect. If I reverse -B*K_lqi(end), then it is the correct direction, but eig(A) is unstable. I think this is what you're mentioning from the documentation: that the error seen will be subtracted from, vs added to, the state; i.e. what I'm seeing might be correct.

But then the angle y drifts in ways I don't expect, even though lsim() should produce a correct output.

Overall, I'm not sure why the formulation that I think MathWorks suggests for lqi() isn't working in my implementation.

Paul
on 7 Mar 2023

Here is my example using lqi

Plant model

overShootPec = 5;

zeta = abs(log(overShootPec/100)) / sqrt(pi^2 * log(overShootPec/100)^2);

wn_rPs = 2 * pi * 40;

%% OL system

ampl = 1;

J_kgm2 = 1e-5;

b = zeta * 2 * wn_rPs * J_kgm2;

k = wn_rPs^2 * J_kgm2;

A = [0 1; -k/J_kgm2 -b/J_kgm2];

B = [0; 1/J_kgm2];

C = [1 0];

D = [0];

massSys = ss(A, B, C, D, ...

'StateName', {'theta', 'theta_dot'}, ...

'InputName', {'u'}, ...

'OutputName', {'theta'});

Plant with disturbance

% define F so that disturbance is a torque

F = [0; 1/J_kgm2];

sysOlWithDist = ss(massSys.A, [massSys.B F] , eye(2),0);

sysOlWithDist.StateName = {'theta', 'theta_dot'};

sysOlWithDist.InputName = {'u', 'd'};

sysOlWithDist.OutputName = {'theta','theta_dot'};

Define plant A and B augmented z = [x; xi], where xidot = -theta

Aaug = [massSys.a zeros(size(massSys.a,1),1);-massSys.c 0];

Baug = [massSys.b;-massSys.d];

H matrix for symmetric root locus.

H = [2*0.7*1000 1 -1000^2];

G = ss(Aaug,Baug,H,0);

Two closed-loop poles will approach the zeros of G as R gets smaller

zpk(G)

ans =
1e05 (s^2 + 1400s + 1e06)
-------------------------
s (s^2 + 160s + 6.317e04)
Continuous-time zero/pole/gain model.

Plot the symmetric root locus to show how the closed loop poles migrate as R gets smaller

Gr = ss(-G.A,-G.B,G.c,0);

figure

rlocus(G*Gr)

LQ design

format short e

Q = H.'*H

Q = 3×3

1.9600e+06 1.4000e+03 -1.4000e+09
1.4000e+03 1.0000e+00 -1.0000e+06
-1.4000e+09 -1.0000e+06 1.0000e+12

Note that Q(3,3) is quite large. We could also do the design with just the diagonal of Q and get a reasonable design, I believe.

R = 100;

[K_lqr, S1, P1] = lqi(massSys, Q, R);

Form the closed-loop system using connect. I still think that in the long run this approach is going to be better than the explicit matrix-math approach; you can easily add analysis points, actuator and sensor models, etc.

sysint = ss(0,1,1,0,'StateName','xi','OutputName','xi','InputName','e');

K = ss(-K_lqr,'InputName',{'theta','theta_dot','xi'},'OutputName','u');

S = sumblk('e = r - theta');

sysc = connect(sysOlWithDist,K,S,sysint,{'r','d'},'theta');

Closed-loop transfer functions. The second-order term in the denominator is close to the numerator of G.

zpk(sysc)

ans =
From input "r" to output "theta":
1e+10
-----------------------------
(s+9997) (s^2 + 1397s + 1e06)
From input "d" to output "theta":
1e05 s
-----------------------------
(s+9997) (s^2 + 1397s + 1e06)
Continuous-time zero/pole/gain model.

Also, as expected with integral control, the dcgain from the reference command is unity and from the disturbance is zero

dcgain(sysc)

ans = 1×2

1 0

Plot the step responses.

[y,t] = step(sysc);

y = squeeze(y);

figure

subplot(211);plot(t,y(:,1)),grid

subplot(212);plot(t,y(:,2)),grid

The closed-loop system can also be formed via the matrix-math approach. Start with

zdot = Aaug * z + Baug * u + [0;0;1]*r + [F;0]*d

Then sub u = - K_lqr*z

tempsys = ss(Aaug-Baug*K_lqr,[[0;0;1] , [F;0] ],[1 0 0],0);

zpk(tempsys)

ans =
From input 1 to output:
1e+10
-----------------------------
(s+9997) (s^2 + 1397s + 1e06)
From input 2 to output:
1e05 s
-----------------------------
(s+9997) (s^2 + 1397s + 1e06)
Continuous-time zero/pole/gain model.
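For readers without MATLAB, the same matrix-math construction can be cross-checked in Python/SciPy; here's a sketch (same Aaug, Baug, H, Q, and R as above) confirming the unity dcgain from r and zero dcgain from d:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Plant as above (zeta formula, wn = 2*pi*40 rad/s, J = 1e-5)
os_frac = 0.05
zeta = abs(np.log(os_frac)) / np.sqrt(np.pi**2 * np.log(os_frac)**2)
wn = 2*np.pi*40; J = 1e-5
A = np.array([[0.0, 1.0], [-wn**2, -2*zeta*wn]])
B = np.array([[0.0], [1.0/J]])
C = np.array([[1.0, 0.0]])
F = np.array([[0.0], [1.0/J]])

# Augmented plant: z = [x; xi], with the xidot row carrying -C (r and d enter separately)
Aaug = np.block([[A, np.zeros((2, 1))], [-C, np.zeros((1, 1))]])
Baug = np.vstack([B, [[0.0]]])

# LQ design with Q = H'*H and R = 100, as in the post
H = np.array([[2*0.7*1000, 1.0, -1000.0**2]])
S = solve_continuous_are(Aaug, Baug, H.T @ H, np.array([[100.0]]))
K = np.linalg.solve(np.array([[100.0]]), Baug.T @ S)

# Closed loop: zdot = (Aaug - Baug*K)*z + [0;0;1]*r + [F;0]*d, y = [1 0 0]*z
Acl = Aaug - Baug @ K
Bcl = np.hstack([np.array([[0.0], [0.0], [1.0]]), np.vstack([F, [[0.0]]])])
Ccl = np.array([[1.0, 0.0, 0.0]])
dc = -Ccl @ np.linalg.solve(Acl, Bcl)    # dc gains from [r, d] to theta
print(dc)                                # integral action -> [1, 0]
```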

John
on 7 Mar 2023

Edited: John
on 7 Mar 2023

Thanks @Paul. I'll try both the connect() and the explicit math method; I'm looking to understand the math version, i.e. what connect() does, especially since I'll implement this in Simulink next with discrete components.

Based on the above, I think my issues are almost narrowed down: understanding B augmentation, and vector input nomenclature, both when using ss().

I'm trying to do:

x_dot = A * x + B * u = A * x + B * [-K_lqr(1:2) * x - K_lqr(3) * xi ].

This leads to

x_dot = [A - B*K_lqr(1:2)] * x - B * K_lqr(3) * xi (This is similar to what you also do)

xi_dot = r - y = r - C * x (This is different than what you do)

with the augmented state [x; xi]. (Since you might be using a transform and use z, is this the same as what you do?)

1) "xidot = -theta"

I'm trying to do a ref tracking version, vs this regulator version. I'm having trouble with e = int(r-y), ie xidot = r-theta.

To do r-y = r - C*x, in the CL formulation where B is the ref vector, I augmented B with the 1 in [B; 1] so that +1*r is added at the xidot level. Given the -C already in the OL A matrix, then xidot = -C*x + r.

Ie instead of

Baug = [massSys.b;-massSys.d];

it'd be

Baug = [massSys.b; 1];

When I run your example with this change, it's unstable as well.

Am I thinking about B augmentation incorrectly to do r - C*x?

2) Regarding

zdot = Aaug * z + Baug * u + [0;0;1]*r + [F;0]*d

I see that it does work when I run your code, but I'm not sure why. Why is step() working correctly with the [0;0;1]? Isn't this saying that a reference input (which I thought was at the velocity level, i.e. element 2) would have no impact? Why doesn't it need to be (for example) [0; 1; 0] for the 0-state regulator xidot = -theta case you were showing, or [0; 1; 1] in the reference-tracking case with xidot = r-theta?

3) Here's a non-zero initial condition (pos = 0.3) regulator case, ie [B; 0], to show that what I have also seems to work and converge to 0 -- until reference tracking is added. Here, Q(3,3) = 1e6. The 2nd plot is just a zoom.

And then here's your tempsys (the matrix-math version) using lsim(tempsys), which does seem to ref track as well. But I don't understand why the integrator ref-tracks correctly, given how (a) you set up Baug = [massSys.b; 0] for xidot = -theta, and (b) you have the [0; 0; 1] in building tempsys, which seems like it'd ignore any reference input...

These are the main differences between the version you ended up with, and what I used. I think our closed-loop A are ~the same.

Paul
on 7 Mar 2023

Edited: Paul
on 7 Mar 2023

I guess I could have been clearer with respect to the definition of xi and its derivative. When I said

"xidot = -theta"

that was in the context of developing Aaug and Baug for purposes of developing G to illustrate the design using symmetric root locus. When I built tempsys, I used xidot = e = r - y = r - theta.

Think about it this way.

The original plant model, including the disturbance is:

xdot = A*x + B*u + F*d (1)

y = C*x + Du (2)

Now we augment the plant model with an additional state for the integrator, xi, defined by the equation

xidot = r - y (3)

Combining these equations:

This model is nothing more than combining (1), (2), and (3) into matrix form: zdot = [A 0; -C 0]*z + [B; -D]*u + [0;0;1]*r + [F;0]*d, with z = [x; xi]. We see that r is multiplied by [0;0;1] because r comes into the derivative of xi.

For the LQ design, we use the "A" matrix of the augmented system and the "B" matrix that multiplies u, which are exactly what is defined as Aaug and Baug. It's important that xi be the last state to be consistent with the lqi definition as shown on the lqi page (on which there is no error, though at one point I thought otherwise).

Using the symmetric root locus technique, I defined Q from the H that gave me desirable zero locations of the transfer function H*inv(s*I - Aaug)*Baug.

The Q and R yield K_lqr, and according to lqi the control is u = -K_lqr * [x; xi]. Subbing that control into the augmented plant and simplifying yields tempsys, with y = [1 0 0] * [x; xi].

John
on 7 Mar 2023

@Paul Thanks, that clarifies. Where I went wrong was indeed on the r vector.

1) "that was in the context of developing Aaug and Baug for purposes developing G to illustrate the design using symmetric root locus. When I built tempsys, i used xidot = e = r - y = r - theta."

Got it, thanks.

2) " We see that r is mutiplied by [0;0;1] because r comes into the derivative of xi."

Ahhhh, I see. I missed (again) that the LQI formulation puts r through the integrator (unlike the accumulator of PID, for example). Thanks. That makes sense and resolves most of my confusion. Specifically, it also resolves my confusion on how the Q(3,3) gain is coupled to Q(1,1) and Q(2,2) (in the pure-diagonal case), given that I previously thought of them as independent.

3) "It's important that xi be the last state to be consistent with the lqi definition as shown on the lqi page"

Yes, that part makes sense.

The two formulations for A (yours and mine) were indeed the same. I was confused by the lqi() formulation, specifically how the reference ties in with the system response. It doesn't make sense to try to tune integral-action reference tracking independently of the standard LQR response; it simply is not connected that way. Thanks for your explanations; I learned a fair bit about design (and how to use MATLAB for this design), and I'll start using the LQR symmetric root locus technique, which seems useful.

This resolves this question, and LQI now seems to be working as expected.

John
on 21 Mar 2023

Edited: John
on 21 Mar 2023

@Paul I'm coming back to this question after looking at the MATLAB documentation (here) again, given the Ki sign question, as I'm trying to clarify what exactly LQI is doing -- specifically, how K and Ki combine with their signs. I think you said that

"I checked lqi.m and verified that the plant is augmented with xi = -int(y), i.e., the cost function is defined for the augmented state vector z = [x; xi] = [x; -int(y)] (I thought it would be augmented with xi = int(y) and then K(end) would be adjusted after the fact to match the feedback structure in the block diagram).

That negative sign on the integral state in the augmented plant makes a big difference if using a Q matrix with non-zero, off-diagonal terms."

I'm implementing this in Simulink, and even with the above, the signs are confusing (also, I'm using a Q with only diagonal elements). The doc page says

u=−K[x;xi]

ie u = -K*xi - K*x = -K*int(r-y) - K*x

If doing ref tracking (as you showed above), it would be

u = -K*int(r-y) + K*(r-x)

Also, the doc page says (as a proxy for the continuous case)

xi[n+1]=xi[n]+Ts(r[n]−y[n])

so I think xi indeed should be xi = int(y) as you were also thinking.

Currently, lqi() returns negative sign for Ki gain, ie [+, +, -]

So, here's what I think the documentation says should be occurring; check out Sum3.

since this is what the lqi() page shows:

So I think I have it set up correctly, minus the difference for reference tracking, i.e. K(r-x) vs K(-x).

But, this is unstable.

If Sum3 is changed to -+ instead, then it works as expected.

So it seems the documentation is wrong in at least 3 spots, i.e. perhaps one more place than the discrepancy you saw:

- how xi is defined (the discrete case)
- how the diagram is drawn
- the sign for Ki

Is this accurate, or am I still misunderstanding what's occurring here?

In any case, the lqi() page should be reworked for consistency, and to be explicit about gain signs etc. :D

Paul
on 21 Mar 2023

Edited: Paul
on 21 Mar 2023

As I stated in this comment: "... as shown on the lqi page (on which there is no error, though at one point I thought otherwise)." So, I don't think there's any problem with the lqi doc page, and my lqi example above exactly followed the implementation on the doc page.

As stated on that page, the augmented state vector is:

z = [x; xi], where xi = int(r-y). Again, according to the doc page, the control is

u = -K*z = -K1*x1 - K2*x2 - K3*xi = -K1*x1 - K2*x2 - K3*int(r - y)

which is exactly what is shown in the diagram on the doc page (and copied above).

For a typical system that employs negative feedback in all the loops, we should expect K1 > 0, K2 > 0, and K3 < 0, which is exactly what you report for your model [+,+,-]. The reason K3 is negative is that we (typically) want a positive feedback gain on the error signal, which is actually a negative feedback loop because of the negative sign on the feedback that forms the error (just think of typical proportional control, for example).

So, the correct implementation in Simulink is actually in the second diagram you've shown, where you've put the negative sign on the sum3 input from K3, to make that loop -K3*int(r-y), which is exactly the same as on the doc page.

As for the discrete-time implementation (which I wouldn't call a proxy for the continuous case), I think that's also correct and consistent with the continuous-time model. Start with the continuous-time equation:

xidot = e = r - y, i.e., xi = int(e) = int(r-y).

One discrete time implementation for xi is then

xi[n+1] = xi[n] + Ts*xidot[n] = xi[n] + Ts*e[n] = xi[n] + Ts*(r[n] - y[n])

exactly as in the doc page, and consistent with the continuous time model.
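A minimal sketch of that difference equation (plain Python; the Ts, r, and y values here are just made-up placeholders) shows the accumulation:

```python
# Forward-Euler integrator state, per the doc page: xi[n+1] = xi[n] + Ts*(r[n] - y[n])
# (Ts, r, y are hypothetical placeholder values)
Ts = 1e-3
r, y = 1.0, 0.0        # a persistent unit error, e.g. right after a reference step
xi = 0.0
for n in range(100):
    xi = xi + Ts*(r - y)
print(xi)              # 100 steps of unit error at Ts = 1e-3 -> xi ~= 0.1
```

As long as the error persists, xi keeps ramping; it only stops growing when y reaches r, which is exactly the mechanism that kills the steady-state error.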

For me, I think the doc page is fine. But that's just me. If you think clarification is warranted, you can click on a star at the bottom of the page and then (I think) you get a pop-up window to add additional comments for the doc writer to consider.

On another note, I see that you're using the reference command to form a position error signal for feedback through K1, in addition to the integral of the error through K3. Nothing wrong with that, as long as you realize the effect that has on the zeros of the closed-loop transfer functions from r to y, r to x2, and r to u.

John
on 22 Mar 2023

1) "The reason K3 is negative is because we (typically) want a positive feedback gain on the error signal, which is actually a negative feedback loop because of the negative sign on the feedback that forms the error (just think of typcial proportinal control, for example)."

2) "I see that you're using the reference command to form a position error signal for feedback through K1 in addition to the integral of the error through K3. Nothing wrong with that as long as you realize the effect that has on the zeros of the closed-loop transfer functions from r to y, r to x2, and from r to u. "

Interesting; and I'll check the poles. I thought what I had was the formulation, given that the basic LQI formulation is

u = -K*z = -K*xi - K*x

which turns into the following, if doing ref tracking where -K*x is replaced with +K*(r-x) (as you suggested earlier):

u = -K*z = -K*xi + K*(r - x)

If I take the "u" that the MATLAB documentation diagram (and other LQI examples) shows, it seemed like it would instead be the following, i.e. a different command from the derived formulation:

u = -K*z = - Ki*xi + K(Ki*xi - x) =

=(K - 1)*Ki*xi - K*x

That's why I thought the top version of this image is what's correct; the bottom is what you and all other diagrams show as correct.

How does the bottom one match the u equation as it was derived from u = -K*z?

With my formulation of the state-space (see (3) below), the bottom version's step risetime is much slower (~15 ms) compared with the top version (~4 ms). The ~4 ms seemed to better match your step/dist plots obtained from an earlier post -- but I think our systems are different, so maybe that's why :) Here was your plot:

Here's my SL output from the bottom version (~15 ms):

and from the top version (~4 ms):

3) Earlier, you'd written:

"The closed loop system can also be formed via the matrix math approach as well. Start with

zdot = Aaug * z + Baug * u + [0;0;1]*r + [F;0]*d

Then sub u = - K_lqr*z"

and

"tempsys = ss(Aaug-Baug*K_lqr,[[0;0;1] , [F;0] ],[1 0 0],0);

ss(Aaug-Baug*K_lqr,[[0;0;1] , [F;0] ],[1 0 0],0);"

(your Aaug = [A, 0; -C, 0])

So, now I see that the B I'm using is different from your B = [0; 0; 1]. I'm using:

B_lqi = [physSys.B * K_lqi * [1; 0; 0]; 1];

But i think my A matrix is the same as yours. The A I'm using:

Acl_lqi = [physSys.A - physSys.B * K_lqi(1:2), -physSys.B * K_lqi(3);

-physSys.C, 0]

I tried changing my B to your version, and my step responses were still very slow (~15 ms risetime).

I'm sure you're correct :), but it's unclear to me what's going on.

Probably the question of how the command signal "u" matches the bottom Simulink diagram will help clarify...

Paul
on 22 Mar 2023

2) "Interesting; and i'll check the poles."

If implemented correctly, the poles of the closed loop system will be the eigenvalues of (A - BK). Introduction of the reference input, no matter how that's done, can't change the closed loop poles. The zeros of the plant, if there are any, will also be zeros of the closed loop system. The method by which the reference input is introduced can add additional closed-loop zeros, and those additional zeros will influence the response.
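This point is easy to check numerically. Here is a NumPy sketch (Python rather than MATLAB so it runs standalone; the gain vector K1/K2/K3 is an arbitrary stabilizing placeholder, not the lqi() design, since the claim holds for any gain):

```python
import numpy as np

# Plant values from the thread (J = 1e-5, wn = 2*pi*40, zeta ~ 0.69)
J, wn, zeta = 1e-5, 2 * np.pi * 40, 0.69
b, k = zeta * 2 * wn * J, wn**2 * J

# Placeholder gains (NOT the lqi() design) -- the point holds for any K
K1, K2, K3 = 5.0, 0.02, -10.0

# Augmented closed loop, z = [theta; theta_dot; xi], xidot = r - theta.
# Both reference topologies share this same state matrix:
Acl = np.array([[0.0,          1.0,          0.0],
                [-(k + K1)/J,  -(b + K2)/J,  -K3/J],
                [-1.0,         0.0,          0.0]])

# Only the input matrix changes with the reference entry point
B_bottom = np.array([0.0, 0.0, 1.0])    # ref drives the integrator only
B_top    = np.array([0.0, K1/J, 1.0])   # ref also feeds the inner loop

poles = np.linalg.eigvals(Acl)  # closed-loop poles: identical for both

# Both topologies track a constant reference exactly (DC gain = 1),
# because at steady state xidot = r - theta = 0 forces theta = r
C = np.array([1.0, 0.0, 0.0])
dc_bottom = C @ np.linalg.solve(-Acl, B_bottom)
dc_top    = C @ np.linalg.solve(-Acl, B_top)
```

Since both topologies produce the same Acl, the pole sets are identical by construction; only the input matrix (and hence the zeros) differs.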

I don't understand how you're obtaining the equations for u, like u = (K - 1)*Ki*xi - K*x. It would be better to just use the same notation and subscripts as used in the code and diagram.

With the LQI design, the augmented state vector is z = [x ; xi]

First, let's consider the compensator equations assuming the reference input is zero. In this case, the compensator equations are:

xidot = -y_lqi

u_lqi = -[K_lqi(1) K_lqi(2) K_lqi(3)] * [x_lqi(1); x_lqi(2); xi]

= -K_lqi(3) * xi - K_lqi(1:2)*x_lqi

Both of the Simulink diagrams implement these two equations, again under the assumption that ref = 0.

Now we want to introduce ref. The first option is to only use ref to form the error signal that drives the integrator. The compensation equations are then:

xidot = ref - y_lqi

u_lqi = -K_lqi(3) * xi - K_lqi(1:2)*x_lqi

These are the equations implemented in the bottom diagram and the topology that I had used in my examples.

An alternative approach is to form an error signal on the inner position feedback, in addition to the error signal feeding the integrator

xidot = ref - y_lqi

u_lqi = -K_lqi(3) * xi + K_lqi(1)*(ref - x_lqi(1)) - K_lqi(2)*x_lqi(2)

These are the equations implemented in the top diagram (note the + sign on K_lqi(1) because it multiplies -x_lqi(1) !)

In the top diagram, the reference input directly, or immediately, influences u, whereas in the bottom diagram the reference input is tempered by the integrator. Therefore, you should expect the initial transient of y to react faster to a step input in the top diagram, which is what you've reported. Mathematically, this difference should be due to the difference in location of the closed-loop zeros between the two implementations.

overShootPec = 5;

zeta = abs(log(overShootPec/100)) / sqrt(pi^2 + log(overShootPec/100)^2); % note: pi^2 + ln(OS)^2, not pi^2 * ln(OS)^2

wn_rPs = 2 * pi * 40;

%% OL system

ampl = 1;

J_kgm2 = 1e-5;

b = zeta * 2 * wn_rPs * J_kgm2;

k = wn_rPs^2 * J_kgm2;

A = [0 1; -k/J_kgm2 -b/J_kgm2];

B = [0; 1/J_kgm2];

C = [1 0];

D = [0];

massSys = ss(A, B, C, D, ...

'StateName', {'theta', 'theta_dot'}, ...

'InputName', {'u'}, ...

'OutputName', {'theta'});

% define F so that disturbance is a torque

F = [0; 1/J_kgm2];

sysOlWithDist = ss(massSys.A, [massSys.B F] , eye(2),0);

sysOlWithDist.StateName = {'theta', 'theta_dot'};

sysOlWithDist.InputName = {'u', 'd'};

sysOlWithDist.OutputName = {'theta','theta_dot'};

%Define plant A and B augmented z = [x; xi], where xidot = -theta

Aaug = [massSys.a zeros(size(massSys.a,1),1);-massSys.c 0];

Baug = [massSys.b;-massSys.d];

%H matrix for symmetric root locus.

H = [2*0.7*1000 1 -1000^2];

%LQ design

format short e

Q = H.'*H;

R = 100;

[K_lqr, S1, P1] = lqi(massSys, Q, R);

% Form the closed loop system using connect. I still think in the long run

% this approach is going to be better than the explicit matrix math approach. Can easily add analysis points, actuator and sensor models, etc.

% This is the bottom diagram, the ref only comes into the integrator.

sysint = ss(0,1,1,0,'StateName','xi','OutputName','xi','InputName','e');

K = ss(-K_lqr,'InputName',{'theta','theta_dot','xi'},'OutputName','u');

S = sumblk('e = r - theta');

syscbottom = connect(sysOlWithDist,K,S,sysint,{'r','d'},'theta');

% This is the top diagram, the ref comes into the integrator and the proportional position feedback.

sysint = ss(0,1,1,0,'StateName','xi','OutputName','xi','InputName','e');

% changed the first input to K_lqr to 'e', and need to adjust the sign on

% K_lqr(1) because e includes the negative feedback

K = ss([K_lqr(1) -K_lqr(2:3)],'InputName',{'e','theta_dot','xi'},'OutputName','u');

S = sumblk('e = r - theta');

sysctop = connect(sysOlWithDist,K,S,sysint,{'r','d'},'theta');

% step response

step(syscbottom('theta','r'),sysctop('theta','r'))

As expected the top diagram has a faster rise time, which comes with more overshoot. The transfer functions are

zpk(syscbottom('theta','r'))

ans =
From input "r" to output "theta":
1e+10
-----------------------------
(s+9997) (s^2 + 1397s + 1e06)
Continuous-time zero/pole/gain model.

zpk(sysctop('theta','r'))

ans =
From input "r" to output "theta":
1.49e07 (s+671.2)
-----------------------------
(s+9997) (s^2 + 1397s + 1e06)
Continuous-time zero/pole/gain model.

As must be the case, the poles are the same. But the additional feedforward of ref results in a low frequency zero (relative to the dominant pole) that reduces the rise time and increases the overshoot.
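As a cross-check of this structure (same poles, one extra finite zero for the top topology), here is a NumPy/SciPy sketch with solve_continuous_are standing in for lqi(). Plant values are taken from the thread (with the overshoot-to-zeta formula written with + rather than *), so the numbers land close to, but not exactly on, the zpk output above:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Plant from the thread
J, wn = 1e-5, 2 * np.pi * 40
zeta = -np.log(0.05) / np.sqrt(np.pi**2 + np.log(0.05)**2)  # ~0.69
b, k = zeta * 2 * wn * J, wn**2 * J
A = np.array([[0.0, 1.0], [-k/J, -b/J]])
B = np.array([[0.0], [1.0/J]])
C = np.array([[1.0, 0.0]])

# Augment with the integrator state (xidot = -theta; ref added at run time)
Aaug = np.block([[A, np.zeros((2, 1))], [-C, np.zeros((1, 1))]])
Baug = np.vstack([B, [[0.0]]])

# LQI design: Q = H'*H with the H from this example, R = 100
H = np.array([[2 * 0.7 * 1000, 1.0, -1000.0**2]])
Q, R = H.T @ H, np.array([[100.0]])
P = solve_continuous_are(Aaug, Baug, Q, R)
K = np.linalg.solve(R, Baug.T @ P)   # 1x3 gain, u = -K*z

poles = np.linalg.eigvals(Aaug - Baug @ K)  # same for both topologies

# Dominant complex pair should sit near the zeros chosen via H
pair = poles[np.argmax(poles.imag)]
wn_cl, zeta_cl = abs(pair), -pair.real / abs(pair)

# Top topology: the r -> theta numerator works out to (K1*s - K3)/J,
# so the extra finite zero sits at s = K3/K1 (negative, since K3 < 0 < K1)
zero_top = K[0, 2] / K[0, 0]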

John
on 23 Mar 2023

1) "I don't understand how you're obtaining the equations for u, like u = (K - 1)*Ki*xi - K*x"

I see, thanks @Paul. I was trying to build off of the standard notation, the same way LQR was expanded to include ref tracking. But you're right, my notation was less clear (and/or wrong).

"An alternative approach is to form an error signal on the inner position feedback, in addition to the error signal feeding the integrator"

Yes, that's what I was trying to do via u = -Ki*xi + K*(r - x), with r = [1;0] step

u = -K(3)*xi + K(1:2)*(r - x) =

= -K(3)*xi + K(1)[r - x(1)] - K(2)*x(2)

, which I think yields what you had:

"u_lqi = -K_lqi(3) * xi + K_lqi(1)*(ref - x_lqi(1)) - K_lqi(2)*x_lqi(2)"

Good point about the - sign.

2) "As expected the top diagram has a faster rise time, which comes with more overshoot. "

"As must be the case, the poles are the same. But the additional feedforward of ref results in a low frequency zero (relative to the dominant pole) that reduces the rise time and increases the overshoot."

Good point regarding A. Then I'm less clear on the differences in results for what you had above, and what i'm seeing. For some reason my ML and SL implementation shows vastly different risetimes if feeding ref to the inner loop, or having the inner loop feed off the integrator output. I'll have to investigate...

John
on 23 Mar 2023

Ah -- I see the difference. The formulation I had works, but my gainset was very different -- i.e., Q not intentionally designed via pole placement and H.'*H -- so the bottom version's time constant differed by a factor of ~10 from yours. When using your gains, it works as expected.

In other words, I think since the integrator gain was smaller relative to x1 tracking, the inner loop ref input (Ki*xi output feeding -K*x) was very slow. Hence, bypassing the integrator with the separate ref input to the inner loop, masked that issue.

What's the purpose of the "." before the transpose in H.'*H? It seems H'*H gives the same output?

Paul
on 23 Mar 2023

The .' operator is shorthand for transpose.

The ' operator is shorthand for ctranspose (complex conjugate transpose).

They produce the same results for real inputs, and different results for complex inputs.
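For comparison, the NumPy analogues are `.T` (plain transpose, like MATLAB's .') and `.conj().T` (conjugate transpose, like '):

```python
import numpy as np

H = np.array([[2 * 0.7 * 1000, 1, -1000**2]])   # real row vector
z = np.array([[1 + 2j, 3 - 1j]])                # complex row vector

# Real input: transpose and conjugate transpose agree, so H.'*H == H'*H
same_for_real = np.array_equal(H.T @ H, H.conj().T @ H)

# Complex input: ' (ctranspose) also conjugates, .' does not
plain_t = z.T          # [[1+2j], [3-1j]]
ctrans  = z.conj().T   # [[1-2j], [3+1j]]
```

So for the real H used here the two spellings are interchangeable; .' is just the safe habit for when data might be complex.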

John
on 23 Mar 2023

Edited: John
on 23 Mar 2023

Got it, thanks @Paul. I tried searching for " .' " , but that was really hard to find given that it's just punctuation...

1) To better understand the gain questions, I'm going back over what you wrote earlier regarding H --> G. Ie, define H to find a Q for LQR/I (3 Mar 2023 at 4:03)

What's the purpose of 2*0.7 in H; some time constant conversion? And why do you square in -1000^2? Is it just to make the magnitude > 0, or perhaps you're relating it to H(1) (but not sure why squared)?

H = [2*0.7*1000 1 -1000^2]

2) Also, you mentioned [my emphasis in bold]:

"On this plot, the open loop poles are the poles of the plant and their reflections around the jw axis, and the open loop zeros are the zeros of G, which we determine through selection of H, and their reflection around the jw axis. The plot shows migration of four closed loop poles. The two that are stable our our closed loop poles because we know that that the closed loop system must be stable. For R large, the LHP root locus starts at the open loop poles. As R decreases, the closed loop poles in the LHP migrate to the real axis with one headed to s = -1000 and the other heading to the left towards infinity.

This plot shows why LQR with Q formed from row vector H results in a real pole and another migrating toward infinity as R -> 0. "

The root locus seems to show that gain = 0 has poles on the jw axis -- assuming the open loop poles correspond to gain = 0, and so do the two poles near the jw axis.

You mentioned large R means starting at jw axis.

Just to check, is this because small R implies minimal control restrictions, ie large gain?

If so, is there any way to map R to poles at a given gain, when G = ss(A, B, H, 0)?

Or is the symmetric root locus more to understand the bounds of R qualitatively?

Just wanted to check if I was missing something in what you showed with H --> G * Gr, and how it relates to R more specifically.

Paul
on 23 Mar 2023

Edited: Paul
on 24 Mar 2023

The H you're referencing is actually from my comment of 7 March 2023 1:08. So we'll go with that example.

Recall that the strategy is to define an H such that the zeros of G(s) = H*inv(sI - Aaug)*Baug are two desired closed loop poles of our feedback system, call it clsys. So we have

overShootPec = 5;

zeta = abs(log(overShootPec/100)) / sqrt(pi^2 + log(overShootPec/100)^2); % note: pi^2 + ln(OS)^2, not pi^2 * ln(OS)^2

wn_rPs = 2 * pi * 40;

%% OL system

ampl = 1;

J_kgm2 = 1e-5;

b = zeta * 2 * wn_rPs * J_kgm2;

k = wn_rPs^2 * J_kgm2;

A = [0 1; -k/J_kgm2 -b/J_kgm2];

B = [0; 1/J_kgm2];

C = [1 0];

D = [0];

massSys = ss(A, B, C, D, ...

'StateName', {'theta', 'theta_dot'}, ...

'InputName', {'u'}, ...

'OutputName', {'theta'});

%Plant with disturbance

% define F so that disturbance is a torque

F = [0; 1/J_kgm2];

sysOlWithDist = ss(massSys.A, [massSys.B F] , eye(2),0);

sysOlWithDist.StateName = {'theta', 'theta_dot'};

sysOlWithDist.InputName = {'u', 'd'};

sysOlWithDist.OutputName = {'theta','theta_dot'};

%Define plant A and B augmented z = [x; xi], where xidot = -theta

Aaug = [massSys.a zeros(size(massSys.a,1),1);-massSys.c 0];

Baug = [massSys.b;-massSys.d];

Now, we can define G(s) symbolically as follows

H = sym('H',[1 3])

H =
[H1, H2, H3]

syms s

G(s) = H*inv(s*eye(3)-Aaug)*Baug;

We are only interested in the zeros of G(s) as determined from its numerator

num = numden(G(s))

num =
100000*H2*s^2 + 100000*H1*s - 100000*H3

Factor out the common gain

factor(num)

ans =
[100000, H2*s^2 + H1*s - H3]

We see that the zeros of G(s) are the roots of

H2*s^2 + H1*s - H3 = 0

Using the standard second order form

s^2 + 2*zeta*wn*s + wn^2

we see that we must have

H2 = 1, H1 = 2*zeta*wn, H3 = -wn^2.

I chose zeta = 0.7 and wn = 1000 as two of my desired closed loop poles based on notional time domain specifications.
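A quick numeric check (NumPy sketch) that this H really puts the zeros of G(s) at the intended second-order pair, both from the polynomial and from the state-space G itself (plant values from the thread, with the corrected zeta formula):

```python
import numpy as np

# Intended closed-loop pair: zeta = 0.7, wn = 1000 rad/s
H1, H2, H3 = 2 * 0.7 * 1000, 1.0, -1000.0**2

# Zeros of G(s) are the roots of H2*s^2 + H1*s - H3 = 0
z = np.roots([H2, H1, -H3])
wn_check = abs(z[0])                 # natural frequency they encode
zeta_check = -z[0].real / abs(z[0])  # damping ratio they encode

# Evaluate G(s) = H*inv(s*I - Aaug)*Baug at a zero: it should vanish
J, wn_p = 1e-5, 2 * np.pi * 40
zeta_p = -np.log(0.05) / np.sqrt(np.pi**2 + np.log(0.05)**2)
b, k = zeta_p * 2 * wn_p * J, wn_p**2 * J
Aaug = np.array([[0, 1, 0], [-k/J, -b/J, 0], [-1.0, 0, 0]])
Baug = np.array([[0.0], [1.0/J], [0.0]])
H = np.array([[H1, H2, H3]])
G_at_zero = (H @ np.linalg.solve(z[0] * np.eye(3) - Aaug, Baug))[0, 0]
```

The roots come out at -700 +/- 714.1j, i.e. wn = 1000 and zeta = 0.7 exactly, and G evaluated there is zero to machine precision.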

The closed loop poles of clsys are the left-half-plane (LHP) roots of the equation

1 + (1/R)*G(s)*G(-s) = 0 (recall that Gr(s) = G(-s))

So the closed loop poles of clsys are determined by the rlocus of G(s)*G(-s) and the rlocus gain is 1/R. Note that the zeros of G(s)*G(-s) are the zeros of G(s) and their reflections around the jw axis. Similarly for the poles of G(s)*G(-s).

When R is very large and control is expensive, 1/R is very small, the rlocus gain is near zero, and the closed loop poles of clsys start on the root locus at the LHP poles of G(s)*G(-s). As R decreases and control becomes cheaper, the rlocus gain increases, two closed loop poles of clsys migrate to the LHP zeros of G(s)*G(-s), and the third pole heads off to -inf. Again, we chose the location of the zeros of G(s)*G(-s) via selection of H.

Addressing the specific questions:

"The root locus seems to show that gain = 0 has poles on the jw axis -- assuming the open loop poles are with gain = 0, and so are the two poles near the jw axis.

You mentioned large R means starting at jw axis."

I said that large R means starting at the LHP poles of G(s)*G(-s), which in this problem are the poles of G(s), and they are near the jw axis because of the small damping in the plant model. But that's just a property of the plant, not the method. When R is large, the closed loop poles of clsys will be near the LHP poles of G(s)*G(-s), wherever they happen to be. If G(s) has a pole on the jw axis as in this problem, the corresponding closed loop pole of clsys will be just to the left of it, for finite R.

"Just to check, is this because small R implies minimal control restrictions, ie large gain?"

Yes, small R implies cheap control. With cheap control we can move the plant poles to new locations in the complex plane. Of course, doing so might require high gain, which can come with a host of other issues, but mathematically that's what's going on.

"If so, is there any way to map R to poles at a given gain, when G = ss(A, B, H, 0)?"

That's exactly what we're doing, assuming the "given gain" means the root locus gain. Thus, for a given value of R, the closed loop poles of clsys are the subset of LHP poles returned from

r = rlocus(G*Gr,1/R)

If you're asking if there's a way to go directly to closed-loop poles of clsys using just G(s) and R without forming Gr, then you can always just use lqi() to compute the gains from Q = H'*H and any value of R that you wish. If you do this in a loop and plot the closed loop poles of clsys from each iteration you'd just be getting the LHP portion of the symmetric root locus.
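That loop can be sketched numerically (NumPy/SciPy, with solve_continuous_are standing in for lqi(); plant values from the thread): as R falls, the dominant pair walks down the LHP branch of the SRL toward the zeros picked via H, and the third pole runs off toward -inf.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

J, wn = 1e-5, 2 * np.pi * 40
zeta = -np.log(0.05) / np.sqrt(np.pi**2 + np.log(0.05)**2)
b, k = zeta * 2 * wn * J, wn**2 * J
Aaug = np.array([[0, 1, 0], [-k/J, -b/J, 0], [-1.0, 0, 0]])
Baug = np.array([[0.0], [1.0/J], [0.0]])
H = np.array([[2 * 0.7 * 1000, 1.0, -1000.0**2]])
Q = H.T @ H

# Upper-half-plane zero of G(s) that the dominant pair approaches as R -> 0
target = 1000 * (-0.7 + 1j * np.sqrt(1 - 0.7**2))

def cl_poles(R):
    P = solve_continuous_are(Aaug, Baug, Q, np.array([[R]]))
    K = Baug.T @ P / R
    return np.linalg.eigvals(Aaug - Baug @ K)

dist, far = {}, {}
for R in (1e6, 1e2, 1e-2):             # expensive -> cheap control
    p = cl_poles(R)
    pair = p[np.argmax(p.imag)]        # upper pole of the dominant pair
    dist[R] = abs(pair - target)       # shrinks as R decreases
    far[R] = p.real.min()              # heads off toward -inf
```

Plotting the three pole sets on one axis gives exactly the LHP half of the symmetric root locus, parameterized by R.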

"Or is the symmetric root locus more to understand the bounds of R qualitatively?"

Well, the SRL is a way to visually see how the closed loop poles of clsys migrate as a function of R for a given H. I wouldn't call that qualitative because we can easily map the quantitative value of R to corresponding closed loop pole locations.

John
on 24 Mar 2023

That's very helpful, thanks @Paul for the explanation. (Also I didn't know matrices could be created symbolically like that)

1) If the rlocus gain is 1/R, then physically, why does the dominant CL pole (CL bandwidth) become slower as R gets lower (cheaper), after the gain causes the CL poles to hit the real axis? As you've mentioned before, one mode moves far left to -inf (fast), but the dominant pole slows down and moves right, towards the Im axis.

I'm not sure why easier control would slow things down -- unless it causes instability of some kind, but I thought LQI sets an optimum gainset that guarantees gain/phase margins (i.e., stability).

2) "We see that the zeros of G(s) are the roots of

H2*s^2 + H1*s - H3 = 0

Using the standard second order form

s^2 + 2*zeta*wn*s + wn^2

we see that we must have

H2 = 1, H1 = 2*zeta*wn, H3 = -wn^2."

Thanks. This is the clearest explanation i've seen of this (though i read a bit online).

How do you know to pick the standard 2nd-order form? I.e., how do you know these CL zeros should correspond to a physical 2nd-order system's form, vs anything else? (As far as I know, not every 2nd-order CL TF is a 2nd-order physical representation.)

Paul
on 25 Mar 2023

1) Are you referring to the rlocus plot in this comment? If so, the reason the dominant closed loop pole migrates to the right is because it has to migrate toward that zero at s = -1000. By the rules of the root locus, the two branches have to meet on the real axis to the left of that zero, and then one root moves off to the left towards -inf and the other to the right towards the zero at s = -1000. That's the only direction it can go. You seem to be expecting that as R gets smaller both closed loop roots should migrate further to the left, but that's not how the math works when Q = H'H (H a row vector), in which case m roots will migrate to the m zeros of G(s) and the remaining n-m roots will migrate to infinity in a Butterworth pattern IIRC (n is the number of states in the plant). Depending on the selection of H, m could be anywhere from 0 to n-1.

If you're not referring to that comment, please clarify what you are referring to.

2) I picked the standard 2nd order form because that seemed to be the kind of response I thought you were trying to achieve, i.e., a well-damped second order response with a time constant faster than that of the plant. The zeros of G(s) aren't closed-loop zeros, unless it happens to be the case that H = C.

### More Answers (1)

Sam Chak
on 1 Mar 2023

Hi @John

If I'm not mistaken, the result is due to the disturbance d being injected at the e_dot level, when I think it should be injected at the x_dot level.

Try if you get the expected result with this one:

F = [0; 1; 0];
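The difference between the two injection points can be checked at DC: a disturbance entering at the x_dot level is fully rejected by the integrator, while one injected at the e_dot level just looks like a setpoint shift, which matches the symptom in the question. NumPy sketch with placeholder gains (any stabilizing K gives the same DC result):

```python
import numpy as np

J, wn, zeta = 1e-5, 2 * np.pi * 40, 0.69   # plant values from the thread
b, k = zeta * 2 * wn * J, wn**2 * J
K1, K2, K3 = 5.0, 0.02, -10.0              # placeholder gains, not lqi()'s

# Closed loop, z = [theta; theta_dot; xi], xidot = -theta (+ injections)
Acl = np.array([[0.0, 1.0, 0.0],
                [-(k + K1)/J, -(b + K2)/J, -K3/J],
                [-1.0, 0.0, 0.0]])
C = np.array([1.0, 0.0, 0.0])

Bd_xdot = np.array([0.0, 1.0/J, 0.0])  # torque disturbance at the x_dot level
Bd_edot = np.array([0.0, 0.0, 1.0])    # (wrong) injection at the e_dot level

dc_xdot = C @ np.linalg.solve(-Acl, Bd_xdot)  # -> 0: integrator rejects it
dc_edot = C @ np.linalg.solve(-Acl, Bd_edot)  # -> 1: acts like a setpoint shift
```

The steady-state argument is the same in both cases: xidot = 0 forces theta to equal whatever enters the integrator equation, which is 0 for the x_dot injection and the "disturbance" itself for the e_dot injection.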

##### 4 Comments

John
on 1 Mar 2023

Edited: John
on 1 Mar 2023

Thanks @Sam Chak -- you're right, good catch! Y seems to track ref, and integral action is adding effort when the disturbance is added.

1) If I change to [0; 1; 0] then here's what is plotted.

Updated .m file attachment, and new figure plus a zoom on the disturbance transition

.m file update:

- needed to add 2e4 in uLqrInt = [ref 2e4*dPosAndVel], for the disturbance to substantially impact the system
- F = [0; 1; 0]

2) Any thoughts on why the integral gain, K_lqi(3), needs to be so much larger than what lqi() outputs? It might have to do with my Q gains (integral error gain relative to position gain). If I increase Q(3,3) to 5e3 instead of 1, I still need an extra 50x multiplier: 50 * K_lqi(3).

Sam Chak
on 1 Mar 2023

Edited: Sam Chak
on 1 Mar 2023

The computed gain matrix is influenced by your selection of the weight Q(3,3) relative to Q(1,1) in

Q_lqi = [1000 0 0;

0 0.001 0;

0 0 1];

You can try gradually increasing the value, for example

Q_lqi(3,3) = 1e2

Q_lqi = 3×3

1.0e+03 *
1.0000 0 0
0 0.0000 0
0 0 0.1000

until the desired performance is achieved. Also, instead of using a multiplier, you can decrease the scalar weighting factor R, which penalizes the control u in the cost function; cheaper control yields larger gains.
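This sweep can be sketched directly (SciPy's ARE solver standing in for lqi(); plant values from the thread; R = 1 is an arbitrary choice here, not the value from the attached .m file): growing Q(3,3) grows the magnitude of the integral gain K(3).

```python
import numpy as np
from scipy.linalg import solve_continuous_are

J, wn = 1e-5, 2 * np.pi * 40
zeta = -np.log(0.05) / np.sqrt(np.pi**2 + np.log(0.05)**2)
b, k = zeta * 2 * wn * J, wn**2 * J
Aaug = np.array([[0, 1, 0], [-k/J, -b/J, 0], [-1.0, 0, 0]])
Baug = np.array([[0.0], [1.0/J], [0.0]])
R = np.array([[1.0]])                # arbitrary control weight (assumption)

k3_mag = []
for q33 in (1.0, 1e2, 1e4):          # the thread's Q with a growing Q(3,3)
    Q = np.diag([1000.0, 0.001, q33])
    P = solve_continuous_are(Aaug, Baug, Q, R)
    K = np.linalg.solve(R, Baug.T @ P)
    k3_mag.append(abs(K[0, 2]))      # integral gain magnitude
```

So a small lqi() integral gain usually signals a small Q(3,3) (or a large R) rather than anything wrong with the augmentation.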
