AE353 Homework #4: Optimal Control Design solution


1. You have seen that the spring-mass-damper system shown above can be described in state-space form as

\[
\dot{x} =
\begin{bmatrix}
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
-8 & 4 & -1 & 0.5 \\
4 & -4 & 0.5 & -0.5
\end{bmatrix}
x +
\begin{bmatrix}
0 \\ 0 \\ 0 \\ 0.5
\end{bmatrix}
u
\]

where x1 and x2 are the absolute displacements of each mass from its equilibrium position,
x3 and x4 are the corresponding velocities of each mass, and u is the applied force. In what
follows, consider an input of the form

\[
u = -Kx + k_{\text{reference}}\, r
\]

where K and k_reference are gains and r is a reference signal.
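Note that substituting this input into ẋ = Ax + Bu gives the closed-loop system

\[
\dot{x} = (A - BK)\, x + B\, k_{\text{reference}}\, r,
\]

so the feedback gain K shapes the closed-loop dynamics while the feedforward gain k_reference scales the reference into the input; this is the structure analyzed in parts (c) and (d) below.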
(a) Suppose

\[
y = Cx
\]

and we want to design optimal state feedback that minimizes the cost

\[
\int_0^\infty \left( \|y\|^2 + \rho u^2 \right) dt. \tag{1}
\]

Find the matrices Q and R for which (1) is equivalent to the standard LQR cost

\[
\int_0^\infty \left( x^T Q x + u^T R u \right) dt.
\]

(Your answer will be in terms of C and ρ.)
(b) Find the optimal choice of K for Q and R as defined in part (a), given ρ = 10⁻⁶ and . . .
• the output is the displacement x1 of the first mass;
• the output is the displacement x2 of the second mass;
• the output is the difference x2 − x1, i.e., the amount of stretch in the second spring.
(c) Compute the gain k_reference so that y = r in steady-state for each of the three choices of
K that you found in part (b).
(d) Compute and visualize the step response of the closed-loop system for each of the three
choices of K and k_reference that you found in parts (b)-(c) using the script hw4prob01.m.
Submit only the lines of code you added to this script and a snapshot of the figure after
each simulation has ended. Briefly explain what you observed.
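One way parts (a)-(c) might be set up in MATLAB (assuming the Control System Toolbox's lqr is available; this is not the contents of hw4prob01.m, and the variable names are only illustrative):

    % System matrices from the problem statement
    A = [ 0  0  1    0;
          0  0  0    1;
         -8  4 -1    0.5;
          4 -4  0.5 -0.5];
    B = [0; 0; 0; 0.5];

    % One of the three output choices in part (b), e.g. y = x1
    C = [1 0 0 0];
    rho = 1e-6;

    % Part (a): since ||y||^2 = x'*(C'*C)*x, take Q = C'*C and R = rho
    Q = C' * C;
    R = rho;

    % Part (b): optimal state-feedback gain
    K = lqr(A, B, Q, R);

    % Part (c): feedforward gain so that y = r in steady state,
    % i.e. kref = -1 / ( C * (A - B*K)^(-1) * B )
    kref = -1 / (C * ((A - B*K) \ B));

Repeating with C = [0 1 0 0] and C = [-1 1 0 0] covers the other two output choices.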
2. You have seen that the motion of a robot arm with one revolute joint can be described in
state-space form as

\[
\dot{x} =
\begin{bmatrix}
0 & 1 \\
0 & -1/5
\end{bmatrix}
x +
\begin{bmatrix}
0 \\ 1/5
\end{bmatrix}
u
\qquad
y =
\begin{bmatrix}
1 & 0
\end{bmatrix}
x
\]

where the state elements are the angle (x1) and the angular velocity (x2) of the joint, and
the input u is a torque applied to the arm at the joint. In what follows, consider an input of
the form

\[
u = -Kx + k_{\text{reference}}\, r
\]

where K and k_reference are gains and r is a reference signal.
(a) Feedback design. Choose K to minimize the standard LQR cost

\[
\int_0^\infty \left( x^T Q x + u^T R u \right) dt
\]

where

\[
Q =
\begin{bmatrix}
100 & 0 \\
0 & 1
\end{bmatrix}
\qquad \text{and} \qquad
R =
\begin{bmatrix}
0.1
\end{bmatrix}.
\]
(b) Feedforward design. Compute the gain k_reference so that y = r in steady-state.
(c) Analysis. Compute the closed-loop eigenvalues. Using this result, predict the time to
peak and the peak overshoot of the unit step response.
(d) Simulation. Compute the response of the closed-loop system to a reference signal
r(t) = π/2 for all t ≥ 0
using the script hw4prob02.m. Submit only the lines of code you added to this script
and a snapshot of the figure after the simulation has ended. Are your results consistent
with the prediction you made in part (c)?
(e) Design Iteration. Will an increase or a decrease in R reduce the time to peak? Check
your guess with analysis and simulation.
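A sketch of how parts (a)-(c) might be checked numerically (again assuming lqr from the Control System Toolbox; this is not the contents of hw4prob02.m):

    % Robot-arm model from the problem statement
    A = [0 1; 0 -1/5];
    B = [0; 1/5];
    C = [1 0];

    % Part (a): LQR gain with the given weights
    Q = [100 0; 0 1];
    R = 0.1;
    K = lqr(A, B, Q, R);

    % Part (b): feedforward gain so that y = r in steady state
    kref = -1 / (C * ((A - B*K) \ B));

    % Part (c): closed-loop eigenvalues; if they form a complex-conjugate
    % pair -zeta*wn +/- j*wd, the usual second-order formulas predict
    % t_peak = pi/wd and overshoot = exp(-zeta*pi/sqrt(1 - zeta^2))
    e    = eig(A - B*K);
    wn   = abs(e(1));
    zeta = -real(e(1)) / wn;
    wd   = wn * sqrt(1 - zeta^2);
    t_peak    = pi / wd
    overshoot = exp(-zeta * pi / sqrt(1 - zeta^2))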
3. You have seen that the rotational motion of an axisymmetric spacecraft about its yaw and
roll axes can be described in state-space form as

\[
\dot{x} =
\begin{bmatrix}
0 & \lambda \\
-\lambda & 0
\end{bmatrix}
x +
\begin{bmatrix}
1 \\ 0
\end{bmatrix}
u
\]

where the state elements x1 and x2 are the angular velocities about the yaw and roll axes, the
input u is an applied torque, and the parameter λ = 9 is the relative spin rate. Last week,
you showed that this system was controllable, and applied state feedback

\[
u = -Kx
\]

to place the closed-loop eigenvalues in various locations. This week, you will apply optimal
state feedback, where K is chosen to minimize the cost

\[
\int_0^\infty \left( \|x\|^2 + \rho u^2 \right) dt \tag{2}
\]

for some ρ > 0, where ‖x‖ is the length of x (i.e., the standard Euclidean “2-norm”).
(a) Find the matrices Q and R for which (2) is equivalent to the standard LQR cost

\[
\int_0^\infty \left( x^T Q x + u^T R u \right) dt.
\]
(b) Find a value of ρ for which the optimal K results in a closed-loop system that is:
• under-damped (i.e., has eigenvalues that are complex conjugates);
• critically-damped (i.e., has eigenvalues that are approximately equal);
• over-damped (i.e., has eigenvalues that are real and distinct).
On a single figure, plot the three sets of closed-loop eigenvalues.
(c) For each of the cases in part (b), plot the closed-loop state response x(t) and input
response u(t) for the initial condition

\[
x_0 =
\begin{bmatrix}
10 \\ 0
\end{bmatrix}
\]

using the script hw4prob03.m. This script does everything for you; all you need to do is
put it in your working directory and call hw4prob03(K,x0) with an appropriate choice
of gain matrix K and initial condition x0. Submit only a snapshot of the figure after each
simulation has ended. Briefly explain how the response changes with ρ.
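A minimal sketch of how part (b) might be explored in MATLAB (the ρ values below are illustrative starting points for the sweep, not the answers; lqr is again assumed to be available):

    % Spacecraft model from the problem statement
    lambda = 9;
    A = [0 lambda; -lambda 0];
    B = [1; 0];

    % Part (a): ||x||^2 = x'*x, so Q is the identity and R = rho
    figure; hold on;
    for rho = [1e-2, 1, 1e2]          % sweep rho and inspect the damping
        K = lqr(A, B, eye(2), rho);
        e = eig(A - B*K);             % complex pair -> under-damped,
                                      % real and distinct -> over-damped
        plot(real(e), imag(e), 'x');  % accumulate all sets on one figure
    end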
4. In this problem, you will apply the Hamilton-Jacobi-Bellman equation to derive an optimal
controller for a scalar state-space system. As you have seen in class, this controller can be
found by solving the following optimal control problem:

\[
\begin{aligned}
\underset{u_{[t_0, t_1]}}{\text{minimize}} \quad & m\, x(t_1)^2 + \int_{t_0}^{t_1} \left( q\, x(t)^2 + r\, u(t)^2 \right) dt \\
\text{subject to} \quad & \frac{dx(t)}{dt} = a\, x(t) + b\, u(t) \\
& x(t_0) = x_0
\end{aligned}
\tag{3}
\]

where q ≥ 0, r > 0, and m ≥ 0. This optimal control problem has the general form

\[
\begin{aligned}
\underset{u_{[t_0, t_1]}}{\text{minimize}} \quad & h(x(t_1)) + \int_{t_0}^{t_1} g(x(t), u(t))\, dt \\
\text{subject to} \quad & \frac{dx(t)}{dt} = f(x(t), u(t)) \\
& x(t_0) = x_0.
\end{aligned}
\tag{4}
\]

The Hamilton-Jacobi-Bellman equation for a problem of this form is

\[
-\frac{\partial v(t, x)}{\partial t} = \min_{u} \left( \frac{\partial v(t, x)}{\partial x} f(x, u) + g(x, u) \right), \tag{5}
\]

where v(t, x) is the value function. Since at time t1 we clearly have

\[
v(t_1, x) = h(x) = m x^2, \tag{6}
\]

it makes sense to guess a value function of the form

\[
v(t, x) = p(t)\, x^2, \tag{7}
\]

where p is some function of time that remains to be derived. Proceed as follows:
(a) Find f, g, and h by comparing (3) with (4).
(b) Find ∂v/∂t and ∂v/∂x by taking partial derivatives of (7).
(c) Plug f, g, h, ∂v/∂t, and ∂v/∂x into (5).
(d) Minimize the right-hand side of (5) with respect to u, either by “completing the square”
or by applying the “first and second derivative test.”
(e) Use your result from part (d) to find the gain k for which u = −kx is the optimal input.
Your answer should depend on p.
(f) Use your result from part (d) to write an ordinary differential equation that must be
satisfied by p. (Hint: equate coefficients.) Write the boundary condition for this ODE
by equating coefficients of (7), evaluated at t = t1, with (6).
(g) If t1 is finite, then the gain k is time-varying. Suppose t1 → ∞. It is a remarkable fact
that p—hence, k—tends to a steady-state value. Write an equation that characterizes
this steady-state value. It should be quadratic in p, and should look familiar to you. You
have now found the “steady-state” or “infinite-horizon” optimal controller—it is characterized by the quadratic equation that you derived just now for p and by the expression
for k in terms of p that you derived in part (e).
CONGRATULATIONS! THIS IS A HUGE RESULT.
(h) What happens to k as (q/r) → 0 and how will the controller behave?
(i) What happens to k as (q/r) → ∞ and how will the controller behave?
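As a sketch of where parts (c)-(g) lead (carrying out the minimization in (5) by setting the derivative with respect to u to zero), substituting the scalar problem data and the guess (7) gives

\[
-\dot{p}(t)\, x^2 = \min_{u} \left( 2 p(t) x \left( a x + b u \right) + q x^2 + r u^2 \right),
\]

whose minimizer is u = -(b p(t)/r) x, i.e., k = b p / r. Equating coefficients of x² then gives the Riccati differential equation

\[
-\dot{p} = 2 a p + q - \frac{b^2 p^2}{r}, \qquad p(t_1) = m,
\]

and setting ṗ = 0 yields the quadratic (scalar algebraic Riccati) equation that characterizes the steady-state value of p, which should match what you derive in parts (f)-(g).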