Continuous LQR
This tutorial shows the continuous-time control path on the double integrator.
We will build a continuous state-space model, design an LQR controller through
care(), simulate the closed loop with the Diffrax-backed continuous
simulate() path, and finish with a small gradient check.
Runnable script: examples/continuous_lqr.py
This tutorial spans Systems, Control, and Simulation, but it stays on the continuous-time branch all the way through.
The control problem is the continuous LQR setup

$$\dot{x}(t) = A\,x(t) + B\,u(t),$$

with infinite-horizon objective

$$J = \int_0^\infty \bigl(x(t)^\top Q\,x(t) + u(t)^\top R\,u(t)\bigr)\,dt.$$
Define The Continuous System
The double integrator is a standard first continuous-time example: position and velocity are the state, and acceleration is the control input.
import jax
jax.config.update("jax_enable_x64", True)
import jax.numpy as jnp
import numpy as np
import contrax as cx
A = jnp.array([[0.0, 1.0], [0.0, 0.0]])
B = jnp.array([[0.0], [1.0]])
C = jnp.eye(2)
D = jnp.zeros((2, 1))
SYS = cx.ss(A, B, C, D)
Q = jnp.eye(2)
R = jnp.array([[1.0]])
X0 = jnp.array([1.0, 0.0]) # start at position=1, velocity=0
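Before designing a controller it is worth confirming that this model is controllable; for the double integrator the controllability matrix [B, AB] has full rank. A minimal sanity check in plain NumPy, independent of the contrax code above:

```python
import numpy as np

# Double integrator from the tutorial: position/velocity state, acceleration input.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])

# Controllability matrix [B, AB] must have rank n = 2 for LQR to stabilize
# any initial state.
ctrb = np.hstack([B, A @ B])
assert np.linalg.matrix_rank(ctrb) == 2
```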
Design The Controller And Simulate
For ContLTI systems, lqr() dispatches through care(). The resulting gain
is then used in a continuous closed-loop simulation with explicit duration
and sample spacing dt.
The resulting closed-loop dynamics are

$$\dot{x}(t) = (A - BK)\,x(t).$$
def design_and_simulate(q_scale: float = 1.0, duration: float = 10.0):
    """Design a CARE-backed LQR and simulate the closed-loop response.

    Args:
        q_scale: Scalar multiplier on Q. Increase for faster settling.
        duration: Simulation duration in seconds.

    Returns:
        dict with ts, xs, K, closed-loop poles, and the Riccati residual norm.
    """
    result = cx.lqr(SYS, q_scale * Q, R)
    K = result.K
    ts, xs, _ = cx.simulate(SYS, X0, lambda t, x: -K @ x, duration=duration, dt=0.05)
    return {
        "ts": ts,
        "xs": xs,
        "K": K,
        "poles": result.poles,
        "residual_norm": result.residual_norm,
    }
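As a sanity check independent of contrax, the same gain can be reproduced with SciPy's CARE solver. For this system the analytic gain is [1, sqrt(3)]; a sketch using only NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

# Solve A'P + PA - P B R^{-1} B' P + Q = 0 for P, then K = R^{-1} B' P.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

assert np.allclose(K, [[1.0, np.sqrt(3.0)]])   # analytic gain [1, sqrt(3)]
poles = np.linalg.eigvals(A - B @ K)
assert np.all(poles.real < 0)                  # matches -0.866 +/- 0.5j
```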
This workflow is part of the public API and is backed by explicit residual and
stability checks in eager mode. The main caveat is comparative maturity:
care() is less benchmarked than dare(), but it is a supported solver path.
That caveat matters for the docs structure too: continuous LQR is documented as a real supported workflow, but it is not presented as the most battle-tested slice of the library.
Check That Gradients Stay Finite
The continuous path is also differentiable in representative cases. The example
keeps that claim modest and concrete by checking only that a scalar gradient
through care() and continuous simulate() stays finite.
def settling_cost(log_q_scale: float) -> float:
    """Scalar cost: integral of x'x under the optimal continuous-time LQR."""
    q_scale = jnp.exp(log_q_scale)
    K = cx.lqr(SYS, q_scale * Q, R).K
    _, xs, _ = cx.simulate(SYS, X0, lambda t, x: -K @ x, duration=8.0, dt=0.05)
    return jnp.sum(xs**2)
def run_gradient_check():
    grad_fn = jax.jit(jax.grad(settling_cost))
    g = grad_fn(jnp.array(0.0))
    assert jnp.isfinite(g), f"gradient is not finite: {g}"
    return float(g)
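The same finite-gradient pattern can be illustrated without contrax or Diffrax, by differentiating through a fixed-gain explicit-Euler rollout in plain JAX. This is a simplified sketch of the smoke test, not the library's simulate() path:

```python
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)

A = jnp.array([[0.0, 1.0], [0.0, 0.0]])
B = jnp.array([[0.0], [1.0]])
x0 = jnp.array([1.0, 0.0])

def rollout_cost(K, dt=0.05, steps=160):
    """Sum of x'x along an explicit-Euler closed-loop rollout with u = -K x."""
    def step(x, _):
        x = x + dt * (A @ x + B @ (-K @ x))
        return x, x
    _, xs = jax.lax.scan(step, x0, None, length=steps)
    return jnp.sum(xs**2)

# Gradient with respect to the gain at the analytic LQR solution [1, sqrt(3)].
g = jax.grad(rollout_cost)(jnp.array([[1.0, jnp.sqrt(3.0)]]))
assert jnp.all(jnp.isfinite(g))   # gradient through the rollout stays finite
```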
What The Script Prints
Running examples/continuous_lqr.py prints the gain, poles, residual summary,
state decay, and gradient smoke-test output:
Continuous LQR — double integrator
K = [[1. 1.73205081]]
closed-loop poles = [-0.8660254+0.5j -0.8660254-0.5j]
all poles stable = True
residual norm = 1.776357e-15
x[0] = [1. 0.]
x[-1] = [-0.00023873 0.00033244] (should be near zero)
time horizon = 10.000 s
Gradient smoke test (d/d(log q) of settling cost):
grad = -3.848696 (finite: True)
All assertions passed.
The exact gradient value is not the main takeaway. The useful checks are stable continuous-time poles, a small Riccati residual, state convergence toward zero, and a finite gradient through the end-to-end workflow.
Validate The Result
For this tutorial, the first checks are:
- the closed-loop poles should have negative real part
- the Riccati residual should be small
- the final state should be much closer to zero than the initial state
- the gradient smoke test should stay finite
Those checks are mirrored by assertions in the runnable example so the tutorial stays tied to executable behavior.
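These checks can also be reproduced outside contrax. For this system the CARE solution is known in closed form, P = [[sqrt(3), 1], [1, sqrt(3)]], so a NumPy/SciPy sketch can verify stable poles, a tiny Riccati residual, and state decay over the 10 s horizon via the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

# Analytic CARE solution for the double integrator with Q = I, R = 1.
P = np.array([[np.sqrt(3.0), 1.0], [1.0, np.sqrt(3.0)]])
K = np.linalg.solve(R, B.T @ P)   # = [[1, sqrt(3)]]

# 1. closed-loop poles have negative real part
assert np.all(np.linalg.eigvals(A - B @ K).real < 0)
# 2. Riccati residual A'P + PA - P B R^{-1} B' P + Q is tiny
residual = A.T @ P + P @ A - P @ B @ K + Q
assert np.linalg.norm(residual) < 1e-12
# 3. the final state is much closer to zero than the initial state
x0 = np.array([1.0, 0.0])
xT = expm((A - B @ K) * 10.0) @ x0
assert np.linalg.norm(xT) < 1e-2 * np.linalg.norm(x0)
```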
Where To Go Next
- Getting started for the fastest route into the library
- Control API for care() and the continuous lqr() path
- Simulation API for continuous simulate() semantics
- Linearize, LQR, simulate for the discrete workflow from nonlinear dynamics
- JAX transform contract for compiled and differentiable workflow expectations
- Riccati solvers for continuous and discrete solver details