Control¶
Control is the public namespace for controller design and Riccati-backed feedback synthesis in Contrax.
The namespace includes:
- `dare()` and `care()` for algebraic Riccati solves
- `lqr()` and `lqi()` for regulator design
- `augment_integrator()` for explicit integral-state augmentation
- `place()` for design-time pole placement
- `state_feedback()` for applying a state-feedback gain to an LTI model
Minimal Example¶
```python
import jax

jax.config.update("jax_enable_x64", True)

import jax.numpy as jnp
import contrax as cx

sys = cx.dss(
    jnp.array([[1.0, 0.05], [0.0, 1.0]]),
    jnp.array([[0.0], [0.05]]),
    jnp.eye(2),
    jnp.zeros((2, 1)),
    dt=0.05,
)

result = cx.lqr(sys, jnp.eye(2), jnp.array([[1.0]]))
closed_loop = cx.state_feedback(sys, result.K)
```
Conventions¶
- `Q`: state cost matrix with shape `(n, n)`
- `R`: input cost matrix with shape `(m, m)`
- `LQRResult.K`: state-feedback gain with shape `(m, n)`
- `LQRResult.S`: Riccati solution with shape `(n, n)`
- `LQRResult.poles`: closed-loop eigenvalues
- `LQRResult.residual_norm`: JAX scalar Riccati residual diagnostic
For lqi(), the returned gain acts on the augmented state [x; z], where z
is the integral state added by augment_integrator().
Control Equations¶
Contrax's control surface is centered on state-feedback design for linear systems.
For the discrete-time model

\[ x[k+1] = A\,x[k] + B\,u[k], \]

the infinite-horizon quadratic objective used by dare(), lqr(), and
discrete lqi() is

\[ J = \sum_{k=0}^{\infty} \left( x[k]^\top Q\,x[k] + u[k]^\top R\,u[k] \right), \]

with state-feedback law

\[ u[k] = -K\,x[k]. \]

For the continuous-time model

\[ \dot{x}(t) = A\,x(t) + B\,u(t), \]

the corresponding continuous objective behind care() and continuous lqr()
is

\[ J = \int_0^{\infty} \left( x(t)^\top Q\,x(t) + u(t)^\top R\,u(t) \right)\,dt, \]

with feedback law

\[ u(t) = -K\,x(t). \]
Riccati Equations¶
The stabilizing discrete Riccati equation is

\[ S = A^\top S A - A^\top S B \left( R + B^\top S B \right)^{-1} B^\top S A + Q, \]

and the continuous Riccati equation is

\[ A^\top S + S A - S B R^{-1} B^\top S + Q = 0. \]

Given the stabilizing solution S, the corresponding gains are

\[ K_{\mathrm{disc}} = \left( R + B^\top S B \right)^{-1} B^\top S A, \qquad K_{\mathrm{cont}} = R^{-1} B^\top S. \]
Closed-Loop Models¶
state_feedback() applies the gain directly to the plant matrices.
For discrete systems,

\[ x[k+1] = (A - BK)\,x[k], \]

and for continuous systems,

\[ \dot{x}(t) = (A - BK)\,x(t). \]

That is the model whose poles appear in `LQRResult.poles`.
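A quick numerical illustration of this construction, using plain NumPy and a hand-picked (hypothetical, not LQR-designed) gain:

```python
import numpy as np

A = np.array([[1.0, 0.05], [0.0, 1.0]])
B = np.array([[0.0], [0.05]])
K = np.array([[10.0, 6.0]])  # hypothetical gain, shape (m, n) = (1, 2)

# state_feedback()-style construction: only the state matrix changes.
A_cl = A - B @ K
poles = np.linalg.eigvals(A_cl)
print(np.abs(poles))  # both magnitudes < 1: stable discrete closed loop
```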
Integral Augmentation¶
augment_integrator() and lqi() introduce an integral state z driven by an
output-tracking error. At the level of the augmented state

\[ \bar{x} = \begin{bmatrix} x \\ z \end{bmatrix}, \]

the returned gain acts as

\[ u = -\bar{K}\,\bar{x}, \]

with \(\bar{K}\) partitioned over the physical and integral states.
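The discrete augmentation can be sketched by hand. The block below mirrors the update z[k+1] = z[k] + dt * C_integral @ x[k] described in the augment_integrator() docs (sign = 1, no feedthrough); the matrix names here are illustrative, not Contrax internals.

```python
import numpy as np

# Discrete plant (n = 2 states, m = 1 input); integrate its first state.
A = np.array([[1.0, 0.05], [0.0, 1.0]])
B = np.array([[0.0], [0.05]])
C_int = np.array([[1.0, 0.0]])  # p = 1 integrated output
dt = 0.05

n, m, p = A.shape[0], B.shape[1], C_int.shape[0]
# Augmented dynamics: z[k+1] = z[k] + dt * C_int @ x[k].
A_bar = np.block([[A, np.zeros((n, p))],
                  [dt * C_int, np.eye(p)]])
B_bar = np.vstack([B, np.zeros((p, m))])

# A gain designed on the augmented model partitions as [Kx | Kz].
K_bar = np.ones((m, n + p))  # placeholder values, correct shape
Kx, Kz = K_bar[:, :n], K_bar[:, n:]
print(A_bar.shape, B_bar.shape, Kx.shape, Kz.shape)
```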
Solver Maturity¶
Contrax is explicit about solver maturity:
- `dare()` is the most mature Riccati path
- `care()` is supported and validated, but less benchmarked than `dare()`
- `place()` is suitable as a design-time helper, not as a hardened large-scale assignment solver
If the numerical context matters to your workflow, read Riccati solvers before treating all paths as equally mature.
Transform Behavior¶
dare(), care(), and lqr() are designed to participate in compiled JAX
workflows.
The key contracts are:
- `lqr()` dispatches on `DiscLTI` vs `ContLTI` outside traced runtime values
- the Riccati paths use custom or implicit VJPs rather than blindly unrolling the forward solve in reverse mode
- `state_feedback()` is just state-space algebra on the model matrices
That makes controller design inside a JIT-compiled objective a first-class use case rather than an accidental one.
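The trace-time vs runtime distinction can be exercised without Contrax. The sketch below jit-compiles a gain design built from a fixed-length Riccati recursion written directly in JAX; all names are illustrative. Unlike Contrax's implicit VJPs, naively reverse-differentiating this sketch would unroll the whole recursion, so it only demonstrates the dispatch pattern, not the gradient machinery.

```python
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)

def dare_iterate(A, B, Q, R, iters=500):
    # Fixed-length Riccati recursion: jit-friendly because the
    # iteration count is static, not a traced runtime value.
    def step(S, _):
        G = jnp.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)
        return A.T @ S @ A - A.T @ S @ B @ G + Q, None

    S, _ = jax.lax.scan(step, Q, None, length=iters)
    return jnp.linalg.solve(R + B.T @ S @ B, B.T @ S @ A)

@jax.jit
def design_gain(q_scale):
    # System matrices are fixed at trace time; the cost weight
    # q_scale is a traced runtime value.
    A = jnp.array([[1.0, 0.05], [0.0, 1.0]])
    B = jnp.array([[0.0], [0.05]])
    return dare_iterate(A, B, q_scale * jnp.eye(2), jnp.array([[1.0]]))

K = design_gain(1.0)
print(K.shape)  # (1, 2)
```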
Numerical Notes¶
Contrax does not enable float64 globally on import. Riccati solves require
jax_enable_x64=True.
state_feedback(sys, K) is specifically about applying a state-feedback gain
to an LTI model. It is not the general interconnection operator for arbitrary
controller/plant block diagrams.
For integral-action designs, augment_integrator() exposes the augmented model
directly and lqi() is the thin convenience wrapper on top.
Related Pages¶
- Systems for model construction before design
- Simulation for validating the designed controller
- Types for `LQRResult`
- Riccati solvers for solver details
- Tune LQR weights with gradients for a task-oriented recipe
contrax.control¶
contrax.control — LQR, pole placement, closed-loop construction.
augment_integrator(sys: DiscLTI | ContLTI, C_integral: Array | None = None, D_integral: Array | None = None, *, sign: float = 1.0, dt_scale: ArrayLike | None = None) -> DiscLTI | ContLTI¶
Augment an LTI system with integral-of-output states.
This is the small design utility behind LQI-style workflows. It returns an
augmented system that can be passed to lqr() with a state cost that also
weights the added integral states.
For continuous systems, the added state evolves as
z_dot = sign * (C_integral x + D_integral u). For discrete systems, the
added state evolves as
z[k+1] = z[k] + dt_scale * sign * (C_integral x[k] + D_integral u[k]).
When dt_scale is omitted for a discrete system, sys.dt is used.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sys` | `DiscLTI \| ContLTI` | Continuous or discrete LTI system to augment. | required |
| `C_integral` | `Array \| None` | Output map to integrate. | `None` |
| `D_integral` | `Array \| None` | Feedthrough map for the integrated output. | `None` |
| `sign` | `float` | Sign applied to the integrated output. | `1.0` |
| `dt_scale` | `ArrayLike \| None` | Discrete integration scale. Defaults to `sys.dt`. | `None` |
Returns:

| Type | Description |
|---|---|
| `DiscLTI \| ContLTI` | The integrator-augmented system, with the integral states appended after the physical states. |
Source code in contrax/control.py
care(A: Array, B: Array, Q: Array, R: Array) -> LQRResult¶
Solve the continuous algebraic Riccati equation.
Computes the stabilizing Riccati solution and corresponding continuous-time
LQR gain for the matrix tuple (A, B, Q, R).
The forward solve uses a Hamiltonian stable-subspace construction with
jnp.linalg.eig, and gradients are provided through an
implicit-differentiation custom VJP. This is now a real baseline
continuous-time solver, but it is not yet as hardened as dare().
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `A` | `Array` | Continuous-time state matrix. Shape: `(n, n)`. | required |
| `B` | `Array` | Continuous-time input matrix. Shape: `(n, m)`. | required |
| `Q` | `Array` | State cost matrix. Shape: `(n, n)`. | required |
| `R` | `Array` | Input cost matrix. Shape: `(m, m)`. | required |
Returns:

| Type | Description |
|---|---|
| `LQRResult` | A bundle containing the optimal feedback gain `K`, the Riccati solution `S`, and the closed-loop poles. |
Examples:
>>> import jax.numpy as jnp
>>> import contrax as cx
>>> A = jnp.array([[0.0, 1.0], [0.0, 0.0]])
>>> B = jnp.array([[0.0], [1.0]])
>>> result = cx.care(A, B, jnp.eye(2), jnp.array([[1.0]]))
Source code in contrax/_riccati.py
dare(A: Array, B: Array, Q: Array, R: Array, max_iter: int = 64, tol: float = 1e-10) -> LQRResult¶
Solve the discrete algebraic Riccati equation.
Computes the stabilizing Riccati solution and corresponding infinite-horizon
discrete LQR gain for the matrix tuple (A, B, Q, R).
This is currently the strongest Riccati solver path in Contrax. The forward solve uses structured doubling with tolerance-based stopping, and gradients are provided through a custom VJP built from implicit differentiation of the Riccati residual.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `A` | `Array` | Discrete-time state transition matrix. Shape: `(n, n)`. | required |
| `B` | `Array` | Discrete-time input matrix. Shape: `(n, m)`. | required |
| `Q` | `Array` | State cost matrix. Shape: `(n, n)`. | required |
| `R` | `Array` | Input cost matrix. Shape: `(m, m)`. | required |
| `max_iter` | `int` | Maximum structured-doubling iterations. | `64` |
| `tol` | `float` | Forward stopping tolerance on Riccati iterate changes. | `1e-10` |
Returns:

| Type | Description |
|---|---|
| `LQRResult` | A bundle containing the optimal feedback gain `K`, the Riccati solution `S`, and the closed-loop poles. |
Examples:
>>> import jax.numpy as jnp
>>> import contrax as cx
>>> A = jnp.array([[1.0, 0.05], [0.0, 1.0]])
>>> B = jnp.array([[0.0], [0.05]])
>>> result = cx.dare(A, B, jnp.eye(2), jnp.array([[1.0]]))
Source code in contrax/_riccati.py
feedback(sys: DiscLTI | ContLTI, K: Array) -> DiscLTI | ContLTI¶
Alias for state_feedback().
This alias remains available for MATLAB-style ergonomics, but the more
explicit state_feedback() name is preferred in new Contrax code because it
distinguishes gain application from general closed-loop interconnection.
Source code in contrax/control.py
lqi(sys: DiscLTI | ContLTI, Q: Array, R: Array, C_integral: Array | None = None, D_integral: Array | None = None, *, sign: float = 1.0, dt_scale: ArrayLike | None = None) -> LQRResult¶
Design an LQI gain by augmenting integral states and calling lqr().
This is intentionally a thin composition of augment_integrator() and
lqr(). The returned gain acts on the augmented state [x; z], where z
is the integral state.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sys` | `DiscLTI \| ContLTI` | Continuous or discrete LTI system. | required |
| `Q` | `Array` | Augmented state cost matrix. | required |
| `R` | `Array` | Input cost matrix. Shape: `(m, m)`. | required |
| `C_integral` | `Array \| None` | Output map to integrate. | `None` |
| `D_integral` | `Array \| None` | Feedthrough map for the integrated output. | `None` |
| `sign` | `float` | Sign applied to the integrated output. | `1.0` |
| `dt_scale` | `ArrayLike \| None` | Discrete integration scale. Defaults to `sys.dt`. | `None` |
Returns:

| Type | Description |
|---|---|
| `LQRResult` | LQR result for the integrator-augmented design model. |
Source code in contrax/control.py
lqr(sys: DiscLTI | ContLTI, Q: Array, R: Array) -> LQRResult¶
Solve the infinite-horizon linear quadratic regulator problem.
Dispatches to dare() for discrete systems and care() for continuous
systems, returning a unified LQRResult.
The discrete path is more benchmarked than the continuous path, but both
routes are first-class public solvers. Type dispatch happens outside traced
runtime values, so jit works as long as the system type is fixed at trace
time.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sys` | `DiscLTI \| ContLTI` | Continuous or discrete LTI system. | required |
| `Q` | `Array` | State cost matrix. Shape: `(n, n)`. | required |
| `R` | `Array` | Input cost matrix. Shape: `(m, m)`. | required |
Returns:

| Type | Description |
|---|---|
| `LQRResult` | A bundle containing the optimal gain, Riccati solution, and closed-loop poles. |
Raises:

| Type | Description |
|---|---|
| `TypeError` | If `sys` is neither a `DiscLTI` nor a `ContLTI`. |
Examples:
>>> import jax.numpy as jnp
>>> import contrax as cx
>>> sys = cx.dss(
... jnp.array([[1.0, 0.05], [0.0, 1.0]]),
... jnp.array([[0.0], [0.05]]),
... jnp.eye(2),
... jnp.zeros((2, 1)),
... dt=0.05,
... )
>>> result = cx.lqr(sys, jnp.eye(2), jnp.array([[1.0]]))
Source code in contrax/control.py
place(sys: DiscLTI | ContLTI, poles: ArrayLike, *, method: str | None = None, rtol: float = 0.001, maxiter: int = 30) -> Array¶
Place closed-loop poles for a state-space system.
Uses a JAX-native robust assignment path based on KNV0 for real-pole cases and YT for complex-pole cases. Both methods are design-time helpers rather than traced differentiable primitives.
If method is left as None, Contrax selects KNV0 for all-real pole
sets and YT when any complex-conjugate pair is present. For single-input
systems, Ackermann remains available as a fallback if the robust path
fails.
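For intuition about the single-input fallback, here is a plain-NumPy sketch of the textbook Ackermann construction. It is not Contrax's robust KNV0/YT path, just the classical formula, applied to the same plant as the example below.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [1.0]])
desired = np.array([0.6, 0.7])
n = A.shape[0]

# Controllability matrix [B, AB, ..., A^{n-1} B].
C = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

# Desired characteristic polynomial evaluated at A: phi(A).
coeffs = np.poly(desired)  # leading coefficient first
phi_A = sum(c * np.linalg.matrix_power(A, n - i) for i, c in enumerate(coeffs))

# Ackermann: K = [0 ... 0 1] @ inv(C) @ phi(A), taken as the last row.
K = np.linalg.solve(C, phi_A)[-1:]
print(np.sort(np.linalg.eigvals(A - B @ K).real))  # ~ [0.6, 0.7]
```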
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sys` | `DiscLTI \| ContLTI` | Continuous or discrete-time system. | required |
| `poles` | `ArrayLike` | Desired closed-loop pole locations. Shape: `(n,)`. | required |
| `method` | `str \| None` | Optional robust assignment method. Supported values are `"KNV0"` and `"YT"`. | `None` |
| `rtol` | `float` | Determinant-improvement stopping tolerance for the robust update. | `0.001` |
| `maxiter` | `int` | Maximum robust-assignment iterations. | `30` |
Returns:

| Name | Type | Description |
|---|---|---|
| `Array` | `Array` | State-feedback gain with shape `(m, n)`. |
Examples:
>>> import jax.numpy as jnp
>>> import contrax as cx
>>> sys = cx.dss(
... jnp.array([[1.0, 0.1], [0.0, 0.9]]),
... jnp.array([[0.0], [1.0]]),
... jnp.eye(2),
... jnp.zeros((2, 1)),
... )
>>> L = cx.place(sys, jnp.array([0.6, 0.7]))
Source code in contrax/_place.py
state_feedback(sys: DiscLTI | ContLTI, K: Array) -> DiscLTI | ContLTI¶
Construct the closed-loop system under state feedback.
Returns a new LTI system with A_cl = A - B @ K and unchanged B, C,
D, and, for discrete systems, dt.
This is a pure functional constructor rather than a simulation routine. It dispatches on either DiscLTI or ContLTI and behaves like normal array algebra under JAX transforms.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `sys` | `DiscLTI \| ContLTI` | Continuous or discrete-time system. | required |
| `K` | `Array` | State-feedback gain. Shape: `(m, n)`. | required |
Returns:

| Type | Description |
|---|---|
| `DiscLTI \| ContLTI` | The closed-loop system with `A_cl = A - B @ K` and otherwise unchanged matrices. |
Examples:
>>> import jax.numpy as jnp
>>> import contrax as cx
>>> sys = cx.dss(
... jnp.array([[1.0, 0.05], [0.0, 1.0]]),
... jnp.array([[0.0], [0.05]]),
... jnp.eye(2),
... jnp.zeros((2, 1)),
... dt=0.05,
... )
>>> result = cx.lqr(sys, jnp.eye(2), jnp.array([[1.0]]))
>>> closed_loop = cx.state_feedback(sys, result.K)