Optimal control with budget constraints and resets
Alexander Vladimirsky,
Cornell
(Joint work with R. Takei, W. Chen,
Z. Clawson, and S. Kirov)
Abstract:
Consider a model problem:
given a room with multiple obstacles and a stationary enemy
observer,
find the fastest path for a robot to reach the target, subject to the
constraint that the observer never sees the robot for more than
five seconds in a row.
Many realistic control problems
involve multiple criteria for optimality and/or integral constraints on
allowable controls. Such problems can be conveniently modeled by introducing a
budget for each secondary criterion/constraint. An augmented
Hamilton-Jacobi-Bellman equation is then solved on an expanded state
space, and its discontinuous viscosity solution yields the value
function for the primary criterion/cost.
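As a minimal illustration (our notation, not necessarily the talk's): write the dynamics as \(\dot{x} = f(x,a)\), the primary running cost as \(C(x,a)\), and the secondary cost rate as \(K(x,a)\), so that the remaining budget evolves as \(\dot{b} = -K(x,a)\). The value function \(u(x,b)\) on the expanded state space then formally satisfies
\[
\min_{a \in A} \Big\{ C(x,a) \,+\, f(x,a)\cdot\nabla_x u(x,b) \,-\, K(x,a)\,\partial_b u(x,b) \Big\} \;=\; 0,
\]
with trajectories that would drive \(b\) below zero ruled inadmissible; it is this admissibility constraint that makes the viscosity solution discontinuous in general. In the model problem above, \(C \equiv 1\) (travel time), \(K = 1\) wherever the observer sees the robot and \(K = 0\) otherwise, and the initial budget is five seconds.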
This formulation was previously used by Kumar & Vladimirsky to build a fast
(non-iterative) method for problems in which the resources/budgets are
monotone decreasing. We now address a more challenging case,
where the resources can be instantaneously renewed (and budgets can
be "reset") upon entering a pre-specified subset of the state space.
This leads to a hybrid control problem with more subtle causal
properties of the value function and additional challenges in
constructing efficient numerical methods.