Some reflections on the learning-to-control renaissance. https://www.argmin.net/2020/06/29/tour-revisited/
-
In LQG, continuous exploration comes for free from the noise in the output observations (which is absent in LQR), and continuous model refinement via closed-loop system identification allows faster rates. In a regret sense, adaptive control of LQG is therefore easier than LQR. https://arxiv.org/abs/2003.11227
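For context, the LQR baseline being compared against: a minimal sketch of computing the infinite-horizon LQR gain by iterating the discrete Riccati recursion on a toy scalar system (all numerical values here are illustrative assumptions, not from the paper):

```python
import numpy as np

# Toy scalar system (illustrative values, not from the paper):
#   x_{t+1} = a x_t + b u_t + w_t,   cost  sum_t (q x_t^2 + r u_t^2)
a, b = 1.2, 1.0   # open-loop unstable since |a| > 1
q, r = 1.0, 0.1

# Iterate the discrete Riccati recursion to its fixed point P:
#   P <- q + a^2 P - (a b P)^2 / (r + b^2 P)
P = q
for _ in range(500):
    P = q + a * a * P - (a * b * P) ** 2 / (r + b * b * P)

# Optimal state-feedback gain, u_t = -K x_t
K = a * b * P / (r + b * b * P)
print(f"P = {P:.4f}, K = {K:.4f}, closed-loop pole = {a - b * K:.4f}")
```

With full state observation, this controller explores only through the process noise w_t; the tweet's point is that in LQG the observation noise adds a further source of excitation that closed-loop system identification can exploit.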