One way is to use errors to train a forward model and then “invert” it to obtain the adapted motor command. Because adaptation is driven by sensory prediction error (which implies a sensory prediction, i.e., a forward-model output), one might think this is how it works. 2/7
However, errors (even sensory prediction ones) could instead *directly* train the control policy, adapting motor commands. This can rely e.g. on simple rules about how errors should shift motor output: if the error is leftward, shift motor output rightward. 3/7
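A minimal sketch of such a direct update rule under a visuomotor rotation (VMR), where the cursor is rotated relative to the hand. The variable names, the 30° rotation, and the learning rate are illustrative assumptions, not details from the study:

```python
# Direct policy learning under a 30-degree visuomotor rotation (VMR).
# The "policy" is just an aim direction; the cursor error directly
# nudges the aim in the opposite direction, with no forward model.
TARGET = 0.0          # target direction, degrees
ROTATION = 30.0       # VMR: cursor direction = hand direction + 30
LEARNING_RATE = 0.2   # fraction of the error corrected per trial (assumed)

aim = TARGET
for trial in range(50):
    cursor = aim + ROTATION        # where the cursor actually lands
    error = cursor - TARGET        # leftward error -> shift aim rightward
    aim -= LEARNING_RATE * error   # direct policy update

print(round(aim, 2))              # -30.0: the aim cancels the rotation
print(round(cursor - TARGET, 2))  # 0.0: cursor error has vanished
```

Under a rotation, error and required correction stay consistently related, so this simple rule converges.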
Both approaches would work to deal with a visuomotor rotation (VMR). However, only learning a forward model (then inverting it) would eventually be able to deal with a mirror reversal. If the cursor goes the opposite way, why can’t you learn to predict just that? 4/7
But direct policy learning will not deal with a mirror reversal. If you keep shifting output rightwards in response to a leftward cursor error, you will increase cursor error in the next trial. 5/7 pic.twitter.com/1Pja17CLT8
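The same direct update rule can be sketched under a mirror reversal; again the names, target, and learning rate are illustrative assumptions. Because the reversal flips the sign of the error-to-correction mapping, each “corrective” shift now makes the next trial worse:

```python
# The same direct policy update, now under a mirror reversal
# (cursor direction = -hand direction). The rule that fixed the
# rotation now drives the error up from one trial to the next.
TARGET = 10.0         # target direction, degrees (off the mirror axis)
LEARNING_RATE = 0.2   # same assumed learning rate as before

aim = TARGET
errors = []
for trial in range(10):
    cursor = -aim                  # mirror reversal flips the cursor
    error = cursor - TARGET
    aim -= LEARNING_RATE * error   # same rule as under the VMR
    errors.append(abs(error))

print(errors[0] < errors[-1])  # True: error grows trial by trial
```

Each trial the error is amplified by a constant factor, so the policy diverges instead of converging.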
We thus had people reach under a mirror reversal, isolating implicit adaptation. They couldn’t compensate, instead increasing error from one trial to the next. That shows implicit adaptation is driven by direct policy updates, not forward-model-based learning. 6/7 pic.twitter.com/PspLAfkWdz