Recurrence is not required. DNNs can represent a finite number of recurrence iterations (sufficient for the convergence of biological-vision RNNs) via so-called loop unrolling, which merely constrains the weights of a DNN. Is this not discussed at all in the paper? https://machinelearningmastery.com/rnn-unrolling/
Unrolling is the starting point for any exploration of recurrent processing. But a recurrent network shares weights across time, recycling limited physical resources for iterative computation; this better accounts for the dynamics measured in the human brain.
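To make that distinction concrete, here is a minimal PyTorch sketch (my own illustration, not the networks from the paper): both modules compute the same unrolled graph, but the recurrent one reuses a single set of weights at every time step, while the parameter-matched feedforward version gets fresh weights per step. The additive update and the module names are simplifying assumptions.

```python
import torch
import torch.nn as nn

class RecurrentBlock(nn.Module):
    """Weight-shared recurrence: one conv reused at every time step."""
    def __init__(self, channels: int, steps: int = 4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)  # single weight set
        self.steps = steps

    def forward(self, x):
        h = torch.zeros_like(x)
        for _ in range(self.steps):           # unrolled in time, weights shared
            h = torch.relu(self.conv(x + h))  # illustrative additive update
        return h

class UnrolledFeedforward(nn.Module):
    """Same unrolled graph, but a distinct conv (fresh weights) per step."""
    def __init__(self, channels: int, steps: int = 4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(steps)
        )

    def forward(self, x):
        h = torch.zeros_like(x)
        for conv in self.convs:               # one layer per "time step"
            h = torch.relu(conv(x + h))
        return h

x = torch.randn(1, 16, 32, 32)
print(RecurrentBlock(16)(x).shape, UnrolledFeedforward(16)(x).shape)
```

The unrolled feedforward network can imitate any fixed number of recurrent iterations, but only the weight-shared version keeps its parameter count constant as the number of iterations grows; that constraint on the weights is exactly what the reply above is pointing at.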
New conversation:
@NKriegeskorte @TimKietzmann do you have the code for the paper? I'm especially interested in the recurrent neural net.
Code and weights for inference on category-trained recurrent and parameter-matched feedforward networks are here: https://osf.io/mz9hw/ These are not yet the ones trained to predict the MEG data, but a good start nevertheless, I hope.
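For anyone who just wants to run inference once the archive is downloaded, here is a minimal sketch under stated assumptions: the architecture, checkpoint file name, and input shape below are all placeholders, since the actual layout of the OSF archive is not shown in this thread.

```python
import torch
import torch.nn as nn

# Placeholder architecture: stands in for whatever model class the
# OSF archive (https://osf.io/mz9hw/) actually provides.
class TinyRecurrentNet(nn.Module):
    def __init__(self, channels: int = 8, steps: int = 4, classes: int = 10):
        super().__init__()
        self.inp = nn.Conv2d(3, channels, 3, padding=1)
        self.rec = nn.Conv2d(channels, channels, 3, padding=1)  # shared across steps
        self.head = nn.Linear(channels, classes)
        self.steps = steps

    def forward(self, x):
        h = torch.relu(self.inp(x))
        for _ in range(self.steps):
            h = torch.relu(self.rec(h))
        return self.head(h.mean(dim=(2, 3)))  # global average pool, then classify

model = TinyRecurrentNet()
# Hypothetical checkpoint name; adjust to the files actually in the archive.
# model.load_state_dict(torch.load("recurrent_weights.pt", map_location="cpu"))
model.eval()

with torch.no_grad():                         # inference only, no gradient tracking
    logits = model(torch.randn(1, 3, 64, 64))
print(logits.argmax(dim=1))                   # predicted category index
```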
(3 more replies)

New conversation:
Fantastic work @TimKietzmann! I'm building recurrent CNNs of the dorsal stream for motor tasks and I'm wondering if you think the same types of recurrence likely apply there. What would you model differently?
I currently don't see a good reason why they should not work, but I'm happy to chat via Skype to find out more about your project before making overly ambitious claims ;-)
New conversation:
Does this contradict Simon Thorpe's paper saying the visual system is feedforward? Or is my understanding incorrect? Thanks!
No, this does not contradict his work. He showed that *some* computations (such as animacy detection) can be done rapidly, i.e. in a feedforward manner. I do not think the implication ever was that *all* inference is feedforward.