The ANN was trained to map (eye position & retinal location) —> (world-coordinate location) using backprop. They showed that the representational characteristics of 7a neurons (receptive fields and spatial gain fields) were reproduced by the hidden-layer neurons of the trained ANN. pic.twitter.com/s2aibGVwSX
-
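The training setup described above can be sketched in a few lines. This is a minimal 1-D illustration, not the original model: I assume a scalar retinal location r and eye position e, with the head/world-centered target r + e, and a single sigmoid hidden layer trained by plain backprop. After training, the hidden units can be probed for gain-field-like behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

n_hidden = 16
W1 = rng.normal(0, 0.5, (n_hidden, 2))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, n_hidden)        # hidden -> output weights
b2 = 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.05
for step in range(20000):
    r = rng.uniform(-1, 1)               # retinal location
    e = rng.uniform(-1, 1)               # eye position
    x = np.array([r, e])
    target = r + e                       # world-coordinate location

    h = sigmoid(W1 @ x + b1)             # hidden layer
    y = W2 @ h + b2                      # linear readout

    # Backprop for squared error 0.5 * (y - target)**2
    dy = y - target
    dW2 = dy * h
    db2 = dy
    dh = dy * W2 * h * (1 - h)
    dW1 = np.outer(dh, x)
    db1 = dh

    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# The trained output should be close to r + e, e.g. 0.3 + (-0.2) = 0.1:
print(W2 @ sigmoid(W1 @ np.array([0.3, -0.2]) + b1) + b2)
```

The interesting part in the paper is not the fit itself but that the hidden units, inspected after training, resemble 7a neurons.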
But how do these neurons compute this coordinate transformation? This paper by Zipser and Andersen was perhaps one of the first attempts to use an ANN to understand a neuronal computation in the brain. pic.twitter.com/2fZuniH8AR
-
A portion of 7a neurons (~57%) represents both the eye position and the retinal location of the object. These neurons were believed to be involved in the representation of object location. pic.twitter.com/M7jQr2wfKP
-
David Sussillo mentioned this old paper by Zipser and Andersen in his talk at @neuroAIworkshop at #NeurIPS: http://www.vis.caltech.edu/documents/54-v331_88.pdf It’s perhaps one of the first examples of comparing representations in the brain and artificial neural nets. pic.twitter.com/p4yAbB7HUK
-
(3/n) Check out this paper by VanRullen and Koch for a review: https://www.sciencedirect.com/science/article/abs/pii/S1364661303000950 Also, these oscillations have been shown to be locked to neural oscillations in the brain: https://www.jneurosci.org/content/29/24/7869.short pic.twitter.com/5NxvlCKMfQ
-
Also, I don’t understand how simple reverse correlation can generate such clean receptive-field maps for a conv5 neuron, especially if the claim is that these face-selective neurons are viewpoint-invariant. pic.twitter.com/r4xFAyW3zk
-
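For reference, reverse correlation (spike-triggered averaging) works as sketched below: present white-noise stimuli, weight each frame by the response it evokes, and average. For a simulated rectified-linear neuron this cleanly recovers the receptive field; the puzzle raised above is why it would also work for a highly nonlinear, viewpoint-invariant conv5 unit. All numbers here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

size = 16
yy, xx = np.mgrid[0:size, 0:size]
# Simulated ground-truth receptive field: a 2-D Gaussian blob
true_rf = np.exp(-(((xx - 8) ** 2 + (yy - 8) ** 2) / (2 * 2.0 ** 2)))

# White-noise stimulus ensemble
n_frames = 20000
stimuli = rng.normal(0, 1, (n_frames, size, size))

# Rectified-linear "neuron": firing rate = max(0, stimulus . RF)
responses = np.maximum(
    np.tensordot(stimuli, true_rf, axes=([1, 2], [0, 1])), 0)

# Spike-triggered average: response-weighted mean stimulus
sta = np.tensordot(responses, stimuli, axes=([0], [0])) / responses.sum()

# The STA should correlate strongly with the true receptive field
corr = np.corrcoef(sta.ravel(), true_rf.ravel())[0, 1]
print(corr)
```

For a quasi-linear cell the STA is an unbiased estimate of the filter; for a deep nonlinear unit there is no such guarantee, which is the point of the skepticism above.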
An encoding model that incorporates the environment-to-retina geometry of 3D motion can explain the atypical structure of 3D motion tuning in MT. pic.twitter.com/c8RDwEZ0RJ
-
Some examples of atypical tuning of MT neurons in response to 3D motion. pic.twitter.com/GCsHQdfgu5
-
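A minimal sketch of the environment-to-retina geometry such a model builds on: under perspective projection, a single 3D world velocity maps to different retinal velocities in the two eyes. The focal length, interocular distance, and example numbers below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def retinal_velocity(X, Z, Vx, Vz, f=1.0):
    """Horizontal retinal velocity of a point under perspective projection.

    x = f * X / Z  =>  dx/dt = f * (Vx * Z - X * Vz) / Z**2
    """
    return f * (Vx * Z - X * Vz) / Z ** 2

ipd = 0.065              # interocular distance in meters (assumed)
X, Z = 0.0, 1.0          # point straight ahead, 1 m away
Vx, Vz = 0.0, -0.5       # pure motion in depth, toward the observer

# Each eye sees the point from a position offset by +/- ipd/2
left = retinal_velocity(X + ipd / 2, Z, Vx, Vz)
right = retinal_velocity(X - ipd / 2, Z, Vx, Vz)

# Pure motion in depth yields equal and opposite horizontal
# retinal velocities in the two eyes
print(left, right)
```

Tuning that looks atypical in world coordinates can thus be a straightforward consequence of this projection geometry.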
From the paper: “values in each weight kernel were randomly drawn from a Gaussian distribution that fit the weight distribution of the pre-trained state” pic.twitter.com/EehOtu2xoP
-
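The initialization described in that quote can be sketched as follows: fit a Gaussian to the weights of a pre-trained kernel, then draw a fresh kernel from the fitted distribution. Shapes and variable names below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for real pre-trained conv weights (64 filters, 3x7x7)
pretrained_kernel = rng.normal(0.02, 0.11, (64, 3, 7, 7))

# Fit a Gaussian to the pre-trained weight distribution
mu = pretrained_kernel.mean()
sigma = pretrained_kernel.std()

# Draw a random kernel from the fitted distribution
random_kernel = rng.normal(mu, sigma, pretrained_kernel.shape)

# The random kernel matches the pre-trained first- and second-order
# statistics but carries none of the learned structure
print(random_kernel.mean(), random_kernel.std())
```

This makes the random control match the marginal weight statistics of training, so any remaining performance gap is attributable to learned structure rather than scale.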
They checked a bunch of object categories; see panel C here: pic.twitter.com/GNhdtdMG11
-
So, if we regularize an ANN to have representations similar to those of mouse visual cortex, the ANN becomes more robust to adversarial attacks. Here is the most interesting part of the paper for me: https://twitter.com/arxiv_org/status/1194818086504787968 pic.twitter.com/EFibwakK5p
-
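One way to write such a neural-similarity regularizer, sketched below under my own assumptions (names, shapes, the RSM-based penalty, and the lambda value are all illustrative, not the paper's exact loss): alongside the task loss, penalize mismatch between the network's representational similarity matrix and one measured from cortex.

```python
import numpy as np

rng = np.random.default_rng(0)

n_stimuli, n_units, n_neurons = 50, 128, 80
ann_features = rng.normal(size=(n_stimuli, n_units))        # hidden activations
cortex_responses = rng.normal(size=(n_stimuli, n_neurons))  # recorded responses

def rsm(X):
    """Stimulus-by-stimulus correlation matrix of a response matrix."""
    return np.corrcoef(X)

def similarity_penalty(features, neural):
    """Mean squared difference between the two representational
    similarity matrices (zero when they match exactly)."""
    return np.mean((rsm(features) - rsm(neural)) ** 2)

task_loss = 1.23  # placeholder for the classifier's cross-entropy
lam = 0.1         # regularization strength (assumed)
total_loss = task_loss + lam * similarity_penalty(ann_features, cortex_responses)
print(total_loss)
```

Because the penalty compares stimulus-by-stimulus similarity structure rather than raw activations, it does not require a unit-to-neuron correspondence between the network and the recordings.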
Example: an ANN trained to simulate the vestibulo-ocular reflex. pic.twitter.com/PTNueRKksu
-
-
No, this isn’t from @tyrell_turing et al.’s recent perspective in @NatureNeuro. This is David Robinson making a similar point in 1992: http://www.dna.caltech.edu/courses/cns187/references/Robinson-92.pdf pic.twitter.com/bXJANFMtO9
-
Pea Plants Show Risk Sensitivity: Current Biology http://www.cell.com/current-biology/fulltext/S0960-9822%2816%2930459-6?rss=yes#.V5Z7qkwKoqo.twitter pic.twitter.com/oagPrw8N2g