Adrien Doerig

@AdrienDoerig

Finishing a PhD at EPFL, Switzerland. Computational neuroscience, machine learning, psychophysics. Soon: postdoc at the Donders Institute on deep visual models.

Joined: April 2018

Tweets


  1. Pinned Tweet
    Jan 7

    New paper out! We provide evidence that feedforward convnets (ffCNNs) cannot implement human-like global computations because of their *architecture*, and not merely because of the way they are *trained*.

  2. Retweeted
    Jan 27

    Very happy to be sharing the first preprint out of my PhD, and first first-author paper: “Flexible contextual modulation of naturalistic texture perception in peripheral vision”, with and . Twitter summary below: (1/10)

  3. Retweeted
    Jan 22

Our latest work on endophenotypes of schizophrenia is out, in which we find evidence that unaffected siblings of schizophrenia patients might compensate for their backward masking deficits.

  4. Jan 21

What are the best libraries for deep RL? Something efficient but flexible. I know about OpenAI Baselines, TF-Agents, and Unity, but don't really know how they compare.

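These libraries differ mostly in how they package the same underlying agent–environment loop. As a neutral reference point, here is a minimal tabular Q-learning sketch on a hypothetical 5-state chain environment — everything below is made up for illustration and is not tied to OpenAI Baselines, TF-Agents, or Unity:

```python
import random

# Toy 5-state chain: move right (+1) or left (-1); reaching state 4 gives reward 1.
# A hypothetical stand-in for the environments RL libraries wrap for you.
N_STATES, GOAL = 5, 4

def step(state, action):
    """action 0 = left, 1 = right; returns (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == GOAL else 0.0), nxt == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q-table: states x actions
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy action selection
            if rng.random() < eps:
                action = rng.randrange(2)
            else:
                action = max((0, 1), key=lambda a: q[state][a])
            nxt, reward, done = step(state, action)
            # standard Q-learning update
            q[state][action] += alpha * (reward + gamma * max(q[nxt]) - q[state][action])
            state = nxt
    return q

q = train()
# Greedy policy over the learned Q-table for the non-terminal states.
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(N_STATES - 1)]
print(policy)
```

After training, the greedy policy should be "go right" in every non-terminal state; the libraries above replace the Q-table with neural networks and handle vectorized environments, logging, and replay buffers.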
  5. Retweeted
    Jan 16

The 1st paper of my PhD is out today in ! We show that word contexts can enhance letter representations in early visual cortex. With , Peter Hagoort & . Paper: TL;DR? Let me unpack it in a few (or so) tweets👇

  6. Retweeted
    Jan 15

The brain represents multiple future outcomes simultaneously and in parallel. Cool new work by , with single-unit recordings from mice confirming predictions coming from the AI side.

  7. Retweeted
    Jan 15

    This is finally out there! I tortured my kid by putting a headcam on him for years as a toddler (not all the time, I hasten to clarify) and this is the result! With , Jess Sullivan, and , some of whom tortured their children similarly 🙂

  8. Retweeted
    Jan 13

    Are deep neural networks trained on object recognition tasks a good model of visual processing in the brain? In rodents the answer is no, and in primates previous results suggesting yes "should be taken with a grain of salt":

  9. Retweeted
    Jan 10

New preprint with & Warrick Roseboom showing that biases in duration reports are predicted by salient changes in visual cortex BOLD. Relevant information for time perception arises from sensory processing; no specialised systems needed. 1/7

  10. Retweeted
    Jan 10

    New preprint from the lab: "Individual differences among deep neural network models." Work with , , and Courtney Spoerer. below. 1/7

  11. Jan 7

    Sorry, I mean that ffCNNs do *NOT* classify this image as a cat. It is classified as an elephant, but we still see the cat based on its global shape.

  12. Jan 7

    This is the peer-reviewed version of a preprint we published a few months ago. The free, updated preprint is here:

  13. Jan 7

    There is much more promising exploration with recurrent networks nowadays, for example by , , , , , , and others. Time will tell which kinds of models best approximate human computations!

  14. Jan 7

    But there are other options. For example, , and colleagues have another great recurrent grouping and segmentation network.

  15. Jan 7

Future work is needed to find out how to implement such computations. For example, we showed that capsule networks, through recurrent grouping and segmentation, can explain the human global crowding effects presented here.

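The grouping-by-agreement at the heart of capsule networks can be sketched concretely. Below is a minimal NumPy version of Sabour et al.-style dynamic routing, with toy dimensions chosen purely for illustration — this is a sketch of the general mechanism, not the implementation used in the paper:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Capsule nonlinearity: shrinks short vectors toward 0, long ones to length ~1."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u_hat, n_iters=3):
    """Routing-by-agreement (Sabour et al., 2017).

    u_hat: predictions from lower capsules for each higher capsule,
           shape (n_lower, n_higher, dim).
    Returns the higher-capsule output vectors, shape (n_higher, dim).
    """
    n_lower, n_higher, _ = u_hat.shape
    b = np.zeros((n_lower, n_higher))                         # routing logits
    for _ in range(n_iters):
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)  # coupling coefficients
        s = (c[..., None] * u_hat).sum(axis=0)                # weighted sum over lower capsules
        v = squash(s)                                         # higher-capsule outputs
        b = b + (u_hat * v[None]).sum(axis=-1)                # agreement strengthens routing
    return v

# Toy example: 4 lower capsules voting for 2 higher capsules with 3-D pose vectors.
rng = np.random.default_rng(0)
u_hat = rng.normal(size=(4, 2, 3))
v = dynamic_routing(u_hat)
print(v.shape)  # (2, 3)
```

Lower capsules whose predictions agree with a higher capsule's output get routed to it more strongly on the next iteration — a simple form of the recurrent grouping and segmentation discussed in the thread.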
  16. Jan 7

    This suggests that the ffCNN *architecture* does not allow for human-like global computations. The problem does not seem to stem only from *training*. We discuss these results and argue that recurrent grouping and segmentation seems important for human-like global computations.

  17. Jan 7

Here, we use very strong global effects in visual crowding (a well-documented human psychophysical effect) to address this question. Neither AlexNet, ResNet50, nor Geirhos et al.’s shape-biased network matched human performance on these tasks.

  18. Jan 7

For example, Geirhos et al. showed that randomizing the textures in ImageNet biases ffCNNs towards using more global-shape computations. But of course there are many ways to perform global computations. Does this network implement *human-like* global computations?

  19. Jan 7

    However, it is unclear whether this limitation of ffCNNs follows from their *architecture* (i.e., a cascade of local, non-linear, feedforward operations), or if training the ffCNNs on more complex datasets could lead to more human-like computations.

  20. Jan 7

For example, Geirhos et al. and Baker et al. showed that local changes to objects' edges or textures change ffCNNs' classifications. For example, this image is classified as a cat. Humans can still see the cat, based on its global shape.

  21. Jan 7

Several groups have shown that ffCNNs seem to rely largely on local rather than global features. In contrast, it has been known since the Gestaltists that the global shape of objects is very important for human vision.

