Tal Golan

@TalGolanNeuro

Postdoctoral research scientist. Human vision, deep neural networks, neuroimaging and statistics

Columbia University, New York
Joined July 2016

Tweets


  1. Pinned Tweet
    23 Nov 2019

    New paper: Deep net models have many parameters, which enable them to flexibly fit data. As a result, qualitatively different models can make similar predictions. How then can we adjudicate among deep nets as models of human perception? 1/10

    Show this thread
  2. Retweeted
    16 Dec 2019

    🧠🧠🧠 Consciousness postdocs - come work with Michael Pitts and me (and many others) on the GNW/IIT adversarial collaboration! We are hiring for a 2-3 year position. Details below. Please RT 🙏

  3. Retweeted
    13 Jan

    Amazing resource for making sense of (and the most of) reaction time distributions:

  4. Retweeted
    14 Jan

    We are pleased to announce that CCN 2020 will be held in San Francisco on Aug 21-24. Also, check out the videos from last year's conference!

  5. Retweeted
    23 Dec 2019

    A petition to the Association for Psychological Science to withdraw its support for the AAP letter opposing a federal effort to improve access to publicly funded research: You are welcome to adapt the letter for campaigns with other societies.

  6. Retweeted
    17 Dec 2019

    The first real application of is out there! It can out-of-the-box differentiate the ranking function (just by calls to torch.argsort) and help directly optimize rank-based metrics. Simple, blazing fast, and about on par with SOTA! (A generic soft-rank sketch appears after the timeline.)

  7. Retweeted
    10 Dec 2019

    Only five days left to apply for a four-year, fully funded PhD position in the lab. Details below.

    Show this thread
  8. Retweeted

    A simple circuit model of visual cortex explains neural and behavioral aspects of attention

  9. Retweeted
    10 Dec 2019

    CCN 2019 recordings are up. Featuring + more: population coding, (inverse) RL, causality, functional decoding, Neuropixels, free energy, ...

    Show this thread
  10. Retweeted
    8 Dec 2019

    Latest work: A simple, robust method for isolating and removing venous effects in fMRI. Acquisition-agnostic (standard- or high-resolution data, GE or SE, etc.). Code and sample data available online.

  11. Retweeted
    6 Dec 2019

    Graphical representations of what happens when correlations and reaction times are reported conditional on p<0.05 and there is no effect: we get a literature of mixed results and potentially large effect sizes… (A quick simulation of this appears after the timeline.)

  12. 5 Dec 2019

    What kind of quantity are we estimating when we guess that 2116047814278053667325 is the best answer to 's problem? 4/4

    Show this thread
  13. 5 Dec 2019

    It seems that most people's intuition is that this expectation grows much faster (e.g., p^-2 instead of p^-1). Why is that? Is that the same kind of bias evident in the counterintuitive solution to the birthday problem? 3/4

    Show this thread
  14. 5 Dec 2019

    To me, the most unintuitive piece seems to be that the expected number of repeated Bernoulli trials until success is 1/p. So if you are drawing with replacement from a deck until you see a king of hearts, you'll need only 52 draws, on average. 2/4 (A quick simulation of this example appears after the timeline.)

    Show this thread
  15. 5 Dec 2019

    The human bias in guessing the answer to this problem is fascinating. 1/4

    Show this thread
  16. Retweeted
    2 Dec 2019

    New preprint from our lab on whether deep neural networks see the way we do. With

    Show this thread
  17. Retweeted
    3 Dec 2019

    Scikit-learn 0.22 is out! New website, new plotting API, permutation variable importances, support for missing values in GBRT, KNN Imputer, decision tree pruning, and much more. Highlights: Full changelog: (A short demo of two of these features appears after the timeline.)

  18. Retweeted

    1/4: The lottery ticket hypothesis suggests that by training DNNs from “lucky” initializations, we can train networks which are 10-100x smaller with minimal performance losses. In new work, we extend our understanding of this phenomenon in several ways... (A minimal pruning sketch appears after the timeline.)

    Show this thread
  19. Retweeted
    26 Nov 2019
  20. 26 Nov 2019

    ECoG researchers: here's some useful & tidy code shared by . Projecting electrodes to SUMA-normalized cortical meshes is a very easy way to analyze and visualize ECoG data from multiple patients. Much better than any MNI-based solution, IMHO

  21. 23 Nov 2019

    𝗖𝗼𝗻𝗰𝗹𝘂𝘀𝗶𝗼𝗻 𝟮: Models employing generative internal models of the digits dominated discriminative models in accounting for human judgments. 10/10

    Show this thread
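The package retweeted in item 6 is not named in the text above, so the following is not its API; it is a minimal, generic sketch of the underlying idea. Hard ranks obtained from torch.argsort are piecewise constant and carry no gradient, so rank-based metrics are usually optimized through a smooth relaxation. The soft_rank function and its temperature parameter below are illustrative names, not part of any particular library.

    import torch

    def soft_rank(x: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
        """Differentiable approximation of the 0-based ranks of x."""
        # rank_i ~ number of elements smaller than x_i; the hard comparison
        # is replaced by a sigmoid so gradients can flow
        pairwise = x.unsqueeze(-1) - x.unsqueeze(-2)  # pairwise[i, j] = x_i - x_j
        return torch.sigmoid(pairwise / temperature).sum(-1) - 0.5

    x = torch.tensor([0.3, 2.0, -1.0], requires_grad=True)
    print(soft_rank(x))                      # ~[1., 2., 0.], matching argsort ranks

    # unlike argsort, a rank-based loss now has usable gradients
    target = torch.tensor([2.0, 1.0, 0.0])   # illustrative target ranking
    loss = ((soft_rank(x) - target) ** 2).sum()
    loss.backward()
    print(x.grad)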

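Item 11 links to graphical representations; the point can also be checked numerically. Below is a minimal simulation, assuming a per-study sample size of 20 (an arbitrary choice for illustration): when the true correlation is zero, the subset of studies clearing p < 0.05 is a mix of large positive and large negative effects.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, n_studies = 20, 10_000
    published = []
    for _ in range(n_studies):
        x, y = rng.standard_normal(n), rng.standard_normal(n)  # true r = 0
        r, p = stats.pearsonr(x, y)
        if p < 0.05:                    # only "significant" results get reported
            published.append(r)

    published = np.array(published)
    print(f"published: {len(published) / n_studies:.1%}")               # ~5%
    print(f"mean |r| among published: {np.abs(published).mean():.2f}")  # ~0.5
    print(f"positive sign: {(published > 0).mean():.1%}")               # ~50%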

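The thread in items 12-15 rests on the expectation of a geometric distribution: E[X] = sum over k >= 1 of k * p * (1-p)^(k-1) = 1/p. A quick simulation of the king-of-hearts example (p = 1/52):

    import random

    def draws_until_success(p: float) -> int:
        """Number of Bernoulli(p) trials up to and including the first success."""
        n = 1
        while random.random() >= p:
            n += 1
        return n

    p = 1 / 52
    samples = [draws_until_success(p) for _ in range(100_000)]
    print(sum(samples) / len(samples))  # ~52, matching 1/p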

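Two of the scikit-learn 0.22 features retweeted in item 17, demonstrated on toy data (the data and model choices here are arbitrary illustrations):

    import numpy as np
    from sklearn.impute import KNNImputer
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    # KNN Imputer: fill missing entries from the nearest complete rows
    X = np.array([[1.0, 2.0], [3.0, 4.0], [np.nan, 6.0]])
    print(KNNImputer(n_neighbors=2).fit_transform(X))

    # permutation importances: drop in score when each feature is shuffled
    X, y = make_classification(n_samples=200, n_features=5, random_state=0)
    clf = RandomForestClassifier(random_state=0).fit(X, y)
    result = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
    print(result.importances_mean)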

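A minimal sketch of the pruning-and-rewinding step behind the lottery ticket hypothesis in item 18. This is an assumed, simplified setup (a single random weight matrix standing in for a trained network), not the authors' code: after training, the largest-magnitude weights are kept and rewound to their initial values, yielding a sparse subnetwork.

    import numpy as np

    rng = np.random.default_rng(0)
    w_init = rng.standard_normal((256, 256))                     # weights at init
    w_trained = w_init + 0.1 * rng.standard_normal((256, 256))   # stand-in for training

    sparsity = 0.9                                               # prune 90% of weights
    threshold = np.quantile(np.abs(w_trained), sparsity)
    mask = np.abs(w_trained) >= threshold                        # keep top 10% by magnitude

    winning_ticket = w_init * mask                               # rewind survivors to init
    print(f"kept {mask.mean():.1%} of weights")                  # ~10.0%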