Simon Kornblith

@skornblith

Researcher, Brain Toronto 🇨🇦🦝. In past lives, I was a neuroscientist, a contributor, and a developer.

Joined September 2010

Tweets

  1. Retweeted
    Feb 4

    To anyone in the whisker field: this is the greatest thing ever.

  2. Jan 26

    Just ate "The Apple of Big Dreams"

  3. Jan 21

    Perhaps this is the prior you're looking for. (Obviously, a fully Bayesian treatment would marginalize over the distribution of distributions of weight distributions instead of using EM.)

  4. Jan 21

    Here is the full paper.

  5. Jan 21

    "Simplifying Neural Network Soft Weight-sharing Measures by Soft Weight-measure Soft Weight Sharing" (Pearlmutter, 1994)

  6. Retweeted
    Jan 13

    This photo of fried chicken really hits home the effects of context (grill...floor) on object recognition (dogs)

  7. Retweeted
    Jan 10

    The winning entry of a Kaggle machine learning competition hacked the website to get the test labels, buried and encoded them into a "private external dataset", reloaded them during pre-processing, and added noise to not get a performance of 100%. This is pretty sad.

  8. Retweeted
    Jan 10

    Critics: AI can't even beat a human at a simple game
    Researchers: no it's totally great at games now
    C: okay but not Go
    R: ya
    C: okay but it can't produce coherent sentences
    R: well, actually...
    C: whatever, it can't wiggle its ears like this

  9. Retweeted
    Jan 10

    Our analysis suggests that a large portion of choice-correlated variability in MT is in a subspace NOT aligned with the sensory encoding and lives in the *null-space*!

  10. Retweeted
    Jan 6

    With judicious refinement of modern tricks, and scale, we can push SOTA (on VTAB, ImageNet, Cifar, etc.) using classic transfer learning, without excessive complexity. Works surprisingly well even with <=10 downstream examples per class.

  11. Retweeted
    Dec 13, 2019

    What gems. brought me the elusive art zines from Toronto. The one on the right is about the Steve jobs fashion dataset "There is a little hot Steve jobs in all of us" lol

  12. Retweeted
    Dec 11, 2019

    Come to our poster (w/ and ) today Wednesday at to learn how to improve the accuracy of hard attention models for vision. (Poster #70 10:45am-12:45pm East Exhibition hall B + C)

  13. Retweeted
    Dec 8, 2019

    if y'all r lookin for somethin to read while walking around Vancouver, check out my newest paper: "Your Classifier is Secretly an Energy-Based Model and You Should Treat it Like One" with

  14. Dec 8, 2019

    @ NeurIPS to support: “Saccader: Improving Hard Attention Models for Vision” Wed w/ “When Does Label Smoothing Help?” spot💡Thu w/ “Exploring CNN Inductive Biases: Shape vs Texture” wrkshp w/ Come say hi!

  15. Retweeted
    Dec 6, 2019

    Infinite width networks (NNGPs and NTKs) are the most promising lead for theoretical understanding in deep learning. But, running experiments with them currently resembles the dark age of ML research before ubiquitous automatic differentiation. Neural Tangents fixes that.

  16. Retweeted
    Dec 5, 2019

    [ Workshop Paper Preview] Hermann et al. show that despite a texture bias, CNNs learn shape distinctions faster / with less data. To increase shape bias, remove random-crop augmentation and increase learning rate / weight decay! Full papers at

  17. Retweeted
    Dec 2, 2019

    To recap, the current AI war is no longer the age-old war btw symbolists and connectionists, but btw those who decry that the war is still ongoing and those saying that there's no longer one. But don't quote me. I'm not willing to start a war on whether there is a war about a war

  18. Retweeted
    Nov 21, 2019

    Excited to share new work, in collaboration with , investigating the texture bias in ImageNet-trained CNNs: .

  19. Retweeted

    Here's a crazy idea. All the authors got free links to share to their COiN articles (valid until mid December). Why not put them in one place (perhaps as responses to this tweet?)

  20. Retweeted

    So grateful for 's collab on CKA. It seems obvious in retrospect, but I hadn't considered the equivalence of calculating similarities based on examples and based on features. My experiments are so much faster now...🚀

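The equivalence mentioned in the last tweet can be made concrete: for linear CKA, the similarity computed from the n×n centered Gram matrices over examples is identical to the one computed from feature cross-covariances, so whichever route involves smaller matrices can be used. A minimal NumPy sketch (function names here are illustrative, not from any released CKA code):

```python
import numpy as np

def linear_cka_examples(X, Y):
    """Linear CKA via n x n centered Gram matrices over examples."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    K = H @ (X @ X.T) @ H
    L = H @ (Y @ Y.T) @ H
    # <K, L>_F / (||K||_F * ||L||_F)
    return np.sum(K * L) / (np.linalg.norm(K) * np.linalg.norm(L))

def linear_cka_features(X, Y):
    """Same quantity via feature cross-covariances, no n x n matrices."""
    Xc = X - X.mean(axis=0)  # center each feature across examples
    Yc = Y - Y.mean(axis=0)
    num = np.linalg.norm(Yc.T @ Xc) ** 2  # ||Yc^T Xc||_F^2 = <K, L>_F
    den = np.linalg.norm(Xc.T @ Xc) * np.linalg.norm(Yc.T @ Yc)
    return num / den
```

Both routes return the same number because, with centered features, ‖Ycᵀ Xc‖²_F = ⟨K, L⟩_F and ‖Xcᵀ Xc‖_F = ‖K‖_F; the feature route avoids forming n×n matrices when the number of examples is much larger than the number of features.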
