patrick.shafto

@patrickshafto

Professor of Math and CS at Rutgers - Newark, Data science! Human learning! Machine learning! Exclamation points!

Rutgers University - Newark
Joined March 2013

Tweets


  1. Retweeted
    Feb 3

    Great write-up in NJ Moms for how parents & their littles can get involved in science in our labs!

  2. Retweeted
  3. Retweeted
    Jan 16

    We are keen to provide participants with children with personalized solutions to be able to comfortably attend AISTATS. Please fill out We ❤️ family-friendly ML conferences! 👪👨‍👩‍👦‍👦👪

  4. Retweeted
    Jan 16

Applications are invited for the 2020 Brains, Minds and Machines Summer Course, to be held in Woods Hole, MA, from August 6 through August 27. Deadline: April 6, 2020.

  5. Retweeted
    Jan 12

    What makes a good lab? Are group meetings really the best way? In the we recently reexamined how we organize our weeks, and then redesigned everything in a systematic way (100% democratically!). I blogged about our design process and takeaways:

  6. Retweeted
    Jan 9

    Training Neural SDEs: We worked out how to do scalable reverse-mode autodiff for stochastic differential equations. This lets us fit SDEs defined by neural nets with black-box adaptive higher-order solvers. With , and .

  7. Retweeted
    Jan 9

    Children are voracious learners. "They take in all the evidence they can get – even leveraging social information such as the knowledge state & goals of an informant."

  8. Jan 7

    Paper can be found here:

  9. Jan 7

    Our paper "Interpretable deep Gaussian Processes with moments" has been accepted to AISTATS. Great work with Scott Yang and Xiaoran Hao.

  10. Retweeted
    Jan 6

    I am extremely excited to announce (1) I've joined OpenAI to lead a large-scale effort into AI-generating Algorithms research, & (2) I'll be an Associate CS Professor at U. British Columbia in 2021, where I will continue to lead the OpenAI project. Both are dreams come true! 1/2

  11. Retweeted
    Jan 6

    Ever notice that people & especially kids tend to change an uncertain response to a neutral question? Our (Open Access) paper now out in Cognitive Science provides an answer. W/ Sophie Bridgers & A Gonzalez (a thread)

  12. Retweeted
    Jan 1

    Here's my earlier thread on 's must-read paper on the processes underlying children's probabilistic inferences: 3/n

  13. Retweeted
    Dec 27, 2019

    1/ This shows how far the field has regressed in its understanding of probability. It's not a controversial opinion, it's the opinion of someone who hasn't understood that a prior over weights in a neural network induces a prior over functions.

  14. Retweeted
    Dec 20, 2019

    I have had an amazing reaction to my NeurIPS tutorial. Thank you all for your encouraging comments. Links to the video and the paper are attached below.

  15. Retweeted
    Dec 20, 2019

    Paper with on controllability and Pavlovian biases is out! We show that when rewards are less controllable, Pavlovian bias on action selection is stronger, consistent with a new normative analysis.

  16. Retweeted
    Dec 12, 2019

    TADA!!! Our new website for through Playful Learning Landscapes is live! Check it out.

  17. Retweeted
    Dec 9, 2019

    Classifiers are secretly energy-based models! Every softmax giving p(c|x) has an unused degree of freedom, which we use to compute the input density p(x). This makes classifiers into generative models without changing the architecture.

  18. Retweeted
    Nov 22, 2019

    I fed the first lines of Edward Gorey’s "Gashlycrumb Tinies" into OpenAI’s GPT-2. I then asked to illustrate the results. We give you: Tʜᴇ GPT-2ɴɪᴇs

  19. Dec 6, 2019
  20. Dec 5, 2019
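The classifier-as-EBM observation in retweet 17 above (every softmax over logits has an unused additive degree of freedom, which can be repurposed as an unnormalized log-density over inputs) can be sketched in a few lines of NumPy. The linear "classifier," shapes, and random inputs below are hypothetical stand-ins chosen only to illustrate the invariance; this is not the paper's model or training procedure.

```python
import numpy as np

def logsumexp(v):
    # Numerically stable log(sum(exp(v))).
    m = v.max()
    return m + np.log(np.exp(v - m).sum())

# Hypothetical stand-in classifier: logits f(x) = W @ x + b.
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)

def logits(x):
    return W @ x + b

def class_probs(x):
    # Ordinary softmax p(c|x); invariant to adding a constant to all logits.
    z = logits(x)
    return np.exp(z - logsumexp(z))

def unnormalized_log_px(x):
    # The "unused degree of freedom": reuse LogSumExp of the logits as
    # log p(x) + log Z, turning the same network into an energy-based model.
    return logsumexp(logits(x))

x = rng.normal(size=4)
p = class_probs(x)                 # sums to 1
shifted = logits(x) + 5.0          # shift every logit by the same constant
p_shifted = np.exp(shifted - logsumexp(shifted))
# p(c|x) is unchanged, but the density score moves by exactly the shift:
assert np.allclose(p, p_shifted)
assert np.isclose(logsumexp(shifted), unnormalized_log_px(x) + 5.0)
```

In the paper the tweet appears to describe (Grathwohl et al.'s JEM), the density term is additionally trained with gradient-based MCMC sampling; the sketch only shows the reinterpretation of the logits.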

