Davis Brown

@davisbrownr

physics/philosophy student. Also interested in ML, etc. Currently

Joined June 2017

Tweets

  1. Retweeted
    Jan 12

    It's as if we taught music by starting with finger exercises, then teaching the manipulation of lots of complicated notation, and mentioning in passing that it could theoretically help with all that singing and instrumental playing that we are doing intuitively every day.

  2. Retweeted
    Jan 7

    That GPT-2 can learn to play chess just from reading notation (no knowledge of the game itself) seems to suggest that these DL systems do figure out useful representations on their own. If you can fake task X well enough you have understood *something*.

  3. Dec 31, 2019

    Weird. Is this a 30 km -> 19 m transition or attempted behavioral science nudge?

  4. Retweeted (a growth-rate check follows the timeline)
    Nov 7, 2019

    We've analyzed compute used in major AI results for the past decades and identified two eras in AI: 1) Prior to 2012 - AI results closely tracked Moore's Law, w/ compute doubling every two years. 2) Post-2012 - compute has been doubling every 3.4 months

  5. Retweeted
    Nov 3, 2019

    My recent essay with , "How can we develop transformative tools for thought?" is now available in a nicely-formatted pdf/print version:

  6. Retweeted (a ReLU NNGP sketch follows the timeline)
    Oct 29, 2019

    1/ I can't teach you how to dougie but I can teach you how to compute the Gaussian Process corresponding to infinite-width neural network of ANY architecture, feedforward or recurrent, eg: resnet, GRU, transformers, etc ... RT plz💪

  7. Jun 2, 2019
  8. Retweeted (a toy iterated prisoner's dilemma follows the timeline)
    May 28, 2019

    [Blog post]: *Not* reproducing Axelrod's first tournament: The latest release of has "as good as possible" implementations of all strategies from Axelrod's first tournament (based on the descriptions available). Cooperation still emerges.

  9. Retweeted
    May 10, 2019

    Neural nets generalize well mostly because randomly chosen parameters make simple functions. HT

  10. Retweeted
    Apr 24, 2019

    Evolutionary prior knowledge is the dark matter of human intelligence

  11. Retweeted (the Grover query count is worked out after the timeline)

    📣 New "mnemonic essay" on quantum computation with 🚨 How can any procedure possibly search a list in O(√N) time? Find out here—and remember what you've learned almost effortlessly through our integrated spaced repetition system.

  12. Retweeted
    Apr 12, 2019

    Re SpaceX: I've only ever had a dry, abstract appreciation for what they're doing. But this video gives me visceral awe: Shows how fast the rockets are coming in and how much power/control are needed to stick the landing. Recommend watching with sound

  13. Apr 10, 2019

    Nice convo b/w Chalmers and Dennett. Chalmers: "Half of my memories are now either stored on my smart phone or sitting in the cloud. I was trying to figure out the other day who has a bigger part of my brain. Is it Google, Apple, or Facebook?"

  14. Retweeted
    Mar 14, 2019

    One of the most fascinating and heart-rending characters in the Odyssey is Penelope & Odysseus' only son, Telemachus, who has grown up bullied by his mother's suitors, fatherless, aimless, without positive male mentors until Athena helpfully shows up in the guise of Mentor.

  15. Retweeted
    Jan 12, 2019

    A new & peculiar essay, on a (very!) unusual approach I've been using to deepen my understanding of mathematics:

  16. Retweeted
    Nov 19, 2018

    We have only one planet. This fact radically constrains the kinds of risks that are appropriate to take at a large scale.

  17. Retweeted
    Nov 27, 2018

    What knowledge has a GAN learned? Why does it sometimes fail? We took a small step towards understanding the internal representation of GANs. Try our visualization tools () and online demo . (with , @henddkn, MIT & IBM)

  18. Retweeted
    Jul 29, 2018

    I cannot get over this. This is a time-lapse of nearly 20 yrs of footage (that I had to turn to gif) from the NACO instrument on the ESO's VLT in Chile, that shows stars orbiting a supermassive black hole at the centre of our Milky Way galaxy. General relativity in action.

