Isaac Leonard

@is8ac

Rust user, ML dev

@is8ac@mastodon.social
Joined June 2014

Tweets

  1. Retweeted
    Jan 9

    . is an extraordinary, massive, persistent force of nature, like Jupiter’s Great Red Spot or something. Read his decade in review and be gobsmacked:

  2. Retweeted
    Jan 2
    Replying to

    I had, but my next thought was "I don't understand why Shane Legg, who seems like a pretty smart guy, is so ludicrously optimistic that connectionism is going to suddenly start working when it never has before, and worth quitting a good academic career for." Not my best call.

  3. Retweeted

    “It would imply that there are language-like behaviors out there in logical space which aren’t language and which are nonetheless so much like it, non-trivially, beautifully, spine-chillingly like it.”

  4. Retweeted

    My deep respect to for reporting this news. May all journalists writing about climate change follow his lead: The facts are bad enough to justify urgent action on climate change. Attempts to scare people into action by going beyond the science will only backfire.

  5. Retweeted
    12 Dec 2019

    Machine Learning in a company is 10% Data Science & 90% other challenges. It's VERY hard. Everything in this guide is ON POINT, and it's stuff you won't learn in an ML book: "Best Practices of ML Engineering". This is a lifesaver project

  6. Retweeted

    This is the first writeup of quadratic decision-making that got me interested - though I would have led with the headline that "nobody can prove how you voted" is meant to address vote-buying schemes.

  7. Retweeted

    1/4: The lottery ticket hypothesis suggests that by training DNNs from “lucky” initializations, we can train networks which are 10-100x smaller with minimal performance losses. In new work, we extend our understanding of this phenomenon in several ways...

  8. Retweeted

    We also introduce a technique [] for training neural networks that are sparse throughout training from a random initialization - no luck required, all initialization “tickets” are winners.

  9. Retweeted

    “Fast Sparse ConvNets”, a collaboration w/ [], implements fast Sparse Matrix-Matrix Multiplication to replace dense 1x1 convolutions in MobileNet architectures. The sparse networks are 66% the size and 1.5-2x faster than their dense equivalents.

  10. 20 Nov 2019
  11. Retweeted
    18 Nov 2019

    I'm running a new instruction set! The star I use to establish my local coordinate system is Vega (α Lyrae), the brightest star in the constellation Lyra. CCS: END BASELINE SEQUENCE B185: BEGIN BASELINE SEQUENCE B186: LOCK STAR(03) (2019:323:024115:2T)

  12. Retweeted

    i've listened to Kimi no Shiranai Monogatari enough times to make this mildly heartbreaking ;~;

  13. Retweeted
    17 Nov 2019

    Today’s a great day to celebrate the people who took the risk to give you your first job (in AI or otherwise). For me it was introing me to John Wagster and Keith Massey. Changed my life and I’ll always be grateful.

  14. Retweeted

    software is essential to science—that’s why we’re supporting 42 open source tools that accelerate biomedical research and serve the larger community. Learn about the grantees

  15. Retweeted

    Just a few more minutes in today's ! Watch Mercury complete its journey across the Sun through the eyes of our Solar Dynamics Observatory satellite ➡️ . SDO keeps a constant eye on the Sun, so it has a prime view for transits like this! 🛰☀️

  16. Retweeted
    9 Nov 2019

    🤖 As a persistent critic of AI hype, I should be glad for backlash. A new wave of Important Thinkpieces from Famous Pundits say AI is impossible, we don’t need to worry about it, etc. Most are riddled with glaring illogic, false analogies, motivated reasoning, & factual errors

  17. Retweeted
    31 Oct 2019

    This isn't really surprising! Lots of people are okay with "you'll pay more taxes so we can invest in the country continuing to have upwards mobility" and not okay with "you'll pay more taxes because you're evil scum who don't deserve the money you made!"

  18. Retweeted

    Speaking of productive self-criticism. When we demonstrate a new capability, let's also demonstrate its limits by showing instances of functional failures. Much of the criticism is a reaction to the hype generated by researchers and the organizations they work for. 7/

  19. Retweeted
    26 Oct 2019

    He argues that if we instead design AIs whose objective is to get a better model of humans and do what we want, then there's still a lot which can go wrong - but more competent AI means better outcomes instead of meaning worse ones. That's a safer paradigm to be working in.

  20. Retweeted
    25 Oct 2019

    Path Length Bounds for Gradient Descent – Blog | Machine Learning | Carnegie Mellon University

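The lottery ticket retweets above (items 7–8) describe finding "winning ticket" subnetworks by pruning a trained network and rewinding the surviving weights to their initial values. A toy numpy sketch of one magnitude-pruning round, assuming a single made-up weight matrix and a "pretend training" step in place of real optimization (this is illustrative, not the paper's actual procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a layer's weights: save the initialization ("the ticket").
w_init = rng.normal(size=(8, 8))
w_trained = w_init + rng.normal(scale=0.5, size=(8, 8))  # pretend training happened

# Magnitude pruning: keep only the largest-magnitude 20% of trained weights.
k = int(0.2 * w_trained.size)
threshold = np.sort(np.abs(w_trained).ravel())[-k]
mask = np.abs(w_trained) >= threshold

# The "winning ticket": the surviving connections, rewound to their
# original initial values, ready to be retrained from scratch.
ticket = np.where(mask, w_init, 0.0)

print(mask.sum(), "of", mask.size, "weights survive")  # → 12 of 64 weights survive
```

In the actual method this prune-and-rewind loop is iterated, retraining between rounds, which is how sparsity levels of 90–99% become reachable.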
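Item 9's "Fast Sparse ConvNets" claim rests on the fact that a 1x1 convolution is just a matrix multiply, so a sparse weight matrix can stand in for the dense one directly. A minimal sketch using scipy's CSR format (the shapes and 2/3 sparsity level here are invented for illustration; the actual work uses custom sparse kernels, not scipy):

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(1)
H, W, C_in, C_out = 4, 4, 16, 8

# A 1x1 convolution is a matrix multiply: reshape the feature map to
# (C_in, H*W) and left-multiply by the (C_out, C_in) weight matrix.
x = rng.normal(size=(C_in, H * W))
w = rng.normal(size=(C_out, C_in))

# Sparsify: zero out roughly 2/3 of the weights, then store in CSR
# format so the multiply can skip the zeros entirely.
w[np.abs(w) < np.quantile(np.abs(w), 2 / 3)] = 0.0
w_sparse = csr_matrix(w)

y_dense = w @ x          # dense multiply over the (sparsified) weights
y_sparse = w_sparse @ x  # CSR multiply that skips the zeros, same result

print(np.allclose(y_dense, y_sparse))  # → True
```

The speedup in the tweet comes from the sparse path doing work proportional to the number of nonzeros (about a third here) rather than the full weight count.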
