Alex Renda

@alex_renda_

Grad student. Interested in systems combining programming languages and machine learning.

Joined February 2012

Tweets


  1. Retweeted

    Tool predicts how fast code will run on a chip: Machine-learning system should enable developers to improve computing efficiency in a range of applications.

    [Image: binary code in glowing blue digits on a computer chip that is part of a larger circuit board.]
  2. Dec 20, 2019

    I'm thinking something along these lines, but I'm not sure that this is the best option:

    Show this thread
  3. Dec 20, 2019

    Does anyone know of any physical CPUs that adhere to some relatively simple model? E.g., a simple in-order processor, with some ability to profile execution times?

    Show this thread
  4. Retweeted
    Dec 15, 2019

    BHive: A Benchmark Suite and Measurement Framework for Validating x86-64 Basic Block Performance Models

  5. Retweeted
    Dec 14, 2019
  6. Retweeted
    Dec 14, 2019

    Ever wondered what happens when you freeze all the weights in a neural network and only train batch normalization? Me too! Turns out you can get 80%+ accuracy on CIFAR-10 by doing so. Check out our poster and oral in the SEDL workshop in West 121. With David Schwab and
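
    (An illustrative sketch of this freeze-everything-but-batch-norm setup appears after this list.)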

  7. Retweeted
    Dec 14, 2019

    How do the lottery ticket hypothesis and the loss landscape relate? Winning lottery tickets always find the same, linearly-connected optimum. Check out our poster at the SEDL workshop (West 121) and our new paper
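
    (A minimal sketch of the linear-connectivity check is included after this list.)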

  8. Dec 14, 2019

    Come check out our poster! We present thorough experimental validation of a pruning technique based on the Lottery Ticket Hypothesis, showing how to match state-of-the-art pruning results with a simple technique. We're at the SEDL workshop.
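
    (A rough sketch of the magnitude-pruning idea behind this approach follows the list below.)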

  9. Retweeted
    Dec 13, 2019

    i'll be presenting a poster on learning a semantic parser from zero training examples at the emergent communication workshop tomorrow. come check it out!

  10. Dec 13, 2019

    Come chat about using Tiramisu for sparse neural networks and LSTMs! The afternoon poster session is 3:30-4:30, in rooms 11 and 12 at the MLSys Workshop.

  11. Retweeted
    Jun 20, 2019

    Learn more about how Deep Learning can be used for performance modeling; I will be giving a talk about Ithemal at the ML for Systems workshop

  12. Retweeted
    Jun 12, 2019

    RESULTS: Using a corpus of over 1M basic blocks and just their measured performance, Ithemal learns to predict performance with half the error of LLVM's and Intel's tools.
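
    (A toy sketch of this kind of learned performance model appears after this list.)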

    Show this thread
  13. Retweeted
    Jun 12, 2019

    How fast will your code run on the latest Intel chip? Find out with Ithemal. With only black-box access to a processor, we use machine learning to answer the question with half the error of Intel's own tools.

    Show this thread
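
Item 6 above describes training only the batch normalization layers of an otherwise frozen network. A minimal PyTorch sketch of that setup is below; the model (torchvision's ResNet-18) and the hyperparameters are illustrative assumptions, not the paper's configuration.

```python
# Sketch: freeze all weights, train only the BatchNorm affine parameters (gamma and beta).
# Illustrative assumptions: ResNet-18 for CIFAR-10, plain SGD; not the authors' code.
import torch
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(num_classes=10)

# Freeze everything, then re-enable gradients only for the BatchNorm scale/shift parameters.
for p in model.parameters():
    p.requires_grad = False
for m in model.modules():
    if isinstance(m, nn.BatchNorm2d):
        m.weight.requires_grad = True  # gamma
        m.bias.requires_grad = True    # beta

trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.SGD(trainable, lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # One standard supervised step; only the BatchNorm parameters receive updates.
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```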
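
Item 7's observation that winning tickets find the same, linearly connected optimum is the kind of claim checked by evaluating the loss along the straight line between two trained solutions. A minimal sketch of that interpolation check, assuming two already-trained models and an `evaluate` function of your own:

```python
# Sketch: linear mode connectivity check between two trained networks.
# `model_a`, `model_b`, and `evaluate` are placeholders, not code from the paper.
import copy

def interpolate_state(state_a, state_b, alpha):
    # Parameter-wise convex combination (1 - alpha) * A + alpha * B.
    return {k: (1 - alpha) * state_a[k] + alpha * state_b[k] for k in state_a}

def loss_along_path(model_a, model_b, evaluate, num_points=11):
    # Evaluate the loss at evenly spaced points on the segment between the two solutions.
    # (In practice BatchNorm running statistics are often re-estimated at each point.)
    state_a, state_b = model_a.state_dict(), model_b.state_dict()
    probe = copy.deepcopy(model_a)
    losses = []
    for i in range(num_points):
        alpha = i / (num_points - 1)
        probe.load_state_dict(interpolate_state(state_a, state_b, alpha))
        losses.append(evaluate(probe))  # e.g., mean test loss at this interpolation point
    return losses
```

If the loss stays flat along the whole path, the two solutions sit in one linearly connected basin; a bump in the middle indicates they do not.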
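
Item 8 mentions matching state-of-the-art pruning with a simple technique grounded in the Lottery Ticket Hypothesis. The tweet does not spell out the procedure, but the family it belongs to is magnitude pruning; here is a hedged sketch of one global magnitude-pruning step, where the layer selection and sparsity level are illustrative choices rather than the paper's.

```python
# Sketch: one step of global magnitude pruning.
# Illustrative only; the paper's rewinding/retraining schedule is not reproduced here.
import torch
import torch.nn as nn

def global_magnitude_prune(model, sparsity=0.8):
    # Zero out the `sparsity` fraction of weights with smallest absolute value,
    # measured globally across all Linear and Conv2d layers. Returns the masks.
    weights = [m.weight for m in model.modules()
               if isinstance(m, (nn.Linear, nn.Conv2d))]
    scores = torch.cat([w.detach().abs().flatten() for w in weights])
    k = max(1, int(sparsity * scores.numel()))
    threshold = scores.kthvalue(k).values

    masks = []
    with torch.no_grad():
        for w in weights:
            mask = (w.abs() > threshold).float()
            w.mul_(mask)        # zero the pruned weights
            masks.append(mask)  # reapply after each optimizer step to keep them at zero
    return masks
```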
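
Items 12 and 13 describe Ithemal, which learns to predict how fast a basic block runs from its instructions and measured timings alone. The general shape of such a model (token embeddings, a recurrent encoder, and a scalar regression head) is sketched below; this is a simplified stand-in under assumed names, not the Ithemal architecture or code.

```python
# Sketch: a toy learned performance model for x86 basic blocks.
# Tokenized assembly in, predicted cycle count out. Not Ithemal itself.
import torch
import torch.nn as nn

class ToyThroughputModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # one id per opcode/operand token
        self.encoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)              # regress measured cycles

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tokens, one sequence per basic block
        embedded = self.embed(token_ids)
        _, (final_hidden, _) = self.encoder(embedded)
        return self.head(final_hidden[-1]).squeeze(-1)    # predicted throughput per block

# Training pairs (tokenized block, measured cycles) would come from a corpus
# such as BHive; the loss is plain regression, e.g. nn.MSELoss().
model = ToyThroughputModel(vocab_size=4096)
```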
