sebastien_wood

@sebastien_wood

Ph.D. student, Dept. of Electrical Engineering, focusing on ML/AI; current projects on the power efficiency of neural networks

Montréal, Québec
Joined October 2013

Tweets


  1. Retweeted
    Jan 29

    I made a goose that destroys your computer Download it free here:

  2. Retweeted
    Jan 22

    There's a *lot* of people that are under the impression that publishers pay for stuff that's actually done by volunteers or open source software. As a result, many people over-estimate the amount of costs that are reasonable to expect.

  3. Retweeted
    Jan 20

    For some reason, I haven't been invited to Davos this year. Was it something I said? 🤔

  4. Retweeted

    The perfect character backstory doesn't exi-

  5. Retweeted
    Nov 24, 2019

    It’s been 20 years since I submitted my first paper with Nhat Nguyen and the late great Gene Golub on multi-frame super-res (SR). Here’s a thread, a personal story of SR as I’ve experienced it. It won’t be exhaustive or fully historical. Apologies to colleagues for any omissions

  6. Jan 15

The use case is when there are possible orthogonalities between your experiment/idea and the state-of-the-art shiny bag-of-tricks thingy. E.g., is a clear signal of value in your idea more important than trying to look cool by getting top-1?

  7. Jan 15

When preparing an experiment, do you believe it is more useful to compare with the strongest available baseline (so as to enter a virtual competition), or is it better to compare with a standard baseline with marginally lower results?

  8. Retweeted
    Jan 13

    Swap x and y without using a third variable:
    x = x ^ y
    y = y ^ x
    x = x ^ y
    where ^ is XOR.

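The XOR swap above is easy to verify in a few lines; here is a minimal Python sketch (the function name is illustrative):

```python
def xor_swap(x, y):
    # Swap two integers without a temporary variable, using XOR.
    x = x ^ y  # x now holds x XOR y
    y = y ^ x  # y = y XOR (x XOR y) = original x
    x = x ^ y  # x = (x XOR y) XOR original x = original y
    return x, y

print(xor_swap(3, 5))  # -> (5, 3)
```

Note the trick only applies to integer (bit-pattern) values, and fails if x and y alias the same memory location.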
  9. Retweeted
    Jan 12

    Some work-in-progress illustrations to gently introduce some concepts for 2-sample t-tests & how to think about p-values. Meant as teaching aids, not as a comprehensive standalone lesson. Caveats & assumptions abound. 🧵

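For the two-sample t-test the tweet refers to, the pooled-variance t statistic can be sketched with the standard library alone (the sample data is made up, and this is the equal-variance form, not Welch's):

```python
from math import sqrt
from statistics import mean, variance

def pooled_t(a, b):
    # Two-sample t statistic with pooled variance (assumes equal variances).
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

t = pooled_t([1, 2, 3], [4, 5, 6])
print(round(t, 3))  # -> -3.674
```

The p-value then comes from the t distribution with na + nb - 2 degrees of freedom (e.g. via scipy.stats).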
  10. Retweeted

    Thank you, Taiwan.

  11. Retweeted
    Jan 8
  12. Retweeted
    Jan 6

    Patterns produced by neural network with random weights can give us some insights about the system’s inductive biases

  13. Retweeted
    Jan 5

Lloyd’s algorithm is the continuous counterpart of k-means. It optimizes the quantization energy, which is an optimal transport distance to free Dirac masses.

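The discrete form of Lloyd's iteration, alternating nearest-center assignment and centroid update, fits in a few lines; a minimal 1-D sketch (data, k, and names are illustrative):

```python
def lloyd_1d(points, centers, iters=10):
    # Lloyd's iteration: alternate assignment and centroid update.
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            # assign each point to its nearest center
            i = min(range(len(centers)), key=lambda j: abs(p - centers[j]))
            clusters[i].append(p)
        # move each center to the mean of its assigned points
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers

print(lloyd_1d([0, 1, 10, 11], [0.0, 5.0]))  # -> [0.5, 10.5]
```

Each step cannot increase the quantization energy, so the iteration converges to a local optimum (the global one is not guaranteed).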
  14. Retweeted

    Public transit should be free. The fact that it's not is a policy choice. We choose to prioritize cars & spend money cracking down on fare evasion. This worsens carbon emissions & hurts the poor. Public transit should be free.

  15. Dec 24, 2019

(compared to a binary network with reliable memory) Details are in the paper, which we'll present at AICAS 2020 🤓 Interested? Have any questions? Let's chat! 🙌 Otherwise, merry Christmas! 🎄

  16. Dec 24, 2019

Thus we propose the Layerwise Noise Maximisation algorithm to efficiently find the "good amount" of randomness during gradient-descent-based training. We show networks can retain useful accuracy with a third of the original memory energy consumption.

  17. Dec 24, 2019

Or, you could try to optimize the network to work with these random parameters and, even better, try to maximize the amount of randomness. Based on recent work such as "Are all layers created equal" (Zhang, Bengio et al.), we propose to optimize the randomness per layer.

  18. Dec 24, 2019

You could use safety mechanisms to ensure the read parameters are "the good ones" (e.g. an error-correcting code). But let's agree it's neither fun nor free (it requires additional hardware 🤖)

  19. Dec 24, 2019

It's no free lunch though! The memory read error rate, meaning 0s read as 1s and vice versa, will increase exponentially. Your network will thus have to make do with randomized parameters.

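The read-error model this tweet describes, each stored bit flipping with some probability on read, can be sketched in a few lines (the function name and flip model are illustrative assumptions, not the paper's code):

```python
import random

def read_with_flips(bits, p, rng):
    # Simulate unreliable memory: each stored bit flips with probability p.
    return [b ^ 1 if rng.random() < p else b for b in bits]

rng = random.Random(0)
stored = [1, 0, 1, 1, 0, 0, 1, 0]
print(read_with_flips(stored, 0.0, rng))  # p = 0: reads back exactly
```

Training with such flips injected into the forward pass is one way to make a network tolerate the randomized parameters.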
  20. Dec 24, 2019

To squeeze consumption even further, you're then bound to tweak your architecture... but are you? If you take a look at the hardware side, consumption grows quadratically with the supply voltage. You could try to play with it 🤓

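The quadratic dependence mentioned here is the classic CMOS dynamic-power model, P = C·V²·f; a toy calculation (capacitance and frequency values are made up):

```python
def dynamic_power(c, v, f):
    # Classic CMOS dynamic power model: P = C * V^2 * f.
    return c * v ** 2 * f

p_full = dynamic_power(1e-9, 1.0, 1e9)  # nominal supply voltage
p_half = dynamic_power(1e-9, 0.5, 1e9)  # halved supply voltage
print(p_half / p_full)  # -> 0.25
```

Halving the supply voltage quarters the dynamic power, which is why voltage scaling is so attractive despite the read errors it induces.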
