AndriyMulyar

@andriy_mulyar

I enjoy taking signals and predicting things with them.

Joined July 2019

Tweets


  1. Retweeted
    Jan 26

    You know Zagier’s brilliant-but-baffling “one sentence proof” that every prime of the form 4k+1 is the sum of two squares? It turns out there’s a lovely intuitive explanation of it!
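    (A small numerical check of this statement appears after the timeline.)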

  2. Retweeted
    Jan 24
    Replying to

    Reviewer 2: This paper is clear and thorough, but it unfortunately failed to trigger my $\sum$ receptor. Author: We thank Reviewer 2 for their feedback and have copy-pasted the definition of AdaBoost into our paper.

  3. Retweeted
    Jan 13

    New blog post! This time I'm looking at recent advances in memory-efficient training and where that might lead.

  4. Retweeted
    Jan 20

    I found it! Turns out it was not as directly applicable as I remembered, but still a very interesting and deeply technical read:

  5. Retweeted
    Jan 13

    a HUGE problem in NLP, imo, is that to make actual progress, or to build something which is actually good, you just cannot escape doing very elaborate and laborious work on the messy details. language is messy, and we cannot escape that.

  6. Retweeted
    Mar 15, 2019

    Justification for the optimism about neural networks: there won't be any need for preprocessing. You can just train from pixels, and with enough compute and enough data, they will approximate the best possible for the task at hand. (8)

  7. Retweeted
    Jan 7

    This is exactly the tactic warned about in her recent keynote: "There’s a sense that a career could be destroyed over awkward passes or misunderstandings... What I want to say today to all of the men in the room is that you have been misled."

  8. Retweeted
    Jan 7
    Replying to
  9. Retweeted
    Jan 6

    You might have seen some press recently about a study Google did where they trained a neural net to read mammograms more accurately than doctors. As usual, that's not the real story. What is, is the sad state of the American healthcare industry.

  10. Retweeted
    Jan 6

    Thanks! For a 1-pager on DAGs, I give this page from AJE 2005 (with major credit to co-author) and I like the chapter in Oakes & Kaufman Methods for Social Epi. I learned from & Robins: .

  11. Jan 1
  12. Retweeted
    Dec 18, 2019

    It took me a while to recap bc some papers are complex. Key points:
    - Deconstructing the black box: convergence, generalization, neural tangent kernel
    - New approaches: Bayesian & uncertainty, graph, convex optimization
    - Neuroscience & bio-inspired algos
    - ...

  13. Retweeted
    Dec 15, 2019

    Since I've been complaining about techbros today (in general, and one in particular), I thought it might also be good to write a thread on how to not be that guy:

  14. Retweeted
    Dec 4, 2019

    Our new paper, Deep Learning for Symbolic Mathematics, is now on arXiv. We added *a lot* of new results compared to the original submission. With (1/7)

  15. Retweeted
    Dec 3, 2019

    Code release for "What's hidden in a randomly weighted neural network?" Code: Arxiv: Discussion thread below

  16. Retweeted
    Oct 26, 2019
    Replying to

    To people asking for a debugging guide, wrote a pretty comprehensive one:

  17. Oct 7, 2019
  18. Retweeted
  19. Retweeted
    Sep 6, 2019
    Replying to

    +1. When I ran EMNLP in 2007, we wrote and saved many emails laying out clear expectations for authors, reviewers, ACs, presenters, & session chairs. These were used by several *ACL conferences thereafter. Available on request.

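The first retweet in the timeline states the theorem behind Zagier's one-sentence proof: every prime of the form 4k+1 is the sum of two squares. The sketch below is only a brute-force numerical check of that statement for small primes, not Zagier's involution argument; the bound and the helper names (`is_prime`, `two_square_decomposition`) are illustrative choices, not anything from the original thread.

```python
# Brute-force check that every prime p = 4k + 1 below a small bound
# can be written as a sum of two squares (Fermat's theorem, the one
# Zagier's one-sentence proof establishes). Not the proof itself.
from math import isqrt


def is_prime(n: int) -> bool:
    """Trial-division primality test, adequate for small n."""
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))


def two_square_decomposition(p: int):
    """Return (a, b) with a*a + b*b == p, or None if no such pair exists."""
    for a in range(1, isqrt(p) + 1):
        b = isqrt(p - a * a)
        if a * a + b * b == p:
            return a, b
    return None


# Every prime p ≡ 1 (mod 4) below an arbitrary bound should decompose.
for p in range(5, 1000):
    if is_prime(p) and p % 4 == 1:
        pair = two_square_decomposition(p)
        assert pair is not None, f"{p} should be a sum of two squares"
        a, b = pair
        print(f"{p} = {a}^2 + {b}^2")  # e.g. 13 = 2^2 + 3^2, 29 = 2^2 + 5^2
```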
