Tomasz Darmetko

@Isinlor

Software Developer at the University of Leuven. Passionate about machine learning and astronomy.

Leuven
Joined February 2012

Tweets


  1. Retweeted
    Jan 30

    The most surprising yet true thing anyone has ever pointed out to me on Wikipedia is that the Sun is so bright because it is so big. The power production rate is actually like a few lightbulbs in a box, or the heat from a compost pile or a lizard! From:

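The tweet's claim can be sanity-checked with rough standard figures for the Sun's luminosity and radius (the numbers below are round approximations, not precise solar-model values):

```python
import math

# Back-of-envelope check: the Sun's power output averaged over its volume.
L_sun = 3.8e26   # total luminosity, watts (approximate)
R_sun = 7.0e8    # radius, metres (approximate)

volume = (4.0 / 3.0) * math.pi * R_sun**3   # ~1.4e27 cubic metres
mean_density = L_sun / volume               # watts per cubic metre

print(f"mean power density: {mean_density:.2f} W/m^3")
```

The mean works out to roughly a quarter of a watt per cubic metre, far less than a lightbulb; even at the core the figure is only a few hundred W/m³. The Sun is bright simply because it has an enormous number of cubic metres.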
  2. Retweeted
    Jan 28

    New paper: Towards a Human-like Open-Domain Chatbot. Key takeaways: 1. "Perplexity is all a chatbot needs" ;) 2. We're getting closer to a high-quality chatbot that can chat about anything Paper: Blog:

  3. Retweeted
    Dec 9, 2019

    Classifiers are secretly energy-based models! Every softmax giving p(c|x) has an unused degree of freedom, which we use to compute the input density p(x). This makes classifiers into generative models without changing the architecture.

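The "unused degree of freedom" in the tweet: softmax p(y|x) is unchanged if you add a constant to every logit, but the logsumexp of the logits is not, and that leftover quantity can be read as an unnormalized log p(x). A minimal pure-Python sketch with toy logits (not the paper's actual model):

```python
import math

def logsumexp(values):
    """Numerically stable log(sum(exp(v)))."""
    m = max(values)
    return m + math.log(sum(math.exp(v - m) for v in values))

def softmax_probs(logits):
    """The usual classifier output p(y|x)."""
    z = logsumexp(logits)
    return [math.exp(v - z) for v in logits]

def unnormalized_log_px(logits):
    """The hidden degree of freedom: log p(x) up to a constant is the
    logsumexp of the same logits (i.e. energy E(x) = -logsumexp)."""
    return logsumexp(logits)

logits = [2.0, -1.0, 0.5]              # toy logits for one input x
p_y_given_x = softmax_probs(logits)    # unchanged by a constant shift
log_px = unnormalized_log_px(logits)   # changed by a constant shift
```

Shifting all logits by a constant c leaves p(y|x) identical while moving log p(x) by exactly c, which is what lets the same network double as a generative model.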
  4. Retweeted
    Oct 1, 2019

    Do you formally know Monte-Carlo and TD learning, but don't intuitively understand the difference? This is for you. (with )

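The intuitive difference in one sketch (a hypothetical two-step episode with illustrative state names): Monte-Carlo waits for the complete return of the episode, while TD(0) bootstraps after one step from the current value estimate.

```python
# One observed episode: s0 -> s1 -> terminal, with (state, reward-on-leaving).

def mc_update(V, episode, alpha=0.1, gamma=1.0):
    """Monte-Carlo: move each state's value toward the full observed return."""
    G = 0.0
    for state, reward in reversed(episode):   # accumulate return backwards
        G = reward + gamma * G
        V[state] += alpha * (G - V[state])
    return V

def td0_update(V, episode, alpha=0.1, gamma=1.0):
    """TD(0): move each state's value toward reward + gamma * V(next state)."""
    for i, (state, reward) in enumerate(episode):
        next_state = episode[i + 1][0] if i + 1 < len(episode) else None
        next_v = V.get(next_state, 0.0) if next_state is not None else 0.0
        V[state] += alpha * (reward + gamma * next_v - V[state])
    return V

episode = [("s0", 1.0), ("s1", 2.0)]
V_mc = mc_update({"s0": 0.0, "s1": 0.0}, episode)
V_td = td0_update({"s0": 0.0, "s1": 0.0}, episode)
```

MC's target for s0 is the whole return 1 + 2 = 3; TD's target is 1 + V(s1), which is still 0 at update time. That is the trade-off: TD is biased by the current estimate but has lower variance and updates online.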
  5. Retweeted
    Oct 10, 2019

    The war between ML frameworks has raged on since the rebirth of deep learning. Who is winning? 's data analysis shows clear trends: PyTorch is winning dramatically among researchers, while TensorFlow still dominates industry.

  6. Retweeted
    Sep 11, 2019

    The paper that introduced Batch Norm combines clear intuition with compelling experiments (14x speedup on ImageNet!!) So why has 'internal covariate shift' remained controversial to this day? Thread 👇

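For reference, the normalization the paper introduces is computationally simple; the controversy the thread refers to is about *why* it helps, not what it does. A minimal training-mode forward pass for one feature (pure Python, learned scale/shift passed in as plain numbers):

```python
import math

def batch_norm_forward(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a batch of scalars to zero mean / unit variance,
    then apply the learned affine transform gamma * x_hat + beta."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    x_hat = [(v - mean) / math.sqrt(var + eps) for v in x]
    return [gamma * v + beta for v in x_hat]

out = batch_norm_forward([1.0, 2.0, 3.0, 4.0])
```

With gamma = 1 and beta = 0 the output batch has mean ~0 and variance ~1; whether the resulting benefit comes from reducing "internal covariate shift" or from smoothing the loss landscape is exactly what remained controversial.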
  7. Retweeted

    The Illustrated GPT-2 (Visualizing Transformer Language Models) New blog post visually exploring the insides of the model that dazzled us with its ability to write coherently and with conviction. We also look at other applications of this type of model.

  8. Retweeted
    Aug 15, 2019

    Compressed everything I learned about how life sciences work in the last year (and 100+ interviews) into 6000 words:

  9. Retweeted
    Jun 11, 2019

    Weight Agnostic Neural Networks 🦎 Inspired by precocial species in biology, we set out to search for neural net architectures that can already (sort of) perform various tasks even when they use random weight values. Article: PDF:

  10. Retweeted
    May 15, 2019

    A while ago, I blogged about a simple way to think about matrices, namely as bipartite graphs. Now I’d like to share yet another way to think about matrices: tensor network diagrams! Here, familiar things have nice pictures. New blog post!

  11. Retweeted
    Apr 5, 2019

    Does my unsupervised neural network learn syntax? In new paper with , our "structural probe" can show that your word representations embed entire parse trees. paper: blog: code: 1/4

  12. Retweeted

    I've made this cheat sheet and I think it's important. Most stats 101 tests are simple linear models - including "non-parametric" tests. It's so simple we should only teach regression. Avoid confusing students with a zoo of named tests. 1/n

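The cheat sheet's point can be verified directly: a classic pooled two-sample t-test gives exactly the t-statistic of the slope in the regression y = intercept + slope * group, with group coded 0/1. A pure-Python check on toy data (values invented for illustration):

```python
import math

a = [2.1, 1.9, 2.4, 2.0, 2.3]   # group 0
b = [2.8, 3.1, 2.6, 3.0, 2.9]   # group 1

def t_two_sample(a, b):
    """Classic pooled (equal-variance) two-sample t-statistic."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    ssa = sum((v - ma) ** 2 for v in a)
    ssb = sum((v - mb) ** 2 for v in b)
    sp2 = (ssa + ssb) / (na + nb - 2)   # pooled variance
    return (mb - ma) / math.sqrt(sp2 * (1 / na + 1 / nb))

def t_regression_slope(a, b):
    """t-statistic of the slope in y = intercept + slope * group."""
    x = [0.0] * len(a) + [1.0] * len(b)
    y = a + b
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((v - mx) ** 2 for v in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    resid_ss = sum((yi - intercept - slope * xi) ** 2 for xi, yi in zip(x, y))
    se = math.sqrt(resid_ss / (n - 2) / sxx)
    return slope / se
```

The slope is just the difference of group means, and the regression's residual variance is the pooled variance, so the two statistics agree to floating-point precision.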
  13. Retweeted
    Mar 25, 2019

    1/6 Deep classifiers seem to be extremely invariant to *task-relevant* changes. We can change the content of any ImageNet image, without changing model predictions over the 1000 classes at all. Blog post @ . with Rich Zemel

  14. Retweeted
    Mar 15, 2019

    This. Don't waste time on domain specific tricks. Do work on abstract & general inductive biases like smoothness, relational structure, compositionality, in/equivariance, locality, stationarity, hierarchy, causality. Do think carefully & deeply about what is lacking in AI today.

  15. Mar 15, 2019

    The biggest lesson that can be read from 70 years of AI research is that general methods that leverage computation are ultimately the most effective, and by a large margin. - Rich Sutton

  16. Mar 5, 2019

    "The Halo Drive: Fuel-free Relativistic Propulsion of Large Masses via Recycled Boomerang Photons" by The idea is to use a moving black hole as a mirror that can energize a laser beamed towards it. The mirrored beam is free energy. Clever! 🤩

  17. Mar 1, 2019

    This is *the most important* direction of work in AI safety today. Recommendation systems have enormous power over society and politics, and we do not understand them. Who knows how much these systems helped Trump and Brexit to happen.

  18. Feb 26, 2019

    Differentiable Programming: Rather than always writing new programs for ML, we can incorporate existing ones, enabling physics engines inside deep learning-based robotics models.

  19. Retweeted
    Jun 27, 2018

    I made this years ago, even before , but I never published it because I never finished it and stuff happened... Somehow it got on Hacker News, so here it is: Backpropagation explained via scrollytelling:

  20. Feb 14, 2019

    Neural Networks seem to follow a puzzlingly simple strategy to classify images. Viewing the decision-making of CNNs as a bag-of-feature strategy could explain several weird observations about CNNs.

