Rich Pavlovskiy

@eazymandias

University of Toronto Math and Physics undergrad. AI enthusiast. Wide-eyed optimist.

Toronto, Ontario
Joined October 2018

Tweets


  1. Retweeted

    you send her flowers, i prove her theorems. we are not the same.

  2. Retweeted
    Feb 1

    The days on Mars were 39 minutes longer than the days on Earth. Those minutes got their own place on the clock. “Martian time,” between 11:59 and midnight, was supposed to be spent alone in quiet reflection. On that empty red planet, comfort in loneliness was part of the culture.

  3. Jan 30

    There is/should be a paradox that states that we will never know if we have created a true AGI. Does anyone have an idea of how to prove it, or maybe its name?

  4. Retweeted
    Jan 29
    Replying to

    Perplexity for a language model, by definition, is computed by first averaging the negative log predictions and then exponentiating. Does that help explain?
    (a worked sketch of this computation follows the timeline below)

  5. Retweeted
    Jan 1

    FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping pdf: abs:

  6. Retweeted
    Jan 27

    It just makes economic inequality more persistent if, while rich kids are being taught the recipe for wealth, you tell poor kids it's impossible to become rich except by being a crook:

    This Tweet is unavailable.
  7. Retweeted
  8. Retweeted
    Jan 8

    I prepared a new notebook for my Deep Learning class: Joint Intent Classification and Slot Filling with BERT. This is a step-by-step tutorial to build a simple Natural Language Understanding system using the voice assistant dataset (English only).
    (a hypothetical sketch of this architecture follows the timeline below)

  9. Retweeted
    Jan 11

    me in the audience cheering my friends on as they give cool academic talks in fields I don’t know anything about

  10. Retweeted
    Jan 10

    I finished listening to the epic 5 hour 80k podcast with David Chalmers. I was dreading this since I knew I disagreed with Chalmers on many things but wow was I wrong! This was a phenomenal discussion that really got me PUMPED for philosophy again! (1/5)

  11. Retweeted
    Dec 26, 2019

    Bayesian methods are *especially* compelling for deep neural networks. The key distinguishing property of a Bayesian approach is marginalization instead of optimization, not the prior or Bayes' rule. This difference will be greatest for underspecified models like DNNs. 1/18
    (the model average is written out after the timeline below)

  12. Jan 5

    found a pretty neat blog by Agustinus Kristiadi with some thorough explanations of advanced ML topics (I was looking for a well-explained connection between KL-divergence and log-likelihood)
    (that connection is stated after the timeline below)

  13. Retweeted
    Jan 5

    Consider: millions of years ago our antecedents gave a massive sacrifice of their left hemisphere. We lost a tremendous amount of short term memory and replaced it with Broca's, Wernicke's & the phonological loop. But why? So we can—talk. Thus chimpanzees can do this—we can't:

  14. Retweeted
    Jan 5

    Many if not most "luxury brands" prey upon people who have lots of money but don't know how to live, and hope they can use cost as a sort of compass to guide them to the good life.

  15. Retweeted
    Jan 3

    The effect can now handle collisions and multiple photos

  16. Retweeted
    Dec 9, 2019

    Classifiers are secretly energy-based models! Every softmax giving p(c|x) has an unused degree of freedom, which we use to compute the input density p(x). This makes classifiers into generative models without changing the architecture.
    (a short sketch of this construction follows the timeline below)

  17. Dec 31, 2019

    Juergen Schmidhuber did a pretty cool AMA in 2015 👀 Someone was asking about attention and RNNs, boy oh boy

  18. Retweeted
    Dec 28, 2019

    Reformer: The Efficient Transformer. They present techniques to reduce the time and memory complexity of the Transformer, allowing batches of very long sequences (64K) to fit on one GPU. Should pave the way for the Transformer to be really impactful beyond the NLP domain.

  19. Retweeted
    Dec 26, 2019

    y’all arguing over work ethic is boring. u should cycle dif amphetamines and never sleep, aiming for continuous dopamine drip feed to the brain whilst reading every word of the internet. once achieved, move to log cabin in german forest & do math all day, weekends at berghain

  20. Retweeted
    Dec 20, 2019

