Denny Britz

@dennybritz

Human. Ex-Google Brain, Stanford, Cal. Tweets about ML, startups. Writing at and .

Joined January 2008

Tweets

  1. May 9, 2019

    Machine Learning model deployments often have these characteristics. You cannot look only at the average; you need to look at the cost of tail events.

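    For a concrete (made-up) illustration of why the average hides this, here is a minimal sketch with hypothetical per-prediction costs where roughly 0.1% of predictions are catastrophically expensive; all numbers are placeholders:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical per-prediction costs: most predictions are cheap,
    # but roughly 0.1% trigger a very expensive failure.
    n = 1_000_000
    routine_cost = rng.exponential(scale=0.01, size=n)
    tail_cost = np.where(rng.random(n) < 1e-3, 500.0, 0.0)
    cost = routine_cost + tail_cost

    print(f"mean cost per prediction: {cost.mean():.3f}")
    print(f"99.9th percentile cost:   {np.percentile(cost, 99.9):.3f}")
    print(f"share of total cost from tail events: {tail_cost.sum() / cost.sum():.1%}")
    ```
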
  2. Apr 24, 2019

    Although I have been working with Deep Learning for 4 years, I choose not to learn more than a smattering of Linear Algebra, because I need a sense of mystery and alchemy in my life, something to offset the sense of the familiar.

  3. Retweeted
    Apr 23, 2019

    Releasing the Sparse Transformer, a network which sets records at predicting what comes next in a sequence — whether text, images, or sound. Improvements to neural 'attention' let it extract patterns from sequences 30x longer than possible previously:

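    As a rough illustration only, not the paper's implementation: a strided sparse attention pattern can be sketched as a boolean mask in which each query attends to a local window plus every stride-th earlier position; the window and stride values below are arbitrary assumptions.

    ```python
    import numpy as np

    def strided_sparse_mask(seq_len: int, window: int = 8, stride: int = 8) -> np.ndarray:
        """Boolean causal mask: True where query i may attend to key j."""
        mask = np.zeros((seq_len, seq_len), dtype=bool)
        for i in range(seq_len):
            # Local band: the last `window` positions (including itself).
            mask[i, max(0, i - window + 1): i + 1] = True
            # Strided positions: every `stride`-th earlier position.
            mask[i, np.arange(0, i + 1, stride)] = True
        return mask

    m = strided_sparse_mask(64)
    print(f"dense causal entries: {64 * 65 // 2}, sparse entries: {int(m.sum())}")
    ```
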
  4. Retweeted
    Feb 17, 2019

    Thrilled to be teaching a new course on Deep Unsupervised Learning with (ImprovedGAN, InfoGAN, PixelCNN++, VLAE, PixelSNAIL, Flow++), (Flow++, GAIL), (Flow++). Follow along here: Lecture vids go up once captioned

  5. Retweeted
    Apr 20, 2019

    One of my most controversial software opinions is that your sleep quality and stress level matter far, far more than the languages you use or the practices you follow. Nothing else comes close: not type systems, not TDD, not formal methods, not ANYTHING. Allow me to explain why.

  6. Retweeted
    Apr 21, 2019

    "Exploring the Limitations of Behavior Cloning for Autonomous Driving" by Antonio M. Lopez and myself is live on arXiv. Imitation learning has indeed potential, but also requires a lot of care for training. Check it out!

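    Not the paper's setup, but for context: a minimal behavior-cloning loop is just supervised regression from observations to expert actions. All tensors, dimensions, and hyperparameters below are placeholders.

    ```python
    import torch
    import torch.nn as nn

    # Placeholder "expert" dataset: observations -> continuous actions (random here).
    obs = torch.randn(1024, 32)      # e.g. flattened sensor features
    actions = torch.randn(1024, 2)   # e.g. steering + acceleration

    policy = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 2))
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()

    for epoch in range(10):
        pred = policy(obs)
        loss = loss_fn(pred, actions)   # imitate the expert's actions
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    print(f"final imitation loss: {loss.item():.4f}")
    ```
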
  7. Retweeted

    Successfully defended my Ph.D. thesis: Humour-in-the-loop: Improvised Theatre with Interactive Machine Learning Systems thx: and more. deets:

  8. Retweeted
    Apr 15, 2019

    A great GitHub repository with tutorials on getting started with PyTorch and TorchText for sentiment analysis in Jupyter Notebooks. What a great resource!

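    The repository itself isn't reproduced here, but a bare-bones PyTorch sentiment classifier, assuming token IDs have already been prepared, looks roughly like this (vocabulary size and data are placeholders):

    ```python
    import torch
    import torch.nn as nn

    VOCAB_SIZE, EMBED_DIM = 10_000, 100

    class SentimentClassifier(nn.Module):
        def __init__(self):
            super().__init__()
            self.embedding = nn.EmbeddingBag(VOCAB_SIZE, EMBED_DIM)  # mean-pools token embeddings
            self.fc = nn.Linear(EMBED_DIM, 1)                        # single logit: positive vs. negative

        def forward(self, token_ids, offsets):
            return self.fc(self.embedding(token_ids, offsets)).squeeze(-1)

    model = SentimentClassifier()
    # Two toy "sentences" packed into one flat tensor; offsets mark where each starts.
    token_ids = torch.tensor([1, 5, 7, 2, 9, 9, 3])
    offsets = torch.tensor([0, 3])
    labels = torch.tensor([1.0, 0.0])
    loss = nn.BCEWithLogitsLoss()(model(token_ids, offsets), labels)
    print(loss.item())
    ```
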
  9. Retweeted
    Apr 15, 2019

    . has just released a collection of datasets for conversational AI, consisting of hundreds of millions of examples-

  10. Mar 19, 2019

    I just realized that even with AGI we cannot make Japanese toilets any smarter.

  11. Mar 17, 2019

    And the irony of the story: Because keys keep falling out of the MacBook Pro keyboard, I purchased an external USB keyboard. Now I have the choice: Do I want to use a keyboard or WiFi? I can pick one. That’s a tough choice.

  12. Mar 17, 2019

    Just ran into this on my MacBook. I mean, what kind of crazy person would want to use WiFi and USB at the same time anyway?

  13. Retweeted

    A company that starts with "V" was founded in 2010 and raised $122M since. While they did mislead investors and the public ("we're building AGI and we're going to have a human-level vision system in 5 years" (said in 2011)), they did not do so with such absolute shamelessness.

  14. Mar 11, 2019

    I would like to pre-order one AGI, please. Gotta get the revenue stream started to get to the 100x cap ;)

  15. Mar 11, 2019

    "Even if the projects I'm doing succeeded beyond my wildest expectations, how would it affect people? Whose lives would be improved and how would they be improved?" This sounds like a good prompt to use to pick projects

  16. Mar 11, 2019

    I believe the glorification of “no domain knowledge required” in recent ML algorithms is misguided. Domain knowledge is often what makes models feasible to train, robust, and explainable in real world applications.

  17. Mar 11, 2019

    The Promise of Hierarchical Reinforcement Learning. It's interesting that the requirement of domain knowledge is seen as a drawback of HRL. The ability to more easily incorporate domain knowledge is what excites me about it the most.

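    A toy sketch of what "incorporating domain knowledge" can mean in HRL: hand-defined subgoals and termination conditions, with a high-level policy choosing among them. Everything below is an illustrative assumption, not a specific published method.

    ```python
    import random

    # Domain knowledge enters as hand-designed subgoals (options) and their
    # termination conditions, e.g. for a gridworld navigation task.
    SUBGOALS = {
        "reach_door": lambda state: state["pos"] == (5, 0),
        "reach_exit": lambda state: state["pos"] == (9, 9),
    }

    def high_level_policy(state):
        # In a real system this would be a learned policy over options.
        return random.choice(list(SUBGOALS))

    def low_level_step(state, subgoal):
        # Placeholder primitive controller; normally a learned goal-conditioned policy.
        x, y = state["pos"]
        return {"pos": (x + 1, y)}

    state = {"pos": (0, 0)}
    subgoal = high_level_policy(state)
    while not SUBGOALS[subgoal](state) and state["pos"][0] < 9:
        state = low_level_step(state, subgoal)
    print(subgoal, state)
    ```
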
  18. Mar 11, 2019

    Does anyone know what kind of model Google Translate uses for its OCR bounding boxes when you upload an image? Do they detect bounding boxes separately first (how?), or is it all end-to-end?

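    For context, one common design, whether or not it is what Google Translate actually uses, is a two-stage pipeline: a detector proposes text boxes, then a recognizer reads each crop. The function names below are hypothetical placeholders.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Box:
        x0: int
        y0: int
        x1: int
        y1: int

    def detect_text_boxes(image):
        # Hypothetical stage 1: a detection model proposes text regions.
        return [Box(10, 10, 120, 40)]

    def recognize_text(crop):
        # Hypothetical stage 2: a sequence model transcribes one cropped region.
        return "hello"

    def ocr(image):
        results = []
        for box in detect_text_boxes(image):
            crop = image  # real pipeline: image[box.y0:box.y1, box.x0:box.x1]
            results.append((box, recognize_text(crop)))
        return results

    print(ocr(object()))
    ```
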
  19. Retweeted
    Feb 27, 2019

    Excited to announce Habitat, a platform for embodied AI research: — Habitat-Sim: high-perf 3D sim (w/ SUNCG, MP3D, Gibson) — Habitat-API: modular library for defining tasks, training agents — Habitat-Chal: autonomous nav challenge on

  20. Feb 27, 2019

    “Your place of birth is not compatible with our current business needs” … and our mission to maximize shareholder value

