Tweets

You have blocked @BKrishnapuram


  1. Pinned Tweet
    6 Aug 2019
    Replying to

    Incredibly honored to receive the service award! I’m very grateful to a large team of org committee members, volunteers and sponsors who came together to transform KDD over the last few years.

  2. Retweeted

    Yes, it's counterintuitive. But this is where I think for medicine can take us in the 2020s if we set our goals right. Take medicine back; restore the human connection w/

  3. Retweeted

    A new paper has been making the rounds with the intriguing claim that YouTube has a *de-radicalizing* influence. Having read the paper, I wanted to call it wrong, but that would give the paper too much credit, because it is not even wrong. Let me explain.

  4. Retweeted
    29 Dec 2019

    Aristotelian physics may be flawed, but it has been unfairly discarded in favor of Newtonian physics, despite the latter not solving all of the open problems in the field. Clearly the way forward is a hybrid system, wherein aether obeys the conservation of momentum.

  5. Retweeted

    "This would effectively nationalize the valuable American intellectual property that we produce and force us to give it away to the rest of the world for free." - Good lord 😳. This does include among its signatories fyi. Shameful.

  6. Retweeted
    19 Dec 2019

    I think and are on the wrong side of this issue. It is time to re-imagine publication models and break the stranglehold of for-profit publishers on the dissemination of scientific research. Time to rethink how research is done!

  7. Retweeted
    23 Nov 2019

    Reading through the ML interpretability literature. Question: Why do people think assigning Shapley values to features is a reasonable "explanation"? Yes, they have "rigorous foundations", which is true in the sense that Shapley proved they uniquely satisfy certain axioms: 1/n

  8. Retweeted
  9. Retweeted
    This Tweet is unavailable.
  10. Retweeted
    26 Nov 2019

    Flight attendant: Is there a doctor on this flight? Dad: *nudging me* that should've been you Me: Not now Dad Dad: Not asking for a DL researcher to help, are they? Me: Dad, there's a medical emergency happening right now Dad: Go and see if “lower the learning rate" helps

  11. Retweeted
    24 Nov 2019

    It’s been 20 years since I submitted my first paper with Nhat Nguyen and the late great Gene Golub on multi-frame super-res (SR). Here’s a thread, a personal story of SR as I’ve experienced it. It won’t be exhaustive or fully historical. Apologies to colleagues for any omissions

  12. Retweeted
    25 Nov 2019

    For language modeling, do you think many attention heads per layer and many layers are necessary for near SotA results? What do you believe versus what do you think we have scientific proof for? Are you thinking about only one architecture? Are others possible? Are LSTMs dead?

  13. Retweeted
    15 Nov 2019

    It is incredibly transformative when a tech leader like participates at events like and He gives hope, builds confidence, and helps create new opportunities and collaborative communities across the world.

  14. Retweeted
    5 Nov 2019

    This never ends. This year, so far, 15 out of 44 people to attend workshop at (which is still in Canada) have been denied visas. That's 33%. We had all this press last year, they were supposed to help us this year.

  15. Retweeted
    31 Oct 2019

    We explore a simple approach to task-oriented dialog. A single neural network consumes conversation history and external knowledge as input and generates the next turn text response along with the action (when necessary) as output. Paper: 1/4

  16. Retweeted
    30 Oct 2019
  17. Retweeted
    25 Oct 2019

    Our NLU research team and Google Search working together for better query understanding. We are just getting started...

  18. Retweeted
    25 Oct 2019

    We've got a new logo! Thanks for sponsoring the creation of the new logo. And thanks for generating our old logo for more than 10 years (yes, the old logo was generated with a simple Matplotlib script).

  19. Retweeted
    28 Oct 2019

    New WP: Spending Reductions in the MSSP: Selection or Savings? w & What do we find?🤨In earlier work we threw the kitchen sink. Now we throw the kitchen– still no evidence of risk selection in ACOs 1/n

  20. Retweeted

    It's bittersweet: I'm leaving , and am now retired. I've learned a lot during my time as an engineer here -- e.g. type annotations came from this experience -- and I'll miss working here.

  21. Retweeted

    I agree and disagree w my friend 1/ More narrowly focused hospital P4P and disease-specific payment models have not been very successful Physician-led ACOs, focused on total cost of care have been most successful (including on "quality measures") Focus on sub-pops?

