Search results
  1. 15 Oct 2019

    Just received notification that the fourth edition of the HUMANIZE workshop will take place in March 2020, in conjunction with . This year we focus on user modeling grounded in psychology for and in adaptive systems. CfP will be out soon!

  2. It's so hard for a machine learning model to unlearn what it has learned from you. It could take forever! You could be better off throwing your profile away and starting anew instead of trying to make it learn your new preferences. (See the retraining sketch after this list.)

  3. 4 Feb 2019

    This is how bias really happens—and why it’s so hard to fix

  4. 29 Sep 2019

    In addressing the need for ' ' we need to take care not to conflate Explainability with Understandability. The ability to explain may not guarantee a parallel ability to understand.

  5. 30 Jul 2018

    of Machine Learning models is a core research area right now. Check out our newest blog post on this topic, and let me know if you want to discuss it! It's always great to share insights.

  6. 1 Feb
  7. 26 Nov 2019

    This is how to achieve trust in . Focus should be on human agency, , and . Well said

  8. 25 Jan 2018

    "What I want on my phone, on my computer, in Alexa, and everywhere that machine learning touches me, is a “why” button I can push (or speak) to know why I got that recommendation" >> for , is the necessary condition for humans to trust machine learning

  9. 2 Jun 2019

    The lack of is a particular problem in legal : "in order to become suitable for the law and jurisprudence, any legal AI would ... have to learn to justify itself in human-readable form that can be reviewed and criticized"

  10. 26 Mar 2019

    Getting ready for the sixth . Only about an hour to go ⏰

  11. “... success of deep and cheap learning depends not only on mathematics but also on physics, which favors certain classes of exceptionally simple probability distributions that is uniquely suited to model,”

  12. 12 Sep 2019

    We’re hiring machine learning and engineering interns for January on our R&D team. You will be leveraging cutting-edge research to create high-impact solutions for real-world problems.

  13. 11 Nov 2019

    Our work (Woo-Jeoung Nam) "Relative Attributing Propagation: Interpreting the Comparative Contributions of Individual Units in Deep Neural Networks" got accepted to ! RAP is a new explainability method. arXiv -

  14. 21 Jan

    [Article] Is Enough? We Need Understandable | «Enabling trust, leading to the adoption of these technologies by both consumers and employees, requires a human-first perspective of developing understandable AI.» | by via

  15. Replying to

    When you read the article, it says that is necessary, not only for insiders but for all users of , in understandable wording. | “We’ve always known that people over-trust technology, and that’s especially true with AI systems.”

  16. 23 Aug 2018

    If you are interested in understanding how to make the black boxes "talk", take a look at this great blog post! Great conceptual explanation. (One common technique is sketched after this list.)

  17. 21 Sep 2018

    A5: A strong culture of governance and ethical forethought is already in place in a number of larger institutions. I am a big advocate of the framework of , , and

  18. 23 Jan 2019

    Singapore is the first Asian government to release a framework on the principles of , , fairness to consumers, and -centricity.

  19. 12 Sep 2018

    Simon Williams just nailed it. A nuanced talk on what AI can and can't do! "It's not about the - it's about the side" - to drive -based performance, you need that enables human feedback to the model.
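
Editor's sketch for item 2: the only guaranteed way to "unlearn" a user is usually to retrain without that user's data, since most trained models cannot cheaply subtract one user's influence. A minimal illustration, assuming scikit-learn; the toy data and the user id 42 are made up:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))           # toy feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy labels
user_ids = rng.integers(0, 100, size=1000)

# Model trained on everyone, including the (hypothetical) user 42.
model = LogisticRegression().fit(X, y)

# "Unlearning" user 42: drop their rows and retrain from scratch.
keep = user_ids != 42
model_fresh = LogisticRegression().fit(X[keep], y[keep])
```

This mirrors the tweet's advice: starting anew is often simpler and safer than trying to edit a trained model's memory.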
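Editor's sketch for item 8: a "why" button is straightforward for models whose score decomposes per feature. For a linear scorer the contributions w_i * x_i sum to the score, so "why" can list the features that pushed the recommendation up. The feature names and weights below are invented for illustration:

```python
import numpy as np

feature_names = ["watched_scifi", "late_night_user", "likes_long_films"]
weights = np.array([1.8, 0.4, 1.1])  # hypothetical learned weights
user = np.array([1.0, 0.0, 1.0])     # hypothetical user features

contributions = weights * user       # per-feature share of the score
for i in np.argsort(contributions)[::-1]:
    if contributions[i] > 0:
        print(f"Recommended because {feature_names[i]} (+{contributions[i]:.1f})")
```

Deep models admit no such exact decomposition, which is why attribution methods like the RAP paper in item 13 remain active research.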
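Editor's sketch for item 16: the linked blog post is not reproduced here, but one standard way to make a black-box model "talk" is permutation importance: shuffle one feature at a time and measure how much held-out accuracy drops. A minimal version using scikit-learn's built-in helper:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Shuffle each feature 10 times on held-out data; a large accuracy
# drop means the model relies heavily on that feature.
result = permutation_importance(black_box, X_te, y_te, n_repeats=10,
                                random_state=0)
print(result.importances_mean.argsort()[::-1][:5])  # top-5 feature indices
```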
