Search results
  1. The topic of my latest book is because of a new court decision forbidding the use of a black-box algorithm to detect welfare using . is needed to become and

  2. Replying to

    The court's main argument is that it is not transparent how data is processed and analysed. In my book about eXplainable AI I explain why black box AI can never be successful and how it can be different with . See

  3. 6 hours ago

    Our recent work (with ) on how an agent can explain sequential decisions/behavior in the absence of a shared vocabulary with the human in the loop. (Inspired in part by et al.'s TCAV work)

  4. 7 hours ago

    We’re here now! Laura is about to start the live pitch in a couple hours and we’re waiting for it like 💃💃💃

  5. 14 hours ago

    This blog explores how facial recognition AI solves crimes, and what the technology's lack of human accountability means for your . Read more here. By

  6. 18 hours ago

    Trust will be the key success factor in AI applications. Deevid De Meyer () & Karel Kremer () present their experiences and lessons learnt at the conf in Brussels next week. Check out:

  7. I am co-organising a workshop on Dialogue, Explanation and Argumentation taking place at on the 8th of June. See our website for submission details:

  8. Feb 5

    ICYMI: Our January newsletter is up on our blog. Check out articles like CIO: Building an XAI Strategy, Responsible MRM, and Explainable Machine Learning in Deployment

  9. Feb 4

    And further on explainability in AI: a successful explanation also requires a model of the explainee (receiver of the explanation) as a basis to tailor and adapt the explanation to her beliefs, knowledge and needs.

  10. Feb 4

    What are the leading companies tackling machine learning interpretability, whether as part of a data platform or otherwise?

  11. Feb 4

    Been a little busy lately, but at the same time so energized by everything we have on the table with in the space of and . Next two days —> deep dive into transparency in , yey 🚘

  12. Feb 4

    Lesson 3/3: ML is opaque. Regulators, including and EBA, mandate effective oversight of models. Banks are accountable for their models, so must be able to justify their decisions. This means all decisions must be auditable, traceable, and explainable.

    Show this thread
  13. Feb 4
  14. Feb 4

    How do you explain what changes in feature values an observation **would need to make to improve the predicted outcome**? Enter w/ counterfactuals. Here's a nice paper that explains the concept along with an implementation. Paper:
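    The counterfactual idea in the tweet above can be sketched in a few lines. This is a toy model and brute-force search of my own (not the paper's implementation): given a rejected observation, find the smallest increase in one feature that flips the prediction.

    ```python
    # Toy sketch of a counterfactual explanation (assumed example, not the
    # paper's method): a simple rule-based "credit" model plus a search for
    # the minimal feature change that changes the predicted outcome.

    def predict(income, debt):
        """Toy model: approve (1) when income minus debt exceeds a threshold."""
        return 1 if income - debt > 50 else 0

    def counterfactual_income(income, debt, step=1.0, max_steps=1000):
        """Smallest income raising the prediction from reject (0) to approve (1)."""
        if predict(income, debt) == 1:
            return income  # already approved; no change needed
        for i in range(1, max_steps + 1):
            new_income = income + i * step
            if predict(new_income, debt) == 1:
                return new_income
        return None  # no counterfactual found within the search budget

    needed = counterfactual_income(income=60, debt=30)
    # predict(60, 30) is 0 (rejected); income of 81.0 is the smallest value
    # (at step=1.0) that flips the prediction to approved.
    ```

    Real counterfactual methods optimize over all features at once and penalize large or implausible changes; the brute-force single-feature search here only illustrates the concept.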

  15. Feb 4

    Presents -xai weekend getaway tour. CHOOSE YOUR OWN WEEKEND (Friday to Monday) with us for new networking & relations building. If you feel it's time to make new friends, is here to connect you. BOOK NOW BOOK NOW BOOK NOW BOOK NOW

  16. Feb 3

    Discover how to use AI for good in your business, using these case examples of responsible AI across industries. Read more here. By

  17. Feb 3

    System Management by Exception: my experience of arguing with which I have built. via

  18. Feb 1

    Understanding — from Neurons to RNN (Recurrent NN) to CNN (Convolutional NN) to — and Max Pooling explained:
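    The max pooling operation mentioned in the tweet above can be shown in plain Python (a minimal sketch of my own, no framework assumed): a 2×2 window with stride 2 keeps only the maximum of each block, downsampling the feature map.

    ```python
    # Minimal sketch of 2x2 max pooling with stride 2 on a 2D feature map
    # (illustrative example; real CNNs do this per channel in a framework).

    def max_pool_2x2(grid):
        """Downsample a 2D list by taking the max of each non-overlapping 2x2 block."""
        rows, cols = len(grid), len(grid[0])
        return [
            [max(grid[r][c], grid[r][c + 1], grid[r + 1][c], grid[r + 1][c + 1])
             for c in range(0, cols, 2)]
            for r in range(0, rows, 2)
        ]

    feature_map = [
        [1, 3, 2, 4],
        [5, 6, 1, 0],
        [1, 2, 9, 8],
        [0, 3, 7, 6],
    ]
    pooled = max_pool_2x2(feature_map)  # [[6, 4], [3, 9]]
    ```

    Each output cell summarizes a 2×2 region, which is why pooling halves the spatial resolution while keeping the strongest activations.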

  19. Jan 31

    I’m excited to share ENNUI (an Elegant Neural Network User Interface)! Build and train neural networks on the browser, visualize the training process, and export code.
