Tweets

You have blocked @MilkaLichtblau

Are you sure you want to view these Tweets? Viewing them won't unblock @MilkaLichtblau

  1. Retweeted
    Jan 30

    We are thrilled to announce the launch of our Data Genesis program, a long-term research effort dedicated to examining the complex histories, construction, and use of the data that shapes AI.

  2. Jan 29

    Brilliant keynote by on discrimination vs. exploitation and what role technology plays in the distribution of power 👏 will this be available online? I wanna rewatch it.

  3. Retweeted
    Jan 29

    Amazing moment at when the head of data science at Pymetrics came to ask questions of the researchers who audited the algorithms that Pymetrics has built. Question: “How do we work to build your confidence in our algorithms?” Answer: “You probably can’t...”

  4. Retweeted

    Fathers who get calls like "OMG, your wife didn't answer, so we had to call you, sorry for disturbing you at work!" should please reply: "Phew! You almost rang my wife out of her board meeting! You're only disturbing my knitting."

  5. Retweeted
    Jan 28

    Pre-trial risk scoring systems focus on defendants. What if they focussed on judges? 's challenge to to "study up" - to research and develop in a way that puts the magnifying glass over structures of power instead of solely marginalised groups.

    Slide of judge risk system evaluation and criteria - judges would be horrified by the idea that demographic factors affect their decision-making, but we accept these in models of recidivism.
  6. Retweeted
    Jan 28

    Enjoyed 's talk on the problematic assumptions behind counterfactual/feature-highlighting explanations. Speaks directly to the struggle of end users with algorithmic explanations we saw in design research

  7. Jan 28

    Great read on automatic gender recognition: "...we begin to treat how the tool sees reality as reality itself"

  8. Retweeted
    Jan 27

    There is a livestream for the ! Starting tomorrow :) Looking forward to see the latest research on fairness!

  9. Jan 27
  10. Jan 27

    Heading to and slowly starting to look for post-grad jobs in the FAT* research field. Let's talk :)

  11. Retweeted
    Jan 25

    "One algorithm that lets a robot manipulate a Rubik's Cube used as much energy as 3 nuclear plants produce in an hour."

  12. Retweeted

    Preprint for our paper ‘Robot Right? Let’s Talk about Human Welfare Instead’ is up on arXiv . & I argue not just to deny robots ‘rights’, but to deny that robots are the kinds of things that could be granted rights in the first place. 1/

  13. Jan 12
  14. Retweeted
    Jan 10

    Some Moral and Technical Consequences of Automation by Norbert Wiener, appeared in Science in May 1960.

  15. Jan 11

    I'm delighted to announce that our paper on fairness in learning to rank has been accepted to . Joint work with . It was a long way...

  16. Dec 21, 2019

    Love the idea. Let's have a knitting tutorial among the other tutorials :)

  17. Retweeted

    If you think there's too much yelling about algorithmic bias, here's an analogy. By the mid 90s the privacy community knew there was a huge problem. But it took two decades of yelling and a million privacy disasters before the public and policy makers started taking it seriously.

  18. Dec 14, 2019

    A little delayed and for German speakers: This summer I was on a panel for diversity in data journalism and the discussion is available online: Der Daten-Bias – Brauchen wir mehr Diversität im Datenjournalismus? via

  19. Retweeted

    Heads up: new FAT* Network events awaiting your submissions! BIAS 2020: The International Workshop on Algorithmic Bias in Search and Recommendation at , submissions due Jan. 27:

