Bogdan Kulynych

@hiddenmarkov

Grad student · Into privacy and security, cryptogeography, misrepresentation learning, gradient dissent, bad puns · Ex intern · 🇺🇦

Joined September 2012

Tweets


  1. Retweeted
    Feb 2

    "We characterize fairness limitations using concepts from requirements engineering and from social sciences. We show that the focus on input and output misses harms that arise from systems interacting with the world; that the focus on bias and discrimination omits broader harms"

  2. Retweeted
    Feb 2
  3. Retweeted
    Jan 30

    The deadline for our ICLR workshop on trustworthy ML is coming soon on the 31st. Looking forward to your submissions on security, privacy, and other aspects of trustworthy ML!

  4. Jan 29

    Benkler's keynote: "firms want to shape the algorithmic fairness discourse to make it less threatening to them". This is one of the reasons why we call for looking at alternative approaches to addressing the harms of tech from the outside, such as POTs.

  5. Jan 28

    Loving this take: "Accuracy is not neutral"; there's a lot of context w.r.t. power behind it. But there's another point, and it's a hill I will die on: accuracy is not neutral because it by definition erases minorities. (A minimal numerical sketch of this point follows the timeline below.)

  6. Retweeted
    Jan 28

    Starting from the idea that the purpose of many if not most ADM is optimization, B. Kulynych & team propose ‘Protective Optimization Technologies’ (POTs) as a more encompassing model than fairness, to counter more harms than discrimination. Link ⬇️

  7. Retweeted
    Jan 26
  8. Retweeted
    Jan 22

    Earlier today we published the details of a set of vulnerabilities in Safari's Intelligent Tracking Prevention privacy mechanism. They are... interesting. [1/9]

  9. Jan 20

    Speaking of friendly uses of adversarial ML: ICLR 2020 Towards Trustworthy ML Workshop accepts submissions on this topic, with deadline on January 31:

  10. Jan 20
  11. Jan 20
  12. Jan 20
  13. Jan 20
  14. Jan 20
  15. Jan 20
  16. Jan 20
  17. Jan 20

    Let us bring these into the realm of systematic study!

  18. Jan 20

    POTs are complementary to fairness: they do not rely on cooperation from the system provider, and are developed to serve those who are directly experiencing the system's harms. When all other means of accountability fail, POTs can provide novel ways of resistance and contestation

  19. Jan 20

    As an example, adversarial machine learning (ML) usually studies how to defend ML models against all sorts of attackers. Can we consider the models to be adversarial and use the techniques from adversarial ML to protect people from harmful ML? Yes we can! (A toy sketch of this idea follows the timeline below.)

  20. Jan 20

    We consider how methods from computer science can be put to use to systematize and reinforce such attempts to address the externalities of optimization systems. And this is what we call POTs: technological tools to counteroptimize/counteract/surface harms of technological systems

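On the point above that "accuracy is not neutral because it erases minorities" (item 5): a minimal numerical sketch, using made-up group sizes and a deliberately degenerate classifier, of how a model can score high on overall accuracy while failing completely on a minority group. The proportions and labels are illustrative assumptions, not data from any of the cited work.

```python
# Minimal sketch: on an imbalanced toy dataset, a classifier that always
# predicts the majority class looks "accurate" overall while being 0%
# accurate on the minority group.
import numpy as np

# Hypothetical population: 95% majority group (label 0), 5% minority group (label 1).
y_true = np.concatenate([np.zeros(950, dtype=int), np.ones(50, dtype=int)])

# A degenerate "optimal on average" model: always predict the majority label.
y_pred = np.zeros_like(y_true)

overall_acc = (y_pred == y_true).mean()
minority_acc = (y_pred[y_true == 1] == y_true[y_true == 1]).mean()

print(f"overall accuracy:  {overall_acc:.2f}")   # 0.95 -- looks fine
print(f"minority accuracy: {minority_acc:.2f}")  # 0.00 -- the minority is erased
```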
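
On using adversarial ML to protect people from harmful ML (item 19): a toy sketch of the general evasion idea, assuming a hypothetical linear scoring model with made-up weights that the affected person can compute gradients against. This illustrates the concept only; it is not the authors' POTs code or any specific published method.

```python
# Toy sketch: treat a deployed "harmful" scoring model as the adversary and
# apply an evasion-style (FGSM-like) perturbation so that a person's data point
# is no longer flagged by it. The model, weights, and budget are made up.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Pretend these are the parameters of a deployed linear model that flags people.
w = np.array([2.0, -1.5, 0.5])
b = -0.2

def flag_probability(x):
    """Probability that the hypothetical model flags input x."""
    return sigmoid(w @ x + b)

x = np.array([1.2, 0.3, 0.8])          # a person's original feature vector
print("before:", flag_probability(x))  # ~0.90: the person gets flagged

# Step against the gradient of the flag probability (for a linear model the
# gradient of the logit w.r.t. x is just w, so its sign is sign(w)).
eps = 0.6                               # perturbation budget the person can afford
x_protected = x - eps * np.sign(w)

print("after: ", flag_probability(x_protected))  # ~0.44: below the 0.5 threshold
```

In practice a deployed model is rarely this transparent, so protections like this would have to rely on estimated or surrogate models and on constraints over which features a person can actually change; that gap is part of what makes POTs a research direction rather than a recipe.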
