Aylin Caliskan

@aylin_cim

professor & fellow interested in AI ethics, algorithmic bias, computer vision, machine learning, natural language processing

Seattle, WA
Joined July 2013

Tweets


  1. Pinned Tweet
    Jun 3

    I'm delighted to be moving to the University of Washington's Information School as an assistant professor to research ethics, bias, and equity in AI, after 3 wonderful years at George Washington University. I look forward to working with new colleagues, and to Seattle.

  2. Retweeted

    Very excited to announce this paper and open dataset!! 🎉🎉Implicit and explicit attitude and stereotype data from 34 countries (plus two bilingual datasets!). Please share widely!!

  3. Retweeted

    The following are the planned dates for AIES'22: February 22: submission deadline; April 18: notification; early August: conference.

  4. Retweeted

    JUST LAUNCHED: The AI Researchers Portal on . It’s a central connection to many Federally-supported resources for America’s AI research community. Click here for details on grant programs, testbeds, datasets, and much more:

  5. Retweeted
    Dec 15

    Good news! officially announced for August 2022 at Oxford!

  6. Retweeted
    Dec 14

    My research group has been following a cohort of ~50 gig workers for 2 years, and we have interviewed many others. This figure from Pew's recent survey does not fit with our findings at all... If you gave me the distributions unlabeled, I would put the numbers exactly the other way around.

  7. Retweeted
    Dec 13

    "Attention is all you need" implementation from scratch in PyTorch. A Twitter thread: 1/

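The core operation of the transformer architecture referenced in the tweet above, scaled dot-product attention, can be sketched in a few lines. This is an illustrative NumPy sketch of the published formula, not the thread's PyTorch code; the shapes and names here are assumptions for the toy example:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # numerically stable row-wise softmax over the key dimension
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # each output row is a weighted sum of value rows

# toy example: 2 queries, 3 keys/values, d_k = 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (2, 4)
```

The 1/sqrt(d_k) scaling keeps the dot products from growing with the key dimension, which would otherwise push the softmax into regions with vanishing gradients.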
  8. Retweeted
    Dec 12

    Make sure you submit your work to this track!

  9. Retweeted
    Dec 6
  10. Retweeted
    Dec 2

    Guys calm down, stop freaking out, it's just a bunch of actuators and motors with rubber stretched over them to simulate facial movement. It's all preprogrammed. It's just for research. Internally: nope nope nope nope nope

  11. Retweeted
    Dec 2

    Almost six years ago, wrote an article using real data that showed how PredPol could exacerbate racial disparities in policing. 's release today shows that that is *exactly* what happened.

  12. Retweeted
    Nov 29

    The combination of and capitalism works to maximize profits, sometimes with unintended consequences. The iSchool's Aylin Caliskan () recently went on 's In Machines We Trust podcast to discuss it.

  13. Retweeted

    Please join the Responsible AI Systems & Experiences (RAISE) group at the University of Washington for the talk “A Perspective on AI Governance” by Kush R. Varshney, a distinguished research staff member and manager with IBM Research, from 9-10am PST Friday, 12/3.

  14. Retweeted
    Nov 12

    The new and improved version of "Data and its (dis)contents" is published at today! Co-authored with . Check it out here:

    Dataset design & development:
    - Dataset audits reveal representational harms.
    - Spurious cues exploited by models lead to unanticipated results.
    - Dataset construction can legitimize faulty science.
    - Datasets have been historically insufficiently documented and motivated.

    Dataset use:
    - Meticulous, human inspection of large datasets turns up disturbing content.
    - Automated dataset improvement is limited by validity of task definition and initial data collection.

    Dataset culture:
    - Leaderboardism distorts the science of ML research.
    - Data management practice lacks a culture of care for data subjects.
    - Dataset appropriation and reuse practices break connections to context.
    - The push for massive scale engenders poor labor conditions.
    - Dataset collection practices raise legal issues, and existing legal frameworks provide insufficient protection to data subjects.
  15. Retweeted
    Nov 10
  16. Nov 8

    Our new semantics evaluation method at Resources and Evaluation: ValNorm quantifies semantics, revealing that non-discriminatory, non-social-group valence biases are consistent across languages and over centuries, whereas social group biases are evolving.

  17. Nov 8

    Our paper at 's Ethics and NLP session: underrepresentation of minority groups in language models results in bias and overfitting, as analyzed with respect to frequency in the representation space of contextualized word embeddings.

  18. Retweeted
    Oct 31

    Excited that the final version of "Physiognomic Artificial Intelligence" will appear in the Fordham Intellectual Property, Media & Entertainment Law Journal -- thank you for such a great partnership on this!

  19. Oct 28

    Dynamic pricing: Unexpected edge cases, ethics, a few minutes on the "Disparate Impact of Artificial Intelligence Bias in Ridehailing Economy's Price Discrimination Algorithms" regulation, digital cartels, law, and privacy and fairness in machine learning

  20. Retweeted
    Oct 22

    Please spread the word— is seeking applications for the CITP Fellows Program for 2022-23.

  21. Retweeted
    Oct 19

    👇I am looking for a postdoc. Know someone with interest or experience in online mis/disinformation or influence operations, competent in social media analytics, who wants to work in a non-Western context like the Indian subcontinent? You can ping me directly.

