Rachel Thomas
@math_rachel
Director of USF Center for Applied Data Ethics @DataInstituteSF + co-founder http://fast.ai  | deep learning, ethics, math phd, software dev | she/her

San Francisco, CA
fast.ai/topics/#ai-in-…
Joined May 2013

Tweets

  • © 2020 Twitter
  • About Twitter
  • Help Center
  • Terms
  • Privacy policy
  • Imprint
  • Cookies
  • Ads info

    Rachel Thomas @math_rachel · Nov 3, 2019

    My talk on "Getting Specific about Algorithmic Bias" https://www.youtube.com/watch?v=S-6YGPrmtYc&list=PLtmWHNX-gukLQlMvtRJ19s7-8MrnRV6h6&index=5&t=0s pic.twitter.com/efNQtoVClJ

    Slide saying: "Getting Specific about Algorithmic Bias, Rachel Thomas, PhD, USF Center for Applied Data Ethics & fast.ai", with USF and fast.ai logos
    10:27 - Nov 3, 2019
    9 replies · 231 Retweets · 699 Likes
      1. New conversation
      2. Rachel Thomas @math_rachel · Nov 3, 2019

        I like this framework from @harini824 on how different sources of bias have different causes. This is important, because gathering a more diverse dataset will help in some cases (representation bias), but not in others. paper: https://arxiv.org/abs/1901.10002  blog: https://medium.com/@harinisuresh/the-problem-with-biased-data-5700005e514c pic.twitter.com/DJiAvAiWpI

        31 Retweets · 103 Likes
      3. Rachel Thomas @math_rachel · Nov 3, 2019

        The COMPAS Recidivism Algorithm: - it's no more accurate than random people (Amazon Mechanical Turk) - it's a black box with 137 inputs but no more accurate than a linear classifier on 2 vars - Wisconsin Supreme Court upheld its use (it is still used in other states as well) pic.twitter.com/cMdGOkQPP5

        30 Retweets · 47 Likes
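The "linear classifier on 2 vars" point can be sketched with a toy experiment. Everything below is synthetic and hypothetical: the weights are hand-picked and the data is made up, not the real COMPAS dataset; it only illustrates what a two-variable linear rule looks like and how it is scored.

```python
import random

random.seed(0)

def make_defendant():
    """Generate one synthetic (age, priors, reoffended) record."""
    age = random.randint(18, 70)
    priors = random.randint(0, 15)
    # Synthetic ground truth: more priors and younger age raise reoffense risk.
    p = min(0.9, 0.1 + 0.04 * priors + (0.3 if age < 30 else 0.0))
    return age, priors, random.random() < p

def simple_rule(age, priors):
    # Two-variable linear classifier with hypothetical hand-picked weights:
    # predict "will reoffend" when the linear score is positive.
    return -0.05 * age + 0.25 * priors + 0.5 > 0

data = [make_defendant() for _ in range(5000)]
accuracy = sum(simple_rule(a, pr) == y for a, pr, y in data) / len(data)
print(f"two-variable linear rule accuracy: {accuracy:.2f}")
```

The point of the study cited in the next tweet is that a rule this simple matched the 137-input black box on real data, not that these particular weights are meaningful.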
      4. Rachel Thomas @math_rachel · Nov 3, 2019

        Link to study cited, "The accuracy, fairness, and limits of predicting recidivism" https://advances.sciencemag.org/content/4/1/eaao5580 pic.twitter.com/HV50LCCtQZ

        Algorithms for predicting recidivism are commonly used to assess a criminal defendant’s likelihood of committing a crime. These predictions are used in pretrial, parole, and sentencing decisions. Proponents of these systems argue that big data and advanced machine learning make these analyses more accurate and less biased than humans. We show, however, that the widely used commercial risk assessment software COMPAS i…
        1 reply · 6 Retweets · 19 Likes
      5. Rachel Thomas @math_rachel · Nov 3, 2019

        Rachel Thomas Retweeted

        Evergreen recommendation for @random_walker's 21 Definitions of Fairness Tutorial https://twitter.com/math_rachel/status/976591520575897600?s=20

        Rachel Thomas added,

        Rachel Thomas @math_rachel
        21 fairness definitions and their politics. Excellent tutorial from @random_walker on fairness in machine learning https://www.youtube.com/watch?time_continue=1&v=jIXIuYdnyyk
        1 reply · 5 Retweets · 16 Likes
      6. Rachel Thomas @math_rachel · Nov 3, 2019

        Even if race & gender are not inputs to your algorithm, it can still be biased on these factors. Machine learning excels at finding latent variables. I regularly hear people wrongly say that not using race as an input will prevent racial bias. pic.twitter.com/wUSNZmoXRf

        10 replies · 164 Retweets · 314 Likes
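The proxy effect described above can be shown with a toy simulation. The "zip_code" feature, the group labels, and the 80% segregation rate are all invented for illustration; the only claim is that excluding a sensitive attribute does not exclude its information when a correlated feature remains.

```python
import random

random.seed(1)

def make_record():
    """One synthetic person: a group label and a correlated zip code."""
    group = random.choice(["A", "B"])
    # Invented residential segregation: 80% of group A live in zip 0,
    # 80% of group B live in zip 1.
    if group == "A":
        zip_code = 0 if random.random() < 0.8 else 1
    else:
        zip_code = 1 if random.random() < 0.8 else 0
    return group, zip_code

records = [make_record() for _ in range(10000)]

# A "model" that never sees the group label can still reconstruct it
# from the proxy feature alone:
recovered = sum((("A" if z == 0 else "B") == g) for g, z in records) / len(records)
print(f"group membership recovered from zip code alone: {recovered:.0%}")
```

Any model trained on such a proxy can therefore behave differently across groups even though the sensitive attribute was never an input.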
      7. Rachel Thomas @math_rachel · Nov 3, 2019

        Runaway feedback loops are a big issue for machine learning (including for predictive policing and recommendation systems). Feedback loops can occur whenever your model is controlling the next round of data. The data quickly becomes contaminated by the model. pic.twitter.com/g6y2dONEfL

        1 reply · 21 Retweets · 82 Likes
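A minimal sketch of such a loop, with two districts and invented numbers (the patrol policy, rates, and counts are assumptions for illustration, not a model of any real deployment): both districts have the same true crime rate, but crime is only recorded where the patrol goes, so a one-count initial imbalance snowballs.

```python
import random

random.seed(2)

true_rate = 0.3          # identical true crime rate in both districts
observed = [11, 10]      # slightly uneven historical counts (the seed of the loop)

for day in range(200):
    # "Model": always patrol the district with more observed crime so far.
    district = 0 if observed[0] >= observed[1] else 1
    if random.random() < true_rate:
        # Crime is recorded only in the patrolled district.
        observed[district] += 1

print(f"observed crime after 200 days: {observed}")
```

District 0 accumulates nearly all the records while district 1's count never changes, even though the underlying rates are identical: the model's output controlled the next round of data.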
      8. Rachel Thomas @math_rachel · Nov 3, 2019

        Many examples of bias won't be fixed by gathering different data or features: “Historical bias is a fundamental, structural issue with the first step of the data generation process and can exist even given perfect sampling and feature selection.” @harini824 pic.twitter.com/I4lY2Ewgsi

        1 reply · 24 Retweets · 81 Likes
      9. Rachel Thomas @math_rachel · Nov 3, 2019

        Rachel Thomas Retweeted

        The concept of "biased data" is often too generic to be useful: https://twitter.com/math_rachel/status/1113203073051033600?s=20

        Rachel Thomas added,

        Representation bias can arise for several reasons, including:
        1. The sampling methods only reach a portion of the population. For example, datasets collected through smartphone apps can under-represent lower-income or older groups, who are less likely to own smartphones. Similarly, medical data for a particular condition may only be available for the population of patients who were considered serious enough to bring …

        Measurement bias can arise in several ways:
        1. The granularity of data varies across groups. For example, if a group of factory workers is more stringently or frequently monitored, more errors will be observed in that group. This can also lead to a feedback loop wherein the group is subject to further monitoring because of the apparent higher rate of mistakes (Barocas and Selbst 2016; Ensign et al. 2017).
        2. The qual…

        Aggregation bias arises when a one-size-fits-all model is used for groups with different conditional distributions, p(Y|X). Underlying aggregation bias is an assumption that the mapping from inputs to labels is consistent across groups. In reality, this is often not the case. Group membership can be indicative of different backgrounds, cultures or norms, and a given variable can mean something quite different for a p…

        Evaluation bias occurs when the evaluation and/or benchmark data for an algorithm doesn’t represent the target population. A model is optimized on its training data, but its quality is often measured on benchmarks (e.g., UCI datasets (Dheeru and Karra Taniskidou 2017), Faces in the Wild (Huang et al. 2007), ImageNet (Deng et al. 2009)), so a misrepresentative benchmark encourages the development of models that only p…
        Rachel Thomas @math_rachel
        Concept of "biased data" is often too broad to be useful. Here is a framework of 5 types (with different types requiring different remedies): - historical bias - representation bias - measurement bias - evaluation bias - aggregation bias https://arxiv.org/abs/1901.10002  pic.twitter.com/4kaYDz45L3
        1 reply · 18 Retweets · 63 Likes
      10. Rachel Thomas @math_rachel · Nov 3, 2019

        Many AI ethics concerns are about civil rights and human rights. One way to regulate AI is to consider what rights we want to protect regarding: housing, education, employment, criminal justice, voting, & medical care. pic.twitter.com/UjhHmKOF3X

        1 reply · 13 Retweets · 48 Likes
      11. Rachel Thomas @math_rachel · Nov 3, 2019

        Rachel Thomas Retweeted

        Algorithmic fairness is not the same as justice. https://twitter.com/math_rachel/status/1188889407329193985?s=20

        Rachel Thomas added,

        Rachel Thomas @math_rachel
        Accuracy does not stop abuse. Algorithmic fairness is not justice. We need to move the conversation to justice. @jovialjoy pic.twitter.com/CJhrlPwI4p
        1 reply · 13 Retweets · 54 Likes
      12. Rachel Thomas @math_rachel · Nov 3, 2019

        It is important to understand how pervasive unjust bias is. These are just a few of the many, many, many studies on the topic. Racial bias is present in all sorts of data: medical, ads, sales, housing, political, criminal justice, etc. https://www.nytimes.com/2015/01/04/upshot/the-measuring-sticks-of-racial-bias-.html pic.twitter.com/ArFlTqEPfH

        When doctors were shown identical files, they were much less likely to recommend cardiac catheterization (a helpful procedure) to Black patients.
        When bargaining for a used car, Black people were offered initial prices $700 higher and received far smaller concessions.
        Responding to apartment-rental ads on Craigslist with a Black name elicited fewer responses than with a white name.
        White state legislators (in both p…
        1 reply · 14 Retweets · 46 Likes
      13. Rachel Thomas @math_rachel · Nov 3, 2019

        Given that humans are biased, why does algorithmic bias matter? Algorithmic bias matters because: - Algorithms & humans are used differently - Machine learning can amplify bias - Machine learning can create feedback loops - Technology is power. And with that comes responsibility. pic.twitter.com/dU2VNr8zLo

        Algorithms & humans are used differently.
        Machine learning can amplify bias.
        Machine learning can create feedback loops.
        Technology is power. And with that comes responsibility.
        1 reply · 37 Retweets · 91 Likes
      14. Rachel Thomas @math_rachel · Nov 3, 2019

        Algorithms are used differently than human decision makers: - people assume algorithms are objective or error-free - algorithms more likely to be implemented with no process for recourse - algorithms used at scale - algorithms are cheap. read more: https://www.fast.ai/2018/08/07/hbr-bias-algorithms/ pic.twitter.com/XLRQ0WyMrY

        People are more likely to assume algorithms are objective or error-free (even if they’re given the option of a human override)
        Algorithms are more likely to be implemented with no appeals process in place.
        Algorithms are often used at scale.
        Algorithmic systems are cheap.
        1 reply · 41 Retweets · 116 Likes
      15. Rachel Thomas @math_rachel · Nov 3, 2019

        Rachel Thomas Retweeted

        A terrifying example of a city official assuming that ML is always 99% accurate (in reference to use of IBM Watson for predictive policing): https://twitter.com/math_rachel/status/1121505971140907008?s=20

        Rachel Thomas added,

        During a March 26 city council meeting in Lancaster, a desert community of 160,000 in Los Angeles County, city official Patti Garibay discussed an IBM Watson “dashboard” that police have been using for the past six to eight months to focus enforcement efforts.
        “With machine learning, with automation, there’s a 99% success, so that robot is—will be—99% accurate in telling us what is going to happen next, which is really interesting,” Garibay told the mayor and other local officials, citing test results from “the city of Idaho.”
        Rachel Thomas @math_rachel
        A city official in Lancaster, CA, when discussing an IBM Watson dashboard being used for predictive policing: “With machine learning, with automation, there’s a 99% success, so that robot is—will be—99% accurate in telling us what is going to happen next" https://qz.com/1603797/lancaster-california-police-employ-ibm-mass-surveillance-system/ pic.twitter.com/BMPAn2RlDG
        1 reply · 11 Retweets · 34 Likes
      16. Rachel Thomas @math_rachel · Nov 3, 2019

        Machine learning can amplify bias. @mariadearteaga https://arxiv.org/abs/1901.09451 pic.twitter.com/nYjrhzHECE

        1 reply · 23 Retweets · 44 Likes
      17. Rachel Thomas @math_rachel · Nov 3, 2019

        Some steps towards doing better: 1. Analyze a project at your workplace/school. 2. Work closely with domain experts & those impacted. 3. Increase diversity in your workplace. 4. Advocate for good policy. 5. Be on the ongoing lookout for bias. pic.twitter.com/9OsHWjzWLs

        1 reply · 9 Retweets · 35 Likes
      18. Rachel Thomas @math_rachel · Nov 3, 2019

        Some questions to ask when analyzing an algorithmic system: Should we even be doing this? What bias is in the data? (all data is biased, need to know how) Error rates on different sub-groups? (e.g. GenderShades) Is there an appeals process? How diverse is the team that built it? pic.twitter.com/oO0hEVwrEu

        Should we even be doing this?
        What bias is in the data?
        Can the code and data be audited?
        What are error rates for different sub-groups?
        What is the accuracy of a simple rule-based alternative?
        What processes are in place to handle appeals or mistakes?
        How diverse is the team that built it?
        16 Retweets · 53 Likes
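The "error rates on different sub-groups" question takes only a few lines of bookkeeping to answer once predictions are logged. The groups, labels, and predictions below are made up for illustration (in the style of the Gender Shades audit, not its data):

```python
# Each record: (group, true_label, predicted_label). All values invented.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 1, 0), ("group_b", 0, 0), ("group_b", 1, 0), ("group_b", 1, 1),
]

# Compute the error rate separately for each sub-group.
error_rates = {}
for group in sorted({g for g, _, _ in records}):
    rows = [(y, p) for g, y, p in records if g == group]
    error_rates[group] = sum(y != p for y, p in rows) / len(rows)
    print(f"{group}: error rate {error_rates[group]:.0%}")
```

With these toy numbers, group_b's error rate (50%) is twice group_a's (25%); a single aggregate accuracy figure would hide that gap entirely, which is why the audit has to be per-group.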
      19. Rachel Thomas @math_rachel · Nov 3, 2019

        To improve diversity, start at the opposite end of the pipeline: your workplace. Improve the experience of the women of color who are already there so they don't leave due to discrimination & mistreatment. read more: http://bit.ly/not-pipeline  and http://bit.ly/women-quit-tech pic.twitter.com/ZaovACsTLL

        41% of women working in tech end up leaving (compared to 17% of men).
        Women leave the tech industry because “they’re treated unfairly; underpaid, less likely to be fast-tracked than their male colleagues, and unable to advance.”
        Interviews with 60 women of color who work in STEM research: 100% had experienced discrimination; the particular stereotypes varied by race.
        14 Retweets · 43 Likes
      20. Rachel Thomas @math_rachel · Nov 3, 2019

        Rachel Thomas Retweeted

        Even though Eric Schmidt wants us to stop "yelling" about bias, I continue because real people are being harmed, and: https://twitter.com/math_rachel/status/1188946996029083648?s=20

        Rachel Thomas added,

        Rachel Thomas @math_rachel
        Replying to @smallperks
        I think it's useful to continue collecting & sharing data, examples, & details about the mechanisms of "known" issues like bias to: - convince ppl that are unconvinced - understand how pervasive it is - better understand how to address it - reach those who haven't heard yet
        14 Retweets · 42 Likes
      21. End of conversation

