Open Tech Institute

@OTI

Working at the intersection of technology & policy to ensure every community has equitable access to technology that is both open & secure.

Washington, D.C.
Joined November 2009

Tweets

You have blocked @OTI

Are you sure you want to view these Tweets? Viewing Tweets won't unblock @OTI.

  1. Pinned Tweet
    May 5

    Thank you for recognizing the affordability crisis in the U.S. and working to expand access to affordable, reliable internet in the . And you shouldn't have to rely on our research - let's get collecting this data too!

  2. 17 hours ago

    : How does the & tech social media platforms use to tackle & end up amplifying misleading content? , , , , & 's new policy paper makes the process easy to understand.

  3. 17 hours ago

    “The bigger picture here is that as long as everyone is focused on user content, we are not talking about advertising. We are not talking about the money,” on Facebook's dubious support for regulation. Important piece by

  4. 19 hours ago

    Today, also approved a policy statement on privacy breaches by health apps & connected devices. commends this statement, which says "recognizes the importance of protecting the health data on wearable devices." More 👇

  5. 19 hours ago

    For over a year, has pushed for stronger vertical merger guidelines. Now, progress: voted to rescind its 2020 guidelines, which says "glossed over critical issues, ignored key harms in digital markets & denied public participation."

  6. Sep 15

    Audience Q: Does growing use of in govt/biz seriously threaten our democracy? : We don't want to forfeit the benefits of breakthroughs just because they could be used for harm. We’ve just got to be thinking thru risks, mitigation, and all scenarios.

  7. Sep 15

    .'s Catherine Sharkey is most drawn to requiring algorithmic impact assessments & how that fits into an administrative law mechanism. : Impact assessments are super important; thinking about , being bold in the way we distinguish issues & risks.

  8. Sep 15

    . asks: is there one method of mitigating that is most important? : No! There are benefits and disadvantages to each approach, and they can supplement one another.

  9. Sep 15

    .: "Risk" is not necessarily the way we would categorize itself. I’d say it’s the way we should categorize our visibility into it. At some vantage point, it’s all high risk. We just need more visibility and transparency.

  10. Sep 15

    .'s Catherine M. Sharkey: I worry about these lines in the sand... there’s a human in the loop, but the AI actually makes a determination. We should define these concepts along a continuum so we don’t create “safe harbors” that let certain tools evade scrutiny.

  11. Sep 15

    .: There are so many different ways to document an ML system… the way you choose is not necessarily as important as your consistency. "Every way is not right for every organization."

  12. Sep 15

    . asks abt proposed gov't requirements for audits. : Good to see policymakers thinking thru these issues, but many of these proposals "lack teeth." Legislation also needs to address the broader ecosystem that allows high-risk AI to be so harmful.

  13. Sep 15

    . says documentation is important throughout the machine learning lifecycle: who did what, and what the outcomes and impacts were for the system, the users, and non-users. If we don't get it right, we can go back and figure out the source of the harm.

  14. Sep 15

    .: Audits, impact assessments & more can shed light on opaque high-risk by evaluating variables we're concerned about, like privacy, bias, fairness, & human rights, & they can examine harms of a system & help an entity create a plan to mitigate these issues.

  15. Sep 15

    We have to be careful thinking about off-the-shelf, private-sector tools being imported for governmental use, says 's Catherine M. Sharkey. “We need to get the lawyers, policy experts and technologists in the room together while developing these tools.”

  16. Sep 15

    The AI used by to curb disinfo often results in overbroad takedowns & the amplification of misleading content. Companies, as we've found, fall short in creating the remedy channels required for users to dispute decisions made by these black boxes. NEW important report:

  17. Sep 15

    Our panel starts at 1:30 ET! Join the stream below and tweet your reactions with 💻

  18. Sep 15

    Our panel starts in ten minutes! Join the stream below and tweet your reactions with 💻

  19. Sep 15

    TODAY @ 1:30 PM ET: Don't miss our panel on reducing & harms w/ , & , ft. 's & , 's Catherine M. Sharkey, & 's ! SIGN UP:

  20. Sep 14

    We’re proud to co-author a new report on the role of artificial intelligence in online disinformation with , , , and . Read about how AI can undermine democracy and what we can do to

  21. Sep 14

    Social media algorithms & machine learning are contributing to the rise of online disinformation & pose a growing threat to our democracy. We need platforms and legislators to take action now. NEW report from the Coalition to Fight Digital Deception ⤵️


