Center for Human-Compatible AI

@CHAI_Berkeley

CHAI is a multi-institute research organization based out of UC Berkeley that focuses on foundational research for AI technical safety.

Joined November 2018

Tweets


  1. Pinned Tweet

    Applications for internship are due by 12/15 (this upcoming Sunday)! We rely on word of mouth so please retweet or share the application with someone you think would be interested in the internship! 😀

  2. Retweeted
    Jan 1

    I feel hugely honored to have gotten to record this conversation with about how we can make our increasingly AI-dominated future as inspiring as possible:

  3. Retweeted
    Jan 2

    Alignment Newsletter #80: Why AI risk might be solved without additional intervention from longtermists -

  4. Prof. Russell, CHAI Director, is having an 'Ask Me Anything' session today. He's answering questions from 9-11am PST to discuss his recent book 'Human Compatible' or anything else. The AMA is live here:

  5. Retweeted
    Dec 12, 2019

    Shout out to for his impressive AI alignment newsletter. If you want to keep up to speed with what is going on in the field of AI alignment, there's nothing better: His team has summarised 1,200 papers to date!

  6. Retweeted
    Dec 10, 2019

    Want to ensure AI is beneficial for society? Come talk to like-minded people at the Human-Aligned AI Social at , Thursday 7-10 pm, room West 205-207.

  7. Professor Russell will have an 'Ask Me Anything' session on Monday, December 16th from 9-11am Pacific time. The AMA will be live on the front page of .

  8. We are currently accepting applications for our 2020 internship program. If you know of anyone that would be interested in applying, please share! We highly value word of mouth, having received excellent applicants that way in the past!

  9. Retweeted

    participant Mark Nitzberg, Partner at and Exec. Director at argued yesterday that brings down the cost of propaganda, but that it is also part of the solution to fight it. Listen in:

  10. Retweeted
    Nov 15, 2019
  11. Retweeted
    Nov 6, 2019

    [Alignment Newsletter #72]: Alignment, robustness, methodology, and system building as research priorities for AI safety -

  12. Retweeted
    Nov 2, 2019

    We’re hiring!! Join as Safe and Ethical AI Research Associate / Senior Research Associate to work on exciting interdisciplinary agenda

  13. Retweeted
    Oct 29, 2019

    Stuart Russell suggests turning the logic of on its head to solve the control problem in ➡️ machines to learn preferences over time. New paradigm could also reconcile AI 🔄 AI problems; many challenges yet to be solved.

  14. Retweeted
    Oct 26, 2019

    He argues that if we instead design AIs whose objective is to get a better model of humans and do what we want, then there's still a lot which can go wrong - but more competent AI means better outcomes instead of meaning worse ones. That's a safer paradigm to be working in.

  15. Retweeted
    Oct 26, 2019

    Russell's key point: right now, we design AIs with objective functions that don't encompass what we actually want. That means that the "smarter" they get - the better they optimize for something that's not what we want - the worse off we are.

  16. Retweeted
    Oct 26, 2019

    Met with Stuart Russell to talk about the future of AI and his new book, Human Compatible:

  17. Retweeted

    What can be done about the potential misuses of ? 🤖 Stuart Russell, professor and director of , will attempt to answer this question at a conference organized on Nov. 5 at by the and the ➡

  18. "Creating machines smarter than us could be the biggest event in human history – and the last." - The 's 's review of

  19. Retweeted
    Oct 23, 2019

    [Alignment Newsletter #70]: Agents that help humans who are still learning about their own preferences -

  20. Retweeted
    Oct 21, 2019

    Excited to finally release my first full-fledged research project, done under the amazing mentorship of and !

