Future of Life (Verified account)

@FLIxrisk

FLI catalyzes and supports research and initiatives to safeguard life and develop optimistic visions of the future. Official account. RT is not endorsement.

Cambridge, MA
Joined June 2014

Tweets


  1. Given the extent to which modern physics challenges our understanding of the world around us, how wrong could we be about the fundamental nature of reality? On the latest FLI podcast, Anthony Aguirre explores this question & the implications of his answer:

  2. "Finance, education, medicine, programming, the arts — artificial intelligence is set to disrupt nearly every sector of our society." Learn more about the preparations being made for these changes, and the work that still needs to be done:

  3. "I think we are entering the era of hacking human beings, not just hacking smartphones and bank accounts, but really hacking homo sapiens..." — Hear more on the FLI Podcast:

  4. Have you read Human Compatible, the latest work from AI pioneer Stuart Russell? Catch him on the AI Alignment Podcast, where he breaks down the book's major arguments & conclusions:

  5. ICYMI: The FLI team talks 2019 accomplishments, 2020 goals & reasons for existential hope on this special podcast episode:

  6. Retweeted

    I am excited to announce the launch of my new podcast with my granddaughter, which will tell the stories of people who have brought us back from the brink of nuclear catastrophe, and how we can do it again. Join me in that fight by tuning in.

  7. An engineer wouldn't say, "I just design the bridge; someone else can worry about whether it stays up." AI developers, too, should take responsibility for the real-world uses of their products. That's the idea behind the Research Goal Principle:

  8. The evolution of technologies like AI & biotech is raising complex new questions about the definition of self & the meaning of humanness. On the latest AI Alignment Podcast, catch a discussion on the nature of identity in the 21st century.
  9. If your New Year's resolution was to read more, FLI has you covered. We've put together a list of some of our favorite books on existential risk, existential hope, technology, society & more. Browse it here:

  10. Retweeted

    Listen to Yuval and Max Tegmark's recent discussion about humanity, morality, and technology on the podcast - link below. -YNH Team

  11. Seeking advisees and collaborators for select AI projects! For more information about the projects and requirements, visit their site:
  12. Emerging technologies empowered by artificial intelligence will increasingly give us the power to change what it means to be human. Are there ways we can inform and shape human understanding of identity to nudge civilization in the right direction?

  13. How does artificial intelligence impact the UN's Sustainable Development Goals (SDGs)? A pioneering study found that AI can help us meet 79% of the SDG targets. Read more about the benefits (& hindrances) that AI brings to the table:
  15. On the latest FLI podcast, contemporary thought leaders join for a wide-ranging discussion on humanity, morality, technology & more. Listen in as they apply their combined expertise to some of the most profound issues of our time:
  16. CRISPR gene drives offer humanity an unprecedented level of control over the natural world. They could help us eradicate diseases, fight invasive species & bolster endangered ones. But they could also do irreversible damage. Learn more:

  17. Join us for our final podcast of 2019, a conversation on consciousness, ethics, effective altruism, human extinction, emerging technologies, and the role of myths and stories in fostering societal collaboration and meaning.
  18. Curious to know what we’ve been up to recently, which project directions we’re optimistic about, and our reasons for existential hope in 2020? Join the FLI team for this special end-of-year episode:
  19. As AI takes over more complex tasks, systems must be more flexible—& therefore less predictable. How can we maximize flexibility without sacrificing reliability & safety? Learn more about computer scientist Andre Platzer's groundbreaking approach:

  20. On the latest AI Alignment Podcast episode, learn more about empirical AI safety research from researcher Jan, who discusses his work on recursive reward modeling, the research directions he's optimistic about & more:

