Future of Humanity Institute

@FHIOxford

Multidisciplinary research institute at the University of Oxford. We bring careful thinking to bear on big-picture questions about humanity and its prospects.

University of Oxford
Joined September 2012

Tweets


  1. Retweeted
    23 Jan

    Thanks to structural causal models, we now have a more precise understanding of incentives in causal influence diagrams. Blog post: arXiv:

  2. 16 Jan

    Our Centre for the Governance of AI report 2019: "We now have a core team of 7 researchers and a network of 16 research affiliates and collaborators. We published a major report, nine academic publications, four op-eds and our first DPhil (PhD) thesis".

  5. (2/2) The article is based on their paper in the Journal of Strategic Studies:

  6. (1/2) Ben Garfinkel and Allan Dafoe ask how AI technologies will affect the offense-defense balance of military operations in

  7. (2/2) The discussion focused on and papers: A Causal Bayesian Networks Viewpoint on Fairness () and Path-Specific Counterfactual Fairness ()

  8. (1/2) "A perspective on fairness in machine learning from ", a discussion with , and , co-hosted by and

  9. FHI currently has open research positions at several levels of seniority, and will make up to 11 appointments across research areas. Application deadline: 16 August (noon). More information here:

  10. FHI is seeking talented project manager(s) for a high-impact role. We are looking for driven and experienced individuals who are interested in improving the long-term future of humanity. For more information:

  11. Applications for the Governance of AI Fellowship, a 3-month research programme, close this Thursday. It's an opportunity to do AI governance research, working with Allan Dafoe, , , & Ben Garfinkel.

  12. Risks from artificial intelligence are often thought of as either misuse or accident risks. A new piece in by and Allan Dafoe argues that a third type deserves more attention: structural risks.

  13. The Governance of AI Fellowship is a 3-month opportunity to do research at the forefront of the rapidly growing AI governance field, aimed at PhDs, postdocs and exceptional Master's students. First deadline Feb 28.

  14. We are seeking two executive assistants to support our senior researchers. For more details and to apply, visit our site.

  15. FHI is looking for a high-impact individual to be the Director's Project Manager. To find out more and apply, visit our site.

  16. Interesting discussion on the 80K hours podcast about how to feed the world in the event of a global catastrophe.

  17. Who does the American public trust to develop artificial intelligence? , Research Affiliate at FHI's GovAI, on the podcast:

  18. DeepMind's research direction for scalable agent alignment via reward modeling. Paper:

  19. Nick Bostrom's new paper offers a perspective from which to evaluate the risk-benefit balance of developments towards ubiquitous surveillance or a unipolar world order. Read it here:

  20. We are excited to announce that FHI will award up to 8 scholarships for scholars whose research aims to improve the long-term prospects for humanity by identifying and answering crucial questions. More information on FHI's scholarship programme here:

