Search results
  1. If the media gave equal time to speak, he would be leading in the polls.

  2. Jun 6, 2019

    Hey, I've replicated GPT2-1.5B in full and plan on releasing it to the public on July 1st. I sent you an email with the model. For my reasoning why, please read my post:

  3. On that stage is the only one who understands and speaks about , , , and .

  4. SafeAI-20 is very pleased to announce the Invited Talk by François Terrier (CEA): "Considerations for Evolutionary Qualification of Safety-Critical Systems with AI-based Components." Invited Speakers info at:

  5. May 31, 2019

    When people say things like "AI will create new jobs you can't even imagine", this is what I imagine: poorly paid, outsourced, unreliable, with no safety net and no benefits.

  6. Sep 27, 2017
  7. Jan 31

    Proud to announce the 2nd AI Safety Landscape meetup to shape a body of knowledge, together with the most relevant initiatives and leaders from NASA, DARPA, PAI, Stanford, Berkeley, Airbus, Boeing, Lockheed Martin,...

  8. Apr 2, 2019

    Possibly the best AI safety overview talk I’ve seen in the past year - killed it. A 15-minute whirlwind tour of various approaches from different teams. If you’re confused about how they all fit together, watch this.

  9. Proud to bring great inspirational keynote speakers to SafeAI 2019! Prof. Francesca Rossi (IBM / University of Padova) on Ethically Bounded AI, and Dr. Sandeep Neema (DARPA) on Assured Autonomy.

  10. Apr 30, 2018

    I'm compiling a list of resources that regularly produce content on the state of long-term-focused work. Suggestions for additions welcome.

  11. Feb 3
  12. Feb 2

    SafeAI-20 is very pleased to announce the Keynote by Ece Kamar (Microsoft Research AI): "AI in the Open World: Discovering Blind Spots of AI." Find further Invited Speakers info at:

  13. Jan 31

    SECOND AI SAFETY LANDSCAPE WORKSHOP: a dedicated follow-up session to shape an AI Safety Landscape takes place in NYC, US, on February 6, 2020, at the Bloomberg offices. 👇

  14. Jan 26

    The SafeAI-20 program is now available: NYC, Feb 7th. Don't miss the keynote by Ece Kamar (Microsoft AI), invited talks, dynamic panels, and high-quality paper presentations.

  15. Dec 21, 2019

    Three reasons to read this important report: , which is new and already doing great work; , who is a must-read for anything ; and the topic of , which is critical for the future of warfare and international security (and a big concern of mine).

  16. Oct 9, 2019

    Anthropomorphism is holding back and thinking. Focus on sentience, not just "humanity". + we'll have a better chance of persuading AI to adopt than - given they won't be human.

  17. Aug 28, 2019

    Why is a disaster for insurance: 1) the past doesn’t predict the future in technology (see slides); 2) AI risk is systemic & concentrated, e.g. millions of cloned actors - e.g. Musk messing up all Teslas overnight somehow.

  18. Mar 11, 2019

    Excited to announce a new blog post comparing design choices for impact measures in reinforcement learning, and a new and improved version of the relative reachability paper.

  19. Mar 11, 2019

    Impact penalties help us train agents to avoid unwanted side effects, but these penalties can still produce undesired behaviour. We compare different penalty design choices and show how to avoid this (a rough sketch of the idea follows this list). Blog post: Paper:

  20. Mar 5, 2019
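
To make items 18-19 concrete: an impact penalty shapes the task reward by subtracting a term proportional to how far the reached state deviates from a baseline state (for example, the state that inaction would have produced). The Python sketch below is a minimal illustration under assumed names (beta, a toy feature-difference deviation measure); the paper's actual relative reachability measure instead compares which states remain reachable, so this is not the authors' code.

    import numpy as np

    def impact_penalized_reward(task_reward, state, baseline_state, beta=0.1):
        """Shape the reward by penalizing deviation of the reached state
        from a baseline state (e.g., the state inaction would produce)."""
        # Toy deviation measure: number of state features that differ.
        # Relative reachability would instead compare how many states
        # remain reachable from `state` vs. from `baseline_state`.
        deviation = np.sum(np.asarray(state) != np.asarray(baseline_state))
        return task_reward - beta * deviation

    # Example: task reward 1.0, but the agent changed two world features
    # relative to the baseline, so the shaped reward is 1.0 - 0.1*2 = 0.8.
    print(impact_penalized_reward(1.0, state=[1, 0, 1], baseline_state=[0, 0, 0]))

The design choices the tweet alludes to live in the pieces this sketch fixes arbitrarily: which baseline to compare against, how to measure deviation, and how to weight the penalty.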
