-
If the media gave @AndrewYang equal time to speak, he would be leading in the polls. #AIsafety #FairShare #YangGang #DemDebate5 -
Hey @OpenAI, I've replicated GPT2-1.5B in full and plan on releasing it to the public on July 1st. I sent you an email with the model. For my reasoning why, please read my post: https://medium.com/@NPCollapse/gpt2-counting-consciousness-and-the-curious-hacker-323c6639a3a8 #machinelearning #gpt2 #aisafety -
On that stage @AndrewYang is the only one who understands and speaks about #AI, #AIsafety, #DataPrivacy, and #TechnologicalUnemployment. #YangGang #DemDebate5 -
SafeAI-20 is very pleased to announce the Invited Talk by François Terrier (CEA): "Considerations for Evolutionary Qualification of Safety-Critical Systems with AI-based Components." Invited Speakers info at: http://tiny.cc/b39gjz @FLIxrisk @CEA_List @RealAAAI @AAIP_York #AISafety pic.twitter.com/1jIodgYlcr
-
When people say things like "AI will create new jobs you can't even imagine," this is what I imagine: poorly paid, outsourced, unreliable, with no safety net and no benefits. #AIethics #AIsafety https://www.technologyreview.com/s/613606/the-ai-gig-economy-is-coming-for-you/ -
LOVE this #AIsafety landscape map! Source (Roman V. Yampolskiy, who's not on Twitter?): https://www.facebook.com/photo.php?fbid=10213158221082446&set=a.2227672163627.2124093.1002495584&type=3&theater pic.twitter.com/1MWEJZpoWt
-
Proud to announce the 2nd AI Safety Landscape meetup to shape a body of knowledge, together with the most relevant initiatives and leaders from NASA, DARPA, PAI, Stanford, Berkeley, Airbus, Boeing, Lockheed Martin,... @AAIP_York @CSERCambridge @CEA_List @NASA @DARPA #AISafety https://twitter.com/AISafetyLands/status/1223244543333748737
-
Possibly the best AI safety overview talk I've seen in the past year - @DavidSKrueger killed it. A 15-minute whirlwind tour of various approaches from different teams. If you're confused about how they all fit together, watch this. https://youtu.be/Tqu4cwne1vA #AISafety -
Proud to bring great inspirational Keynote speakers to SafeAI 2019! Prof. Francesca Rossi, IBM / University of Padova, on Ethically Bounded AI, and Dr. Sandeep Neema, DARPA, on Assured Autonomy. #AISafety http://www.safeai2019.org pic.twitter.com/WGoCeHniqL
-
I'm compiling a list of resources that regularly produce content on the state of long-term-focused #AIsafety work. Suggestions for additions welcome. https://roxanneheston.com/2018/04/23/long-term-ai-safety-feeds/ -
SafeAI-20 is very pleased to announce the Keynote by Ece Kamar (Microsoft Research AI): "AI in the Open World: Discovering Blind Spots of AI." Find further Invited Speakers info at: http://tiny.cc/b39gjz @MSFTResearch @FLIxrisk @CEA_List @RealAAAI @AAIP_York #AISafety @ecekamar pic.twitter.com/5VnXTwiDTW
-
SECOND AI SAFETY LANDSCAPE WORKSHOP: A dedicated follow-up session to shape an AI Safety Landscape takes place at the Bloomberg offices in NYC, US, on February 6, 2020. https://www.ai-safety.org/second-landscape-workshop #AIsafety #AI #AAAI2020 pic.twitter.com/kldmZt4Bo6
-
The SafeAI-20 Program @RealAAAI is available at http://tiny.cc/3596iz. NYC, Feb 7th. Don't miss the Keynote by Ece Kamar @ecekamar - Microsoft AI @MSFTResearch, invited talks, dynamic panels, and high-quality paper presentations. @AAIP_York @FLIxrisk #AI #AAAI #AISafety @CEA_List pic.twitter.com/GfUKq8jMhV
-
Three reasons to read this important report: @CSETGeorgetown, which is new and already doing great work; @EBKania, who is a must-read for anything #AI; and the topic of #AISafety, which is critical for the future of warfare and international security (and a big concern of mine). https://twitter.com/CSETGeorgetown/status/1208068865772548097 -
Anthropomorphism is holding back #AISafety and #AIEthics thinking. Focus on sentience, not just "humanity". Plus, we'll have a better chance of persuading AI to adopt #Sentientism than #Humanism, given they won't be human. https://secularhumanism.org/2019/04/humanism-needs-an-upgrade-is-sentientism-the-philosophy-that-could-save-the-world/ -
Why #ai is a disaster for insurance, by @viljasenmika: 1) the past doesn't predict the future in technology (see slides); 2) AI risk is systemic & concentrated, e.g. millions of cloned actors, e.g. Musk messing up all Teslas overnight somehow. #AiEthics #ailaw #aisafety #finance #insurance pic.twitter.com/I2OtwrFlGt
-
Excited to announce a new #AISafety blog post on comparing design choices for impact measures in reinforcement learning https://medium.com/@deepmindsafetyresearch/designing-agent-incentives-to-avoid-side-effects-e1ac80ea6107 and a new and improved version of the relative reachability paper https://arxiv.org/abs/1806.01186 -
Impact penalties help us train agents to avoid unwanted side effects, but these penalties can still produce undesired behaviour. We compare different penalty design choices and show how to avoid this. Blog post: https://medium.com/@deepmindsafetyresearch/designing-agent-incentives-to-avoid-side-effects-e1ac80ea6107 Paper: https://arxiv.org/abs/1806.01186 #AISafety pic.twitter.com/Wg2kW7H3YF
-
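The two tweets above describe impact penalties for reinforcement learning agents. As a rough illustration of the general idea (not the DeepMind code or the relative reachability method itself), here is a minimal sketch: the task reward is combined with a penalty proportional to how far the agent has pushed the environment away from some baseline state. The names `penalized_reward`, `hamming_deviation`, and the coefficient `beta` are hypothetical placeholders introduced for this example.

```python
# Minimal, hypothetical sketch of an impact penalty for RL.
# The deviation measure and baseline choice are illustrative placeholders;
# the relative reachability paper uses a more sophisticated measure based on
# which states remain reachable after the agent acts.

from typing import Callable


def penalized_reward(
    task_reward: float,
    current_state,
    baseline_state,
    deviation: Callable[[object, object], float],
    beta: float = 0.1,
) -> float:
    """Combine the task reward with an impact penalty.

    deviation(current_state, baseline_state) should return a non-negative
    number that grows with the agent's effect on the environment.
    beta trades off task performance against side-effect avoidance.
    """
    return task_reward - beta * deviation(current_state, baseline_state)


# Illustrative deviation measure: count environment features that differ
# from the baseline state.
def hamming_deviation(current_state, baseline_state) -> float:
    return float(sum(c != b for c, b in zip(current_state, baseline_state)))


if __name__ == "__main__":
    # Toy example: the agent changed two of five environment features.
    r = penalized_reward(
        task_reward=1.0,
        current_state=(1, 0, 1, 1, 0),
        baseline_state=(1, 1, 1, 0, 0),
        deviation=hamming_deviation,
        beta=0.1,
    )
    print(r)  # 1.0 - 0.1 * 2 = 0.8
```

The design choices the blog post compares (which baseline to measure against, and how to measure deviation from it) correspond to the `baseline_state` and `deviation` arguments in this sketch; naive choices for either can still incentivise undesired behaviour.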
"Self-Driving Cars May Hit People With Darker Skin More Often" via
@Futurism https://futurism.com/the-byte/biased-self-driving-cars-darker-skin … https://arxiv.org/pdf/1902.11097.pdf … Cc@Jackstilgoe@jovialjoy#AI#AISafety#AIEthics#EthicalAI#DriverlessCars#SelfDrivingCars#AutonomousVehicles