@eigenrobot very curious about your take on GPT-3, and the imminence of AGI (actually, mostly the AGI thing)
Are you concerned?
What do you think the odds are of an existential catastrophe?
Replying to @AVandele
Honestly I have no idea. The space of possibilities is wide open. Maybe the most obviously worrying outcome is if humans end up wrecking themselves because of the social impact of marginally stronger AI, one iteration after another. The world could look a lot different soon.
Replying to @eigenrobot
hmm, thanks for the response. I'm struggling to see how the sensitivity of an AGI to initial conditions/loss function isn't so high that it's destined to turn bad. Not an original thought ofc, but I'm honestly confused about why more people don't see things that way.
Replying to @AVandele
I sort of see the potential for proliferation of many variants with different starting conditions, parameters, etc. Imagine they start with a common kernel and then slowly grow and develop it over time for different use cases
Replying to @eigenrobot @AVandele
Can see something like iterations of AI personae which over time maybe just look like people.
Replying to @eigenrobot
So it sounds like you would disagree with (1) "AI concludes that getting generally smarter is going to help it optimize whatever its loss function is" and (2) "AI realizes that it can always get generally smarter by repurposing as many atoms as it can for compute." Is that right?
I wouldn't say I disagree with it; one could easily create a badly specified optimizer.
Plenty of room for all sorts of outcomes; I think I would be a fool to make confident predictions.