@eigenrobot very curious about your take on GPT-3, and the imminence of AGI (actually, mostly the AGI thing)
Are you concerned?
What do you think the odds are of an existential catastrophe?
-
-
hmm thanks for the response. I'm struggling to see how the sensitivity of an AGI to initial conditions/loss function isn't so high that it's destined to turn bad. Not an original thought ofc, but I'm honestly confused about why more people don't see things that way?
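(fwiw the sensitivity intuition is easy to demo in miniature: a toy sketch below, with a made-up non-convex loss, shows plain gradient descent sending two nearby initializations to completely different minima. Nothing here is specific to GPT-3 or any real training setup.)

```python
# Toy illustration of the "sensitivity to initial conditions" analogy:
# gradient descent on a non-convex loss, where two starting points
# 0.1 apart settle into entirely different minima. The loss function
# is invented for the demo, not anything from an actual model.

def loss(x):
    return x**4 - 3 * x**2 + x

def grad(x):
    return 4 * x**3 - 6 * x + 1

def descend(x, lr=0.05, steps=200):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

print(descend(0.1))  # settles near x ≈ -1.30 (left basin)
print(descend(0.2))  # settles near x ≈ +1.13 (right basin)
```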
-
I sort of see the potential for proliferation of many variants with different starting conditions, parameters, etc. Imagine they start with a common kernel and then slowly grow and develop it over time for different use cases
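(a hypothetical sketch of that fork-and-specialize pattern; BaseModel and its methods are invented stand-ins for the idea, not any real framework's API)

```python
import copy

# Invented stand-in for a shared pretrained "kernel"; not a real API.
class BaseModel:
    def __init__(self):
        # weights shared by every descendant variant
        self.weights = {"core": [0.1, 0.2, 0.3]}

    def finetune(self, use_case, nudge):
        # each fork grows its own parameters on top of the common kernel
        self.weights[use_case] = [w + nudge for w in self.weights["core"]]

kernel = BaseModel()

# fork the common kernel, then let each copy develop independently
variants = {name: copy.deepcopy(kernel) for name in ("chat", "code", "search")}
for i, (name, model) in enumerate(variants.items()):
    model.finetune(name, nudge=0.01 * i)
```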