After learning about signified & signifier, map-territory relations & what simulacra means, I can finally make my comment: I haven't seen this to be the case! All I've read dispels this myth of an evil AI Overlord - tho representations in media/SciFi are a different story https://twitter.com/samim/status/1117384860233097216
Replying to @cosimia_
It’s all models, data, simulations, predictions, trial & error, optimization. The methods, hardware, data — they are changing and improving, but most of what we see is a technology & method adapted and applied to interesting problems. It’s a creative spark and we still direct it.
Replying to @HunterBergsma @cosimia_
I’ve learned enough to destabilize the fearful narrative, but ultimately we will reap what we sow. We can try to find optimizations of bad objectives and waste energy/delude ourselves. We can use bad data. We can build models that are too big to comprehend individually.
Replying to @HunterBergsma @cosimia_
The methods can be forgotten. We can easily develop overdependence. We can train systems to solve problems and build a utopia, or we can use technologies to monitor and control. Also, this part is fun but mysterious to me — recursive self-improvement, AGI, and AI consciousness.
Agree with you here. Optimisation & goal-directed learning by machines will never be good nor evil. It is our responsibility to make it such that, in its pursuit of the goal, we have thoroughly defined what that goal is & the boundaries it has to respect to get there.
