I guarantee you that the dominant AI algorithm of the 2050s will not be a stack of differentiable layers trained with SGD.
At this point we are pretty close to training our language models on the entirety of text corpora that humanity has produced so far. Their language understanding capabilities are still close to zero.
I always find it striking that, for a field created to a large extent by computer scientists, there is so little discussion of things like scaling. At a certain point, how many orders of magnitude more data and compute does each small improvement require?
As an outsider, I took the OP to mean “when seeking new approaches, keep an eye out for those that produce superlinear returns when you throw compute at them.” Is that a misreading?
@LordSomen AI is in a weird phase right now. We have to change our mindset from "if it works, don't fix it". We need something revolutionary.
Actually, in development that is usually the case because of strict deadlines, but in research our mindset should always be "why and how".
Also because scaling is not always linear: at some point you reach a phase transition.
Can we call that an "edge of chaos" state?
I'll reference this tweet in response: https://twitter.com/AndrewYNg/status/1045399898537873408
If new approaches are even allowed to take root.