I wish we had a Reasoning Test for AI. The current focus on pattern recognition, and the lack of progress on reasoning from logical axioms, code generation, etc., isn’t healthy and yields shallow results that appear plausible when the network has been fed a similar training sample.
The problem is we literally have no clue where to even start with reasoning. We don’t have any loss functions to evaluate it, and we don’t even know what a model that simulates reasoning would look like. The closest we have is GPT-2 by @OpenAI.
New conversation
I would like to see a population of SHRDLUs that navigate their world and cooperate with each other via natural language, with survival depending on how well they manage to cooperate, and new generations being born via genetic combination of the survivors’ code.
I suspect such agents would quickly evolve their language into the kind of formal, structured language
@paulg is talking about. The exercise wouldn’t help them understand human language much better than they do now.
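The evolutionary loop described above can be sketched under toy assumptions: each agent’s “code” is reduced to a bit string, and cooperation fitness is stood in for by how well two paired agents’ strings agree (a crude proxy for a shared language). All names and parameters here are illustrative, not a real SHRDLU simulation.

```python
import random

random.seed(0)

GENOME_LEN = 16   # length of each agent's toy "code"
POP_SIZE = 20     # must be even so agents can be paired
GENERATIONS = 30

def random_agent():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def cooperation_fitness(a, b):
    # Toy stand-in for "how well two agents cooperate": shared bits.
    return sum(x == y for x, y in zip(a, b))

def crossover(a, b):
    # Genetic combination of two survivors, with a small mutation rate.
    child = [random.choice(pair) for pair in zip(a, b)]
    if random.random() < 0.1:
        child[random.randrange(GENOME_LEN)] ^= 1
    return child

population = [random_agent() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Pair agents at random and score each pair's cooperation.
    random.shuffle(population)
    pairs = list(zip(population[::2], population[1::2]))
    pairs.sort(key=lambda p: cooperation_fitness(*p), reverse=True)
    # Survival depends on cooperation: keep the best-cooperating half.
    survivors = [agent for pair in pairs[: len(pairs) // 2] for agent in pair]
    # New generation born via genetic combination of the survivors.
    children = [crossover(random.choice(survivors), random.choice(survivors))
                for _ in range(POP_SIZE - len(survivors))]
    population = survivors + children

best = max(cooperation_fitness(a, b)
           for a, b in zip(population[::2], population[1::2]))
print(best)
```

Because selection keeps the best-agreeing pairs and offspring mix survivor genomes, pairwise agreement tends to rise over generations; the interesting open question in the tweet is what happens when the genome is an actual language protocol rather than bits.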
New conversation
This Tweet is unavailable.
I think the practical application is not interpreting natural-language instructions directly, but interactively refining them into valid code, like autocomplete/autocorrect. It could be trained on big code (using comments, identifier parts, documentation). Code search is too inflexible.
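A minimal sketch of that retrieval-style refinement step, assuming a tiny hypothetical corpus: match a natural-language fragment against code lines by overlapping their word parts (identifier pieces and comments), and return ranked candidate completions. The corpus, scoring, and function names are illustrative assumptions, not a real system.

```python
import re

# Hypothetical "big code" corpus: code lines with identifiers and comments.
corpus = [
    "def read_file(path): return open(path).read()  # read file contents",
    "def write_file(path, data): open(path, 'w').write(data)  # write data",
    "def file_exists(path): import os; return os.path.exists(path)",
]

def subtokens(text):
    # Split identifiers and comments into lowercase word parts,
    # so `read_file` matches the query word "read".
    return [t.lower() for t in re.findall(r"[A-Za-z]+", text)]

def suggest(query, corpus, k=2):
    # Score each corpus line by overlap between the query's words and
    # the line's identifier parts / comments; return the top k lines.
    q = set(subtokens(query))
    scored = [(len(q & set(subtokens(line))), line) for line in corpus]
    scored.sort(key=lambda s: s[0], reverse=True)
    return [line for score, line in scored[:k] if score > 0]

print(suggest("read contents of a file", corpus))
```

In an interactive loop, the user would pick or correct a suggestion and the refined query would be re-ranked, which is the autocomplete-style refinement the tweet describes; a real system would use learned embeddings rather than word overlap.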
End of conversation