@GaryMarcus Superficial answer: no, an input layer resize requires reconfig & retraining, but prob fixable with a sequential attentional architecture...
@GaryMarcus ...more deeply: prob building useful abstractions that they can't leverage in new scenarios (general vs generalizable learning)
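The point in the first reply about input-layer resizing can be made concrete with a minimal sketch. This assumes a PyTorch-style model; the 8x8 and 19x19 board sizes, the names fixed_net, AttentionNet, and EMBED, and the specific attention layer are all illustrative choices, not anything stated in the thread, and the attention variant is just one possible reading of "sequential attentional architecture".

```python
# Sketch: a fixed-size input layer is tied to one board size, while an
# attention layer over per-square tokens accepts any board size.
import torch
import torch.nn as nn

EMBED = 32  # hypothetical embedding width

# Fixed-size model: the first Linear layer hard-codes an 8x8 board (64 inputs).
fixed_net = nn.Sequential(
    nn.Flatten(),
    nn.Linear(8 * 8, EMBED),  # weight matrix shaped for exactly 64 squares
    nn.ReLU(),
    nn.Linear(EMBED, 1),
)

# Attention-based model: each square is a token, so the number of squares
# (board size) can vary between calls. Positional info is omitted for brevity.
class AttentionNet(nn.Module):
    def __init__(self, embed=EMBED):
        super().__init__()
        self.embed = nn.Linear(1, embed)  # per-square feature -> embedding
        self.attn = nn.MultiheadAttention(embed, num_heads=4, batch_first=True)
        self.head = nn.Linear(embed, 1)

    def forward(self, board):             # board: (batch, n_squares, 1)
        x = self.embed(board)
        x, _ = self.attn(x, x, x)         # self-attention over squares
        return self.head(x.mean(dim=1))   # pool over squares

attn_net = AttentionNet()

small = torch.randn(1, 8 * 8, 1)    # 8x8 board as 64 square tokens
large = torch.randn(1, 19 * 19, 1)  # 19x19 board as 361 square tokens

fixed_net(small.squeeze(-1).view(1, 8, 8))      # works: matches the built-in size
# fixed_net(large.squeeze(-1).view(1, 19, 19))  # shape error: the input layer
#                                               # must be resized and retrained
attn_net(small)                                 # works
attn_net(large)                                 # also works: no reconfiguration
```

Whether the attention model actually transfers its learned abstractions to the larger board is exactly the "general vs generalizable learning" question raised in the second reply; the sketch only shows that it runs without reconfiguration.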
End of conversation
New conversation
This Tweet is unavailable.
@GaryMarcus @techreview But which architecture scales better to a much larger board? The human or the machine?
@GaryMarcus I don't think so. Not in the first encounter, at least...