the Big News there is fucking laughable: "we explored a well-defined search space algorithmically rather than using human habits in it as a baseline." this is the least interesting thing to ever happen in AI
my vague impression is like 60% MOAR SEARCH and 40% MOAR NEURAL NETS BUT SPECIFIC COOL KINDS THAT ACTUALLY DO ANYTHING WE COULD GIVE A DAMN ABOUT but yeah
-
-
but graph search is really powerful technology, so why not? too bad about all the damn memory reallocation
-
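the reallocation complaint has a standard workaround: grab one big node arena up front and hand out integer indices, so the search loop itself never touches the allocator. a minimal python sketch, assuming numpy, with the capacity and node fields invented for illustration:

    import numpy as np

    class NodePool:
        """Preallocated arena for search-tree nodes: one allocation up
        front, then a bump pointer, so expanding a node never reallocates."""
        def __init__(self, capacity):
            self.state  = np.empty(capacity, dtype=np.int64)   # opaque state id
            self.parent = np.full(capacity, -1, dtype=np.int32)
            self.n = 0                                         # next free slot

        def alloc(self, state_id, parent_idx):
            if self.n >= len(self.state):
                raise MemoryError("pool exhausted; raise capacity")
            i = self.n
            self.state[i] = state_id
            self.parent[i] = parent_idx
            self.n += 1
            return i  # an integer index stands in for a pointer

    pool = NodePool(capacity=1_000_000)   # sized to whatever memory allows
    root = pool.alloc(state_id=0, parent_idx=-1)
-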
I mean, is it really radical to say "we've got a machine that can look more levels into chess/go because we've got faster hardware and more memory to hold a search tree"?
-
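for scale: looking d plies ahead costs roughly b**d node visits for branching factor b, so faster hardware really does translate directly into extra levels. a toy depth-limited negamax, where game with legal_moves/apply/evaluate is an assumed placeholder interface, not any real library:

    def negamax(position, depth, game):
        """Depth-limited negamax: work grows ~ branching_factor ** depth,
        so more cycles and memory buy deeper lookahead."""
        moves = game.legal_moves(position)
        if depth == 0 or not moves:
            return game.evaluate(position)   # static evaluation at the frontier
        best = float("-inf")
        for move in moves:
            score = -negamax(game.apply(position, move), depth - 1, game)
            best = max(best, score)
        return best
-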
oh totally not, but that isn't all that's happening
-
well, go on then?
-
these "recurrent" and "convolutional" nets seem like they might have some technique-level juice to them
-
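for the convolutional half, the "juice" is mostly weight sharing: one small kernel slides across the whole input instead of a dense weight per position. a bare numpy sketch with invented sizes:

    import numpy as np

    def conv1d(x, kernel):
        """Slide one shared kernel across the signal: the same few
        weights are reused at every output position."""
        k = len(kernel)
        return np.array([np.dot(x[i:i + k], kernel)
                         for i in range(len(x) - k + 1)])

    x = np.random.randn(16)    # toy input signal
    w = np.random.randn(3)     # 3 shared weights cover all 14 positions
    print(conv1d(x, w).shape)  # (14,)
-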
i won't know until i find out whether the ~recurrent nets~ that are all the rage mean anything beyond the 101 level shit they sound like
-
well, 301, obviously, they don't exactly cover connectionism in 101s
-
hmm, looking into those, only recurrent seems at all surprising to me, and that only because I can see that 'only forward' simplifies what might otherwise become a mess-o-perceptrons
- 4 more replies
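the feedforward/recurrent distinction in one screenful: a feedforward layer is a pure function of the current input, while a recurrent cell feeds its own previous output back in, which is exactly the loop that "only forward" rules out. a toy numpy sketch, dimensions invented:

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 3))    # input -> hidden weights
    U = rng.normal(size=(4, 4))    # hidden -> hidden weights: the recurrence

    def feedforward(x):
        """Pure function of the current input; no state carried over."""
        return np.tanh(W @ x)

    def recurrent(xs):
        """Same layer plus a self-connection: h_t depends on h_{t-1},
        so the net keeps memory across the sequence."""
        h = np.zeros(4)
        for x in xs:
            h = np.tanh(W @ x + U @ h)
        return h

    seq = rng.normal(size=(5, 3))  # 5 timesteps of 3-dim input
    print(feedforward(seq[0]))     # stateless
    print(recurrent(seq))          # state threaded through time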
New conversation -
-
-
I kinda think both are cheating; they both amount to making progress by throwing more bandwidth/cycles at the problem. But maybe there's been some genuine new stuff that I don't know about.
-
some techniques get talked about like they're closer to revolutionary than incremental; i'll let you know whether they're full of shit if i get around to playing with them
End of conversation