The "...they teach you" formulations are indicative of the toolbox approach, of course. If he would explain how "they" justify their teaching and he'd prove (instead of insinuate) why "they" are wrong he might become rationalist.
-
Replying to @Plinz @blubberquark
I read to the end and asked myself when they would present the better alternative to Bayesian inference. And then: least-squares methods with various regularizations? Really?
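For concreteness, "least-squares with regularization" here presumably means something like ridge regression. A minimal illustrative sketch, with made-up data and an assumed L2 penalty (not code from the thread):

```python
# Ridge regression: "least squares with regularization".
# Data and lambda are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))           # 50 samples, 3 features
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)

lam = 0.1                              # regularization strength (assumed)
# Closed form: w = (X^T X + lam*I)^(-1) X^T y
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
print(w_hat)                           # should be close to w_true
```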
-
Replying to @data_hope @blubberquark
No, his preferred alternative is to just wing it. You have this grab bag of tools, and you can make new ones, but there can supposedly be no systematic way to pick them. Which is why
@meaningness cites him in support of his "metarational nebulosity".
-
This tweet is too small to hold my reply, but we need to separately look at rationality, cognitive modelling, applied ML, AGI and maybe also philosophy, because "Bayesianism" itself is a bit nebulous.
-
Yes, the discussion comes down to whether rationalism is viable, and whether minds are universal function approximators, and whether universal function approximators exist. Are there mathematically optimal ways to get to truth when you are in one of the possible universes?
-
That's the philosophy/meta-rationality thing. If I had the answer, I could build an AGI.
-
Why? An existence proof does not mean that we know how to get there, and conversely, the absence of fully universal function approximation does not prevent us from building minds like ours?
-
Going by what you just said, I don't think you, I, and nostalgebraist *actually* disagree. Bayesianism can be true, and you could still build a mind without explicitly writing P(A|B) = P(B|A)*P(A)/P(B) anywhere.
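For reference, an explicit Bayesian update with that formula is just this kind of arithmetic; a toy sketch with invented numbers:

```python
# Bayes' rule, P(A|B) = P(B|A) * P(A) / P(B), on a toy diagnostic example.
# All numbers are made up for illustration.
p_a = 0.01                 # prior: P(A), e.g. base rate of a condition
p_b_given_a = 0.95         # likelihood: P(B|A), e.g. true positive rate
p_b_given_not_a = 0.05     # P(B|not A), e.g. false positive rate

# Total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

p_a_given_b = p_b_given_a * p_a / p_b
print(p_a_given_b)         # ~0.16: posterior after one update
```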
-
Replying to @blubberquark @data_hope
No, we *deeply* disagree. The question is not so much whether a single given tool is the right one, but whether there is a principled way to find truth. His claim is that you cannot find the algorithm that decides which tool you should use; only minds can do that, by handwaving.
-
My claim is not that this is impossible, only that Bayesianism does not actually do this. I'm not some sort of toolbox partisan on a deep philosophical level. There might be something that works as well as Bayesianism is *purported* to; I just haven't seen it!
-
("Works as well" in a philosophical sense, not tool performance sense, I mean)