Obviously I’m not a Strong AI guy, and am pretty much in the David Chalmers camp on the hard problem.
I’m not saying this quite right. An intelligence exists within a thermodynamic boundary that separates it from the environment but does not *isolate* it. The nature of the intelligence is entangled with the specific environment, and the boundary actually embodies much of it.
I disagree pretty strongly with you on both counts. The hard problem is only hard because of the way it is posed. It's the same basic error as Searle's artificial separation of syntax and semantics (obviating pragmatics):
When it comes to questions of embodiment and thermodynamics, there are two ways to go, and both involve identifying the correct level of abstraction at which to talk about the dynamics of information:
This is why I don’t engage with your crowd. Too far apart for useful discussion. You guys do you and our descendants can decide who was right.
I'm not sure which crowd you think I'm with. The thing is that I completely agree with you on the poverty of the IQ++ model of intelligence and the otherwise intellectual bankruptcy of the (utilitarian) LW model of AGI that dominates the discourse.
I'm quite invested in doing the difficult work in the philosophy of computer science/mind that you're calling for. But I think it's a bad idea to resist this stuff from the direction of the hard problem/embodiment, as that just cedes whole swathes of the discursive terrain to them.
This is the thing. The work *is* difficult, and doing it properly means giving up some of the easy discursive defences that are common in this area of philosophy and going on the attack, showing just how backward most of this stuff is. Mathematically as much as anything else.
I'm already sharing too much, but here's the basic version of my critique of the LW-sphere, if you're interested: deontologistics.co/2019/10/22/tfe
Though my single best contribution to the discourse remains this: