“Death” is dissolution of ontic-structural integrity for a *physical system*, and this destroys it as an existing intelligence. Ideas like uploads and mind-state-transfer are both ill-posed and uninteresting for anything complex enough to be called “intelligent.”
Unless of course you invent exact quantum-state cloning for macro-scale things. In which case teleporting to Alpha Centauri would be more interesting, and it wouldn’t be a way to cheat death.
Another way to think of it: intelligence is the whole territory of the physical system that embodies it. No reductive model-based state transfer preserving ontic-structural integrity and continuity will be possible. Cloning an intelligence is not like copying software code.
Obviously I’m not a Strong AI guy, and am pretty much in the David Chalmers camp on the hard problem.
I’m not saying this quite right. An intelligence exists within a thermodynamic boundary that separates it from the environment but does not *isolate* it. The nature of the intelligence is entangled with the specific environment, and the boundary actually embodies much of it.
Put philosophically: intelligence is a lens that focuses information in the environment. Contemporary statistical inference AIs live in the informational equivalent of an industrial farming monoculture. Yes it has access to infinite corn. But not the variety of a real environment
Didn’t you have a framework for how it’s more important to think about the right things than to think more effectively (in my sloppy half-remembered summary version)?
The boundary intelligence thread linked in the next thread is sort of along those lines
This?
Quote Tweet
1/ I'd like to make up a theory of intelligence based on a 2-element ontology: boundary and interior intelligence
Yes exactly! This is one of the big things AI folk miss about consciousness. They treat it as an emergence, the crowning achievement of a sufficiently advanced system. But neurologically it’s really just a hypertrophied boundary filter.
I think those of us who start out in the more situated end of thinking about AI tend to get to this view more usefully. It’s generally an East-coast way of thinking, far from network effects and things going whoosh driven by Moore’s law scaling parameters
Totally. One of the few places in AI that works on this is Interactive Machine Learning which started in robotics. Active Learning (trying to identify samples that maximize marginal learning) comes from trying to avoid boring a human interaction partner: burrsettles.com/pub/settles.ac
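The core move in Active Learning mentioned above is letting the learner pick which unlabeled samples to ask about, rather than labeling everything. A minimal sketch of pool-based uncertainty sampling, the simplest such strategy; the function and variable names here are illustrative, not from the linked survey:

```python
# A minimal sketch of pool-based active learning via uncertainty
# sampling: ask about the items the model is least sure of.
# All names here are illustrative assumptions, not from the thread.
import numpy as np

def uncertainty_sample(predict_proba, pool, k=1):
    """Pick indices of the k pool items whose top label is least certain."""
    probs = predict_proba(pool)        # shape: (n_samples, n_classes)
    confidence = probs.max(axis=1)     # confidence in the predicted label
    return np.argsort(confidence)[:k]  # least-confident items first

# Toy example: a fake classifier that is confident on item 0
# (95% vs 5%) and uncertain on item 1 (55% vs 45%).
fake_proba = lambda X: np.array([[0.95, 0.05], [0.55, 0.45]])
pool = np.zeros((2, 3))                # two unlabeled items
print(uncertainty_sample(fake_proba, pool, k=1))  # -> [1]
```

The payoff, per the thread, is interactive: the learner queries where its marginal information gain is highest, which also happens to be where a human labeling partner is least bored.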

