Thinking more about this, I keep getting caught in the literary-character version of the Strong AI ("the correct simulation really is a mind") vs. Weak AI ("the correct simulation is a model of a mind") rut.
Is there a philosophy/lit concept of a "mapped" thing that lies between the two? I.e., not the thing itself, but not merely a model/representation of the thing either. Because I don't think consciousness outside of human minds has to be binary.
I had a random thought to run the Berkeley study through a metaphor involving lossy compression: maybe the narrative clock is the sampled content, and the rhythm clock is the fidelity/sample rate.
Perfect narrative fidelity is never possible outside a sample rate of ∞, but given any non-zero sample rate, (narrative) consciousness exists at some level, however fuzzy.
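To make the metaphor a bit more concrete, here's a minimal sketch (entirely my own illustration, not anything from the study) of sampling a continuous "narrative" signal at increasing rates: the reconstruction error keeps shrinking as the rate grows, but it only reaches zero in the limit.

```python
import math

def sample(signal, rate, duration=1.0):
    """Take `rate` evenly spaced samples of `signal` over `duration`."""
    n = max(1, int(rate * duration))
    return [(i / n * duration, signal(i / n * duration)) for i in range(n)]

def reconstruct(samples, t):
    """Zero-order hold: reuse the most recent sample (crude lossy decoding)."""
    value = samples[0][1]
    for st, sv in samples:
        if st <= t:
            value = sv
        else:
            break
    return value

def mean_error(signal, rate, probes=1000):
    """Average reconstruction error over `probes` test points."""
    samples = sample(signal, rate)
    return sum(
        abs(signal(i / probes) - reconstruct(samples, i / probes))
        for i in range(probes)
    ) / probes

# A 3 Hz sine wave standing in for the "narrative" being sampled.
wave = lambda t: math.sin(2 * math.pi * 3 * t)
for rate in (4, 16, 64, 256):
    print(rate, round(mean_error(wave, rate), 4))
```

The error at rate 256 is tiny but still non-zero: fidelity improves without bound, yet "perfect narrative" stays out of reach at any finite rate.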
The Berkeley study doesn't really elaborate on the relationship between the two clocks beyond their being more-or-less equal mechanisms that work in tandem and sometimes cover for each other. But, making a bit of a leap, maybe the narrative clock is a function of the rhythm clock.
It follows that the better the author's "sampling rate," the greater the aesthetic achievement. Maybe the opposite (perfect fidelity but no narrative) is a brain-dead person, a zombie.
This is why I like Johnstone on masks, and on acting in general, as a lens for this: actors become living hosts that lend their characters a continuous/∞ sample rate.
To really overextend the metaphor, perhaps a good actor is literally “channeling” their character.
I don't know if this transfers over to AI consciousness or simulation at all. Maybe an adequate simulation is one with a high-enough sample rate, and corrupted/low-fidelity "glitches in the matrix" are all we have to go on to prove we're in a simulation.
Maybe we're all just deep-fried memes, pining for an idyllic past when there was less JPEG. Maybe I need to slow down on the metaphor-mixing.
had a theory that we’re all just tokens in a cryptocurrency for gods/simulators

