Conversation

Replying to
apart from the more useful formalism (it does put some bounds on IIT related to the amount of useful information exchanged between agent and environment), it looks like the Chinese room argument to me
Replying to
you can of course unfold the network over multiple percepts, but then you're effectively exponentiating it each time (okay, that's the upper bound; it's less with some pruning) and you quickly end up in "you need several universes to run this" territory
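a quick back-of-the-envelope sketch of that blowup (my own illustration, not from the thread; the network size and the rough atom count are assumptions): if the network has n binary units, each unfolding step multiplies the joint state space by 2**n in the worst case, so even a handful of percepts pushes past any physically plausible budget.

```python
# Illustrative upper bound: joint-state count when a binary network
# is unfolded over successive percepts (no pruning applied).
n_units = 100               # assumed number of binary units
atoms_in_universe = 10**80  # rough common order-of-magnitude estimate

def unfolded_states(n_units: int, timesteps: int) -> int:
    # Each unfolding step multiplies the joint state space by
    # 2**n_units; pruning would only reduce this bound.
    return 2 ** (n_units * timesteps)

for t in (1, 3, 10):
    s = unfolded_states(n_units, t)
    print(f"t={t}: {s:.3e} states, exceeds atom count: {s > atoms_in_universe}")
```

already at three timesteps the bound (2**300, about 2e90) is larger than the usual ~1e80 estimate for atoms in the observable universe, which is the "several universes to run this" point.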