I'm unconvinced by their claim that experience can be fully 'unfolded'; their argument only works for a single act of perception that can be modeled as a pure, terminating function. Once you start interacting with your environment, causal structure is reestablished.
Apart from the more useful formalism (it does put some bounds on IIT related to the amount of useful information exchanged between agent and environment), it looks like the Chinese Room argument to me.
You can of course unfold the network over multiple percepts, but then you're effectively exponentiating its size with each additional percept (okay, that's the upper bound; pruning gets you somewhat less), and you quickly end up in "you need several universes to run this" territory.
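A toy back-of-the-envelope sketch of that blowup, under the (hypothetical, simplified) assumption that an interaction-free unfolded network must hard-wire a branch for every possible percept sequence, so its size scales like base_units * percepts^steps:

```python
def unfolded_size(base_units: int, percepts: int, steps: int) -> int:
    """Crude upper bound on unit count after unfolding `steps` rounds of
    interaction, each with `percepts` distinguishable inputs.

    This assumes no pruning: every percept sequence gets its own
    precomputed branch, hence the exponential in `steps`.
    """
    return base_units * percepts ** steps

# Even a tiny network (1000 units) with a modest percept alphabet (256)
# blows up after a handful of interaction steps:
for k in range(1, 6):
    print(k, unfolded_size(1000, 256, k))
```

With any pruning or shared structure the real number is smaller, but the point stands: the single-percept unfolding result doesn't compose cheaply across an ongoing agent-environment loop.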