20) This means that, no matter how large a validator is, it can only ever produce blocks for e.g. 5% of the ecosystem. That forces further decentralization of block production, and means that you need to get a more diverse set of validators on board with a timeline.
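The per-validator cap in 20) could be sketched roughly as below. This is only an illustration of the idea, not any real protocol's parameters; the 5% figure, the function names, and the round-robin assignment are all assumptions.

```python
from collections import Counter

# Illustrative cap: no single validator may produce blocks for more
# than MAX_SHARE of the slots, no matter how much stake it has.
MAX_SHARE = 0.05  # the "e.g. 5%" from the thread; purely illustrative

def slot_cap(total_slots: int, max_share: float = MAX_SHARE) -> int:
    """Maximum number of slots any one validator may be assigned."""
    return int(total_slots * max_share)

def assign_slots(validators: list[str], total_slots: int) -> list[str]:
    """Round-robin slot assignment that respects the per-validator cap.

    Forces decentralization: filling the schedule requires at least
    ceil(1 / MAX_SHARE) distinct validators.
    """
    cap = slot_cap(total_slots)
    assert len(validators) * cap >= total_slots, "not enough distinct validators"
    counts: Counter[str] = Counter()
    schedule: list[str] = []
    i = 0
    while len(schedule) < total_slots:
        v = validators[i % len(validators)]
        if counts[v] < cap:
            schedule.append(v)
            counts[v] += 1
        i += 1
    return schedule
```

With a 5% cap, a 100-slot schedule needs at least 20 distinct validators, which is exactly the "forces further decentralization" effect described above.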
21) There were some other ideas whose details I've forgotten -- "big blocks/small blocks", "some small validator that has some power" -- do you remember those? -- The other set of ideas was about storing state securely and efficiently.
The idea here is basically an anti-censorship tactic. Have two classes of block producers. The lower-performance class ("collectors") would just make batches of transactions; you could have many in parallel. The higher-perf class ("sequencers") would combine batches into blocks.
Only the sequencer would actually "process" txs and compute the state. The key rule is: a sequencer *must* include *all* batches that the collectors produced. The goal is that even if sequencers are highly centralized, as long as collectors are not, sequencers cannot censor.
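The inclusion rule above can be sketched as a validity check. This is a minimal hedged sketch, assuming batches are identified by their bytes; the function name and representation are mine, not from the thread.

```python
# Sketch of the anti-censorship rule: a sequencer's block is only
# valid if it contains every batch the collectors produced for that
# slot. Representation (raw bytes per batch) is an assumption.
def block_is_valid(block_batches: list[bytes], collector_batches: list[bytes]) -> bool:
    """Valid iff all collector batches appear in the block (any order)."""
    return set(collector_batches).issubset(set(block_batches))
```

The point is that a censoring sequencer cannot simply drop a collector's batch: doing so makes its whole block invalid, so censorship requires corrupting the (decentralized) collector set, not just the sequencer.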
I'm assuming that whatever mechanism selects who can propose each of the batches would also assign each proposer an index? Alternatively you could order by hash(sequencer_reveal, hash_of_batch), where `sequencer_reveal` is a randao-style hash that the sequencer can't control.
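The fallback ordering suggested above could look roughly like this. The hash choice (SHA-256) and function names are assumptions; the thread only specifies ordering by hash(sequencer_reveal, hash_of_batch) with a randao-style `sequencer_reveal`.

```python
import hashlib

def order_key(sequencer_reveal: bytes, batch: bytes) -> bytes:
    """Key = H(sequencer_reveal || H(batch)).

    Because sequencer_reveal is committed in advance (randao-style),
    neither the sequencer nor a collector can grind the ordering.
    """
    batch_hash = hashlib.sha256(batch).digest()
    return hashlib.sha256(sequencer_reveal + batch_hash).digest()

def order_batches(sequencer_reveal: bytes, batches: list[bytes]) -> list[bytes]:
    """Deterministic, unbiased ordering of collector batches."""
    return sorted(batches, key=lambda b: order_key(sequencer_reveal, b))
```

Any two parties with the same reveal and batch set compute the same order, so no explicit proposer index is strictly needed.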