16) Practically speaking, you lose the sense of each person validating their own transactions.
----------
Anyway, enough preamble. What could make a high throughput chain relatively more secure?
There were basically two types of suggestions Vitalik had.
21) There were some other ideas whose details I've forgotten -- "big blocks/small blocks", "some small validator that has some power"-- do you remember those?
--
The other set of ideas were about storing state securely and efficiently.
The idea here is basically an anti-censorship tactic. Have two classes of block producers. The lower-performance class ("collectors") would just make batches of transactions; you could have many in parallel. The higher-perf class ("sequencers") would combine batches into blocks.
Only the sequencer would actually "process" txs and compute the state. The key rule is: a sequencer *must* include *all* batches that the collectors produced. The goal is that even if sequencers are highly centralized, as long as collectors are not, sequencers cannot censor.
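The inclusion rule above can be sketched as a validity check: a sequencer's block is valid only if every collector batch for that slot appears in it. A minimal Python sketch, with all names (`Batch`, `validate_block`) hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Batch:
    """A batch of transactions produced by one collector (hypothetical structure)."""
    collector_id: int
    txs: tuple  # transaction payloads

def validate_block(block_batches, collector_batches):
    """The anti-censorship rule: the block is valid only if it contains
    every batch the collectors produced for this slot. A sequencer that
    drops any collector's batch produces an invalid block."""
    return set(collector_batches) <= set(block_batches)
```

Under this rule, censoring a transaction requires censoring its entire batch, which makes the block invalid as long as any honest collector included it.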
Ah that makes sense!
How would sequencers deal with conflicting txs from different collectors?
Just process them all in sequence, and if some transaction's dependencies were invalidated by an earlier transaction, treat that transaction as a no-op?
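This no-op semantics can be illustrated with a toy UTXO-style state: transactions are applied in order, and any tx whose inputs were already spent by an earlier tx simply does nothing. A sketch under those assumptions, with the data layout (`inputs`/`outputs` dicts) purely illustrative:

```python
def apply_in_sequence(txs, utxos):
    """Apply transactions in the given order against a set of unspent
    outputs. A tx whose inputs were consumed by an earlier tx in the
    sequence is skipped (a no-op) rather than failing the whole block."""
    applied = []
    for tx in txs:
        if all(i in utxos for i in tx["inputs"]):
            for i in tx["inputs"]:
                utxos.remove(i)          # spend the inputs
            utxos.update(tx["outputs"])  # create the new outputs
            applied.append(tx)
        # else: dependencies invalidated by an earlier tx -> no-op
    return applied, utxos
```

For example, two txs spending the same output both get included in the block, but only the first one in sequence takes effect.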
I'm assuming that whatever mechanism selects who can propose each of the batches would also assign each proposer an index? Alternatively you could order by hash(sequencer_reveal, hash_of_batch), where `sequencer_reveal` is a randao-style hash that the sequencer can't control.
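The alternative ordering rule is easy to sketch: sort batches by `hash(sequencer_reveal, hash_of_batch)`, so the order is deterministic given the reveal but not controllable by any single collector. A minimal sketch using SHA-256 (the concrete hash and function names are assumptions, not from the thread):

```python
import hashlib

def order_key(sequencer_reveal: bytes, batch: bytes) -> bytes:
    """Sort key for one batch: hash(sequencer_reveal || hash_of_batch).
    `sequencer_reveal` stands in for a randao-style value the sequencer
    commits to in advance and cannot grind."""
    batch_hash = hashlib.sha256(batch).digest()
    return hashlib.sha256(sequencer_reveal + batch_hash).digest()

def order_batches(sequencer_reveal: bytes, batches):
    """Deterministically order batches; independent of submission order."""
    return sorted(batches, key=lambda b: order_key(sequencer_reveal, b))
```

Because the key mixes in the reveal, a collector cannot choose its batch contents to land at a favorable position without knowing the reveal in advance.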


