Conversation

CPU/mem is not the problem here; as mentioned, on a single-machine testnet it would probably all work fine. But imagine shooting 10M transaction confirmations around to hundreds of nodes every 400ms and synchronizing them all 🤯 That's why L2 solutions are popping up.
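As a back-of-envelope sketch of why that fan-out is the pinch point (the 64-byte message size and 200-peer naive broadcast are assumptions for illustration, not figures from this conversation):

```python
# Rough per-node egress if every confirmation were naively broadcast to
# every peer. Message size and peer count are assumed, not from the thread.
confirmations = 10_000_000   # confirmations per slot (from the comment above)
slot_seconds = 0.4           # 400 ms slots (from the comment above)
msg_bytes = 64               # assumed size of one confirmation message
peers = 200                  # assumed "hundreds of nodes"

egress = confirmations * msg_bytes * peers / slot_seconds
print(f"~{egress / 1e9:.0f} GB/s of egress per node")  # ~320 GB/s
```

Even if signature aggregation or a broadcast tree cut that by a couple of orders of magnitude, the per-node network load would still be far beyond commodity links.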
Don’t bet against hardware :), it’s an exponential. Ending up on the wrong side of that curve is a death sentence. A $500 PlayStation has the specs to handle 1 Gbps / 500k TPS. I suspect that steady-state load is going to be low and dynamic burst capacity will be more important.
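As a unit check on that figure (assuming, optimistically, that the whole link is spent on transaction data with no gossip or repair overhead):

```python
# 1 Gbps spread over 500k TPS leaves a fixed byte budget per transaction.
link_bits_per_sec = 1e9    # 1 Gbps (from the comment above)
tps = 500_000              # 500k TPS (from the comment above)

bytes_per_tx = link_bits_per_sec / 8 / tps
print(f"~{bytes_per_tx:.0f} bytes of budget per transaction")  # ~250 bytes
```

Roughly 250 bytes is in the ballpark of a simple transfer transaction, so the claim is at least dimensionally plausible, though it leaves little headroom for anything larger.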
I think the underappreciated fact is that hardware is fungible. Both network and CPU/mem can be scaled on demand. Validators only need to pay the minimum cost to run a node at steady state, while pricing for users can be based on maximum scaled capacity.
I am betting that the “most important use case” is a price discovery engine with the largest, cheapest, fastest single-sharded state. L2s would need to settle to L1, which would create arbs and risk. Imho, scaling with hardware is elegant, since it requires no code/parameter changes.