CPU/Mem is not the problem here; as mentioned, on a single-machine testnet it would probably all work fine.
But imagine shooting 10M transaction confirmations around to hundreds of nodes every 400ms and synchronizing them all 🤯
That's why L2 solutions are popping up.
10M tps is 20 Gbps, which is theoretically doable since PCIe 4 bandwidth is 1 Tbps.
My guess is that in less than 10 years, 20 Gbps will be as common for consumers as 1 Gbps is today.
At that bandwidth, CPU and memory are a challenge. The runtime needs to be as stateless as possible. This is why we are using eBPF as the bytecode, which has been commercially proven at 40 Gbps / 60M packets per second.
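A quick back-of-envelope check of the figures quoted above (a minimal sketch; the ~250-byte average transaction size is an assumption that makes the quoted numbers line up, not something stated in the thread):

```python
# Back-of-envelope check of the bandwidth figures in this thread.
# Assumption: an average transaction is ~250 bytes on the wire.

TX_BYTES = 250          # assumed average transaction size (bytes)
TPS = 10_000_000        # 10M transactions per second

required_gbps = TPS * TX_BYTES * 8 / 1e9
print(f"{TPS:,} tps at {TX_BYTES} B/tx ≈ {required_gbps:.0f} Gbps")  # ≈ 20 Gbps

# The commercial eBPF datapoint (40 Gbps at 60M packets per second)
# works out to roughly 83 bytes per packet.
ebpf_gbps, ebpf_pps = 40, 60_000_000
bytes_per_packet = ebpf_gbps * 1e9 / 8 / ebpf_pps
print(f"{ebpf_gbps} Gbps / {ebpf_pps:,} pps ≈ {bytes_per_packet:.0f} B/packet")
```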
In 10 years we might see that.
But it would mean that every single node / participant in the network has to have outstanding network speed and resources.
To this day, in most of the world 1 Gb/s is still a dream.
Isn't it better to avoid having L1 handle it all?
Don't bet against hardware :), it's an exponential. Ending up on the wrong side of that curve is a death sentence. A $500 PlayStation has the specs to handle 1 Gbps / 500k tps.
I suspect that steady-state load is going to be low and dynamic burst capacity will be more important.
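The PlayStation figure implies the same per-transaction budget as the 10M tps / 20 Gbps figure above; a minimal consistency check (again assuming the per-transaction size is whatever makes the quoted numbers line up):

```python
# The "1 Gbps / 500k tps" figure implies roughly the same per-transaction
# budget as 10M tps / 20 Gbps: about 250 bytes per transaction.

gbps, tps = 1, 500_000
bytes_per_tx = gbps * 1e9 / 8 / tps
print(f"{gbps} Gbps / {tps:,} tps ≈ {bytes_per_tx:.0f} bytes per transaction")  # ≈ 250 B

# So 10M tps is a 20x step up in both bandwidth and transaction rate,
# not a change in per-transaction cost.
```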
This is similar to the block size debate in Bitcoin.
Of course in 10 years you could have a 100 GB block, but why would you do that if you can scale more elegantly?
The same applies here.
You might be able to scale by throwing tons of hardware at it, but there are better ways :-)
I think the under-appreciated fact is that hardware is fungible. Both network and CPU/mem can be scaled on demand. Validators only need to pay the minimum cost of running a node at steady state, while pricing for users can be based on maximum scaled capacity.
In the future you will be able to send the entire Netflix library in 1 second, but that's not the point.
Wouldn't an L2 approach make Solana much more efficient today (if you want to increase throughput even more), without increasing complexity or requirements?
standard.co.uk/tech/london-sc
I am betting that the "most important use case" is the largest, cheapest, fastest single-sharded-state price discovery engine. L2s would need to settle to L1, which would create arbs and risk.
IMHO, scaling with hardware is elegant, since it requires no code/parameter changes.
Won't change my mind on the elegance part, but I don't want to be the annoying guy in the room.
Elegance for me is: apps should rely on L2 solutions to segregate most of their data / noise from L1.
Hardware scaling: a simpler solution that works, but it isn't infinitely scalable.
So my sense here is that the right compromise is:
1) use the fastest L1
2) if you need more, use a personal shard of that L1


