I think 10M isn't totally out of the question. Solana is at 100k right now, and there's about a factor of 100 of moderately low-hanging fruit, I think? IDK for sure; @aeyakovenko / @seb_alameda would know more
Replying to @SBF_FTX @SBF_Alameda
The problem with reaching 10M txs per second is more on the network side. Maybe on a local machine, but on a geographically distributed network? You could create 100 shards with inter-shard sync, but then the chatter across shards might be too high with a block time < 1s. Interesting stuff
Replying to @paoloardoino @DecentralStn
Oh yeah, you def can't get a single tx to have a latency of < 1/1M s, but you can run many in parallel without sharding, using e.g. memory allocation (what Solana does)
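For context on what "many in parallel without sharding" looks like in practice: Solana's runtime executes transactions concurrently when their declared account access sets don't collide. Below is a minimal sketch of that scheduling idea in Rust; the types and greedy batching are illustrative assumptions, not Solana's actual implementation.

```rust
use std::collections::HashSet;

// Hypothetical sketch: each transaction declares up front which account keys
// it reads and writes; transactions whose access sets don't collide can run
// on separate cores within the same block.
struct Tx {
    reads: HashSet<u64>,  // account keys this tx reads
    writes: HashSet<u64>, // account keys this tx writes
}

// Two txs conflict if either one writes an account the other touches.
fn conflicts(a: &Tx, b: &Tx) -> bool {
    a.writes.iter().any(|k| b.writes.contains(k) || b.reads.contains(k))
        || b.writes.iter().any(|k| a.reads.contains(k))
}

// Greedily pack txs into batches whose members are pairwise conflict-free;
// each batch can then be executed across all cores in parallel.
fn schedule(txs: Vec<Tx>) -> Vec<Vec<Tx>> {
    let mut batches: Vec<Vec<Tx>> = Vec::new();
    for tx in txs {
        match batches
            .iter_mut()
            .find(|b| b.iter().all(|t| !conflicts(t, &tx)))
        {
            Some(batch) => batch.push(tx),
            None => batches.push(vec![tx]),
        }
    }
    batches
}
```

Because every tx declares its reads and writes up front, each batch is conflict-free by construction, so no locking is needed at execution time.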
Replying to @SBF_FTX @SBF_Alameda
CPU/mem is not the problem here; as mentioned, on a single-machine testnet it would probably all work fine. But imagine shooting 10M transaction confirmations around to hundreds of nodes every 400ms and synchronizing them all.
That's why L2 solutions are popping up.
Replying to @paoloardoino @SBF_Alameda
10M TPS is 20 Gbps, which is theoretically doable since PCIe 4 bandwidth is 1 Tbps. My guess is that less than 10 years from now, 20 Gbps will be as common for consumers as 1 Gbps is.
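Quick sanity check on that arithmetic: 20 Gbps falls out of 10M TPS if you assume roughly 250 bytes per transaction on the wire. The thread never states a tx size, so that figure is an assumption; a minimal sketch in Rust:

```rust
fn main() {
    // Assumption: ~250 bytes per serialized transaction (not stated in the thread).
    let tps: u64 = 10_000_000; // 10M transactions per second
    let tx_bytes: u64 = 250;   // assumed average wire size per tx
    let gbps = (tps * tx_bytes * 8) as f64 / 1e9;
    println!("{} tps x {} B/tx = {} Gbps", tps, tx_bytes, gbps); // 20 Gbps
}
```

The same ~250 B/tx assumption also reproduces the 1 Gbps / 500k TPS PlayStation figure quoted later in the thread.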
Replying to @aeyakovenko @paoloardoino
At that bandwidth, CPU and memory are a challenge. The runtime needs to be as stateless as possible. This is why we are using eBPF as the bytecode, which has been commercially proven at 40 Gbps / 60M packets per second. pic.twitter.com/v7Kd7qxxUZ
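To make the "stateless runtime" point concrete: a Solana program compiled to BPF bytecode receives everything it may touch (program id, accounts, instruction data) as explicit inputs on each invocation, rather than reaching into ambient global state. A minimal sketch using the present-day solana_program crate (this thread predates that exact API, so treat it as illustrative):

```rust
use solana_program::{
    account_info::AccountInfo, entrypoint, entrypoint::ProgramResult, msg, pubkey::Pubkey,
};

// Register the handler as the BPF entrypoint for this program.
entrypoint!(process_instruction);

// All state the program may read or write arrives as arguments; nothing is
// fetched from global state. That is what keeps the runtime stateless and
// lets the scheduler run non-conflicting invocations in parallel.
fn process_instruction(
    _program_id: &Pubkey,
    accounts: &[AccountInfo],
    _instruction_data: &[u8],
) -> ProgramResult {
    msg!("touched {} accounts", accounts.len());
    Ok(())
}
```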
Replying to @aeyakovenko @SBF_Alameda
In 10 years we might see that. But it would mean that every single node / participant in the network has to have outstanding network speed and resources. Even today, in most of the world 1 Gb/s is still a dream. Isn't it better to avoid having L1 handle it all?
Replying to @paoloardoino @SBF_Alameda
Don't bet against hardware :), it's an exponential. Ending up on the wrong side of that curve is a death sentence. A $500 PlayStation has the specs to handle 1 Gbps / 500k TPS. I suspect that steady-state load is going to be low and dynamic burst capacity will be more important.
Replying to @aeyakovenko @SBF_Alameda
This is similar to the block size debate in Bitcoin. Of course in 10 years you could have a 100 GB block, but why would you do that if you can scale more elegantly? The same applies here. You might be able to scale by throwing tons of hardware at it, but there are better ways :-)
Replying to @paoloardoino @SBF_Alameda
I think the underappreciated fact is that hardware is fungible. Both network and CPU/mem can be scaled on demand. Validators only need to pay the minimum cost to run a node at steady state, but pricing for users can be based on maximum scaled capacity.
In the future you will be able to send the entire Netflix library in 1 second, but that's not the point. Wouldn't an L2 approach make Solana much more efficient today (if you want to increase throughput even more), without increasing complexity or requirements? https://www.standard.co.uk/tech/london-scientists-build-ultra-broadband-a4524801.html
Replying to @paoloardoino @SBF_Alameda
I am betting that the "most important use case" is the largest/cheapest/fastest single-sharded-state price discovery engine. L2s would need to settle to L1, which would create arbs and risk. IMHO, scaling with hardware is elegant, since it requires no code/parameter changes
Replying to @aeyakovenko @SBF_Alameda
Won't change my mind on the elegance part, but I don't want to be the annoying guy in the room. Elegance for me is: apps should really rely on L2 solutions to segregate most of their data / noise from L1. Hardware scaling is the simpler solution, and it works, but it's not infinitely scalable.