Why do proof of work systems ramp up the difficulty of hash solving while keeping latency constant, instead of keeping difficulty constant while nodes compete to reduce latency? The latter seems more useful, and not inflationary.
-
Replying to @TimSweeneyEpic
The reason is that the only point of proof of work _is_ to keep latency high. You don't actually need it for anything else. The entire point of PoW is just to have there be a single value you can check before validating a transaction block. It's DDoS protection.
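To make the "single value you can check" concrete, here is a minimal Python sketch, assuming SHA-256 and a leading-zero-bits target (Bitcoin actually uses double SHA-256 against a compact-encoded target, and the `mine` helper here is purely illustrative):

```python
# Sketch of a proof-of-work check, assuming a simplified
# leading-zero-bits difficulty target rather than Bitcoin's real
# compact-encoded target.
import hashlib

def meets_difficulty(block_header: bytes, difficulty_bits: int) -> bool:
    """Return True if the header hash has at least `difficulty_bits`
    leading zero bits. This one comparison is the DDoS gate: a node can
    reject a bogus block after a single hash, before doing any
    per-transaction validation."""
    digest = hashlib.sha256(block_header).digest()
    value = int.from_bytes(digest, "big")
    return value < (1 << (256 - difficulty_bits))

def mine(header_prefix: bytes, difficulty_bits: int) -> int:
    """Brute-force a nonce; expected work doubles with each added bit."""
    nonce = 0
    while not meets_difficulty(header_prefix + nonce.to_bytes(8, "big"),
                               difficulty_bits):
        nonce += 1
    return nonce
```

The point is the asymmetry: producing a valid header costs on the order of 2^difficulty_bits hash attempts, but rejecting an invalid one costs a single hash.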
-
Replying to @cmuratori @TimSweeneyEpic
This is but one of the reasons "blockchain" is not a particularly good idea. People want low-latency, high-volume transactions, but the designs of these systems preclude that possibility entirely. They are, by design, not able to do the thing you wanted them to do.
-
Replying to @cmuratori @TimSweeneyEpic
Throughput isn't a problem if block size scales (on BTC, notably, it does not). Latency remains, but various chains have addressed it with e.g. opt-in 0-confirmation transactions, where the payee accepts the double-spend risk.
-
Replying to @moistgibs @TimSweeneyEpic
It's unclear what "block size scales" means here, though. Block sizes can't be scaled arbitrarily, because blocks are universally replicated state: every full node stores every block. Visa's transaction volume would crush most nodes on the Bitcoin network just for storage.
-
Replying to @cmuratori @TimSweeneyEpic
For example, BTC blocks are limited to 1MB, BCH to 32MB. BSV blocks are variable and uncapped, and blocks >1GB have been mined. Throughput scales proportionally. This has storage implications for nodes, of course; large-block advocates contend that storage is cheap.
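A back-of-the-envelope calculation shows what those caps imply, assuming ~10-minute blocks and ~250 bytes per average transaction (both rough figures, not from the thread):

```python
# Throughput and storage implied by each block-size cap, assuming
# ~10-minute blocks and ~250-byte average transactions (rough assumptions).
BLOCK_INTERVAL_SEC = 600
AVG_TX_BYTES = 250
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for name, block_bytes in [("BTC", 1 << 20),    # 1 MB cap
                          ("BCH", 32 << 20),   # 32 MB cap
                          ("BSV", 1 << 30)]:   # >1 GB blocks have been mined
    tx_per_sec = block_bytes / AVG_TX_BYTES / BLOCK_INTERVAL_SEC
    tb_per_year = block_bytes / BLOCK_INTERVAL_SEC * SECONDS_PER_YEAR / 1e12
    print(f"{name}: ~{tx_per_sec:,.0f} tx/sec, ~{tb_per_year:.2f} TB/year if full")
```

Under those assumptions: BTC manages ~7 tx/sec and ~0.06 TB/year; BCH ~224 tx/sec and ~1.8 TB/year; 1GB blocks ~7,000 tx/sec but ~56 TB/year of universally replicated state, which is where the storage objection bites.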
-
Replying to @moistgibs @TimSweeneyEpic
That is kind of obviously false, as is well covered in the original Lightning Network paper.
pic.twitter.com/LYRhULDJqS
-
Replying to @cmuratori @TimSweeneyEpic
Visa averages ~1.7K transactions/sec, according to a quick Google search. Yes, storage is still a problem, but divide the TB/year by 25 or so. I'd expect some innovation if/when storage becomes a bottleneck. The LN whitepaper may not be the best citation source; some would call it small-blocker propaganda.
-
Replying to @moistgibs @TimSweeneyEpic
In theory, if that rate doesn't increase and storage density does, in ten years or so it might be feasible to consider (e.g., one 16TB drive per year of transactions would be unwieldy right now; in ten years that may not be unwieldy, because a single drive might store 64TB).
-
But again, not really a very compelling argument for adopting this technology. It's very inefficient either way, and this is just storage, to say nothing of how you actually go about efficiently validating transactions over a 16TB-per-year backing store.
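For reference, the "one 16TB drive per year" figure above is roughly consistent with Visa-scale volume. A quick sanity check, assuming the ~1.7K tx/sec rate quoted upthread and ~250 bytes per transaction (an assumed average, not a figure from the thread):

```python
# Sanity check on "one 16TB drive per year" at Visa-scale volume,
# assuming ~1.7K tx/sec (quoted upthread) and ~250-byte transactions.
TX_PER_SEC = 1_700
AVG_TX_BYTES = 250
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

bytes_per_year = TX_PER_SEC * AVG_TX_BYTES * SECONDS_PER_YEAR
print(f"~{bytes_per_year / 1e12:.1f} TB/year")  # ~13.4 TB/year: roughly one 16TB drive
```

And every fully validating node would have to replicate and index all of it, which is the validation cost the last tweet points at.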
-