Why do proof of work systems ramp up the difficulty of hash solving while keeping latency constant, instead of keeping difficulty constant while nodes compete to reduce latency? The latter seems more useful, and not inflationary.
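For context, holding latency constant is exactly what Bitcoin's retargeting rule does: every 2016 blocks the difficulty is rescaled so the average interval stays near ten minutes no matter how much hashrate joins. A minimal sketch of that adjustment (the constants mirror Bitcoin's; the float arithmetic is a simplification of the consensus rule):

```python
TARGET_INTERVAL = 600    # seconds the protocol aims for between blocks
RETARGET_WINDOW = 2016   # blocks between difficulty adjustments

def retarget(old_difficulty: float, actual_window_seconds: float) -> float:
    """Scale difficulty so the next window averages TARGET_INTERVAL per block."""
    expected = TARGET_INTERVAL * RETARGET_WINDOW
    # Hashrate doubled -> window finished in half the time -> difficulty doubles.
    return old_difficulty * expected / actual_window_seconds

print(retarget(1.0, 600 * 2016 / 2))  # -> 2.0
```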
-
The size of the block does not limit transaction rate. That's like saying the size of UDP packets limits your bandwidth. The chunk size would not matter if you didn't have PoW: you'd just send as many blocks as you wanted, as fast as you wanted. PoW is the rate limiter, period.
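A minimal sketch of that point: finding a block means grinding nonces until a hash falls below the target, and the expected number of attempts depends only on the target, not on the payload size (the `mine` helper and the toy target here are illustrative, not any real chain's parameters):

```python
import hashlib

def mine(payload: bytes, target: int) -> int:
    """Grind nonces until sha256(payload || nonce) falls below target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(payload + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

# Same target -> same expected work (~2^16 hashes here), whether the
# "block" payload is ten bytes or ten megabytes. PoW is the rate limiter.
target = 2 ** 240
print(mine(b"tiny block", target))
```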
-
(and more precisely, "the block order" is the wrong phrase to use. Block order is always rigorously determined, because each block includes the previous block's hash. It's block _primacy_ that is arbitrated by longest-chain-wins.)
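A sketch of that distinction (a toy `Block` type, not any real chain's format): each block commits to its parent's hash, so the order within any branch is pinned cryptographically; a fork creates competing tips, and consensus only arbitrates which tip is primary.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Block:
    prev_hash: str  # commitment to the parent: this is what pins the order
    payload: str

    def hash(self) -> str:
        return hashlib.sha256((self.prev_hash + self.payload).encode()).hexdigest()

genesis = Block(prev_hash="0" * 64, payload="genesis")
a = Block(prev_hash=genesis.hash(), payload="tx set A")  # one tip
b = Block(prev_hash=genesis.hash(), payload="tx set B")  # a competing tip
# Within either branch the order genesis -> child is unambiguous; the fork
# is a dispute over primacy (which tip gets extended), not over order.
```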
End of conversation
New conversation -
-
Longest chain is defined by accumulated work, not by number of blocks.
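A sketch of what "accumulated work" means concretely, using the standard per-block work estimate of 2^256 / (target + 1) expected hashes (the targets below are hypothetical):

```python
def block_work(target: int) -> int:
    """Expected number of hashes needed to find a block at this target."""
    return 2 ** 256 // (target + 1)

def chain_work(targets: list[int]) -> int:
    return sum(block_work(t) for t in targets)

easy, hard = 2 ** 248, 2 ** 240   # lower target = harder block
ten_easy_blocks = [easy] * 10     # ~10 * 2^8  = 2,560 units of work
two_hard_blocks = [hard] * 2      # ~2 * 2^16  = 131,072 units of work
# The chain with fewer blocks wins: two hard blocks outweigh ten easy ones.
assert chain_work(two_hard_blocks) > chain_work(ten_easy_blocks)
```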
-
That's a nonsensical statement. Think whatever you want, I don't care. The paper is unambiguous about these things.
-
New conversation -
-
Hi Casey! Yes, the longest chain wins, but forks should be rare enough that the network converges eventually. With a block time of a few seconds you would have multiple versions of the network coexisting permanently. It might be interesting for you to check out Proof of History.
-
It allows sub-second block times by introducing a concept of time flow to a decentralized system; with that, it becomes possible to explicitly synchronize nodes so that only a single node per time slot can submit a block.
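A minimal sketch of the core PoH mechanism as described here (Proof of History is Solana's design; the slot-leader machinery built on top of it is not shown): a single sequential SHA-256 chain acts as a clock, because producing tick N requires N non-parallelizable hashes, and mixing an event into the chain provably places it before the ticks that follow.

```python
import hashlib

def tick(state: bytes, n: int) -> bytes:
    """Advance the hash clock by n sequential (non-parallelizable) steps."""
    for _ in range(n):
        state = hashlib.sha256(state).digest()
    return state

state = hashlib.sha256(b"genesis").digest()
state = tick(state, 10_000)                        # time demonstrably passed
state = hashlib.sha256(state + b"event").digest()  # stamp an event into the flow
state = tick(state, 10_000)
# Any verifier can replay the chain and confirm the event's position in "time",
# which is what lets nodes agree on time slots without exchanging clocks.
```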
End of conversation