(This post is part of the Understanding GHOSTDAG series)
In this chapter, we introduce some blockDAG protocols and discuss their properties, focusing on their security.
The first two examples will follow from “DAGifying” existing chain selection rules. In the previous post, we explained in detail how an ordering rule is induced from a chain selection rule. The gist of the matter is that given two blocks X and Y, we define the ordering such that X precedes Y if there is a chain going through X which is “better” (in the sense furnished by the chain selection rule) than all chains going through Y.
DAGified Heaviest Chain Rule
The DAGified HCR ordering rule prefers X over Y if there is a chain running through X that is heavier than all chains running through Y.
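To make the rule concrete, here is a minimal Python sketch. All names are illustrative, and blocks are assumed to have unit weight, so that “heaviest” coincides with “longest”:

```python
from functools import lru_cache

def dagified_hcr_prefers(parents, x, y):
    """Sketch of the DAGified Heaviest Chain Rule with unit block
    weights. `parents` maps each block to the set of blocks it
    points at; these names are hypothetical, not from any
    reference implementation."""
    # build the reverse relation: block -> blocks pointing at it
    children = {b: set() for b in parents}
    for b, ps in parents.items():
        for p in ps:
            children[p].add(b)

    @lru_cache(maxsize=None)
    def down(b):  # longest chain from genesis up to and including b
        return 1 + max((down(p) for p in parents[b]), default=0)

    @lru_cache(maxsize=None)
    def up(b):    # longest chain from b (inclusive) down to some tip
        return 1 + max((up(c) for c in children[b]), default=0)

    # longest chain passing through a block, counting the block once
    def through(b):
        return down(b) + up(b) - 1

    return through(x) > through(y)  # x precedes y iff True
```

For example, in a DAG where a three-block chain hangs off block A while block D is a lone tip, the rule prefers A over D.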
As it turns out, the resulting rule does not have the safety property.
Say that at some point the DAG looks like this:
One can see that the longest chain goes through X, while all chains going through Y are shorter. Hence X precedes Y.
Now we wait some time to see if this ordering stabilizes. At some point in the future, the DAG might look like this:
It is easy to verify that the new blocks do not change the ordering between X and Y. One way to see it is to note that they do not create new tips that are parallel to the original (dark blue) tips. Since there isn’t an original tip that has only one of the blocks X and Y in its past, the chains going through X and Y are extended the same way. One can easily verify that the longest chain going through X now has nine blocks, while the longest chain going through Y has only eight.
Now say an attacker posts a few more blocks:
Note that the attacker created far fewer blocks than the honest network, yet now the longest chain going through Y is longer than any chain going through X! This proves that “DAGified HCR” is not safe.
So what went wrong? The heaviest chain rule does not allow all blocks in the network to contribute to the security. Even though it provides a way to consistently incorporate the entire DAG, the security only accumulates as fast as the heaviest chain. This means that work done in parallel still doesn’t count towards security even if technically there are “no orphans”. Indeed, a good generalization of HCR will have to somehow extend it to account for the work of parallel blocks. Remember this.
DAGified GHOST (Baby Tangle 1.0)
DAGifying GHOST furnishes the following ordering rule: put X before Y if X has a heavier future. There is also a probabilistic version: choose randomly between X and Y with probability proportional to their future sizes. (The Tangle 1.0 protocol is essentially the latter variation with two added constraints: blocks contain only one transaction, and have at most two parents.) For simplicity we will focus on the deterministic version, but the discussion applies to both.
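The deterministic rule is simple enough to sketch in a few lines of Python. The `future` helper and all names here are illustrative, assuming the same block-to-parents representation as before:

```python
def future(parents, b):
    """All blocks that have b in their past (b excluded).
    `parents` maps each block to the set of blocks it points at."""
    children = {blk: set() for blk in parents}
    for blk, ps in parents.items():
        for p in ps:
            children[p].add(blk)
    seen, stack = set(), [b]
    while stack:
        cur = stack.pop()
        for c in children[cur]:
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return seen

def dagified_ghost_prefers(parents, x, y):
    # x precedes y iff x has the larger (with unit weights: heavier) future
    return len(future(parents, x)) > len(future(parents, y))
```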
One can prove that DAGified GHOST has safety, which is already an improvement over DAGified HCR. However, it does not have liveness.
Consider the following blockDAG:
One can see that the future of X is larger than the future of Y, so the GHOST rule would prefer X.
Since it has been some time since X and Y were created, it is very likely that all future honest blocks will have both of them in their past, so after a while the blockDAG might look like this:
One can now recount the futures of X and Y, but this check is unnecessary. Since all new blocks are in the future of both X and Y, we know that the difference between their future sizes didn’t change. In fact, as time progresses, the probability that an honest block has exactly one of the blocks X and Y in its past goes to zero very fast. Hence, the difference remains constant (in this case, exactly one). We can write this as |future(X)| = |future(Y)| + 1, and note that as long as this is the case, the GHOST rule will always prefer X over Y.
However, the assertion that new blocks will have both X and Y in their past only holds for honest blocks. An adversary could create blocks as follows:
These blocks increase the future of Y by two blocks while not contributing to the future of X. Soon enough, the network will converge to a situation where all new blocks have the two attacker blocks in their past as well:
So the consensus will now suddenly agree that Y has the heavier future, and that Y is actually the preceding block. At least until the attacker posts two more blocks:
Thus, by posting only two blocks at a time, an attacker can indefinitely cause the consensus to flip between preferring X and preferring Y.
The upshot is that if the adversary waits sufficiently long, they can no longer revert the transaction, making the convergence time finite, establishing safety. However, they could engage in the balancing act above, making it arbitrarily long, violating liveness!
So what went wrong? The problem is that shortly after X and Y are created, all miners are aware of both, so all future blocks will have both in their past. In other words, as long as it is not manipulated by malicious actors, the difference between the future sizes remains fixed. This is bad, because this difference measures the cost of an attack. In contrast, in the chain setting, each new block can be in the future of at most one of the two blocks. This means that the longer the network prefers X, the more advantage it gets over Y. The difference grows with each block, making the cost of an attack increase. By allowing (and actually incentivizing) miners to point at all tips, we actually excuse them from making a decision. Only the first few blocks above X and Y weigh in on the decision, while the rest abstain, making it that much easier to overturn retroactively.
SPECTRE
One can take the intuition of blocks “weighing in” — i.e. voting — on the decision quite a bit further.
The idea is that instead of counting how many blocks are in the future of X, we can ask how many blocks prefer X.
But what does it mean to “prefer” X? The intuition is that a block would “prefer” X if, from its point of view, there are more blocks that prefer X than there are blocks that prefer Y. This definition is recursive: we define the preference of a block based on the preferences of blocks in its past. There is only one exception: a block cannot prefer a block that is not in its past (it only makes sense that you cannot vote for an option you don’t know exists). This provides us with a basis for the recursion as follows:
If X is in the past of a block B but Y is not, then B prefers X.
If both X and Y are in the past of B, then B prefers X if there are more blocks in the past of B that prefer X than there are blocks that prefer Y.
(Define a tie-breaking rule for scenarios where X and Y have the same number of votes.)
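The recursive voting rule above can be sketched as follows. This is only an illustration of the simplified rule in this section: blocks that have neither candidate in their past abstain (unlike in full SPECTRE, where they also vote), and the tie-breaker is an arbitrary lexicographic choice. All names are hypothetical:

```python
def past(parents, b):
    """All blocks reachable from b through parent pointers (b excluded)."""
    seen, stack = set(), [b]
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen:
                seen.add(p)
                stack.append(p)
    return seen

def vote(parents, b, x, y):
    """Simplified SPECTRE-style vote of block b between x and y.
    Returns x, y, or None (abstain). Not memoized, so this is
    exponential; a real implementation would cache votes."""
    p = past(parents, b)
    if x in p and y not in p:
        return x          # b only knows x, so it votes x
    if y in p and x not in p:
        return y          # b only knows y, so it votes y
    if x not in p and y not in p:
        return None       # b knows neither option and abstains here
    # b knows both: vote with the majority of blocks in its past
    votes = [vote(parents, c, x, y) for c in p]
    vx, vy = votes.count(x), votes.count(y)
    if vx != vy:
        return x if vx > vy else y
    return min(x, y)      # arbitrary deterministic tie-breaker

def spectre_prefers(parents, x, y):
    # x precedes y iff more blocks vote for x than for y
    votes = [vote(parents, b, x, y) for b in parents]
    return votes.count(x) > votes.count(y)
```

For instance, if one block sees only X while every later block sees both, the early vote for X propagates through the majority rule.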
This rule is quite close to the first provably secure blockDAG algorithm in the literature: the SPECTRE algorithm, published in 2016 by Yonatan Sompolinsky, Yoad Lewenberg and Aviv Zohar. However, the protocol as we defined it is still not quite secure. While not susceptible to the safety and liveness attacks we have seen, it is still susceptible to premining attacks: if an attacker has enough time to prepare in advance, she could increase her chances to double-spend in the future. This could be solved by extending the rules above to all blocks in the DAG, including those not in the future of X or Y. This extension is not complicated, but it is outside the scope of this book. The full details are available in the paper.
To see the difference between DAGified GHOST and SPECTRE consider the following blockDAG:
One can verify that the future of X is larger than the future of Y, so DAGified GHOST would put X ahead of Y.
However, from the point of view of two of the blocks, one block prefers Y while no block prefers X, which causes both of them to prefer Y. Tallying all the votes, four blocks prefer Y and only three blocks prefer X, so SPECTRE prefers Y.
The subtle point with SPECTRE is that it does not actually output an ordering; it outputs something weaker than that: a pairwise preference. For any two blocks, it tells you which block it prefers. But it could be that it prefers X over Y, Y over Z, and Z over X (this is related to something called the Condorcet paradox). However, the validation rules guarantee that this could never happen if X, Y, and Z contain a contradiction.
The upshot is that it seems that SPECTRE is the “correct” way to DAGify GHOST. In the next chapter we will similarly see that GHOSTDAG is (despite the annoying name) the “correct” way to DAGify HCR.
Combining DAGs and Chains
One way to combine some of the benefits of DAGs with the simplicity of chains is to use DAGs for throughput but chains for confirmation.
For example, consider the following protocol (which is a simplified version of a protocol called Hathor): there are separate DAG blocks and chain blocks. The chain blocks are arranged in a chain, but every chain block must also point at a DAG block. Furthermore, if a chain block C points at a DAG block D, then any chain block in the future of C must point at a DAG block in the future of D. The resulting blockDAG could look like this:
but not like this:
Given the chain, it is easy to define a sorting rule. Any chain block inherits the topological sorting from its parent and extends it to include the new blocks. It doesn’t really matter how, as long as the result is topological. So, for example, the sorting from the point of view of a chain block starts with the sorting of the (closed) past of its parent chain block, as already determined by that parent, and appends the remaining blocks in a topological way. Incidentally, alphabetical order seems to work in this case. Hathor itself uses DAGified GHOST.
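A minimal sketch of this inheritance rule, assuming blocks are named strings and using Python’s standard `graphlib` for the topological sort (the function and parameter names are hypothetical):

```python
from graphlib import TopologicalSorter

def extend_ordering(parents, prev_order, new_blocks):
    """A chain block inherits its parent's ordering `prev_order`
    and appends the DAG blocks it newly covers, `new_blocks`, in
    some topological order. `parents` maps each block to the set
    of blocks it points at."""
    # restrict each new block's predecessors to the new blocks;
    # everything else is already ordered in prev_order
    ts = TopologicalSorter(
        {b: {p for p in parents[b] if p in new_blocks} for b in new_blocks}
    )
    return list(prev_order) + list(ts.static_order())
```

Any topological order of the new blocks works here; `static_order` simply picks one deterministically.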
It follows that if we are confident that a chain block will not be orphaned, we are equally confident that the topological ordering of the DAG below this block will not change either. Hence, all transactions confirmed in that part of the DAG will remain confirmed.
The obvious drawback of this approach is that while it manages to scale throughput, it suffers from confirmation times as slow as traditional blockchains. Other than that, it is also a bit unclear how to set the incentives of the network such that miners will follow the rules.
This protocol is the simplest form of combining a DAG and a chain, but not the only one. There are protocols that create a general DAG, but somehow sample “special” blocks out of this DAG. These blocks are chosen in a way that provides a chain, and they act to finalize the DAG blocks below them. This leads to more complex algorithms, but actually simplifies questions about incentive alignment. The two prominent examples are FruitChains and TailStorm.