
(This post is part of the Understanding GHOSTDAG series)

Safety and liveness are properties defined in terms of the asymptotics of confidence. They provide a qualitative, rather than quantitative, criterion. Safety and liveness are very important for a computer scientist, but establishing them does very little to help a merchant who wants to know how long to wait before accepting a transaction.

In this post, we will discuss how to concretely reason about confirmation times. We will use Bitcoin as a case study to provide enough depth for a more general discussion.

Counting Confirmations

The first conceptual shift we need to make is from measuring time to counting confirmations.

The term confirmation generally refers to a block that adds weight to a branch of the chain that includes the relevant transaction. However, when counting confirmations, we need to be careful not to count all the blocks that were created, but only those on the heaviest branch that includes the transaction.

For example, consider the following block tree:

[Figure: a block tree in which the block containing tx has two blocks above it, sitting on two separate branches]

Under HCR, the transaction tx only has one confirmation: despite having two blocks above it, there is no way to select a chain such that both blocks contribute to the branch's weight. In contrast, under GHOST, tx has two confirmations.
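To make this concrete, here is a toy Python sketch (mine, not from the post) that counts confirmations in the example tree under both rules, assuming unit block weights and counting only the blocks above the one containing tx:

```python
# A toy sketch (not from the post): counting confirmations of the block
# containing tx in the example tree, assuming unit block weights and that
# confirmations count only the blocks above the one containing tx.

def hcr_confirmations(children, block):
    """Number of blocks on the heaviest (here: longest) chain of descendants."""
    if not children.get(block):
        return 0
    return 1 + max(hcr_confirmations(children, child) for child in children[block])

def ghost_confirmations(children, block):
    """Total number of descendants, i.e. the weight of the subtree above the block."""
    return sum(1 + ghost_confirmations(children, child) for child in children.get(block, []))

# The block containing tx has two children sitting on separate branches.
children = {"tx_block": ["child_a", "child_b"], "child_a": [], "child_b": []}

print(hcr_confirmations(children, "tx_block"))    # 1
print(ghost_confirmations(children, "tx_block"))  # 2
```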

Counting confirmations makes sense because the number of confirmations is what actually determines confidence. The confidence a merchant has after four confirmations is the same, regardless of how long it took to create them (except in extreme situations where it took much too long, but the likelihood of this scenario is very small, even for a rather small number of confirmations).

After computing how many confirmations are required for a desired confidence, we can translate this into the expected time to achieve that confidence by simply multiplying by the block delay: if the confidence we want requires six Bitcoin blocks, the expected time for confirming the transaction is an hour. However, the actual time can vary wildly (see the appendix for an extended discussion of how wildly noisy block delays can get).

Notice, however, that this expected time is measured from the moment of first inclusion, and it assumes the selected chain has not changed since. This estimate disregards two important periods:

  • The time from when the payer broadcast the transaction until it is included in the blockchain.

  • The time from first inclusion until the transaction was added to the selected chain for the last time before it accumulated sufficiently many confirmations (in the starred section, we called this time $S(\alpha,\varepsilon)$).

The former is important, but it is part of a different discussion. In particular, note that this period is unaffected by $\alpha$ and $\varepsilon$. It is rather a function of the state of the network and the fee paid by the transaction. This will be discussed thoroughly in the third part of this book.

The latter also plays a role, but we will ignore it for now. In a nutshell, the current discussion is limited to the time it takes to achieve safety, not liveness.

Confirmation Times and Difficulty

When counting confirmations and not considering their weight, we are making the simplifying assumption that the hash rate is constant.

As we discussed, this simplification is excusable in the context of a security analysis, as we are collecting theoretical evidence of the security of our system. But in the current context, we are trying to provide hard guarantees to merchants, so how can we ignore this aspect?

The answer is that while difficulty does change, the changes in difficulty during the time it takes to confirm a transaction are typically negligible. For example, in Bitcoin the confirmation times are on the order of hours, while the difficulty only changes once every two weeks.

In a much later post, we will have a deep discussion about difficulty adjustment, and we will uncover that in the sliding window approach, where the difficulty is computed for each block separately, the difficulty tends to oscillate at a frequency related to the length of the difficulty window. In particular, if the length of the difficulty window is of the same order of magnitude as the confirmation times, this might cause problems. This was first noticed in Bitcoin Cash, encouraging the development of better averaging methods such as ASERT.

In Kaspa, this doesn't pose a problem, since the period of these fluctuations is much longer than the confirmation times, which justifies treating the difficulty as constant.

Confirmation Times in Bitcoin

Let $\varepsilon(C,\alpha)$ denote the confidence against an $\alpha$-attacker after $C$ confirmations, that is, the probability that the attacker eventually manages to revert the transaction. Finding an accurate formula for $\varepsilon(C,\alpha)$ in HCR is difficult, but not impossible. In The Mathematics of Bitcoin, Cyril Grunspan and Ricardo Pérez-Marco work out the terrifying expression

$$\varepsilon\left(C,\alpha\right)=\frac{\Gamma\left(C+\frac{1}{2}\right)}{\sqrt{\pi}\,\Gamma\left(C\right)}\int_{0}^{4\alpha\left(1-\alpha\right)}\frac{t^{C-1}}{\sqrt{1-t}}\,dt$$

where $\Gamma$ is the infamous Gamma function.

We are definitely not going to work with this monstrosity, I just included it here so you could appreciate it.
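For readers who do want to appreciate it numerically: the expression is just the regularized incomplete beta function $I_x(C,\tfrac{1}{2})$ evaluated at $x=4\alpha(1-\alpha)$, so it can be computed with SciPy. The following is a sketch of my own, not part of the post's analysis:

```python
# Evaluating the expression above via scipy.special.betainc, which computes
# the regularized incomplete beta function I_x(a, b); here a = C, b = 1/2 and
# x = 4 * alpha * (1 - alpha). The function name is my own.
from scipy.special import betainc

def exact_epsilon(confirmations: int, alpha: float) -> float:
    """The Grunspan/Perez-Marco expression for the revert probability."""
    return betainc(confirmations, 0.5, 4 * alpha * (1 - alpha))

print(exact_epsilon(6, 0.1))  # roughly 6e-4
```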

The complexity of this expression stems from accounting for subtle phenomena such as the discrete nature of block creation, the variance in block delays, orphan blocks, and so on. In a future post we will review how the security of Bitcoin is established, and conclude that for large values of $C$ we have the much simpler approximation

$$\varepsilon(C,\alpha) \sim \left(\frac{\alpha}{1-\alpha}\right)^C.$$

Before moving on, let us stop and appreciate the beautiful simplicity of this expression. We know $\varepsilon(C,\alpha)$ should satisfy some properties. For a fixed $C$, it should go to $0$ as $\alpha$ does, and to $1$ as $\alpha$ approaches $1/2$ from the left. For a fixed $\alpha$, it should decrease exponentially in $C$. The expression we wrote above is one of the simplest expressions satisfying all of these properties. Moreover, the ratio $\alpha/(1-\alpha)$ is very natural to consider, as it is the expected number of blocks the adversary creates for each block created by the honest network.
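To tie the approximation back to waiting times, here is a minimal sketch (the helper names are mine) that computes how many confirmations a target confidence requires and translates the answer into an expected waiting time using Bitcoin's ten-minute block delay:

```python
# A minimal sketch using the approximation epsilon(C, alpha) ~ (alpha/(1-alpha))^C:
# find the smallest C that pushes the revert probability below a target,
# then multiply by the block delay to get the expected waiting time.
import math

def confirmations_needed(alpha: float, target_epsilon: float) -> int:
    """Smallest C with (alpha / (1 - alpha))^C <= target_epsilon."""
    ratio = alpha / (1 - alpha)
    return math.ceil(math.log(target_epsilon) / math.log(ratio))

BLOCK_DELAY_MINUTES = 10  # Bitcoin's expected block delay

for alpha in (0.1, 0.3, 0.45):
    c = confirmations_needed(alpha, 0.001)  # 99.9% confidence
    print(f"alpha={alpha:.2f}: {c} confirmations, "
          f"expected wait ~{c * BLOCK_DELAY_MINUTES} minutes")
```

For a 0.1% revert probability this gives 4 confirmations against a 10% attacker, but 35 against a 45% attacker, which already hints at the discussion below.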

This is how this approximation (of $\varepsilon$ as a function of $C$) looks for several values of $\alpha$:

[Figure: plot of the approximation for several values of $\alpha$]

(This graph shows the complement, that is, the probability that the transaction does not revert.)

In particular, the confidence guarantees degrade quite fast for large $\alpha$. This should give some insight into the consequences of facing a large attacker. While waiting six blocks provides excellent security against 30% attackers, the security of even 10 blocks against a 45% attacker feels pretty low.
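To put numbers on this (using the approximation above): $(0.3/0.7)^6 \approx 0.6\%$, while $(0.45/0.55)^{10} \approx 13\%$.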

That’s a good rule of thumb to remember: people often say PoW is secure against “any less-than-half” attacker, but their security policies sometimes implicitly assume much smaller attackers.

Time Independence of Confidence

A crucial subtlety of the analysis above is that the formula for confidence does not include the block delay $\lambda$. In other words, it does not care how long it took for the confirmations to accumulate. This is a very important point: it means that by increasing the rate at which $C$ accumulates, without increasing $\alpha$ (which could happen if you, say, increase orphan rates), you can provide the same confidence faster.
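As a back-of-the-envelope illustration (not a claim about any particular protocol): if blocks were created once per second rather than once every ten minutes, and $\alpha$ stayed the same, the confidence provided by six Bitcoin confirmations would be reached in about six seconds instead of an hour.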

This is the true promise of DAGs, the ability to bring down confirmation times while retaining security.

However, as we shall see, things are not always so simple. The DAG structure affords the adversary some flexibility that can make $\varepsilon(C,\alpha)$ much less nice than we would like. The difficulty of dealing with this flexibility is the main reason many DAG protocols do not actually provide meaningful improvements in confirmation times.

In the next chapter, after introducing GHOSTDAG, we will isolate a “magic property” that provides great control over the attacker’s newfound flexibility, allowing GHOSTDAG to provide unprecedented confirmation speeds.

A Word About 51% Attacks

All of the analysis above, and in fact any possible security analysis, rests on the assumption of an honest majority. But what makes this assumption justified? Can we be secure against a dishonest majority?

For the latter question, note that being “secure against a dishonest majority” is equivalent to “letting a minority make the decision,” which directly contradicts consensus. In other words, a majority can’t be “dishonest”: it just does what most voters want.

There is no way to mathematically protect yourself from a dishonest majority. The best we can do is to make obtaining a majority intractable. This is achieved not through mathematical but through economic sophistication, which places the entire discussion outside the scope of this book; still, some words are due.

A simplistic approach to risk management that is way too common is the following: if your transaction is worth, say, 1000 USD, and the energy put into each block is worth 100 USD, then wait for 10 confirmations.

There are a lot of problems with this argument. First, it ignores all transactions besides your own. What if the total value of the block is ten million USD? Should you wait 100,000 blocks? What about adjacent blocks?

But there is actually a bigger problem: this approach only considers energy costs, and ignores the price and scarcity of the hardware itself. In other words, it only considers the operational expense (opex) and ignores the capital expense (capex).

Running unchecked with this logic can lead to some ridiculous conclusions. For example, some people argue that Bitcoin is less secure now than it was in 2013, because the ASICs used to mine it now require less energy than the GPUs used back then.

The reality of the matter is more nuanced.

For example, one could argue that coins minable using generic hardware are easier to attack, despite requiring more energy. Generic hardware is much more readily available for rent and purchase. More importantly, it does not lose its value if any specific chain is attacked. In contrast, it would be hard to rent/buy half of the entire market of ASICs for a given coin, and the coin might severely devalue following the attack, which could cause a huge capital loss for the adversary, despite the low energy costs of the attack.

This becomes sharper for coins with higher capex-to-opex ratio (such as coins using a very ASIC-friendly hash, making ASICs more energy efficient).

Those are all just rather soft points to think about. There is no easy way to compute the “confidence” against 51% attacks, and the only insights available are through careful inspection of the mining hardware market.
