
(This post is part of the Understanding GHOSTDAG series)

Before we can talk about the blockchain paradigm, we need to clear up a few points about blockchains. We first introduce graphs, a notion that will follow us throughout the book.

We also want to talk about validity of blocks: a part of the job of the network is to make sure the data does not contain nonsense or contradictions. This book largely assumes this is already resolved, and all blocks we see are OK, but we do need to take some time to clarify what “valid” roughly means and at what point of the flow this validation actually happens.

Finally, to streamline working with a probabilistic beast like PoW, we need to set up a bit of formal framework. We introduce the concept of a confidence parameter. This concept will mostly serve us in the next chapter, but it is better to introduce it early and let it ripen for a while.

Blockchains are Actually Trees

In math, we use the term graph to describe a set of vertices with arrows going between them, like this:

[Figure: a graph with vertices and arrows]

We are usually interested in less whacky graphs, where there aren’t two arrows with the same source and destination, or “loops” going from a vertex to itself. For example:

[Figure: a simple graph]

Such graphs are called simple graphs, but we will assume all graphs are simple and just call them graphs. In a simple graph we can say that a vertex A points at a vertex B if there is an arrow from A to B. We say that B is reachable from A if one can “get” from A to B by only following arrows. For example, in this graph B is reachable from A but not from A’:

[Figure: B is reachable from A but not from A’]
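Reachability is easy to sketch in code. Below is a minimal illustration (the graph and vertex names are hypothetical, chosen to mirror the example above), representing a simple graph as a map from each vertex to the vertices it points at:

```python
def reachable(graph, start, target):
    """Return True if `target` can be reached from `start` by following arrows."""
    stack, seen = [start], set()
    while stack:
        v = stack.pop()
        if v == target:
            return True
        if v in seen:
            continue  # already explored this vertex
        seen.add(v)
        stack.extend(graph.get(v, []))  # follow every outgoing arrow
    return False

# A small simple graph: each vertex maps to the vertices it points at.
graph = {"A": ["B"], "A'": ["C"], "B": [], "C": []}
print(reachable(graph, "A", "B"))   # prints True: B is reachable from A
print(reachable(graph, "A'", "B"))  # prints False: but not from A'
```

This is a plain depth-first traversal; any traversal order would answer the same question.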

The graphs that currently interest us are of a form called a tree. A tree is a graph with the following properties: it has a vertex called the root that is reachable from all other vertices, and every vertex other than the root points at exactly one other vertex. For example:

[Figure: a tree]

Vertices that have no other vertices pointing at them are called tips.
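The two tree properties, and the notion of a tip, can be sketched as follows (a toy check, assuming the root points at nothing, as the genesis block does; the vertex names are hypothetical):

```python
def is_tree(graph, root):
    """Check the tree properties: every non-root vertex points at exactly
    one vertex, and the root is reachable from all other vertices."""
    for v, targets in graph.items():
        if v == root:
            continue
        if len(targets) != 1:
            return False  # non-root vertex must point at exactly one vertex
        # Follow the unique arrows; we must hit the root without looping.
        seen, cur = set(), v
        while cur != root:
            if cur in seen or not graph.get(cur):
                return False  # cycle, or a dead end that is not the root
            seen.add(cur)
            cur = graph[cur][0]
    return True

def tips(graph):
    """Vertices that no other vertex points at."""
    pointed_at = {t for targets in graph.values() for t in targets}
    return {v for v in graph if v not in pointed_at}

# A small block tree: genesis is the root.
tree = {"genesis": [], "b1": ["genesis"], "b2": ["genesis"], "b3": ["b1"]}
print(is_tree(tree, "genesis"))  # prints True
print(sorted(tips(tree)))        # prints ['b2', 'b3']
```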

In the context of blockchains, the blocks are the vertices, and the root is the genesis block. We usually consider all blocks ever created, including orphan blocks, giving the blockchain the form of a tree. That’s right, blockchains aren’t chains. Strictly speaking, they are trees. When sketching block trees, we use rectangles instead of circles as a reminder that the vertices represent blocks:

[Figure: a block tree]

Valid Blocks

A cryptocurrency is a multilayered construction, with consensus being only one layer. In this book, we will very rarely venture outside the consensus layer. Above the consensus layer lives the state layer, which is in charge of computing and tracking everything that is agreed upon in the consensus layer.

For the most part, the consensus layer cares about the information in a block’s header: that it points to a known block, satisfies the difficulty target, and so on. The state layer assumes the block headers have already been verified by the consensus layer, and only cares about the data. For example, in Bitcoin and other transactional systems, the state layer verifies that all transactions are properly signed, do not double-spend, etc.

This creates a delicate situation where a block could have a perfectly valid header, but violate the rules of the state layer. For this reason, we typically let the state layer intervene in the form of a validation rule. This validation rule determines the criteria for the block’s data to be valid (such as not spending coins without an appropriate signature).

These validation rules allow us to assume that all blocks are valid. Not because invalid blocks are impossible to create, but because they can be detected and dropped before the consensus is computed.

The only assumption we make about a validity rule is that it allows computing the consensus given a chain of valid blocks. For example, in transactional coins we avoid contradictions by not allowing any block to spend a balance that did not exist when the block it points to was created.
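A toy version of such a validity rule might look like this (a sketch only; the block layout, names, and transaction format are all hypothetical, and real state layers also check signatures and much more):

```python
def valid_block(block, parent_balances):
    """Toy validity rule: a block may not spend a balance that did not
    exist in the state of the block it points to."""
    balances = dict(parent_balances)  # state as of the parent block
    for sender, receiver, amount in block["txs"]:
        if balances.get(sender, 0) < amount:
            return False  # overspend: the whole block is invalid
        balances[sender] -= amount
        balances[receiver] = balances.get(receiver, 0) + amount
    return True

parent_state = {"alice": 10}
ok_block = {"txs": [("alice", "bob", 4)]}
bad_block = {"txs": [("alice", "bob", 40)]}
print(valid_block(ok_block, parent_state))   # prints True
print(valid_block(bad_block, parent_state))  # prints False
```

Note that the rule only consults the parent’s state, matching the assumption that validity can be computed along a chain of valid blocks.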

The Confidence Parameter ε

In a previous post we mentioned that PoW can only furnish probabilistic finality, and vaguely explained that this means the probability of something bad happening (or something good not happening) is not strictly zero, but can be made very small with little effort.

This is formally manifested in a quantity called the confidence parameter, denoted ε. The confidence parameter expresses the probability that things will not go as planned, so we want it to be small.

We do not think of the confidence parameter as a constant, but as a function of various parameters, and consider its decay with respect to these parameters. Most commonly, we will consider the confidence with respect to an adversary size α < 1/2 and a time length T, and denote the resulting confidence by ε(α, T). In probabilistic finality, we will usually want the confidence parameter to decay exponentially as time increases, that is, that for any α < 1/2 it holds that ε = O(e^(−T)).
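To see this decay concretely, here is a sketch using the classic gambler’s-ruin bound from Nakamoto-style analyses (an assumption of this illustration, not a formula from this post): an attacker with hashrate fraction α < 1/2 who is T blocks behind catches up with probability (α/(1−α))^T, which decays exponentially in T.

```python
def confidence_parameter(alpha, T):
    """Probability that an attacker with hashrate fraction alpha < 1/2
    ever catches up from T blocks behind (gambler's-ruin bound)."""
    return (alpha / (1 - alpha)) ** T

# The confidence parameter shrinks exponentially as T grows:
for T in (1, 6, 30):
    print(T, confidence_parameter(0.25, T))
```

Waiting longer (larger T) buys exponentially more confidence, which is exactly the behavior ε(α, T) = O(e^(−T)) describes.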

Sometimes when we talk about the confidence we will actually mean the probability that everything goes right, namely, the complement 1 − ε. So it is not uncommon to say that the confidence increases to describe a situation where the confidence parameter ε decreases. This makes sense because a low ε means high confidence. This terminology might be a bit confusing, but it is very standard, so we stick to it. Our intention will always be clear from context.
