(This post is part of the Understanding GHOSTDAG series)
Truly understanding blockchains and blockDAGs, to the extent that you could make your own contributions, requires quite a lot of math. This appendix will surely not teach you all of it, but it is a good starting point. It is more of a collection of the math that came up while writing the rest of this book (appropriately, it will keep updating as writing progresses).
This is not a math book, so the presentation will not be completely formal and I will allow myself to handwave as I please. I will do my best to refer to more formal resources that I personally recommend. However, unlike these resources, which naturally draw motivation from many walks of life, I tried to draw inspiration and examples from the world of consensus and proof-of-work as much as possible. So while this appendix is a bit erratic in its difficulty curve, I think it is worth a read, even if only to be partially absorbed.
Exponentials and Logarithms
Exponentials and logarithms describe quantities that grow either very fast or very slowly, respectively.
We say that a quantity $f$ is exponential with respect to a quantity $n$ if whenever we add a constant to $n$, $f$ is multiplied by a constant. For example, $f(n)$ could describe the number of possible results of flipping $n$ coins. Clearly $f(1) = 2$, as if we flip one coin there are two possible outcomes. We also note that adding one flip will double the number of possible outcomes. Each possible outcome of flipping $n$ coins “splits” into two possible outcomes of flipping $n+1$ coins: one for if the additional coin is heads and another for if it is tails. In other words, we get that $f(n+1) = 2f(n)$, and it is not hard to work out from here that $f(n) = 2^n$.
Typically, we will not talk about exponential growth but about exponential decay. This is simply doing a “one over” to exponential growth. If $f$ grows exponentially fast then $1/f$ decays exponentially fast. We usually see this “one over” in the context of probability, where the number of options grows exponentially, making the chance of a particular option decay exponentially.
For example, say we flip $n$ coins, what is the probability that they are all heads? We already know that there are $2^n$ possible outcomes, so the probability we get this one particular result we want (assuming the coins are fair) is $1/2^n$, which we will usually denote $2^{-n}$.
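If you want to see this concretely, here is a small Python sketch (an illustration of my own, not anything from the protocol world) that enumerates all outcomes of flipping $n$ coins, confirming that the count doubles with each coin while the chance of all-heads halves:

```python
from itertools import product

# Enumerate all outcomes of flipping n coins. The count doubles with
# each added coin (2^n), so the chance of all-heads halves (2^-n).
for n in range(1, 6):
    outcomes = list(product("HT", repeat=n))
    all_heads = sum(1 for o in outcomes if all(c == "H" for c in o))
    print(n, len(outcomes), all_heads / len(outcomes))
```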
Logarithms are the opposite. They grow extremely slowly. Whenever a value is multiplied, the value of its logarithm increases additively. For example, we can reverse the previous question and ask: how many coins do we need to flip so that there could be at least $N$ different outcomes? If $N = 2$ we know the answer is that one coin is enough. With some more consideration it is easy to realize that for $N = 2^n$ we need $n$ coins. The way mathematicians notate this observation is $\log_2(2^n) = n$.
More generally, $\log_b(N)$ is the power of $b$ you need to take to obtain $N$. It is the unique number satisfying $b^{\log_b(N)} = N$. The number $b$ is called the base of the logarithm.
Since we deal in asymptotics, it happens that we don’t particularly care about the base, so I will just write $\log$. If that’s too abstract for you, feel free to imagine we are in base two. A word of warning though: the implied base in the notation varies from textbook to textbook. Generally speaking, in computer-science books the logarithms are in base two, in math books the logarithms are in base $e$ (and will usually be denoted as $\ln$), in physics books they don’t care about the base of the logarithm, and everywhere else the logarithms are in base 10.
A nice way to think about logarithms is as the “number of digits”. In base 10, we have that $10^n$ has $n+1$ digits and it just so happens that $\log_{10}(10^n) = n$. We can think of $\log_{10}$ as a function that increases from $n-1$ to $n$ as we go over all numbers with $n$ digits.
We can work out some more properties of logarithms, but we really do not need them. Still, if you want to practice logarithms you can try proving the following (using only the definition of the logarithm given above): $$\log_b(xy) = \log_b(x) + \log_b(y), \qquad \log_b(x^a) = a\log_b(x), \qquad \log_b(x) = \frac{\log_c(x)}{\log_c(b)}.$$
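If you prefer to convince yourself numerically before proving anything, a quick Python spot-check (with arbitrarily chosen values) agrees with the identities above:

```python
import math

# Spot-check the logarithm identities with arbitrary positive values.
x, y, a, b = 12.0, 5.0, 3.0, 2.0
print(math.log(x * y), math.log(x) + math.log(y))   # product rule
print(math.log(x ** a), a * math.log(x))            # power rule
print(math.log(x, b), math.log(x) / math.log(b))    # change of base
```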
We will typically care about how quantities increase as things get smaller. This will come up when we talk about the definition of security for blockchains. We ask ourselves questions like “how long must a receiver wait before he knows the probability a transaction is reverted is smaller than $\varepsilon$?”
For example, if you already read the discussion about probabilistic vs. deterministic confirmations, you should already expect that this time goes to infinity as $\varepsilon$ goes to $0$, but we can still ask ourselves how fast this happens. We say this happens logarithmically slowly if whenever we halve $\varepsilon$, the time only increases by a constant amount. One way to express it is to say that the time it takes looks like $\log(1/\varepsilon)$; note that if we make $\varepsilon$ two times smaller it follows from the logarithm rules that $\log(2/\varepsilon) = \log(1/\varepsilon) + \log(2)$, which is just what we wanted.
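To make this tangible, here is a short Python sketch. The function below is a hypothetical stand-in for a confirmation rule whose waiting time grows like $\log(1/\varepsilon)$; it is not any real protocol's formula, but it shows that halving $\varepsilon$ always adds the same constant to the wait:

```python
import math

# Hypothetical confirmation rule: the waiting time grows like log(1/eps).
def wait_time(eps: float, scale: float = 1.0) -> float:
    return scale * math.log(1 / eps)

eps = 0.1
prev = wait_time(eps)
for _ in range(5):
    eps /= 2
    cur = wait_time(eps)
    # Each halving of eps adds exactly log(2) to the wait.
    print(f"eps={eps:.6f}  wait={cur:.4f}  increase={cur - prev:.4f}")
    prev = cur
```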
Big O Notation
The above is littered with vague statements like “grows as” or “looks like”. The purpose of the big O notation is to make such statements formal. Honestly though, understanding the definition is not as important as getting used to the notation.
I will state the formal definition, try not to let it scare you, and then we will break it down. The definition will look a bit laden with details, but that’s exactly the point: we pack all these details into the definition so that we will not have to worry about them.
Formally, given two functions $f$ and $g$, we say that $f = O(g)$ if there are values $C, N > 0$ such that for any $n > N$ it holds that $$f(n) \le C \cdot g(n).$$ Intuitively, that means that $f$ is smaller than something that “looks like $g$” when $n$ is “sufficiently large”.
The need for this kind of notation is deep-rooted in computer science. Say you want to understand how long it takes your algorithm to run. This time depends on many things that might not particularly interest you right now, and are not even well defined. For example, the basic operations might take different times on different CPUs. Saying that the runtime of your algorithm is $O(n)$ (where $n$ is the size of the input) is saying something like “look, I don’t know exactly how long this would take, it depends on all sorts of practical considerations that are other people’s job to work out; what I can tell you is that if you double the length of the input the runtime should pretty much double, at least for inputs that aren’t too small”.
Using this notation, instead of saying “the runtime increases logarithmically fast as $\varepsilon$ goes to $0$” we can just succinctly write $O(\log(1/\varepsilon))$.
The “Big O” part is called the growth asymptotics while the values $C$ and $N$ are called the growth constants.
When do constants matter? When we make a concrete design. A computer scientist researching algorithms for multiplying matrices usually mostly cares how the algorithm behaves as the size of the matrices increases. However, an ASIC designer trying to build a circuit for multiplying matrices of a given size and other fixed attributes mostly cares about the actual constants.
A nice example from the world of blockchains is safety versus confirmation times. In both cases we look at how the probability that a transaction is reverted decays with time. Safety is a statement about the asymptotics of this decay, whereas confirmation times deal with the constants.
Taylor Series of the Exponential Function*
Don’t worry, I have no intention to develop the theory of Taylor series, just to state this one useful fact: $$e^x = \sum_{k=0}^{\infty} \frac{x^k}{k!} = 1 + x + \frac{x^2}{2} + \frac{x^3}{6} + \cdots$$
What is this $e$ you ask? Just some number, approximately $2.718$. It is a very fascinating number, which you can read all about in this fascinating book. We use this number because it makes the above formula simple. If we want to approximate an exponent with another base $b$ we just need to note that $b^x = e^{x \ln b}$.
We will use this series for a small value of $x$. In this case we can note that the summand $x^k/k!$ actually vanishes very fast, to the effect that even for $k = 2$ it is small enough to neglect. In other words, it holds that $e^x = 1 + x + O(x^2)$, and for small $x$ we can say that $e^x \approx 1 + x$.
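A quick Python check makes the quality of this approximation visible:

```python
import math

# Compare exp(x) with its first-order approximation 1 + x; the error
# shrinks quadratically as x gets smaller.
for x in [0.5, 0.1, 0.01, 0.001]:
    print(f"x={x}: exp(x)={math.exp(x):.6f}, 1+x={1 + x:.6f}, "
          f"error={math.exp(x) - (1 + x):.2e}")
```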
Probability Theory
(As a general resource for probability theory, I highly recommend Sheldon Ross’ A First Course in Probability)
A probability function is something that assigns likelihood to events. This likelihood is expressed as a real number between $0$ and $1$. We usually use capitalized Latin letters from the beginning of the alphabet, such as $A$ and $B$, to denote events. The probability of the event $A$ is denoted $\Pr[A]$.
For any event $A$, the complement of $A$ is denoted $A^c$ and includes all outcomes outside of $A$. For example, if $A$ is the event that the result of a die roll is even, then $A^c$ is the event that the result is odd (assuming we only model outcomes where the die toss had a definite result, and disregard events such as the die shattering on impact or remaining suspended mid-air).
Two events are disjoint if there is no scenario included in both. For example, the event “the result is odd” is disjoint from the event “the result is four”. Note that an event is disjoint from $A$ if and only if it is contained in $A^c$.
The probability function has some useful yet hardly surprising properties; I will not list them all. Two important properties are:
The probability of the complement: $\Pr[A^c] = 1 - \Pr[A]$.
Additivity of disjoint events: if $A$ and $B$ are disjoint then $\Pr[A \cup B] = \Pr[A] + \Pr[B]$.
(Exercise: prove that the first property is a special case of the second property)
Random Variables
Sometimes, we want to consider processes whose outcome doesn’t have a clear numerical value. For example, the result of a die toss has a “natural” numerical value (the value written on the top side), while the result of flipping a coin does not.
Random variables are assignments of numerical values to outcomes. For example, if we want to model a one dollar bet that a fair coin flip will turn out heads, we can define a random variable that assigns $+1$ to the result heads and $-1$ to the result tails.
There are many interesting random variables. For example, we could define a variable that takes a sequence of coin tosses of any length and outputs the number of heads, or the length of the longest contiguous subsequence of identical tosses. Random variables are often denoted by capitalized letters from the end of the English alphabet, so let us denote the last variable we described as $X$. Random variables are a very expressive tool that allows us to zoom in on the property that interests us, and abstract away the rest. For example, if we indeed only care about the length of the longest subsequence of identical tosses, then we don't really care that HHTTT and THHHTTHTTH are completely different sequences, only that they satisfy $X(\text{HHTTT}) = X(\text{THHHTTHTTH}) = 3$.
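Here is how one might implement $X$ in Python; running it confirms that both sequences from the text indeed map to the same value:

```python
from itertools import groupby

# X: the length of the longest run of identical tosses in a sequence.
def longest_run(tosses: str) -> int:
    return max(len(list(run)) for _, run in groupby(tosses))

print(longest_run("HHTTT"))       # 3
print(longest_run("THHHTTHTTH"))  # 3
```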
Given a random variable $X$ we can ask what is the probability that it will give some value $x$. This quantity is denoted by $\Pr[X = x]$. For example, the random variable describing the fair coin bet above takes each of the values $+1$ and $-1$ with probability $1/2$.
Expectation (and Variance)
Given a random variable, we can ask what would be its value on average. That is, what we should expect to get from it. This quantity is called the expectation; it is just an average of the possible values, weighted by their probability. In other words, if $X$ can get the values $x_1, \ldots, x_n$ then the expectation of $X$ is $$\mathbb{E}[X] = \sum_{i=1}^{n} x_i \cdot \Pr[X = x_i].$$
So for example, if $X$ is our fair coin bet, we get that $$\mathbb{E}[X] = 1 \cdot \frac{1}{2} + (-1) \cdot \frac{1}{2} = 0,$$
meaning that when we bet on a fair coin, we don’t have any advantage or disadvantage.
What if the coin wasn’t fair though? Say it has probability $p$ to land on heads? Then the probability to land on tails is $1 - p$, and we get that the expectation is $$\mathbb{E}[X] = 1 \cdot p + (-1) \cdot (1 - p) = 2p - 1.$$
Not surprisingly, we find that the game is in our favour when the coin is biased towards heads ($p > 1/2$).
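A quick Monte Carlo experiment in Python (a sketch of my own, with an arbitrary number of trials) agrees with the formula $2p - 1$:

```python
import random

# Estimate the expectation of a one-dollar bet on a coin that lands
# heads with probability p: win +1 on heads, lose -1 on tails.
def estimate_expectation(p: float, trials: int = 100_000) -> float:
    return sum(1 if random.random() < p else -1 for _ in range(trials)) / trials

for p in [0.5, 0.6, 0.75]:
    print(f"p={p}: formula={2 * p - 1:+.3f}, simulated={estimate_expectation(p):+.3f}")
```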
Laws of Large Numbers
Laws of large numbers clarify the practical meaning of the expectation. Intuitively, we expect that if we flip a fair coin many times, about half of the flips will be heads. We can accept that two out of three coins were heads, but at some level, we all know that if we flip a fair coin three million times, the number of heads should be much closer to a million and a half than it is to two million. We have an innate understanding of how repetition reduces noise; the law of large numbers is the simplest way to formalize this understanding.
A law of large numbers looks like this: if $X_1, X_2, X_3, \ldots$ are independent and identical random variables (the prototypical example being that $X_i$ describes the $i$th flip of the coin), then $$\frac{X_1 + X_2 + \cdots + X_n}{n} \xrightarrow{n \to \infty} \mathbb{E}[X_1].$$
The expression above is not well-defined; we never explained what the arrow notation $\xrightarrow{n \to \infty}$ means. It clearly talks about how the average of the first $n$ instances behaves as $n$ grows, but that description is vague. The truth is there are many ways to define what convergence means for random variables. These different definitions are called modes of convergence, and each one will produce a different law of large numbers.
We do not need to formally state any law of large numbers; the intuitive principle is more than enough for us. The main purpose of this digression was to provide an example of the math you will have to learn to understand ongoing research.
To illustrate the principle, take a look at this graph:
It was generated by simulating ten thousand coin flips and tracking their average. I repeated the experiment five times, and plotted each repetition with a different color. Notice that while eventually the process converges to its expectation, there are many ways to get there. The “speed” at which a random variable approaches its expectation is often quantified using something called variance. I will not formally define the variance, but intuitively it represents how “scattered” a random variable is around its expectation.
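For the curious, a Python sketch along these lines reproduces the experiment (assuming matplotlib is available; the details of the original script may differ):

```python
import random
import matplotlib.pyplot as plt

# Track the running average of 10,000 fair coin flips (1 = heads),
# repeated five times, each repetition plotted in its own color.
FLIPS = 10_000
for _ in range(5):
    total, averages = 0, []
    for i in range(1, FLIPS + 1):
        total += random.randint(0, 1)
        averages.append(total / i)
    plt.plot(averages)

plt.axhline(0.5, linestyle="--", color="black")  # the expectation
plt.xlabel("number of flips")
plt.ylabel("running average of heads")
plt.show()
```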
Exercise: say I run this simulation again, what is the probability I will get a result identical to one of the lines above?
The Distribution of a Random Variable
Let $X$ be the result of throwing a normal six-sided die. Now consider a “weird” six-sided die; it is still fair in the sense that all sides are equally likely, but for some reason it has the numbers two through seven written on its sides. Let $Y$ be the result of throwing the weird die and subtracting one from the result.
Formally, $X$ and $Y$ are different, but they distribute the same, in the sense that each value is equally likely to appear as a result of $X$ and of $Y$. We denote this by $X \sim Y$. Often, we don’t really care about the details of defining a random variable, but only about how it distributes. The same distribution can describe many different phenomena, so it is valuable to consider distributions in the abstract.
The Math of Block Creation: the Poisson and Exponential Distributions
We now describe two distributions that are by far the most important distributions in the world of proof-of-work.
Imagine you are at a bus station, waiting for a bus that comes once every ten minutes. The busses aren’t regularly spaced; instead, every microsecond someone flips a coin to decide whether to send out a bus, and the probability that a bus is sent out is such that, on average, a bus goes out every ten minutes.
The Poisson distribution describes the number of busses that one would see in a span of ten minutes, while the exponential distribution describes the time that passes between two consecutive busses.
Granted, busses are a weird example here, but there are other examples where these distributions fit more naturally, such as observing alpha particles emitted from decaying radioactive matter that’s supposed to emit one particle per ten minutes, considering the distribution of typos in a book, or considering the block production of a blockchain.
As a rule of thumb, a continuous process is a Poisson Process if the probability that something happens at a given moment is independent of the probability it happens in another moment. The decaying particles do not care how long it has been since the last alpha particle, and the block production process does not care how long you have been mining so far.
We select some unit of time, say minutes, and let $\lambda$ denote the number of times we expect the event to happen within this time unit. For example, to describe a block once per ten minutes we set $\lambda = 1/10$. We call $\lambda$ the parameter and denote these distributions as $\mathrm{Poi}(\lambda)$ and $\mathrm{Exp}(\lambda)$.
So how do they distribute? The key observation is that these are the only distributions that satisfy the independence property I described above. I will not do this here, but from this property alone one can compute that $$\Pr[\mathrm{Poi}(\lambda) = k] = e^{-\lambda}\frac{\lambda^k}{k!}.$$
It is convenient to generalize this a bit. Let $T$ be the block delay (in minutes); how does the number of blocks created within $t$ minutes distribute? To answer this, we “change units”. Instead of measuring time in single minutes, we measure it in periods of length $t$ minutes. In other words, in every period of $t$ minutes we expect $t/T$ blocks, so the number of blocks we will see distributes as $\mathrm{Poi}(t/T)$. Combining this all, we get this handy formula: the probability to see $k$ blocks within $t$ minutes if the block delay is $T$ is given by: $$\Pr[\mathrm{Poi}(t/T) = k] = e^{-t/T}\frac{(t/T)^k}{k!}.$$
So for example, to work out the probability that the Bitcoin network produces six blocks in the next hour we set $T = 10$, $t = 60$, and $k = 6$ to get the answer $$\Pr[\mathrm{Poi}(6) = 6] = e^{-6}\frac{6^6}{6!} \approx 16\%.$$
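You can verify this number with a few lines of Python:

```python
import math

# Probability of seeing k blocks in t minutes with block delay T:
# a Poisson distribution with rate t/T, evaluated at k.
def blocks_in_window(k: int, t: float, block_delay: float) -> float:
    rate = t / block_delay
    return math.exp(-rate) * rate ** k / math.factorial(k)

print(blocks_in_window(6, 60, 10))  # ~0.161
```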
This is already quite surprising! We know that the network should produce six blocks an hour, yet the probability this actually happens is eerily low. This is an example of just how noisy the block creation process is. To see more striking examples, we want to talk about how the delay between consecutive blocks distributes. This is described by the exponential distribution.
The exponential distribution is a bit harder to describe, because it is continuous. The probability that the delay between two blocks is exactly some value is $0$. The best we can do is to estimate the probability that it falls in some range. I will again skip the calculation, but for any $0 \le a \le b$ the probability is $$\Pr[a \le \mathrm{Exp}(\lambda) \le b] = e^{-\lambda a} - e^{-\lambda b}.$$
In particular, if $T$ is the block delay (how many minutes we expect to wait between two consecutive blocks), then $\lambda$ (how many blocks we expect to see every minute) is $1/T$. For example, the probability that the delay between two consecutive blocks is between 8 and 12 minutes is $$\Pr[8 \le \mathrm{Exp}(1/10) \le 12] = e^{-8/10} - e^{-12/10} \approx 15\%.$$
For people who have not seen this before, this should be at least a bit surprising. Despite an average delay of 10 minutes, we see that the probability of being within two minutes of this duration is actually quite small. The probability that the deviation from ten minutes is at most six and a half minutes is about one half.
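Both numbers are easy to check in Python:

```python
import math

# Probability that an exponential block delay with mean block_delay
# falls between a and b minutes: exp(-a/T) - exp(-b/T).
def delay_between(a: float, b: float, block_delay: float = 10.0) -> float:
    return math.exp(-a / block_delay) - math.exp(-b / block_delay)

print(delay_between(8, 12))      # ~0.148: between 8 and 12 minutes
print(delay_between(3.5, 16.5))  # ~0.513: within 6.5 minutes of 10
```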
Indeed, the block creation process is noisy, and that is one of the reasons we want to wait a reasonable number of blocks before feeling secure: the probability that our chain reverts by pure chance alone is not negligible at all when there are just a few blocks. Taking the average over a large number of blocks reduces this high variance.
As a final exercise, let us find the $t$ for which the probability that the block delay is at most $t$ is exactly $1/2$. Do you have any guess what that might be? The probability that the block delay is at most $t$ is given by $1 - e^{-t/10}$, so we want to solve the equation $1 - e^{-t/10} = 1/2$ for $t$. I will leave the details to you, but the result is $t = 10\ln 2 \approx 6.93$ minutes.
This is remarkable! Even though the average delay between two consecutive blocks is 10 minutes, for any particular pair of consecutive blocks, it is more likely that the time between them is less than eight minutes than it is more than eight minutes! The intuition is that these high-likelihood pairs of blocks with short delays between them are offset by rare pairs of blocks with very long delays between them. For example, the probability that two consecutive blocks have a delay of more than an hour between them is given by $$\Pr[\mathrm{Exp}(1/10) \ge 60] = e^{-60/10} = e^{-6} \approx 0.25\%.$$
This might seem small but it really isn’t. This happens about once every 400 blocks, and given that 144 blocks are created on an average day, this means that, on average, we should see consecutive blocks more than an hour apart about twice a week.
Finally, we can get a nicer expression by using a multiplicative approach. We can ask ourselves what is the probability that the block delay is somewhere between twice smaller and twice larger than the expected block delay. A bit more generally, for a given $c \ge 1$ we have that $$\Pr\left[\frac{T}{c} \le \mathrm{Exp}(1/T) \le c\,T\right] = e^{-1/c} - e^{-c}.$$
In particular, for $c = 2$ this comes out to about 47%, showing that most block delays are outside this interval! This is the plot of the probability as a function of $c$:
We are not surprised to see the probability starting at $0$ for $c = 1$ and tapering off to $1$ as $c$ goes to infinity. However, what should surprise you is how slowly this happens (note that the x axis is logarithmically scaled). Even at $c = 10$, we have that the probability is around 90%. Close to 10% of the block delays are shorter than 1 minute or longer than 100 minutes! Even at $c = 100$ the probability is just around 99%, telling us that on an average day we should expect to see at least one block delay shorter than 6 seconds or longer than 16 hours!
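The numbers quoted above follow directly from the formula; here is the computation in Python:

```python
import math

# Probability that a block delay lands within a factor c of the
# expected delay: exp(-1/c) - exp(-c).
def within_factor(c: float) -> float:
    return math.exp(-1 / c) - math.exp(-c)

for c in [2, 10, 100]:
    print(f"c={c}: {within_factor(c):.4f}")  # ~0.47, ~0.90, ~0.99
```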
So why don’t we ever see 16-hour block delays in the wild? Is our math wrong? No, it isn’t, and the answer is actually right in front of you! But I’ll let the reader figure that one out for themselves.