(This post is part of the Understanding GHOSTDAG series)
Many people “know” that a blockchain protocol is “secure” if “transactions cannot be reverted”, which is not incorrect, but hardly the whole truth either. The purpose of this chapter is to provide a more holistic vantage on the security of a blockchain. For that, we first need a clearer understanding of what security even is. In particular, we must understand, and make peace with, the inherent limitations of defining security notions.
Security Notions
The Pentagon is naturally an extremely confidential place, with strict protocols to prevent data leakage. Yet, during the Cold War, the Soviets allegedly managed to obtain vital clues about when major events were about to go down. It was eventually revealed that they used a form of open-source intelligence dubbed PizzInt (pizza intelligence): by tracking the number of pizzas ordered to the Pentagon, they could reliably tell that something was afoot when the meter went off the charts. This gauge of international crisis is known to this day as the “pizza meter”. Famously, it spiked, and was reported, the night before Iraq invaded Kuwait.
The Pentagon tried to plug the leak by updating its security protocols, requiring personnel to pick up orders rather than have them delivered. This only partially worked: tracking the activity of pizza joints near the Pentagon and the White House still seems to provide information. Crisis begets increased activity, even if the destination of the multitudes of pizzas is not technically known to the pizzeria’s staff, as was seen in April 2024, prior to the Iranian attack on Israel.
The purpose of this story is not to put down the CIA, but to illustrate just how hard it is to take everything into consideration. A concrete discussion can only happen within a model of reality, and a model simple enough to actually work with will surely miss something.
With that analogy in mind, we turn to the question: what do cryptographers mean when they say security?
A security notion is some criterion that is, on the one hand, general enough to encapsulate a large family of attacks and, on the other hand, concise enough to reason about formally.
“Defining” security as “it is infeasible to harm my network” is a bad definition, because it is not concise: it is plagued by the vague terms “infeasible” and “harm”. Conversely, “defining” security as “this particular attack against my network doesn’t work” might be concise, and you might even be able to prove it mathematically, but it is hardly general enough. It does not even rule out other attacks that achieve the same goal and succeed against your protocol.
Most commonly, the art of defining a security notion comes down to isolating a particular goal and showing that no attack can achieve that goal (perhaps relative to some assumptions about the attacker, e.g. that they don’t have access to a galactic-scale supercomputer).
When dealing with cryptography, one should always remember that security properties are always about a specific type of attack. There are many encryption schemes that are rightfully considered secure, despite none of them providing any security against a $5 wrench attack.
For any type of cryptographic primitive, there is an entire roster of security definitions. A signature scheme can be unforgeable but malleable. An encryption scheme can keep messages confidential while still letting an observer recognize that two ciphertexts were created with the same secret key, and so on. We define security notions as we need them, and the only tools we have to become convinced that our definitions are strong enough are scrutiny, experience, and criticism. This is exactly why it is important for me to have an open, critical discussion about other techs, as some of my online followers might have noticed.
In the next post we will reason about the security of Bitcoin. For such a discussion to be humanly possible, we will have to make (sometimes implicit) simplifying assumptions. For example, an assumption ubiquitous throughout this book is that the difficulty is constant: expected block creation rates remain constant, and are not governed by a mechanism that might have its own weaknesses, but by cosmic forces beyond the reproach of any would-be hacker.
So while the proof is correct, it does not establish the security of Bitcoin in the more realistic setting where the difficulty varies. Formally reasoning about varying difficulty turns out to be very hard. Only in 2016 did Juan Garay, Aggelos Kiayias, and Nikos Leonardos manage to prove that Bitcoin remains secure under varying difficulty in some scenarios. And even the scope of attacks in this analysis is quite limited: they assume that all timestamps are authentic. In other words, they prove security against an attacker who tries to exploit natural changes in the hash rate, but do not reason about attackers who manipulate the difficulty adjustment mechanism itself by feeding it forged timestamps.
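To make the concern concrete, here is a toy sketch of a Bitcoin-style difficulty retarget. The constants and the `retarget` function are illustrative inventions, not any network’s actual consensus code; the point is only to show how forged timestamps can talk the mechanism into lowering the difficulty:

```python
# Toy sketch of a Bitcoin-style difficulty retarget, illustrating why
# timestamp authenticity matters. All numbers are illustrative, not the
# consensus constants of any real network.

def retarget(old_difficulty, first_ts, last_ts, expected_timespan, clamp=4):
    """Scale difficulty by how fast the last window of blocks arrived."""
    actual = last_ts - first_ts
    # Real implementations clamp the adjustment to limit single-step swings.
    actual = max(expected_timespan / clamp, min(actual, expected_timespan * clamp))
    return old_difficulty * expected_timespan / actual

expected = 2016 * 600  # e.g. a window of 2016 blocks at one block per 600 seconds

# Honest window: blocks arrived on schedule, so difficulty stays put.
honest = retarget(1000.0, first_ts=0, last_ts=expected, expected_timespan=expected)

# Forged timestamps claim the window took twice as long, so the mechanism
# halves the difficulty even though the real hash rate never changed.
lied = retarget(1000.0, first_ts=0, last_ts=2 * expected, expected_timespan=expected)

print(honest, lied)  # 1000.0 500.0
```

An attacker who can repeat this trick drives the difficulty down and then mines blocks far faster than the honest network expects, which is exactly the attack surface the constant-difficulty assumption sweeps away.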
This is not because the authors “missed it”, or did a sloppy job. On the contrary, Garay and Kiayias stand at the forefront of the most in-depth line of work to model and formalize the security of Bitcoin. It is just extremely difficult.
(Those of you who read the starred section about how PoW works might notice another common assumption: that the underlying hash function is sufficiently similar to a random oracle.)
This difficulty is not unique to cryptocurrencies, it is a global limitation of cryptography. Your analysis is only as good as the scope of your definition, and your definition will never capture all possibilities.
For a good cautionary tale, consider the Falcon signature scheme. This scheme emerged with the rising concern of quantum computers breaking contemporary signature schemes. The National Institute of Standards and Technology (NIST) created a competition inviting cryptographers to design signature schemes that are arguably secure against quantum attackers. Falcon is one of the leading NIST candidates, yet it was still found to be vulnerable to a side-channel attack. A “side channel” usually refers to information leaked by the implementation of a primitive, even though theoretically it should not be. In this case, the researchers found that if Falcon signatures are generated by inappropriate hardware (in particular, hardware that does not have constant-time floating-point arithmetic; five points if this reminds you of the Heartbleed exploit), an adversary able to perform electromagnetic measurements on the device while it generates signatures can recover the secret key and forge signatures.
This result does not mean that the security analysis of Falcon is wrong; it just means that the analysis did not model the properties of the hardware used to run the protocol. Nor does it mean that Falcon is broken, only that it requires appropriate hardware to maintain its security.
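To get a feel for what an implementation side channel looks like, here is a deliberately simple, hypothetical illustration (it has nothing to do with Falcon’s actual code): two functionally identical comparison routines, one of which leaks information through its running time.

```python
# Two functionally identical routines that leak differently. This is a
# classic timing side channel in miniature, not Falcon's code.

def leaky_equal(secret: bytes, guess: bytes) -> bool:
    # Returns as soon as a byte differs, so the running time depends on
    # how many leading bytes of the guess are correct -- an attacker who
    # can time this call learns the secret one byte at a time.
    if len(secret) != len(guess):
        return False
    for s, g in zip(secret, guess):
        if s != g:
            return False
    return True

def constant_time_equal(secret: bytes, guess: bytes) -> bool:
    # Accumulates differences with XOR so every call does the same amount
    # of work, regardless of where (or whether) the inputs differ.
    if len(secret) != len(guess):
        return False
    diff = 0
    for s, g in zip(secret, guess):
        diff |= s ^ g
    return diff == 0
```

Python’s standard library exposes such a timing-safe comparison as `hmac.compare_digest`; the point here is only that the leak lives in the implementation, entirely outside the scope of the scheme’s mathematical security proof.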
However, before that was discovered, Algorand implemented Falcon signatures to provide post-quantum security on their blockchain. They dodged a bullet: the vulnerability was disclosed before Falcon signatures were introduced to hardware wallets, but imagine if it hadn’t been. Chances are that hardware wallets would not have used constant-time floating-point arithmetic (it makes the hardware considerably more expensive), and one day people would have discovered a huge security breach sitting right on top of their money.
Unconditional Security
Do we have to make assumptions about the attacker, or can there be cryptography that is “unconditionally secure”? Well, one thing you always have to assume is that there is some information unavailable to the adversary; otherwise, the honest user has no advantage. Given that assumption, there definitely are unconditionally secure systems, such as the famous one-time pad.
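For illustration, here is a minimal one-time pad sketch (the function names are mine). Its unconditional security rests entirely on the assumption just stated: the key is uniformly random, as long as the message, never reused, and unknown to the adversary.

```python
import secrets

def otp_encrypt(message: bytes, key: bytes) -> bytes:
    # XOR the message with a uniformly random pad of the same length.
    assert len(key) == len(message), "the pad must be as long as the message"
    return bytes(m ^ k for m, k in zip(message, key))

# XOR is its own inverse, so decryption is the exact same operation.
otp_decrypt = otp_encrypt

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))  # cryptographically strong randomness
cipher = otp_encrypt(msg, key)
assert otp_decrypt(cipher, key) == msg
```

Because every ciphertext is equally likely under a uniformly random key, an eavesdropper with unlimited computing power learns nothing about the message, which is what “unconditional” means here.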
However, in his 2018 paper Is the Security of Quantum Cryptography Guaranteed by the Laws of Physics?, Daniel Bernstein challenged the assertion that “private information” could even exist. Fascinatingly, he does so by appealing to string theory!
Now hear me out. While you might have heard that string theory is “controversial” or whatever, one principle that dropped out of it and is largely uncontested is the holographic principle. Loosely, this principle states that one can obtain perfect information about what is going on inside a volume by only measuring its surface. In principle, if I surrounded your house with sufficiently dense and sensitive sensors while you generated your secret key, I could recover it completely from what I have learned.
This is obviously not a practical attack in any way; the argument just goes to show that we always have to make assumptions. The most common concession is assuming the adversary has some computational limitations. For example, we are perfectly content with encryption systems where breaking the cipher is possible, but the amount of work required grows exponentially with the size of the key. A reasonably small key (say, 256 bits) is enough to guarantee (for the attack vectors covered by the definition, at least) that breaking our cipher would take a hundred times the universe’s age. Even when discussing “unconditional” security, we make some implicit assumptions; at the very least, that the adversary cannot design an experiment to distinguish the Standard Model from string theory.
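The arithmetic behind that claim is easy to check. The guess rate below is an illustrative assumption (roughly the ballpark of the entire Bitcoin network’s hash rate), not a measured figure:

```python
# Back-of-the-envelope sketch of why 256-bit keys resist brute force.
# The guess rate is an illustrative assumption, not a measured figure.

guesses_per_second = 10**20          # assumed: a planet-scale attacker
keyspace = 2**256                     # number of possible 256-bit keys
seconds_per_year = 60 * 60 * 24 * 365
universe_age_years = 1.38e10          # ~13.8 billion years

years_needed = keyspace / guesses_per_second / seconds_per_year
print(f"{years_needed / universe_age_years:.2e} universe ages")
```

Even under these generous assumptions the exhaustive search takes on the order of 10³⁹ universe ages, so the exponential growth in the key size does all the heavy lifting.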
A Cautionary Tale
A concrete example from the DLT world of how security properties can mislead us if we are not careful relates to the properties known as safety and liveness. We will define these more thoroughly in the following two posts. In a nutshell, safety is a statement about the hardness of reverting a transaction, and liveness is a statement about the hardness of delaying the acceptance of a transaction.
The liveness property initially went unnoticed for a very good reason: for the heaviest-chain rule, safety implies liveness. When Bitcoin was the only example, there was no need to define liveness separately. In 2013, the GHOST algorithm was published, with a proof that GHOST remains “secure” at high block rates. The problem? You guessed it! They “only” proved the safety property.
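To see what distinguishes GHOST from the heaviest-chain rule in the first place, here is a toy sketch of GHOST’s fork choice (the tree and the function are made up for illustration; this is not the published GHOST pseudocode):

```python
# Toy fork-choice sketch of GHOST's core "heaviest subtree" idea:
# at each fork, descend into the child whose entire subtree contains
# the most blocks, rather than the child heading the longest chain.

from collections import defaultdict

def ghost_tip(parent_of):
    """Given a block -> parent map, walk from genesis to the GHOST tip."""
    children = defaultdict(list)
    for block, parent in parent_of.items():
        children[parent].append(block)

    def subtree_size(block):
        return 1 + sum(subtree_size(c) for c in children[block])

    tip = "genesis"
    while children[tip]:
        tip = max(children[tip], key=subtree_size)
    return tip

# A fork: side A heads the longest chain, but side B's subtree holds more
# total blocks (e.g. "uncles" mined in parallel at high block rates).
tree = {
    "a1": "genesis", "a2": "a1", "a3": "a2",   # longest chain: 3 blocks
    "b1": "genesis", "b2": "b1",
    "u1": "b1", "u2": "b1",                    # uncles that add weight
}
print(ghost_tip(tree))  # b2 -- GHOST prefers the heavier subtree
```

The heaviest-chain rule would pick a3 here; GHOST counts the uncles toward b1’s side and picks b2. That sensitivity to off-chain blocks is precisely what an attacker exploits to keep two sides of a fork balanced, the liveness failure described next.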
It was soon noted that running GHOST at block rates that are too high allows something called a balance attack, which violates liveness. This has been known as folklore since at least 2014, but it was first properly analyzed in 2016 by Christopher Natoli and Vincent Gramoli.
In between these two events, a project called IOTA was under development, eventually releasing its mainnet in July 2016. IOTA used the Tangle protocol, the first blockDAG protocol deployed into production. Tangle was correctly shown to be “secure”, except, you guessed it again! It was only shown to be safe. This time, the word only is not surrounded by air quotes, since the realization that the network is not secure came only after it launched (as the story goes, despite conferring with some academics and discarding their advice). In the following chapters, we briefly touch upon Tangle and see that it is essentially the most straightforward way to “DAGify” GHOST, inheriting its liveness issues.
Attempting to mitigate Tangle’s liveness issues, IOTA deployed centralized coordinators that resolve conflicts. These coordinators undermine decentralization, since the choice of which side of a conflict wins is in the hands of whoever operates the coordinators, and the list of coordinators is permissioned and maintained by the IOTA Foundation. The co-founder who insisted on launching Tangle prematurely, without understanding its security, tried for several years to remove the need for coordinators before giving up and moving on to other endeavors.
The moral of the story is that security of any form must never be taken for granted, or you could easily deploy a broken system.
Fortunately for IOTA, it was not abandoned. A new generation of researchers took it upon themselves to salvage what was possible and rebuild the rest, but carefully this time. Their first order of business was to scrap Tangle in its entirety and develop a new protocol (appropriately named Tangle 2.0) while working hard to rehabilitate IOTA’s professional reputation and scientific standards. Tangle 2.0 furnishes some interesting new approaches, but as a proof-of-stake protocol, it is outside the scope of this book.