
Thesis. A famous impossibility result in distributed systems theory formally proved that a protocol operating under a partially synchronous communication model cannot achieve a fault tolerance bound of ½. However, Kaspa's future consensus model, DAGKNIGHT, solves this problem, a feat unachieved by any other protocol, including Bitcoin, Ethereum, and classical BFT models. Solving such a result entails a strong form of correctness while modeling the real world (i.e., internet latency) — and thereby creates a master of time.

Within this essay, I will explain 1) the nature of distributed systems, 2) the properties of correctness, timing, and fault tolerance and their tradeoffs, 3) finality and dynamic availability (i.e., probabilistic finality vs. absolute finality), 4) how these properties are adapted within Bitcoin, classical BFT models, and Ethereum, and their shortcomings, 5) how DAGKNIGHT breaks the impossibility result in question and thereby resolves the shortcomings of all other protocols, and 6) how DAGKNIGHT masters time and what this means for distributed systems theory.

Distributed Systems Defined 

There are many definitions of a distributed system; Leslie Lamport claimed, during a correspondence in 1987, that “A distributed system is one in which the failure of a computer you didn’t know even existed can render your own computer unusable.”¹ In his book Blockchain Consensus, Imran Bashir defines a distributed system as “…a collection of autonomous computers collaborating on a message-passing network to achieve a common goal.”²

Fundamentally, however, the goal of a distributed system is for all its components to agree on the system's state — that is, to reach consensus. Facebook, Google, Twitter, Amazon, and the World Wide Web are distributed systems, though each is centralized under a corporate entity. For this article, I will discuss distributed systems of a different kind (blockchains, or distributed ledgers, and BlockDAGs).

First, we need to understand a few core attributes of distributed systems: correctness, timing, and failures (fault tolerance bounds). 

Correctness, Timing, and Fault Tolerance in Distributed Systems

Correctness (or the feasibility of a distributed system) was defined by Leslie Lamport in Proving the Correctness of Multiprocess Programs. Lamport argued that both liveness and safety properties must hold to achieve correctness. Safety is the guarantee that a decision, once made, can never be reverted, while liveness is the guarantee that a decision cannot be delayed forever.³ In other words, safety guarantees that different honest nodes will never decide on different values, and liveness guarantees that each honest node will eventually decide on a value. As we will see, some distributed systems favor liveness over safety or safety over liveness.
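
To make these two properties concrete, here is a toy sketch (my own illustration, not from Lamport's paper) that checks safety and liveness on a simulated trace of node decisions; the node names and values are hypothetical.

```python
# Minimal sketch (not from the article): checking safety and liveness on a
# toy trace of node decisions. decisions[node] is the value a node has
# decided, or None if it has not decided by the end of the trace.

def is_safe(decisions):
    """Safety: no two honest nodes ever decide on different values."""
    decided = {v for v in decisions.values() if v is not None}
    return len(decided) <= 1

def is_live(decisions):
    """Liveness: every honest node eventually decides on some value."""
    return all(v is not None for v in decisions.values())

trace = {"node_a": "block_X", "node_b": "block_X", "node_c": None}
print(is_safe(trace))  # True  -- nobody decided a conflicting value
print(is_live(trace))  # False -- node_c has not decided yet
```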

Timing and Fault Tolerance

Timing captures the behavior of a distributed system’s communication model based on delay (such as network delay and processor delay) and speed assumptions. These assumptions determine the extent to which an adversary can control message delays within the system. In other words, the communication model defines the limits to the adversary’s power to delay messages.

Three communication models exist: synchronous, asynchronous, and partially synchronous. Each model determines the fault tolerance bound allowed — that is, the communication model determines whether a protocol can withstand up to ⅓ or up to ½ of its participants being faulty.

The Synchronous Model

Within a synchronous model, an a priori known bound ∆ on the maximum message delay exists: all messages must be delivered within some pre-configured amount of time, and all participants in the network know how long message delivery takes. Thus, for any message sent, an adversary can delay its delivery by at most ∆. Bitcoin, for example, is based on a synchronous model through the longest chain rule. The longest chain rule relies on a block delay that is continuously tuned (via difficulty adjustment) to keep block arrivals (otherwise known as the block creation, or confirmation, time) at roughly 10 minutes — in effect, the upper bound delay ∆ is 10 minutes.

Synchronous systems are simple and provide strong positive results; however, they are overly strict and inefficient at modeling the real world without harming safety (e.g., under network outages and attacks of unexpected duration). Setting a long delay ∆ could faithfully represent the real world but leads to long timeouts and performance degradation. On the other hand, setting a short, aggressive delay ∆ to improve performance may fail to model the real world and harm safety. In other words, if you overestimate the delay, the system functions slower than it could; if you underestimate the delay, the system is not secure.
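
To illustrate the tradeoff, here is a toy simulation (my own, with made-up latency numbers): each synchronous round waits exactly ∆ before counting messages, so a small ∆ drops honest messages while a large ∆ slows every round.

```python
# Toy illustration (mine, not from the article) of the Delta tradeoff in a
# synchronous model: each round waits exactly DELTA before tallying messages.
# Overestimating DELTA wastes time; underestimating it drops honest messages.
import random

def run_round(delta, true_latencies):
    """Return (round_duration, fraction_of_messages_counted)."""
    arrived = [lat for lat in true_latencies if lat <= delta]
    return delta, len(arrived) / len(true_latencies)

random.seed(1)
latencies = [random.uniform(0.1, 3.0) for _ in range(100)]  # seconds

for delta in (1.0, 3.0, 10.0):
    duration, counted = run_round(delta, latencies)
    print(f"DELTA={delta:>4}s  round takes {duration}s, "
          f"{counted:.0%} of honest messages counted")
```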


Moreover, attempting to implement a sweet spot between the two extremes is also not ideal. For example, if a sender broadcasts a message to two receivers, with one message arriving after Δ−ϵ and the other arriving after Δ+ϵ, the real world would behave differently than the set model and harm safety.

Regardless of these issues, synchronous systems can achieve a fault tolerance of ½, the highest fault tolerance bound achievable. Intuitively, this is possible because it can be assumed that every node in the system has seen every vote after a fixed amount of time has passed, at which point consensus can be reached. Formally, if we assume a total of n nodes, among which f are faulty, we want at least f+1 correct messages to outnumber the f faulty messages once all messages are received (per the synchronous assumption), whereby

n ≥ f + (f + 1) = 2f + 1
f ≤ (n − 1)/2
f < n/2

Up to ½ Byzantine fault tolerance is achieved.⁴
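
A quick sketch of the same counting argument in code (nothing protocol-specific, just the inequality above):

```python
# Sketch of the counting argument above: the f + 1 correct messages must
# outnumber the f faulty ones, so n >= 2f + 1, i.e. f < n/2.

def max_faults_synchronous(n: int) -> int:
    """Largest f satisfying n >= 2f + 1 (the 1/2 bound)."""
    return (n - 1) // 2

for n in (4, 7, 10, 100):
    f = max_faults_synchronous(n)
    print(f"n={n:>3}: tolerates up to f={f} faulty nodes ({f/n:.0%})")
```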

The Asynchronous Model

Asynchronous models attempt to overcome the problems of synchronous models by assuming nothing about network delays — there are no bounds on network latency, even between correctly functioning nodes. The pros are twofold: first, without a time-bound, message delays cannot cause unexpected harm to safety; second, since there are no fixed values for timeouts, nodes adapt to the actual latency of the system.

However, this comes at a cost. According to the FLP impossibility theorem — proven by Fischer, Lynch, and Paterson in their work Impossibility of Distributed Consensus with One Faulty Process⁵ — no algorithm can guarantee consensus under an asynchronous model if even a single faulty node is present. In other words, all it takes is one bad actor in an asynchronous model to make guaranteed consensus impossible. While deterministic consensus isn't guaranteed, randomized algorithms do reach consensus within T seconds with probability approaching 1 exponentially as T grows. Even so, such asynchronous probabilistic consensus algorithms can only achieve a fault tolerance bound of ⅓ (and not ½).
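
As a toy illustration of that convergence (my own numbers, not tied to any particular randomized protocol): if each round independently terminates with probability p, the probability of having decided within T rounds approaches 1 exponentially.

```python
# Toy numbers (mine, not from the FLP paper): if each round of a randomized
# asynchronous protocol terminates independently with probability p, the
# chance of having decided within T rounds approaches 1 exponentially fast.

def prob_decided_by(T: int, p: float = 0.5) -> float:
    return 1 - (1 - p) ** T

for T in (1, 5, 10, 20, 40):
    print(f"T={T:>2} rounds: P(decided) = {prob_decided_by(T):.10f}")
```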


Moreover, the CAP principle by Eric Brewer⁶ — another impossibility result within the asynchronous-model literature — proves that the correctness properties of consistency (safety), availability (liveness), and partition tolerance cannot all hold at once in an asynchronous distributed system. Partition tolerance guarantees that a system still operates during a network partition, where some messages between nodes cannot be delivered (fault tolerance). Thus, in an asynchronous system, a trilemma arises — one must choose two of the three but cannot have all of them: safety, liveness, and partition tolerance. While the theorem is stated for asynchronous systems, partitions themselves behave asynchronously and occur under every timing model. Therefore, regardless of the model used, a system must favor either liveness or safety during a network partition.


The Partially Synchronous Model

There are two ways a system can be partially synchronous. In the first version, synchronicity is assumed while the system functions normally, and asynchronicity is assumed during malfunctions such as network hiccups or attacks. A global clock exists, and a parameter ∆ specifies the maximum delay a message might suffer in the synchronous phase; during the asynchronous phase, ∆ plays no role. A global stabilization time, or GST, dictates when the communication model switches from the asynchronous phase to the synchronous phase. This is called the GST version. The second version assumes some finite upper bound ∆ on message delivery that is not known in advance and can be chosen by the adversary.

GST does not exist in the unknown-∆ version, and there is no transition between an asynchronous and a synchronous phase. Instead, messages are always delivered within ∆ time steps, as in the synchronous model — except that the value of ∆ is unknown to the protocol.
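
The difference between the two flavors can be summarized in a small sketch (my own framing of the standard definitions; the function names are illustrative):

```python
# Minimal sketch (my own framing) of the two flavors of partial synchrony.
# GST flavor: DELTA only binds after the (unknown) global stabilization time.
# Unknown-DELTA flavor: DELTA always binds, but its value is unknown to the
# protocol and may be chosen by the adversary.

def max_delivery_time_gst(send_time, gst, delta):
    """GST flavor: every message is delivered by max(send_time, GST) + DELTA."""
    return max(send_time, gst) + delta

def max_delivery_time_unknown_delta(send_time, adversary_delta):
    """Unknown-DELTA flavor: always bounded, but the bound is not known a priori."""
    return send_time + adversary_delta

print(max_delivery_time_gst(send_time=10, gst=100, delta=2))               # 102
print(max_delivery_time_unknown_delta(send_time=10, adversary_delta=30))   # 40
```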


It is especially important to remember that partially synchronous models, regardless of the version chosen, have a fault tolerance of ⅓, as formally proven in Dwork, Lynch, and Stockmeyer's work defining the impossibility result in question: a partially synchronous model cannot achieve a fault tolerance bound of ½.⁹

Finality and Dynamic Availability

Historically, protocols have been divided into two camps: longest-chain-based protocols and classical BFT consensus protocols. The former utilize a synchronous model and follow Nakamoto Consensus, i.e., the longest chain rule: if there is a disagreement between chains, pick the longest one.¹⁰ The latter adopt a partially synchronous model and operate as quorum-based state machine replication. Validators are chosen randomly to propose blocks, while agreement on canonical blocks happens through a multi-round process in which every validator votes for a specific block per round. Thereafter, all honest and online validators agree on whether a block is part of the chain. Unlike longest-chain-based protocols, consensus on block production doesn't depend on the length or size of the chains. Moreover, BFT consensus favors safety over liveness to offer absolute finality — a decision made within the system will never be reverted. However, by favoring safety, classical BFT models cannot obtain dynamic availability. Classical BFT consensus models include Tendermint,¹¹ PBFT,¹² and HotStuff.¹³
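
A minimal sketch of the quorum rule behind these protocols (my simplification; real BFT protocols add rounds, views, and leader rotation):

```python
# Sketch of the quorum rule used by classical BFT protocols (my own
# simplification): with n validators and at most f < n/3 Byzantine, a block is
# committed once it gathers at least 2f + 1 (more than two-thirds) votes.

def quorum_size(n: int) -> int:
    f = (n - 1) // 3                 # largest tolerable number of faults
    return 2 * f + 1

def is_committed(votes_for_block: int, n: int) -> bool:
    return votes_for_block >= quorum_size(n)

n = 100
print(quorum_size(n))                # 67
print(is_committed(66, n))           # False -- the chain stalls, favoring safety
print(is_committed(67, n))           # True
```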


Longest-chain-based protocols, on the other hand, cannot achieve absolute finality; instead, they achieve probabilistic finality to attain dynamic availability. Dynamic availability allows variable participation levels; nodes may join or leave the system at any time, and the system remains secure. Dynamic availability, therefore, is truly permissionless — nodes aren't punished for downtime or technical failures. Moreover, a dynamically available protocol can still make decisions when participation is low during network partitions or attacks. Longest-chain-based protocols include Bitcoin and GHOST (a generalization of the longest chain rule).

Many people claim that longest-chain-based protocols favor liveness over safety. However, this isn't exactly true, and the point is important: the distinction between liveness and safety is artificial, a misnomer, for models that achieve liveness probabilistically, because in such models safety and liveness are one and the same thing. Let's see how.

One might think absolute finality is preferable to probabilistic finality; however, due to how each model handles partitions, this isn't the case. When participation is too low during a partition of a classical BFT consensus system, a quorum may not be reached because too few votes are cast (a quorum requires a certain number of votes) — blocks do not get the votes they need, and the blockchain stalls. A longest-chain-based system, by contrast, will not halt during a partition even if the number of active participants drops to one (as long as an honest majority remains). Instead, ledger output keeps growing during the partition, which enables self-healing: the ability of nodes to converge and achieve consensus even from a state of initial disagreement. The ability to heal is tied to the strength of the network's safety.
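
A toy sketch of this self-healing behavior (my own illustration, with hypothetical block names): both sides of a partition keep producing blocks, and on reconnection everyone converges on the longest chain.

```python
# Toy sketch (not from the article) of self-healing under the longest-chain
# rule: during a partition both sides keep extending their own chain, and when
# the partition heals every node converges on the longer chain.

def longest_chain(*chains):
    return max(chains, key=len)

genesis = ["G"]
side_a = genesis + [f"A{i}" for i in range(1, 8)]   # 7 blocks mined during the split
side_b = genesis + [f"B{i}" for i in range(1, 4)]   # only 3 blocks on the smaller side

# Ledger output never stalls: each side kept growing during the partition.
# After reconnection, both sides adopt the same (longest) chain.
canonical = longest_chain(side_a, side_b)
print(canonical[-1])        # 'A7' -- side B reorgs onto side A's chain
```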


Neu et al.¹⁴ simulated both models (classical BFT and longest-chain) under identical environments, with dynamic participation and sporadic network partitions. The ledger output by a longest-chain protocol (providing dynamic availability) always keeps growing, whereas the ledger output by a classical BFT protocol (providing finality) stalls during low participation or partitions.

Thus, while classical BFT consensus models provide safety and absolute finality, this comes at the cost of network halts without the ability to self-heal or recover. It also comes at the expense of permissionlessness and, therefore, pushes toward centralization. For example, some BFT protocols require nodes to stay online (otherwise they are punished), or a validator set of a certain size may be required, limiting who can participate and introducing bureaucratic rules.

Ethereum’s Proof-of-Stake Adaptations and its Pitfalls

Many people wanted to move away from proof-of-work to avoid electricity and hardware costs and to achieve scalability.¹⁵ Proof-of-stake initially started by modeling Nakamoto Consensus as chain-based blockchains — synchronous models. In Nakamoto-style proof-of-stake, the consensus algorithm randomly selects a stakeholder (in proportion to their stake) and assigns them the right to create a single block — it functions as simulated proof-of-work. However, many didn't realize this would lead to two major issues: the nothing-at-stake problem and long-range attacks.

The nothing-at-stake problem occurs when two block producers create blocks at approximately the same time and the network must determine which competing chain to adopt according to the longest chain rule. In proof-of-work, the chain with the most work is chosen — in an economic sense, work comes from consuming a resource external to the system. In proof-of-stake, however, extending a chain requires no external resources and therefore carries no natural opportunity cost. Without opportunity cost, stakers can stake on every version of the chain, causing the system's security to collapse: rational stakers would extend every chain they see to maximize their returns.
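
A back-of-the-envelope comparison (entirely my own toy numbers, not a model of any specific protocol) shows why the incentive breaks: splitting hash power between forks is costly, while signing both forks with stake is free absent slashing.

```python
# Toy payoff comparison (my own numbers) for the nothing-at-stake problem:
# a proof-of-work miner must split real hash power between forks, while a
# naive proof-of-stake validator can sign both forks at no extra cost.

def pow_expected_reward(hashrate_share, reward, forks_extended):
    # Hash power spent on one fork cannot be spent on the other, so the miner
    # only backs whichever fork wins with a fraction of its power.
    per_fork_share = hashrate_share / forks_extended
    return per_fork_share * reward

def naive_pos_expected_reward(stake_share, reward, forks_extended):
    # Without slashing, signing every fork is free, so the validator earns
    # its full share on whichever fork wins.
    return stake_share * reward

print(pow_expected_reward(0.10, 100, forks_extended=2))        # 5.0
print(naive_pos_expected_reward(0.10, 100, forks_extended=2))  # 10.0
```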


A long-range attack occurs when an adversary obtains (e.g., via bribery) past validators' keys to rewrite the blockchain's history. In other words, an attacker could create a false history of the network.

To resolve the nothing-at-stake problem, Vitalik Buterin proposed Slasher, whereby stakers could lose their block reward if they signed blocks at the same height on two forks.¹⁶ Vlad Zamfir and Ethan Buchman later added a further feature to Slasher: staked security deposits. Instead of merely forgoing some profit, a provably faulty node would lose its security deposit via slashing — a much greater punishment.¹⁷

These implementations were later incorporated into Ethereum's Gasper consensus model, showing that ETH, as a proof-of-stake asset, only retains value from forced security deposits and slashing punishments — without locking ETH up, it has no value with which to secure Ethereum economically.

In other words, its value isn't created by something external, like the unforgeable costliness of proof-of-work, but is instead forced (strikingly similar to our fiat system today). Therefore, it cannot function as a strong or hard form of money. Bitcoin has unforgeable costliness because it costs a great deal in CapEx (mining hardware) and OpEx (electricity) to produce new bitcoins. Producing bitcoins cannot be easily faked: a counterfeiter would need to redo all of the costly proof-of-work that came before and do so at a rate that outpaces all of the ongoing proof-of-work within the network.

The Frankenstein: Ethereum’s Gasper¹⁸

Gasper¹⁹ aims to combine the benefits of a longest-chain-rule-based model (a synchronous communication model) and a classical BFT model (a partially synchronous communication model), backed by a proof-of-stake design. The former is implemented via LMD GHOST (Latest Message Driven Greediest Heaviest Observed Sub-Tree), a generalization of the longest chain rule, and the latter is implemented via Casper FFG. LMD GHOST is a variant of GHOST, an inclusive protocol from the work Secure High-Rate Transaction Processing in Bitcoin by Yonatan Sompolinsky and Aviv Zohar.²⁰ Within Ethereum, LMD GHOST functions as a fork choice rule: at each fork, instead of choosing the longest chain, validators choose the chain with the most total support from all validators, counting only the most recent message of each validator (the latest message) as support. In a proof-of-work-based GHOST model, support refers to the heaviest subtree with the most work; in LMD GHOST, support refers to the subtree with the most votes, known as attestations. An attestation is a validator's vote, weighted by the validator's staked ETH balance.
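
For intuition, here is a heavily simplified LMD GHOST sketch (my own, omitting Gasper's slots, committees, and many other details): starting from the root, descend at each fork into the child subtree with the most stake-weighted latest attestations.

```python
# Simplified LMD-GHOST sketch (mine, not Ethereum's actual implementation):
# descend from the root into the child subtree carrying the most stake-weighted
# support, counting only each validator's latest vote.

def lmd_ghost(children, latest_votes, stake, root="genesis"):
    """children: block -> list of child blocks
    latest_votes: validator -> block it last attested to
    stake: validator -> staked balance"""

    def subtree_weight(block):
        # Stake of validators whose latest vote targets this block,
        # plus the weight of all descendant subtrees.
        w = sum(stake[v] for v, b in latest_votes.items() if b == block)
        return w + sum(subtree_weight(c) for c in children.get(block, []))

    head = root
    while children.get(head):
        head = max(children[head], key=subtree_weight)
    return head

children = {"genesis": ["A", "B"], "A": ["A1"], "B": []}
latest_votes = {"v1": "A1", "v2": "B", "v3": "B"}
stake = {"v1": 64, "v2": 32, "v3": 16}
print(lmd_ghost(children, latest_votes, stake))   # 'A1'
```

In this toy tree, validator v1's larger stake pulls the head onto branch A even though two validators voted for B — support is weighted by stake, not counted per head.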

These attestations also serve as Casper FFG (Friendly Finality Gadget) votes on checkpoint blocks and introduce accountability: a validator's balance is slashed if it violates the system's rules, with the amount depending on the offense. Casper FFG is a variation of Casper the Friendly Ghost from the work Casper the Friendly Ghost: A “Correct-by-Construction” Blockchain Consensus Protocol by Vlad Zamfir.²¹ Ethereum, however, decided to implement Casper as a finality gadget. A finality gadget functions as an additional safety layer atop a chain with probabilistic liveness (i.e., LMD GHOST). Casper FFG therefore finalizes proposed blocks and ensures their safety even during network partitions. However, since Casper FFG operates according to a partially synchronous model, finality is achieved only when less than one-third of the total validator set is faulty or adversarial — that is, it has a fault tolerance of ⅓. Since Casper FFG provides accountable safety, it does not, on its own, offer liveness in the classical sense; instead, it offers a weak form of liveness: plausible liveness. In the words of Vitalik,

“Plausible liveness basically means that ‘it should not be possible for the algorithm to get ‘stuck’ and not be able to finalize anything at all.’”

As a measure against long-range attacks, Gasper also implements weak subjectivity checkpoints: the history of Gasper's blockchain before a weak subjectivity checkpoint cannot be reverted, i.e., if a node receives a block conflicting with a checkpoint, it rejects that block (preventing long-range attacks). The problem with weak subjectivity is that a new node joining (or rejoining) the network must trust and rely on other nodes to obtain a correct, updated state of the system — the antithesis of what decentralization aims to achieve: a system built without trust. Proof-of-work, by contrast, operates according to objectivity, whereby a new node can independently arrive at the same conclusion as the rest of the network about the current state — achieving a trustless system.

Thus, while Ethereum’s Gasper merges synchronicity with partial synchronicity and liveness with safety, it does so at the expense of many things, mostly stemming from the pursuit of forcing proof-of-stake to work.

Key problems brought about by Proof-of-stake:

First, it substitutes unforgeable costliness with forced security deposits, creating a weak economic form of value. This stems from the inability to implement Nakamoto Consensus.

Second, it implements slashing, including punishments for being offline. While the penalties for being offline are small and may only amount to the opportunity cost of missed rewards, dynamic availability is still limited.

Third, weak subjectivity checkpoints introduce trust into the system — challenging the purpose of blockchain technology.

Key problems brought about by artificial safety/liveness distinctions:

Fourth, instead of improving its fork choice rule (i.e., LMD GHOST), Ethereum gave in to the incorrect, artificial distinction between liveness and safety in protocols that value probabilistic liveness. That is, instead of improving the network's self-healing capabilities, it bolted on a gadget built from classical BFT consensus (Casper FFG).

This leads to the fifth issue — Casper FFG can only achieve a fault tolerance bound of ⅓. While this may not seem like an issue per se, since Casper FFG reaches the highest fault tolerance possible in a partially synchronous model, it is very much a limit, and as we will see, DAGKNIGHT can do better.

Lastly, Gasper introduces high levels of complexity, which harms the base layer's security, immutability, and the ability for ETH to act as money. This endless pursuit of complexity has created a Frankenstein.


Recap: Bitcoin, Classical BFT, and Ethereum

Bitcoin can achieve a high fault tolerance bound of ½ but does so at the expense of modeling the real world (i.e., internet latency, such as network outages and attacks of unexpected duration) and of achieving scalability. Because of this, Bitcoin cannot become a medium of exchange, as lowering its upper bound on delay for faster transaction rates would harm safety and security. Classical BFT models, on the other hand, favor safety at the expense of dynamic availability and self-healing during network partitions or attacks. Moreover, because classical BFT models rely on partially synchronous communication models, they can only achieve a fault tolerance bound of ⅓. Ethereum attempted to merge the better qualities of longest-chain-based models and classical BFT models, but it did so by creating an unnecessary Frankenstein. Let's see how DAGKNIGHT, the future consensus model of Kaspa, resolves these issues.

DAGKNIGHT: The Master of Time

DAGKNIGHT achieves a fault tolerance bound of ½ as a partially synchronous model. Remember, Dwork, Lynch, and Stockmeyer formally proved that a partially synchronous protocol cannot achieve a fault tolerance bound higher than ⅓. Pass and Shi further formalized this in their work Hybrid Consensus.

Let's explain how DAGKNIGHT achieves this. First, a preface: an understanding of how GHOSTDAG works, and how DAGKNIGHT builds off of it, is required — this information is available in the subsections "SPECTRE, PHANTOM and GHOSTDAG" and "GHOSTDAG Issues Resolved by DAGKNIGHT" in my article How to Solve the Blockchain Trilemma: A BlockDAG and Nakamoto Consensus Friendship.

Achieving the Impossible: DAGKNIGHT and Partial Synchronicity

DAGKNIGHT works around Dwork's impossibility result by utilizing proof-of-work's guardrails to their maximum potential. Proof-of-work decouples the transaction-ordering protocol from the finality protocol. The ordering rule (e.g., Bitcoin's longest chain rule and DAGKNIGHT's ordering rule) is the canonical algorithm all participants run in the same way, including adversarial nodes. Transaction finality, on the other hand, is a non-binding procedure each user configures or calculates according to their local beliefs about the system.

In Bitcoin, this default confirmation time is based on an assumption, α, that an attacker has 10% of the network's total hash rate and a risk, ε, of <0.1% that a node is willing to absorb,²² but the nodes are restricted by synchronicity. Nodes also bear the consequences of their local beliefs about the system. For example, suppose a Bitcoin node believes a malicious miner possesses less than ⅓ of the hash rate. In that case, the node will confirm a transaction faster than a node that believes the bound to be 49%, allowing a 34% attacker to harm the former but not the latter.
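
The whitepaper's double-spend calculation makes this concrete (the article cites Rosenfeld's more detailed analysis; this simpler Nakamoto version is enough for illustration):

```python
# Nakamoto's double-spend calculation from the Bitcoin whitepaper (a simpler
# stand-in for Rosenfeld's refined analysis): probability that an attacker
# with hash-rate share q ever catches up from z confirmations behind.
from math import exp, factorial

def attacker_success_prob(q: float, z: int) -> float:
    p = 1.0 - q
    lam = z * (q / p)
    catch_up = 0.0
    for k in range(z + 1):
        poisson = exp(-lam) * lam ** k / factorial(k)
        catch_up += poisson * (1 - (q / p) ** (z - k))
    return 1 - catch_up

# With alpha = 10% attacker hash rate, find the confirmations needed for risk < 0.1%.
q, eps = 0.10, 0.001
z = 0
while attacker_success_prob(q, z) >= eps:
    z += 1
print(z)   # 5 under these assumptions (cf. the whitepaper's table)
```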

Another example of proof-of-work decoupling ordering from finality is SPECTRE,²³ a partially synchronous proof-of-work BlockDAG ordering algorithm. Within SPECTRE, nodes must also specify their belief about the latency bound via a parameter, D, configured individually and unbeknownst to the rest of the system. Each node is responsible for its choice of D, as an overly inaccurate choice can harm it: if a node overestimates D, it is slow to recognize transactions as irreversible, and if it underestimates D, it accepts transactions prematurely. SPECTRE, however, could not achieve strong liveness.

DAGKNIGHT utilizes this proof-of-work property by allowing its transaction-ordering rule to be agnostic to latency while its transaction finality depends on a latency bound configured locally by each user. It is, in this sense, partially synchronous. Responsiveness is a key characteristic of classical partially synchronous communication models, and it comes in two forms, depending on how a protocol's confirmation times respond to latency. A strongly responsive protocol confirms transactions according to the observable latency of the network. A weakly responsive protocol performs tightly with respect to the current maximum latency an adversary could cause. DAGKNIGHT works according to the latter, and this is how it bypasses Dwork's impossibility result. As the DAGKNIGHT paper states,

“Indeed, in KNIGHT, it is not enough for the client to set a local bound on the observable latency, rather the bound should reflect the maximum latency that may be caused by the attacker. That is, even if messages currently propagate fully within 1 or 2 seconds, if an attacker may disrupt the network and cause messages to take up to 30 seconds to go through, 𝐷 should be set by the client to 30 seconds.”

While this may seem like a limitation, this weak form of responsiveness is what makes DAGKNIGHT the first consensus model to achieve partial synchronicity with a fault tolerance bound of ½.
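
A conceptual sketch of the distinction (my own framing, not DAGKNIGHT's actual confirmation rule); the numbers mirror the paper's example of 2-second propagation versus a 30-second adversarial bound.

```python
# Conceptual sketch (mine, not DAGKNIGHT's actual rule) of strong vs. weak
# responsiveness in how long a client waits before treating a transaction as
# final. The ordering protocol is shared by everyone; only this local waiting
# rule depends on the client's latency assumption.

def strong_responsive_wait(observed_latency_s: float, rounds: int) -> float:
    # Confirmation tracks the latency actually observed on the network.
    return rounds * observed_latency_s

def weak_responsive_wait(attacker_latency_bound_s: float, rounds: int) -> float:
    # Confirmation tracks the worst latency an attacker could induce (D),
    # even if messages currently propagate much faster.
    return rounds * attacker_latency_bound_s

observed, attacker_bound, rounds = 2.0, 30.0, 10
print(strong_responsive_wait(observed, rounds))       # 20.0 seconds
print(weak_responsive_wait(attacker_bound, rounds))   # 300.0 seconds
```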

Breaking the Impossibility Result: What does this Entail?

As a generalization of the longest chain rule (Nakamoto Consensus), DAGKNIGHT achieves dynamic availability, probabilistic finality, and self-healing. However, DAGKNIGHT achieves finality faster than Bitcoin and therefore achieves safety faster as well — that is, the probability of a reorganization converges to 0 faster. With faster confirmations and a far higher block rate (i.e., up to 100 blocks per second), miners receive their subsidy and transaction-fee block rewards more frequently. In turn, this decentralizes the system, as solo mining becomes more viable.
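
Some back-of-the-envelope arithmetic (my own illustrative numbers) shows why a higher block rate helps solo miners:

```python
# Back-of-the-envelope arithmetic (my own numbers) for why a higher block rate
# helps solo miners: the expected time between block rewards for a miner with
# a fixed hash-rate share shrinks in proportion to the block rate.

def expected_seconds_between_rewards(blocks_per_second: float, hashrate_share: float) -> float:
    return 1.0 / (blocks_per_second * hashrate_share)

share = 0.0001  # a small solo miner with 0.01% of the network hash rate
print(expected_seconds_between_rewards(1 / 600, share) / 86400)  # Bitcoin-like rate: ~69 days
print(expected_seconds_between_rewards(100, share) / 60)         # 100 blocks/s: ~1.7 minutes
```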

Unlike classical BFT models, DAGKNIGHT will not come to a halt during partitions while achieving the highest fault-tolerant bound possible — ½, compared to BFT models that can only achieve ⅓. 

Moreover, unlike Bitcoin, DAGKNIGHT can model the real world of asynchronicity, thwarting long timeouts and performance degradation while ensuring safety at low delay bounds. Bitcoin, recall, cannot become a medium of exchange, as lowering its upper bound on delay to increase transaction speed would harm safety and security. DAGKNIGHT, however, can become the first stateless, future world reserve currency.

Additionally, DAGKNIGHT achieves this framework as a non-dual consensus model. Instead of adding layers (such as Ethereum's Casper FFG), DAGKNIGHT solely improves its base-layer fork choice rule and thereby achieves simplicity — a significant move away from a complex, Ethereum-like Frankenstein. Simplicity allows for faster and more secure development, and DAGKNIGHT achieves it with (a) strong economic value (an asset backed by the unforgeable costliness of proof-of-work) and (b) strong objectivity — i.e., eliminating the trust seen in Ethereum's weak subjectivity model.

Lastly, DAGKNIGHT further decentralizes proof-of-work, as α, ε, and Δ are decided and determined locally by the nodes rather than by some overarching central rule (such as Bitcoin's upper bound delay), indirectly solving what Friedrich Hayek called the local knowledge problem,²⁴ which concerns the knowledge of the particular circumstances of time and place:

“Every individual has some advantage over all others because he possesses unique information of which beneficial use might be made, but of which use can be made only if the decisions depending on it are left to him or are made with his active cooperation.”

Because individuals (or nodes in our case) possess unique information according to their particular circumstances of time and place, planning (particularly economic planning for Hayek, but this can also relate to α, ε, and Δ) is best performed in a similarly distributed fashion by individual actors. Planning according to a central rule lacks this information because it cannot accurately account for the universe of local knowledge.

Conclusion. Kaspa's future consensus model, DAGKNIGHT, solves an impossibility result that Bitcoin, Ethereum, and classical BFT models cannot. As a partially synchronous model with the highest fault tolerance bound possible, DAGKNIGHT provides stronger security, scalability, and decentralization than any other protocol in distributed systems theory. No other protocol has formally achieved such a feat, and perhaps no other protocol will. Kaspa, therefore, is the Master of Time.


Grand Master Pierre d’Aubusson with senior knights, wearing the “Rhodian cross” on their habits. Dedicatory miniature in Gestorum Rhodie obsidionis commentarii (account of the Siege of Rhodes of 1480), BNF Lat 6067 fol. 3v, dated 1483/4.

References and Footnotes

[1] Lamport, Leslie. Distribution, 28 May 1987: https://lamport.azurewebsites.net/pubs/distributed-system.txt

[2] Bashir, Imran. Blockchain Consensus an Introduction to Classical, Blockchain, and Quantum Consensus Protocols. Apress, 2022.

[3] Lamport, Leslie. “Proving the Correctness of Multiprocess Programs.” IEEE Transactions on Software Engineering, Vol. SE-3, №2 , Mar. 1977, https://lamport.azurewebsites.net/pubs/proving.pdf.

[4] See Miller for a rigorous proof: Miller, Andrew, and Joseph J. LaViola. Anonymous Byzantine Consensus from Moderately-Hard Puzzles: A Model for Bitcoin, socrates1024.s3.amazonaws.com/consensus.pdf.

[5] Fischer, Michael J., Nancy A. Lynch, and Michael S. Paterson. Impossibility of Distributed Consensus with One Faulty Process, Journal of the Association for Computing Machinery, Vol. 32, №2, April 1985, groups.csail.mit.edu/tds/papers/Lynch/jacm85.pdf.

[6] Brewer, Eric A., and Armando Fox. Harvest, Yield, and Scalable Tolerant Systems, s3.amazonaws.com/systemsandpapers/papers/FOX_Brewer_99-Harvest_Yield_and_Scalable_Tolerant_Systems.pdf.

[7] Dwork, Cynthia, Nancy Lynch, and Larry Stockmeyer. Consensus in the Presence of Partial Synchrony, Journal of the Association for Computing Machinery, Vol. 35, №2, April 1988, groups.csail.mit.edu/tds/papers/Lynch/jacm88.pdf.

[8] Pass, Rafael, and Elaine Shi. Hybrid Consensus: Efficient Consensus in the Permissionless Model, eprint.iacr.org/2016/917.pdf.

[9] See Pease et al. for a rigorous proof: Pease, M., R. Shostak, and L. Lamport. Reaching Agreement in the Presence of Faults, Journal of the Association for Computing Machinery, Vol. 27, №2, April 1980, lamport.azurewebsites.net/pubs/reaching.pdf.

[10] Well, it actually works on a heaviest-chain rule, but I am sweeping this detail under the rug: for simplicity's sake, it's assumed that the longest chain by height correlates with the chain with the greatest aggregate work/hashes computed.

[11] Kwon, Jae. Tendermint: Consensus without Mining, 2014, tendermint.com/static/docs/tendermint.pdf.

[12] Castro, Miguel, and Barbara Liskov. Practical Byzantine Fault Tolerance, Proceedings of the Third Symposium on Operating Systems Design and Implementation, New Orleans, USA, February 1999, pmg.csail.mit.edu/papers/osdi99.pdf.

[13] Yin, Maofan, et al. HotStuff: BFT Consensus in the Lens of Blockchain, 23 July 2019, arxiv.org/pdf/1803.05069.pdf.

[14] Neu, Joachim, et al. "Resolving the Availability-Finality Dilemma." Decentralized Thoughts, 1 Nov. 2020, decentralizedthoughts.github.io/2020-11-01-ebb-and-flow-protocols-a-resolution-of-the-availability-finality-dilemma/.

[15] It's misguided to think of proof-of-work as wasting electricity, power, and hardware. Many people believe that proof-of-work should be "useful" — this is a misnomer. Proof-of-work is already useful, as it provides an additional layer of security to a distributed system. It not only provides Sybil resistance but also improves consensus security; proof-of-work should be thought of as a weapons defense system. This idea is developed by Jason Lowery in his book Softwar: A Novel Theory on Power Projection and the National Strategic Significance of Bitcoin. Lowery argues that the ability to convert energy and hardware (e.g., ASICs or GPUs) into computational power, and subsequently into the potential to add new blocks to the blockchain, is a form of power projection, similar to a national military. Furthermore, proof-of-work provides unforgeable costliness in the creation of bitcoins — in other words, it creates stronger, harder money.

Moreover, the idea that proof-of-work cannot scale isn't correct; as we see later in the article, DAGKNIGHT, as a proof-of-work model, can achieve scalability with up to 100 blocks per second.

Optical ASICs, which use very little energy to run, are a possible future solution to mercenary mining once Kaspa moves away from a subsidy reward model to a fully transaction-fee-based model. Optical technology has been widely adopted for AI servers and is compatible with the kHeavyHash hashing algorithm utilized by GHOSTDAG and DAGKNIGHT. It's important to note that a CapEx paradigm does not mean it's better to have higher CapEx costs; instead, miners can focus less on operational costs.

For example, the cost of entering oPoW hardware manufacturing could be much lower than for ASICs, since old silicon photonics, such as CMOS nodes (~90 or 220 nm vs. ~7 nm for transistors), are sufficient. For the original oPoW paper, see Optical Proof of Work by Michael Dubrovsky, Marshall Ball, and Bogdan Penkovsky. For Stanford lab papers on oPoW, see LightHash: Experimental Evaluation of a Photonic Cryptocurrency by Sunil Pai et al., and Experimental Evaluation of Digitally-Verifiable Photonic Computing for Blockchain and Cryptocurrency by Sunil Pai et al.


From a slideshow on the Physical Limits of Store of Value Cryptocurrencies: https://www.powx.org/opow

[16] Zamfir, Vlad. “The History of Casper — Chapter 1.” Ethereum Foundation Blog, 6 Dec. 2016, blog.ethereum.org/2016/12/06/history-casper-chapter-1.

[17] Zamfir, Vlad. "The History of Casper — Chapter 2." Ethereum Foundation Blog, 7 Dec. 2016, blog.ethereum.org/2016/12/07/history-casper-chapter-2.

[18] Gasper is very complex. Explaining its design in detail (i.e., the intricacies of checkpoints and checkpoint trees) isn't necessary for the scope of this article.

[19] Buterin, Vitalik, et al. Combining GHOST and Casper, 11 May 2020, arxiv.org/pdf/2003.03052.pdf.

[20] Sompolinsky, Yonatan, and Aviv Zohar. Secure High-Rate Transaction Processing in Bitcoin, eprint.iacr.org/2013/881.pdf.

[21] Zamfir, Vlad. Casper the Friendly Ghost: A "Correct-by-Construction" Blockchain Consensus Protocol, github.com/ethereum/research/blob/master/papers/CasperTFG/CasperTFG.pdf.

[22] Rosenfeld, Meni. Analysis of Hashrate-Based Double-Spending, 11 Dec. 2012, bitcoil.co.il/Doublespend.pdf.

[23] Sompolinsky, Yonatan, et al. SPECTRE: Serialization of Proof-of-Work Events: Confirming Transactions via Recursive Elections, 2016, eprint.iacr.org/2016/1159.pdf.

[24] Hayek, Friedrich A. "The Use of Knowledge in Society." American Economic Review, Vol. XXXV, №4, 1945, pp. 519–530, https://www.kysq.org/docs/Hayek_45.pdf.
