Understanding Blockchain Projects’ Investment Outlook from a Technical Point-of-View
Abstract– The emergence of blockchain technology, particularly in its application to building decentralized platforms, raised the question of how to design a better consensus protocol. Existing protocols have limitations that have prevented scalability and widespread adoption. In this paper, we dissect the key components of existing consensus protocols and analyze possible methods to increase their performance based on their applications. This review outlines the aspects necessary for consensus protocols to thrive as value-transfer mediums in the future.
Index Terms– blockchain, consensus, cryptocurrency, review, applications
Introduction
Blockchain is a distributed ledger technology that operates its network through multiple nodes, providing a transparent and immutable record of value transactions. At its core, blockchain technology requires multiple nodes in a distributed system to reach an agreement prior to executing any action.
The underlying mechanism responsible is called the consensus protocol. A consensus protocol provides the set of rules that govern how the transfer of value occurs. The main difference between a blockchain consensus protocol and a traditional one is that the former requires trustless consensus among participants. This notion brings forth the importance of robust incentivization in the protocol to prevent malicious behaviour within the network.
Consensus Protocol
The term consensus itself refers to a general agreement achieved by multiple actors. In the traditional computer-science context, consensus is fundamental to ensuring the functionality of distributed computing/multi-agent systems. A real-world example of a system that requires consensus is Google’s PageRank. Traditional consensus within such systems is governed by a central authority (Google governs PageRank). This aspect is the key differentiator of a blockchain consensus: the concept of distributed ledgers requires trustless consensus without any central authority governing the network. In this scenario, game-theoretic properties, specifically the Nash Equilibrium, become a pivotal part of designing a blockchain consensus.
A blockchain consensus needs to ensure absolute protection against the double-spend problem posed by malicious actors. To achieve this, a robust consensus protocol is necessary to incentivize honest behaviour. At its core, a blockchain is only as good as its consensus protocol.
The purpose of developing a better consensus protocol is to improve security, scalability (transactions per second), and decentralization. Nonetheless, most existing consensus protocols face a trilemma of sacrificing one aspect for the other two in order to achieve improvements (Buterin V. Sharding FAQs, 2018).
In this study, Directed Acyclic Graph (DAG) technology will be referred to under the umbrella of “blockchain” for the sake of the discussion. Albeit the mechanics behind the two are different, the goals and challenges of projects that adopt either are similar.
Security
The fundamental technology of an immutable distributed ledger makes a blockchain “secure” by its own definition. In essence, what makes a blockchain secure are its unique cryptographic fingerprint and its consensus protocol. For instance, Bitcoin’s Proof-of-Work (PoW) utilizes previous block headers and Merkle roots to strengthen its security. Thus, improving a blockchain’s security means improving its consensus protocol. Each consensus protocol has its own weaknesses. Nonetheless, the most common type of attack on a blockchain is the 51% attack, a situation in which malicious actors control the majority of the network’s computational power. It is paramount that a blockchain can withstand this attack, as unfavorable activities can be executed on the network when it happens. This is a critical issue, as the number of 51% attacks happening on blockchains is increasing, and smaller blockchains are more vulnerable (Crypto51, 2018).
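As a rough illustration of how PoW ties each block to the previous block header, below is a toy Python sketch. It is our own simplified example, not Bitcoin’s actual header format (which is a fixed 80-byte binary layout with a compact difficulty encoding):

    import hashlib

    def mine(prev_hash: str, merkle_root: str, difficulty_bits: int = 16):
        """Toy Proof-of-Work: find a nonce whose double-SHA256 header hash
        falls below a target. Chaining prev_hash into the header is what makes
        history immutable: altering an old block invalidates all later work."""
        target = 2 ** (256 - difficulty_bits)
        nonce = 0
        while True:
            header = f"{prev_hash}{merkle_root}{nonce}".encode()
            digest = hashlib.sha256(hashlib.sha256(header).digest()).hexdigest()
            if int(digest, 16) < target:
                return nonce, digest
            nonce += 1

    nonce, block_hash = mine("00" * 32, "ab" * 32)
    print(f"nonce={nonce}, hash={block_hash}")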
The design of a consensus protocol contributes significantly to its susceptibility to a 51% attack. For instance, Bitcoin’s PoW utilizes enormous computing power, which results in a large hashrate, making it extremely costly and difficult to launch a 51% attack on the network. Depicted below is the approximate cost of a 51% attack on the Bitcoin network as of September 9th, 2018.
However, malicious actors can still launch a temporary double-spend attack without controlling the majority of the network. The analysis below shows the probability of successfully launching a double-spend attack, assuming the malicious actor controls 10% of the network’s computational power. As a reference, each of the four largest Bitcoin mining pools controls more than 10% of the network’s computational power.
As can be seen, the success probability decreases exponentially with the number of confirmations. The number of confirmations generally picked by the Bitcoin community is 6, as it is considered to render the probability low enough. It is important to clarify that other consensus protocols require different analytical approaches to security, depending on how the protocol works: with PoW, hashrate represents control over the network, whereas Proof-of-Stake (PoS) utilizes wealth (stake) to represent control.
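For readers who wish to reproduce this kind of analysis, the sketch below implements the Poisson-based attacker model from the original Bitcoin whitepaper, using the 10% attacker share assumed above (the function and variable names are ours, for illustration only):

    import math

    def attack_success_probability(q: float, z: int) -> float:
        """Nakamoto's model: probability that an attacker with fraction q of
        the hash power eventually overtakes a chain that is z blocks ahead."""
        p = 1.0 - q
        lam = z * q / p  # expected attacker progress while z honest blocks are mined
        prob = 1.0
        for k in range(z + 1):
            poisson = math.exp(-lam) * lam ** k / math.factorial(k)
            prob -= poisson * (1.0 - (q / p) ** (z - k))
        return prob

    # Attacker with 10% of the network, as assumed in the text above
    for z in (0, 1, 2, 4, 6):
        print(f"{z} confirmations -> success probability {attack_success_probability(0.10, z):.6f}")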
Designing a better consensus protocol means improving its security by instilling mechanisms that can halt various kinds of attacks, for example by waiting for a higher number of confirmations or by designing a more robust Nash Equilibrium for the protocol as a whole.
Scalability
The most important aspect in promoting blockchain adoption is scalability. The term often refers to a blockchain’s ability to process transactions per second (TPS). As of now, most blockchains are considered unusable for large workloads due to their inability to process a large amount of throughput. VISA claims that it can handle 24,000 TPS, with an average load of around 1,700 TPS. Bitcoin and Ethereum respectively have a TPS of approximately 7 and 13. These figures show that blockchains must increase their TPS before they can be effectively utilized for large-scale operations such as the Internet-of-Things.
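To see where such TPS figures come from, here is a back-of-the-envelope sketch. The parameter values are rough, Bitcoin-like assumptions (1 MB blocks, roughly 250-byte transactions, 10-minute block intervals), not exact protocol constants:

    def tps(block_size_bytes: float, avg_tx_bytes: float, block_interval_s: float) -> float:
        """Throughput implied by block size and block creation rate."""
        txs_per_block = block_size_bytes / avg_tx_bytes
        return txs_per_block / block_interval_s

    # Rough Bitcoin-like parameters: 1 MB blocks, ~250-byte txs, 600 s intervals
    print(f"{tps(1_000_000, 250, 600):.1f} TPS")  # prints 6.7, close to the cited ~7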
A blockchain’s TPS is determined by its block size and block creation rate. However, solving the scalability/TPS problem is not as simple as increasing the block size or block creation rate, because this solution sacrifices security (as per the trilemma mentioned above). A larger block size or faster creation rate slows communication between nodes, as more bandwidth is required, making it slower to propagate new blocks across the network; thus, the longest chain grows at a slower pace. Presented below is a mathematical analysis of this notion.
Let us define the security threshold as the fraction (in percent) of the network’s computational power that malicious actors must control to launch a successful double-spend attack. Let h and m denote the fractions of computational power held by honest and malicious nodes, and let H and M denote the growth rates of the honest and malicious chains, respectively. Three cases follow:
· H > M: the honest chain grows at the fastest rate; the network is safe from double-spends
· M > H (or m > h): a double-spend can be launched successfully
· m + h = 1: all nodes are either honest or malicious
Relationships between variables: under an ideal synchronous network, each chain’s growth rate is proportional to the computational power behind it (H ∝ h and M ∝ m), so the honest chain outgrows the malicious one exactly when h > m.
Satoshi Nakamoto’s original whitepaper assumes a “synchronous” network, meaning that network participants instantaneously learn about new blocks and the longest chain. In reality, this assumption does not always hold. For instance, when the internet is slow, delays in communication might cause network participants to mistakenly add a new block to a chain that is no longer the longest. This situation is described as a “fork”, and it wastes computational power on unused blocks.
When a fork happens, less of the honest nodes’ computational power is effectively utilized, essentially reducing the effective value of h.
Thus, instead of m > h, malicious actors only need to achieve m > (1 - w)h, where w is the fraction of honest blocks wasted on forks. For instance, if 20% of the blocks are wasted, malicious actors only need m > (1 - 0.2)h = 0.8h to successfully launch a double-spend attack. Moreover, malicious actors generally will not waste their computational power on the wrong block, as they act in coordination with one another. This rationalization is one of the community’s main reasons for concluding that PoW is NOT scalable.
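A quick sketch of this relaxed condition: combining m > (1 - w)h with m + h = 1 gives the minimum attacker share m > (1 - w)/(2 - w). The code below, a simplified model of the argument above rather than an exact protocol analysis, tabulates this threshold:

    def attacker_threshold(waste_fraction: float) -> float:
        """Minimum attacker share m needed when a fraction w of honest
        blocks is wasted on forks: solve m > (1 - w) * (1 - m) for m."""
        w = waste_fraction
        return (1 - w) / (2 - w)

    for w in (0.0, 0.1, 0.2, 0.3):
        print(f"fork waste {w:.0%} -> attacker needs > {attacker_threshold(w):.1%} of hash power")

With no forks the familiar 50% threshold appears; at 20% waste the attacker only needs about 44.4% of the hash power.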
However, other existing consensus protocols often sacrifice decentralization in order to achieve scalability. Examples include Delegated Proof-of-Stake (dPoS) and Proof-of-Authority (PoA). There are also various discussions regarding off-chain scalability solutions such as Bitcoin’s Lightning Network and Ethereum’s Plasma, which have also sparked decentralization-versus-centralization arguments due to their mechanisms. Nonetheless, a new consensus protocol (or a combination of multiple protocols) is necessary for scalability purposes.
Decentralization
As can be seen from the previous analyses, decentralization is often sacrificed in order to improve scalability or security. However, it is the trickiest aspect to discuss, as decentralization is somewhat “relative”: its definition depends on the vision and point-of-view of users and developers. Arguments can be made, but at the end of the day, there is no black-and-white rule to determine whether a consensus protocol is “decentralized” enough.
There is an ongoing debate regarding how decentralized a blockchain needs to be. Perhaps developers of a consensus protocol need to look at the purpose of the blockchain in order to assess the degree of decentralization needed. For instance, projects such as Ripple and Hashgraph achieve significantly higher TPS than other projects due to their rather centralized nature, allowing the founders of the project significant control over the supply and demand of the currency through methods such as the assignment of Unique Node Lists (UNLs), which essentially brings control back to entities that are close to the founding company.
An interesting argument often brought up by the community in discussions of decentralization is “blockchain without cryptocurrency”. Proponents of the idea argue that a cryptographically generated distributed ledger is a breakthrough in the field of databases regardless of whether it utilizes a cryptocurrency or not. The opposing view argues that taking the cryptocurrency aspect out of a blockchain essentially defeats the original purpose of blockchain technology, which is to eliminate the need for a third-party authority, and that a “blockchain without cryptocurrency” is simply a secure, refined database system.
To conclude, the study of mechanism design combined with non-cooperative game theory is important to ensure that a consensus protocol incentivizes participants to act honestly and prevents concentrating too much power in a small subset of individuals or groups. Unless a new consensus protocol breakthrough happens, the ideal way to improve a consensus protocol is to balance security, scalability, and decentralization according to its purpose.
Performance Assessment
The blockchain industry currently has various consensus protocols spread across thousands of projects. Presented below are performance assessments of the more popular consensus protocols.
First, the study independently reviews the rationale behind each selected consensus protocol, including its current advantages, disadvantages, and use-cases, based on the previously explained trilemma (Appendix A). The study then takes a macro-level view of the foundational building blocks of each consensus by creating a Venn diagram. Lastly, a chart correlating the trade-offs between security, scalability, and decentralization is presented to give a better understanding of the challenges consensus developers currently face in improving overall performance (Appendix B).
The results show that decentralization plays a very important role in determining the depth of security and scalability of a blockchain. Centralized projects such as XRP and Hashgraph yielded very strong scores of (4,4) and (5,5), whereas decentralized projects yielded lower scores. The question becomes:
“How decentralized does a project need to be to achieve its goals?”
The following section of the paper provides elaborate case studies on three different projects and how their consensus protocols affect their use-cases.