The Shoal framework reduces the latency of Bullshark consensus on Aptos by 40%-80%.

Shoal Framework: How to Reduce Latency of Bullshark on Aptos

Overview

Aptos Labs has solved two important open problems in DAG BFT, significantly reducing latency and, for the first time, eliminating the need for timeouts in a deterministic practical protocol. Overall, Bullshark's latency is improved by 40% in the failure-free case and by 80% in the failure case.

Shoal is a framework that enhances Narwhal-based consensus protocols (such as DAG-Rider, Tusk, and Bullshark) through pipelining and leader reputation. Pipelining reduces DAG ordering latency by introducing an anchor in every round, while leader reputation further improves latency by ensuring that anchors are associated with the fastest validators. Moreover, leader reputation allows Shoal to leverage asynchronous DAG construction to eliminate timeouts in all scenarios. This gives Shoal a property we call universal responsiveness, which subsumes the optimistic responsiveness typically required.

The technique is simple: multiple instances of the underlying protocol run one after another in sequence. So, when instantiated with Bullshark, we get a group of "sharks" running a relay race.

Motivation

In the pursuit of high performance in blockchain networks, there has long been a focus on reducing communication complexity. However, this approach has not led to a significant increase in throughput. For example, the Hotstuff implementation in early versions of Diem achieved only 3,500 TPS, far below the target of 100k+ TPS.

The recent breakthrough stems from the realization that data propagation is the main bottleneck of leader-based protocols, and that it can benefit from parallelization. The Narwhal system separates data propagation from the core consensus logic, proposing an architecture in which all validators propagate data simultaneously while the consensus component orders only a small amount of metadata. The Narwhal paper reports a throughput of 160,000 TPS.

We previously introduced Quorum Store, our implementation of Narwhal that separates data propagation from consensus, and described how to use it to scale the current consensus protocol, Jolteon. Jolteon is a leader-based protocol that combines Tendermint's linear fast path with PBFT-style view changes, reducing Hotstuff's latency by 33%. However, leader-based consensus protocols clearly cannot fully exploit Narwhal's throughput potential: despite separating data propagation from consensus, the leader in Hotstuff/Jolteon remains a bottleneck as throughput increases.

Therefore, it was decided to deploy Bullshark, a consensus protocol with zero additional communication overhead, on top of the Narwhal DAG. Unfortunately, compared to Jolteon, the DAG structure that enables Bullshark's high throughput incurs a 50% latency cost.

This article introduces how Shoal significantly reduces Bullshark latency.

DAG-BFT Background

Each vertex in the Narwhal DAG is associated with a round. To enter round r, a validator must first acquire n-f vertices that belong to round r-1. Each validator can broadcast one vertex per round, and each vertex must reference at least n-f vertices from the previous round. Due to the asynchronous nature of the network, different validators may observe different local views of the DAG at any given time.
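The round-advancement rule can be sketched as follows (a minimal illustrative model, not the actual Aptos implementation; `N = 4, F = 1` assumes the usual n = 3f + 1 setting):

```python
from dataclasses import dataclass

N, F = 4, 1    # n validators, up to f faulty (assumption: n = 3f + 1)

@dataclass(frozen=True)
class Vertex:
    author: int                        # validator that broadcast this vertex
    round: int
    parents: frozenset = frozenset()   # >= n - f vertices of round - 1

class LocalDag:
    """One validator's local view of the Narwhal DAG."""
    def __init__(self):
        self.by_round = {}             # round -> set of vertices seen so far

    def add(self, v: Vertex):
        self.by_round.setdefault(v.round, set()).add(v)

    def can_enter_round(self, r: int) -> bool:
        # A validator may advance to round r only after acquiring
        # n - f vertices belonging to round r - 1.
        return r == 1 or len(self.by_round.get(r - 1, ())) >= N - F

dag = LocalDag()
for author in range(N - F):            # n - f = 3 round-1 vertices arrive
    dag.add(Vertex(author, 1))
assert dag.can_enter_round(2)          # enough round-1 vertices
assert not dag.can_enter_round(3)      # no round-2 vertices yet
```

Because the network is asynchronous, two validators' `LocalDag` instances may contain different vertex sets at any moment, which is exactly the "different local views" point above.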

A key property of the DAG is non-equivocation: if two validators have the same vertex v in their local views of the DAG, then they have exactly the same causal history of v.

Total Order

Consensus on the total order of all vertices in the DAG can be achieved without additional communication overhead. To this end, the validators in DAG-Rider, Tusk, and Bullshark interpret the structure of the DAG as a consensus protocol, where vertices represent proposals and edges represent votes.

Although the specific commit logic over the DAG structure differs, all existing Narwhal-based consensus protocols share the following structure:

  1. Anchors: every few rounds there is a predetermined leader, and the leader's vertex is called the anchor;

  2. Ordering anchors: validators independently but deterministically decide which anchors to order and which to skip;

  3. Ordering causal histories: validators process their ordered anchor list one by one and, for each anchor, order all previously unordered vertices in its causal history.
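The three steps above can be sketched as follows (the `is_ordered` decision rule is a placeholder for the protocol-specific commit rule, e.g. Bullshark's voting rule in step 2):

```python
from collections import namedtuple

# parents is a tuple of Vertex, so vertices stay hashable
Vertex = namedtuple("Vertex", "author round parents")

def causal_history(v):
    """All vertices reachable from v (v included), oldest round first."""
    out, stack, seen = [], [v], set()
    while stack:
        u = stack.pop()
        if u in seen:
            continue
        seen.add(u)
        out.append(u)
        stack.extend(u.parents)
    return sorted(out, key=lambda u: u.round)

def order_dag(dag_rounds, leader_of, is_ordered):
    """dag_rounds: round -> {validator: vertex}."""
    # Step 1: the predetermined leader's vertex in a round is the anchor.
    # Step 2: deterministically order some anchors, skip the rest.
    anchors = [dag_rounds[r][leader_of(r)]
               for r in sorted(dag_rounds)
               if leader_of(r) in dag_rounds[r]
               and is_ordered(dag_rounds[r][leader_of(r)])]
    # Step 3: order each anchor's still-unordered causal history in turn.
    total, seen = [], set()
    for a in anchors:
        for v in causal_history(a):
            if v not in seen:
                seen.add(v)
                total.append(v)
    return total

a, b, c = (Vertex(i, 1, ()) for i in range(3))
anchor = Vertex(0, 2, (a, b, c))
dag_rounds = {1: {0: a, 1: b, 2: c}, 2: {0: anchor}}
result = order_dag(dag_rounds, leader_of=lambda r: 0,
                   is_ordered=lambda v: v.round == 2)
assert result[-1] == anchor and len(result) == 4
```

The anchor comes last because step 3 first flushes its causal history, which contains all three round-1 vertices.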

The key to ensuring safety is that in step (2) all honest validators create ordered anchor lists that share a common prefix. In Shoal, we make the following observation about all the protocols above:

All validators agree on the first ordered anchor point.

Bullshark latency

The latency of Bullshark depends on the number of rounds between ordered anchors in the DAG. Although the partially synchronous version of Bullshark is more practical and has better latency than the asynchronous version, it is far from optimal.

Problem 1: average block latency. In Bullshark, every even round has an anchor, and every odd-round vertex is interpreted as a vote. In the common case, two rounds of the DAG are needed to order an anchor; however, the vertices in an anchor's causal history need more rounds while waiting for an anchor to be ordered. In the common case, odd-round vertices need three rounds, and non-anchor even-round vertices need four.
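These common-case round counts can be captured in a small sketch (counting a vertex's own round; the convention of anchors in even rounds is from the text):

```python
def rounds_until_ordered(r, is_anchor):
    """Common-case number of DAG rounds, including the vertex's own round,
    until a vertex broadcast in round r is ordered by vanilla Bullshark.
    Anchors sit in even rounds; odd-round vertices are votes."""
    if r % 2 == 0 and is_anchor:
        next_anchor_round = r        # the vertex is itself an anchor
    elif r % 2 == 1:
        next_anchor_round = r + 1    # a vote waits one round for an anchor
    else:
        next_anchor_round = r + 2    # a non-anchor even vertex waits two
    # an anchor is ordered once its votes (the next round) are in the DAG,
    # so the vertex spans rounds r .. next_anchor_round + 1 inclusive
    return next_anchor_round + 1 - r + 1

assert rounds_until_ordered(2, is_anchor=True) == 2   # anchor: two rounds
assert rounds_until_ordered(1, is_anchor=False) == 3  # odd vertex: three
assert rounds_until_ordered(2, is_anchor=False) == 4  # even non-anchor: four
```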

Problem 2: failure-case latency. The analysis above applies to the failure-free case. If a round's leader fails to broadcast its anchor quickly enough, the anchor cannot be ordered (and is skipped), so all unordered vertices from previous rounds must wait for the next anchor to be ordered. This significantly degrades performance in geo-replicated networks, especially since Bullshark uses timeouts to wait for leaders.

Shoal Framework

Shoal enhances Bullshark (or any other Narwhal-based BFT protocol) through pipelining, allowing an anchor in every round and reducing the latency of all non-anchor vertices in the DAG to three rounds. Shoal also introduces a zero-cost leader reputation mechanism in the DAG, which biases selection toward fast leaders.

Challenge

In the context of DAG protocols, pipelining and leader reputation are considered challenging problems for the following reasons:

  1. Previous pipelining attempts tried to modify the core Bullshark logic, but this appears to be inherently impossible.

  2. Leader reputation, introduced in DiemBFT and formalized in Carousel, dynamically selects future leaders based on validators' past performance (the same idea as anchors in Bullshark). While disagreement on leader identity does not violate safety in those protocols, in Bullshark it can lead to completely different orderings. This gets at the core issue: dynamically and deterministically selecting round anchors is necessary to achieve consensus, yet validators need to agree on the ordered history in order to select future anchors.

As evidence of the difficulty of these problems, Bullshark implementations (including the one currently in production) do not support these features.

Protocol

In Shoal, we rely on the ability to perform local computation over the DAG, and on the ability to save and reinterpret information from earlier rounds. Using the core insight that all validators agree on the first ordered anchor, Shoal sequentially composes multiple Bullshark instances for pipelining, making (1) the first ordered anchor the switching point between instances, and (2) the anchor's causal history the input for computing leader reputation.

Pipeline

Bullshark has a predefined mapping F that maps rounds to leaders. Shoal runs instances of Bullshark one after another, so for each instance the anchors are predetermined by the mapping F. Each instance orders one anchor, which triggers the switch to the next instance.

Initially, Shoal launches the first instance of Bullshark in the first round of the DAG and runs it until the first ordered anchor is determined, say in round r. All validators agree on this anchor, so they can all confidently reinterpret the DAG starting from round r+1. Shoal simply launches a new Bullshark instance at round r+1.

In the best case, this allows Shoal to order one anchor per round. The anchor of the first round is ordered by the first instance; Shoal then starts a new instance in the second round, which has its own anchor ordered by that instance; another new instance orders an anchor in the third round, and the process continues.
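The instance hand-off can be sketched as follows (`run_bullshark_instance` is an illustrative stand-in for a full Bullshark run that returns the round of its first ordered anchor):

```python
def shoal_pipeline(first_round, run_bullshark_instance, total_rounds):
    """Run Bullshark instances back to back: each instance starts at the
    round after its predecessor's first ordered anchor."""
    ordered_anchor_rounds = []
    start = first_round
    while start <= total_rounds:
        r = run_bullshark_instance(start)  # instance orders one anchor...
        ordered_anchor_rounds.append(r)
        start = r + 1                      # ...which triggers the switch
    return ordered_anchor_rounds

# Best case: every instance orders its anchor in its very first round,
# so Shoal orders one anchor per round.
assert shoal_pipeline(1, lambda start: start, 5) == [1, 2, 3, 4, 5]

# If each instance needs an extra round, anchors land every other round.
assert shoal_pipeline(1, lambda start: start + 1, 4) == [2, 4]
```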

Leader Reputation

Skipped anchors increase latency during Bullshark ordering, and here pipelining alone is powerless: a new instance cannot start until the previous instance orders its anchor. Shoal therefore uses a reputation mechanism that assigns each validator a score based on its recent activity, making it less likely that leaders responsible for missing anchors are selected in the future. Validators that respond and participate in the protocol receive high scores; otherwise, a validator is assigned a low score, since it may be crashed, slow, or malicious.

The idea is to deterministically recompute the predefined mapping F from rounds to leaders whenever the scores are updated, biasing toward higher-scoring leaders. For validators to agree on the new mapping, they must agree on the scores, and hence on the history used to derive them.

In Shoal, pipelining and leader reputation combine naturally: both use the same core technique of reinterpreting the DAG after reaching agreement on the first ordered anchor.

In fact, the only difference is that after ordering the anchor of round r, validators compute a new mapping F', effective from round r+1, based on the causal history of the ordered anchor in round r. They then execute a new instance of Bullshark from round r+1 using the updated anchor-selection function F'.
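This update step might look like the following sketch (the ranking rule and the round-robin rotation over ranked validators are illustrative assumptions, not Shoal's actual scoring function):

```python
def update_leader_mapping(validators, scores, start_round):
    """Deterministically recompute the round -> leader mapping F' from
    scores that every validator derived from the same ordered history."""
    # Prefer high-scoring validators; break ties by validator id so that
    # all validators compute the identical mapping.
    ranked = sorted(validators, key=lambda v: (-scores[v], v))

    def new_mapping(r):
        assert r >= start_round        # F' only applies from start_round on
        return ranked[(r - start_round) % len(ranked)]

    return new_mapping

# Scores derived from the causal history of the round-3 ordered anchor;
# the new mapping F' takes effect from round 4.
scores = {"a": 10, "b": 2, "c": 7}
F_prime = update_leader_mapping(["a", "b", "c"], scores, start_round=4)
assert [F_prime(r) for r in range(4, 10)] == ["a", "c", "b", "a", "c", "b"]
```

Because every validator feeds the same ordered history into the same deterministic function, they all arrive at the same F' without any extra communication.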

No more timeouts

Timeouts play a crucial role in every leader-based, deterministic, partially synchronous BFT implementation. However, the complexity they introduce increases the number of internal states that must be managed and observed, which complicates debugging and demands more observability techniques.

Timeouts can also significantly increase latency, because configuring them well is important and they often require dynamic adjustment, as they depend heavily on the environment (the network). Before transitioning to the next leader, the protocol pays the full timeout latency penalty for a faulty leader. Therefore, timeout settings cannot be overly conservative; yet if the timeout is too short, the protocol may skip good leaders. For example, we observe
