Aptos Labs' new Move feature, Aggregators, enables smart contracts on the Aptos blockchain to execute in parallel even in the presence of read-write conflicts. Since going live on mainnet, the Aptos network has successfully used Aggregators to track the total supply of its native token in real time: although every transaction modifies this value, parallel transaction processing is unaffected. Under the recent AIP-43 proposal, aggregators can also be applied to digital asset collections with a limited supply (e.g., NFTs), making minting much faster. The latest preview network data shows that minting 1 million NFTs on Aptos with aggregators takes less than 90 seconds.
#1. Background on parallel execution
Block-STM -- the parallel execution engine used by Aptos -- applies optimistic concurrency control and a multi-version data structure to execute transactions speculatively in parallel. If the speculative execution of a transaction turns out to be invalid, Block-STM detects this, aborts the transaction, and re-executes it later.
The performance of a parallel engine like Block-STM depends heavily on the workload, especially the frequency of read-write conflicts. As an example, consider two sequential transactions: a transfer from Alice to Bob, followed by a transfer from Carol to David. These two transactions can be executed in parallel because their read and write sets do not overlap, so they do not affect each other.
Now consider another pair of transactions: a transfer from Alice to Bob, followed by a transfer from Bob to Carol. In this case, Block-STM might try to execute the second transaction first, only to discover that it conflicts with the first one: the second transaction read Bob's balance before the first transaction had updated it. As a result, the transfer from Bob to Carol is aborted and re-executed after the transfer from Alice to Bob completes.
#2. Problems with sequential workloads
Workloads with many read-write conflicts cannot be processed in parallel efficiently. Sequential workloads are the extreme case in which all transactions conflict with each other; there, sequential execution is clearly the best option. Unfortunately, workloads turn out to be sequential in many blockchain application scenarios. Here are some examples.
Read/Write Conflicts on Supply and Balance Counters
Many blockchain systems burn some of the transaction fees. As a result, each transaction indirectly updates the total supply of native tokens, causing all transactions to conflict.
This means that even simple peer-to-peer non-conflicting transfers can become sequential workloads due to conflicting reads and writes to the supply counter. This is a common problem that has led some blockchain systems to stop tracking the total supply of the native currency in smart contracts. In order to maintain the ability to process in parallel, these systems keep track of the supply through protocol implementations (e.g., using atomic counters).
For digital asset collections, such as non-fungible tokens (NFTs), the size of the collection is usually tracked as well. For example, a collection may serve as tickets to a music festival; since there is a limited number of tickets, the corresponding number of NFTs is limited too.
Similar to tracking the total supply of native tokens, each transaction that mints or burns a token must update the size of the NFT collection. It is also very important that the number of NFTs minted never exceeds the limit set by the collection's creator.
With these constraints, NFTs can only be minted sequentially, one by one, which significantly reduces network throughput. To the best of our knowledge, many blockchains struggle to mint large numbers of NFTs quickly, and doing so often takes hours.
Finally, another source of read-write conflicts is account balances, since every transaction pays a fee. The fee is deducted from an account balance, so all transactions paying gas from the same account conflict. This hinders parallel processing for a single sender, such as an account submitting a large number of transactions in a short period of time, which matters especially in applications such as gaming.
Read/Write Conflicts from NFT Indexing and Naming
NFTs are usually indexed sequentially by minting time, and their names often include the index, e.g., "Star #17". So even if the size of the NFT collection is not tracked, avoiding read-write conflicts on the collection size counter, tokens still cannot be minted in parallel: the indexing itself introduces another sequential-execution bottleneck.
Innovations from Aptos
To solve this problem, Aptos Labs engineers introduced a Move feature in AIP-47 called "Aggregators". This feature allows the Aptos network to parallelize workloads that would otherwise be executed sequentially.
#3. Our views on the issue
We have found that some workloads that appear to execute sequentially do not actually contain many read-write conflicts. Moreover, even when conflicts exist, they often have little impact on the final outcome of a transaction. We can sidestep these conflicts by deferring reads from, and writes to, the blockchain state.
When processing a transaction, the conflicting parts can be identified and speculative predictions made about the outcome of their execution. This way, the main part of the transaction can be processed fully in parallel. Some specific examples follow.
Parallel Aggregation by Deferred Reads
Instead of updating a counter directly (e.g., the supply of the native token or the size of an NFT collection), we can record the update in the multi-version data structure that Block-STM already employs, without applying it immediately.
Take a simple sequence of such transactions, all of which update some counter stored in the blockchain state but otherwise do not conflict with each other.
In this example, the first transaction subtracts 10, so its change is recorded as -10. The second transaction adds 20 and subtracts 17; instead of reading the counter and writing a new value, we record the two changes, +20 and -17, so the total change caused by this transaction is +3. Similarly, the changes for the third and fourth transactions are -10 and -20, respectively.
If there are no other read-write conflicts between these transactions, they can be executed in parallel, and the counter changes can be applied in any order. For example, if the counter initially holds 100, it will hold 63 after the four transactions (100 - 10 + 3 - 10 - 20 = 63). Overall, the read-write conflicts that updating the global counter would otherwise cause are avoided, allowing fully parallel execution of the workload.
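To make this concrete, here is a minimal Move sketch of the same four transactions, written against the aggregator API described in section #4 below (the function names and signatures follow the snippets shown there and are illustrative rather than the definitive interface).
// The counter starts at 100; each "transaction" then records only a delta.
let counter = aggregator::create(/*max_value=*/1000);
let _ = aggregator::try_add(&mut counter, 100);
let _ = aggregator::try_sub(&mut counter, 10); // tx1: -10
let _ = aggregator::try_add(&mut counter, 20); // tx2: +20 ...
let _ = aggregator::try_sub(&mut counter, 17); // ... and -17, a net change of +3
let _ = aggregator::try_sub(&mut counter, 10); // tx3: -10
let _ = aggregator::try_sub(&mut counter, 20); // tx4: -20
// The recorded deltas commute, so they can be materialized in any order:
// 100 - 10 + 3 - 10 - 20 = 63.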
Sequential execution is needed only when conflicts are unavoidable. For example, if the third transaction reads the counter, that read forces all preceding updates (from the first and second transactions) to be materialized first, which limits the possibility of parallel processing.
Handling integer overflow and underflow
When updating global counters, one may encounter overflow or underflow. Suppose, for example, that the counter overflows in the second transaction of a sequence. As long as we correctly predict this overflow, we can still process the counter update in parallel. If the prediction fails, Block-STM re-executes the transaction, just as it handles any other conflicting transaction. Our expectation is that such speculative predictions are usually accurate, and that overflows and underflows are rare.
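A minimal sketch of this behavior, again using the aggregator API from section #4 below (function names assumed):
let counter = aggregator::create(/*max_value=*/100);
let _ = aggregator::try_add(&mut counter, 60); // tx1: succeeds, value is 60
// tx2 would push the value past max_value (60 + 50 > 100). The overflow is
// detected and try_add returns false; a mispredicted overflow would instead
// cause Block-STM to re-execute the transaction.
let ok = aggregator::try_add(&mut counter, 50);
assert!(!ok, 0); // the counter still holds 60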
Snapshots -- conflict-free writes to blockchain state
Even with global counter updates parallelized, the problem encountered when minting NFTs is not yet solved: the token's index (i.e., its position in minting order) and its name (which may include the index) both require reading the counter value before the NFT is persisted to the blockchain state.
However, we can solve this problem by also deferring the write to the blockchain state. Instead of writing the token index to the state immediately, we take a "snapshot" of the current counter value and store it when the transaction commits. Essentially, the snapshot records the pending partial update of the global counter.
#4. Aggregators on Aptos
At Aptos Labs, we have developed a generic library to handle the application scenarios described above; it can be found in aptos-framework.
This library defines Aggregators, an implementation of concurrent counters. Developers can use it to avoid the sequential-execution bottlenecks encountered when executing transactions in parallel, e.g., for keeping track of the size of a token collection.
An aggregator is essentially a wrapper around an integer, initialized to zero. When creating an aggregator, the user can also specify an upper limit: the maximum value the aggregator can store. Since aggregators are unsigned integers, the minimum possible value is zero.
// The type parameter allows one to support different integer bit
// widths, e.g., u64 or u128.
struct Aggregator<IntTy> has store {
    value: IntTy,
}
let counter = aggregator::create(/*max_value=*/100);
Like normal integers, aggregators can be updated via addition and subtraction methods (called try_add and try_sub), and these operate in parallel! The methods return a boolean indicating whether the aggregator's value was changed (true if there was no overflow or underflow, false otherwise).
let success = aggregator::try_add(&mut counter, 20);
let success = aggregator::try_sub(&mut counter, 40);
if (success) {
    // Counter has been modified.
} else {
    // Error handling goes here.
};
A key point is that concurrent updates to an aggregator do not lead to conflicts between transactions, because the write-write conflict problem is already solved by the multi-version data structure used by Block-STM. Moreover, the behavior of an aggregator is consistent with sequential execution: the results are the same whether transactions are executed sequentially or in parallel.
Aggregators also support a read operation, which returns the value stored in the aggregator. However, reading can limit parallelism between transactions, so it is best avoided where possible.
// This limits the parallelism of the system.
let value = aggregator::read(&counter);
To use an aggregator's value without forcing sequential execution (e.g., for indexing or naming tokens in an NFT collection), the user can instead take a snapshot of the aggregator. An aggregator snapshot records the value of the aggregator at a given moment without revealing the actual value.
struct AggregatorSnapshot<Ty> has store {
    value: Ty,
}
// Snapshot captures the value of the counter.
// This does not affect the parallelism of the system.
let snapshot = aggregator::snapshot(&counter);
If the exact value of a snapshot needs to be known, it can be read just like a normal aggregator.
// Limits the parallelism of the system, but returns the actual
// value stored inside of a snapshot.
let value = aggregator::read_snapshot(&snapshot);
In the current implementation, snapshots can also be formatted as strings.
let name = aggregator::concat("star #", &snapshot, "");
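Putting these pieces together, a collection's supply counter can itself be an aggregator. The following is a minimal sketch of a mint flow combining the try_add, snapshot, and concat calls shown above; the names collection_supply and ECOLLECTION_FULL are hypothetical, and the actual framework APIs may differ.
// The supply counter is an aggregator whose max_value is the collection's
// maximum supply, so the cap is enforced by try_add itself.
let collection_supply = aggregator::create(/*max_value=*/10000);

let ok = aggregator::try_add(&mut collection_supply, 1);
assert!(ok, ECOLLECTION_FULL); // ECOLLECTION_FULL: hypothetical error code

// Snapshot the post-increment index without reading the actual value,
// so concurrent mints do not conflict on the counter.
let index = aggregator::snapshot(&collection_supply);

// Derive the token name lazily; the concrete string is materialized
// when the transaction commits.
let name = aggregator::concat("Star #", &index, "");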
We evaluated aggregators on representative workloads on our latest preview network (running master-like code), focusing on NFT minting. The NFT collections we used have a limited supply and sequentially named tokens. We successfully minted 1 million NFTs in 90 seconds, and 5 million in 8 minutes, sustaining consistently high throughput. Compared to execution without aggregators, performance improved by a factor of about 10.
In addition, we used synthetic benchmarks to examine aggregator performance. We found that updating an uncapped global counter in Move via an aggregator is nearly 9x faster, which is attributable to its better parallelism.
#5. Aggregators in production environments
Since the Aptos mainnet went live, aggregators have been used in its codebase to track the total supply of the Aptos native token. This means the system's parallel processing capability is not compromised even though transaction fees are burned: while the total supply is updated by every transaction, the workload can still be processed in parallel.
AIP-47 proposes newer, more powerful aggregator and snapshot APIs and makes them publicly available. With AIP-43, aggregators can also be deployed within the digital asset framework. This allows collections of digital assets on Aptos to be minted and burned in parallel, even while their size is tracked; in addition, these collections can be indexed and given formatted names. If these AIPs are approved, operations on digital assets will become parallelizable, with mainnet deployment expected in the first quarter of 2024.
#6. Follow-up article
So how are aggregators implemented? How do they reduce conflicts between transactions in Block-STM, and how do they scale as the number of threads grows? The answers to all these questions will be revealed in an upcoming article. Stay tuned!