
A mempool (a contraction of memory and pool) is a node’s data structure for storing information on uncommitted transactions. It acts as a sort of waiting room for transactions that have not yet been committed. CometBFT currently supports three types of mempools: flood, nop, and app.

1. Flood

The flood mempool stores transactions in a concurrent linked list. When a new transaction is received, it first checks if there’s space for it (size and max_txs_bytes config options) and that it’s not too big (max_tx_bytes config option). Then, it checks if this transaction has already been seen before by using an LRU cache (cache_size regulates the cache’s size). If all checks pass and the transaction is not in the cache (meaning it’s new), the ABCI CheckTxAsync method is called. The ABCI application validates the transaction using its own rules. If the transaction is deemed valid by the ABCI application, it’s added to the linked list.

The mempool’s name (flood) comes from the dissemination mechanism. When a new transaction is added to the linked list, the mempool sends it to all connected peers. Peers themselves gossip this transaction to their peers, and so on. One can say that each transaction “floods” the network, hence the name flood.

Note that there are experimental config options experimental_max_gossip_connections_to_persistent_peers and experimental_max_gossip_connections_to_non_persistent_peers to limit the number of peers a transaction is broadcast to. You can also turn off broadcasting entirely with the broadcast config option.

After each committed block, CometBFT rechecks all uncommitted transactions (this can be disabled with the recheck config option) by repeatedly calling the ABCI CheckTxAsync method.
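The admission pipeline described above can be sketched in Go. The types below, the map standing in for the LRU cache, and the slice standing in for the concurrent linked list are illustrative simplifications, not CometBFT’s actual internals:

```go
package main

import "fmt"

// floodMempool mirrors the flood mempool's admission order described above:
// capacity, per-tx size, dedup cache, then the application's CheckTx.
// All names and structures here are illustrative, not CometBFT's real code.
type floodMempool struct {
	maxTxs     int             // stand-in for the size config option
	maxTxBytes int             // stand-in for max_tx_bytes
	cache      map[string]bool // stand-in for the LRU cache (cache_size)
	txs        []string        // stand-in for the concurrent linked list
}

func (m *floodMempool) admit(tx string, checkTx func(string) bool) error {
	if len(m.txs) >= m.maxTxs {
		return fmt.Errorf("mempool full")
	}
	if len(tx) > m.maxTxBytes {
		return fmt.Errorf("tx too big")
	}
	if m.cache[tx] {
		return fmt.Errorf("tx already seen")
	}
	m.cache[tx] = true
	if !checkTx(tx) { // the ABCI application's own validation rules
		return fmt.Errorf("rejected by app")
	}
	m.txs = append(m.txs, tx) // would now be gossiped ("flooded") to peers
	return nil
}

func main() {
	m := &floodMempool{maxTxs: 2, maxTxBytes: 8, cache: map[string]bool{}}
	accept := func(string) bool { return true }
	fmt.Println(m.admit("tx1", accept)) // <nil>
	fmt.Println(m.admit("tx1", accept)) // tx already seen
}
```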

Transaction ordering

Currently, there’s no ordering of transactions other than the order they’ve arrived in (via RPC or from other nodes). So the only way to specify the order is to send them to a single node.

valA:
  • tx1
  • tx2
  • tx3
If the transactions are split up across different nodes, there’s no way to ensure they are processed in the expected order.

valA:
  • tx1
  • tx2
valB:
  • tx3
If valB is the proposer, the order might be:
  • tx3
  • tx1
  • tx2
If valA is the proposer, the order might be:
  • tx1
  • tx2
  • tx3
That said, if the transactions contain some internal value, like an order/nonce/sequence number, the application can reject transactions that are out of order. So if a node receives tx3, then tx1, it can reject tx3 and then accept tx1. The sender can then retry sending tx3, which should probably be rejected until the node has seen tx2.
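The application-side rejection described here can be sketched for a single sender. The account type and sequence-number scheme are hypothetical; the actual ordering rule is defined entirely by the application:

```go
package main

import "fmt"

// account tracks the last accepted sequence number for one sender.
// A tx is accepted only when its sequence number immediately follows
// the last accepted one; otherwise it is rejected so the sender retries.
type account struct{ lastSeq uint64 }

func (a *account) acceptTx(seq uint64) bool {
	if seq != a.lastSeq+1 {
		return false // out of order, e.g. tx3 arriving before tx2
	}
	a.lastSeq = seq
	return true
}

func main() {
	acct := &account{lastSeq: 1}  // tx1 already accepted
	fmt.Println(acct.acceptTx(3)) // false: tx3 before tx2 is rejected
	fmt.Println(acct.acceptTx(2)) // true
	fmt.Println(acct.acceptTx(3)) // true: retrying tx3 now succeeds
}
```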

2. Nop

The nop (short for no operation) mempool is used when the ABCI application developer wants to build their own mempool. When type = "nop", transactions are not stored anywhere and are not gossiped to other peers using the P2P network. Submitting a transaction via the existing RPC methods (BroadcastTxSync, BroadcastTxAsync, and BroadcastTxCommit) will always result in an error. Because there’s no way for the consensus to know if transactions are available to be committed, the node will always create blocks, which can be empty sometimes. Using consensus.create_empty_blocks=false is prohibited in such cases. The ABCI application becomes responsible for storing, disseminating, and proposing transactions using PrepareProposal. The concrete design is up to the ABCI application developers.

3. App

The CometBFT app mempool (also known as the Krakatoa mempool) is distinct from the Cosmos SDK’s application mempool, which only controls transaction ordering at block proposal time. The app mempool is used when the ABCI application wants a middle ground between the flood and nop mempools: it delegates the entire transaction lifecycle (storage, validation, gossip, and rechecking) from CometBFT to the application. CometBFT acts as a thin proxy, receiving transactions from RPC and P2P, forwarding them to the application via ABCI, and broadcasting application-reaped transactions to peers. The app mempool is currently implemented in Cosmos EVM.

Motivation

The traditional flood mempool architecture has several limitations:
  • ABCI lock contention: In the flood mempool, CheckTx calls hold the ABCI connection lock. This lock is shared with consensus-critical operations like PrepareProposal and FinalizeBlock. Since CheckTx volume is directly proportional to network load and fully driven by external actors submitting transactions, an externally influenced workload can hold up block building and finalization. During rechecking after a committed block, the problem compounds: all incoming transactions and consensus operations must wait for the full recheck pass to complete.
  • Limited application control: The application has no control over when rechecking occurs, how transactions are prioritized during recheck, or how the mempool interacts with block building. CometBFT drives the entire lifecycle.
  • Redundant state management: CometBFT maintains its own transaction storage (the concurrent linked list) even though the application often needs its own mempool for ordering and prioritization. This leads to duplicated state and synchronization overhead.
The app mempool eliminates these issues by making the application the single source of truth for mempool state. CometBFT no longer holds the ABCI lock for mempool operations: InsertTx and ReapTxs are called concurrently, and the application is responsible for its own synchronization.

Quick Start

To enable the Krakatoa app mempool, set the mempool type in your CometBFT config.toml:
[mempool]
type = "app"
This switches CometBFT from the default flood mempool to the application-delegated model. CometBFT will forward transactions to your application via InsertTx and pull validated transactions back via ReapTxs, rather than managing mempool state itself. Your application must implement the InsertTx and ReapTxs ABCI handlers.

New ABCI Methods

Two new methods are added to the ABCI Application interface as part of the mempool connection:

InsertTx

service ABCIApplication {
  rpc InsertTx(RequestInsertTx) returns (ResponseInsertTx);
}

message RequestInsertTx {
  bytes tx = 1;
}

message ResponseInsertTx {
  uint32 code = 1;
}
InsertTx is called when CometBFT receives a transaction, either from an RPC client (BroadcastTxSync, BroadcastTxAsync) or from a peer via P2P gossip. The application is expected to validate and store the transaction in its own mempool. Response codes:
  Code               Meaning                CometBFT Behavior
  0 (OK)             Transaction accepted   Transaction is marked as seen and will not be re-inserted
  1 - 31,999         Transaction rejected   Transaction is marked as seen and will not be retried
  >= 32,000 (Retry)  Temporary rejection    Transaction is removed from the seen cache so it can be retried later
The retry mechanism is useful when the application’s mempool is temporarily at capacity. By returning a retry code, the application signals that the transaction is not inherently invalid; it simply cannot be accepted right now. When the transaction is received again (from a peer or resubmitted via RPC), it will be forwarded to the application again.

Concurrency guarantee: InsertTx calls are thread-safe from CometBFT’s perspective. Multiple goroutines may call InsertTx concurrently (e.g., transactions arriving from different peers simultaneously). The application is responsible for its own internal synchronization.

No ABCI lock: Unlike CheckTx in the flood mempool, InsertTx does not hold the ABCI connection lock. This means InsertTx calls do not block consensus operations, and consensus operations do not block InsertTx.
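A minimal application-side InsertTx handler using these response codes might look like the sketch below. The appMempool type and its capacity check are illustrative; only the code ranges come from the table above:

```go
package main

import "fmt"

// Response code conventions from the table above. The constant names
// are assumptions; the 32,000 retry threshold comes from the docs.
const (
	CodeOK        uint32 = 0
	CodeRejected  uint32 = 1     // 1..31,999: permanent rejection
	CodeRetryBase uint32 = 32000 // >= 32,000: retry later
)

// appMempool is an illustrative application-side mempool with a fixed
// capacity, showing when to return a retry code from InsertTx.
type appMempool struct {
	capacity int
	txs      [][]byte
}

// InsertTx validates and stores a tx, returning an ABCI-style code.
func (m *appMempool) InsertTx(tx []byte) uint32 {
	if len(tx) == 0 {
		return CodeRejected // inherently invalid: marked seen, never retried
	}
	if len(m.txs) >= m.capacity {
		// Temporarily full: CometBFT removes the tx from the seen cache
		// so a later resubmission or peer gossip reaches us again.
		return CodeRetryBase
	}
	m.txs = append(m.txs, tx)
	return CodeOK
}

func main() {
	m := &appMempool{capacity: 1}
	fmt.Println(m.InsertTx([]byte("tx1"))) // 0
	fmt.Println(m.InsertTx([]byte("tx2"))) // 32000
}
```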

ReapTxs

service ABCIApplication {
  rpc ReapTxs(RequestReapTxs) returns (ResponseReapTxs);
}

message RequestReapTxs {
  uint64 max_bytes = 1;
  uint64 max_gas   = 2;
}

message ResponseReapTxs {
  repeated bytes txs = 1;
}
ReapTxs is called periodically by the AppReactor to retrieve new, validated transactions from the application for p2p broadcast. The application should return transactions that are ready for gossip — typically transactions that have been validated and are eligible for block inclusion. When max_bytes and max_gas are both zero, the application should return all available transactions without limits.
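An application-side ReapTxs handler respecting these limits could be sketched as follows. The gasOf callback is a hypothetical per-transaction gas estimate, and the greedy cutoff policy is one possible choice, not a mandated one:

```go
package main

import "fmt"

// reapTxs returns pending txs that are ready for gossip, stopping once
// either limit would be exceeded; a zero limit means unlimited, matching
// the semantics described above. Illustrative sketch only.
func reapTxs(pending [][]byte, maxBytes, maxGas uint64, gasOf func([]byte) uint64) [][]byte {
	var out [][]byte
	var bytes, gas uint64
	for _, tx := range pending {
		b, g := uint64(len(tx)), gasOf(tx)
		if maxBytes > 0 && bytes+b > maxBytes {
			break
		}
		if maxGas > 0 && gas+g > maxGas {
			break
		}
		out = append(out, tx)
		bytes += b
		gas += g
	}
	return out
}

func main() {
	pending := [][]byte{[]byte("aaaa"), []byte("bbbb"), []byte("cccc")}
	gasOf := func([]byte) uint64 { return 10 } // hypothetical flat gas cost
	fmt.Println(len(reapTxs(pending, 8, 0, gasOf))) // 2 (byte limit hit)
	fmt.Println(len(reapTxs(pending, 0, 0, gasOf))) // 3 (no limits)
}
```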

AppMempool

The AppMempool is the CometBFT-side implementation that fulfills the Mempool interface while delegating all real work to the application.

What AppMempool does

  • Proxies incoming transactions to the application via InsertTx
  • Maintains a seen cache (LRU, 100k entries) to avoid re-inserting duplicate transactions
  • Validates transaction size against max_tx_bytes before forwarding
  • Handles retry semantics by removing retryable transactions from the seen cache

What AppMempool does NOT do

  • Store transactions — the application owns all mempool state
  • Call Update after blocks — rechecking is the application’s responsibility
  • Provide transactions for ReapMaxBytesMaxGas — always returns nil, since the application builds blocks via PrepareProposal

AppReactor

The AppReactor replaces the traditional mempool Reactor for P2P transaction gossip.

Broadcasting

The reactor runs a background loop that:
  1. Calls ReapTxs on the application every reap_interval duration (default 500ms).
  2. Chunks the returned transactions into batches (up to MaxBatchBytes)
  3. Broadcasts each batch to all connected peers
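Step 2 of this loop, chunking reaped transactions into batches, might look roughly like the sketch below. The function and its pack-until-full policy are illustrative, not the AppReactor’s actual code; here a transaction larger than the limit gets its own batch:

```go
package main

import "fmt"

// chunkBatches splits reaped txs into batches whose total size stays at
// or under maxBatchBytes, mirroring step 2 of the broadcast loop above.
// Each returned batch would then be sent to all connected peers.
func chunkBatches(txs [][]byte, maxBatchBytes int) [][][]byte {
	var batches [][][]byte
	var cur [][]byte
	size := 0
	for _, tx := range txs {
		if len(cur) > 0 && size+len(tx) > maxBatchBytes {
			batches = append(batches, cur) // flush the full batch
			cur, size = nil, 0
		}
		cur = append(cur, tx)
		size += len(tx)
	}
	if len(cur) > 0 {
		batches = append(batches, cur)
	}
	return batches
}

func main() {
	txs := [][]byte{[]byte("aaaa"), []byte("bbbb"), []byte("cccc")}
	fmt.Println(len(chunkBatches(txs, 8))) // 2
}
```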

Receiving

When a peer sends transactions, the reactor:
  1. Deserializes the transaction batch from the P2P envelope
  2. Calls InsertTx on the AppMempool for each transaction
  3. Logs and discards transactions that fail insertion (already seen, too large, or rejected by the application)

Supporting BroadcastTx... methods

To support existing chains and CometBFT’s tx broadcast RPC methods, compatibility with BroadcastTxSync, BroadcastTxAsync, and BroadcastTxCommit is maintained when using the app mempool. Transactions ingested via these RPCs call CheckTx in the hot path without the ABCI connection lock. If the ABCI application wants to support these methods, it must wire a CheckTxHandler into the application and manage the locking relative to other ABCI state changes itself. Applications are strongly discouraged from using these methods with an app mempool. Instead, they should implement application-side RPC methods for tx ingestion and insert these txs directly into their app mempool implementation (or other comparable data structure), relying on CometBFT only to inform them of transactions received over the P2P network.

Transaction Lifecycle

With the app mempool, the transaction lifecycle changes significantly:

Previous Lifecycle (flood mempool)

  1. Transaction arrives via RPC or P2P
  2. CometBFT validates size and checks the seen cache
  3. CometBFT calls CheckTx on the application (holds ABCI lock)
  4. If valid, CometBFT stores the transaction in its linked list
  5. CometBFT broadcasts the transaction to all peers
  6. At block proposal time, CometBFT calls ReapMaxBytesMaxGas and passes transactions to PrepareProposal
  7. After block commit, CometBFT rechecks all remaining transactions via CheckTx (holds ABCI lock for the entire recheck)

Updated P2P Lifecycle (app mempool)

  1. Transaction arrives via P2P
  2. CometBFT validates size and checks the seen cache
  3. CometBFT calls InsertTx on the application (no ABCI lock)
  4. The application validates and stores the transaction in its own mempool
  5. The AppReactor periodically calls ReapTxs and broadcasts returned transactions to peers
  6. At block proposal time, PrepareProposal receives no transactions from CometBFT — the application builds the block from its own mempool
  7. After block commit, the application runs its own recheck logic on its own schedule

Updated application RPC Lifecycle (app mempool)

  1. Transaction arrives to application via application side RPC
  2. Application validates and inserts the tx into its app mempool implementation. (Validation may also happen after insertion; that is up to the application.)
  3. The application provides the tx to CometBFT once validated by returning its bytes via the ReapTxs ABCI method.
  4. CometBFT gossips the validated transaction to peers.
  5. At block proposal time, PrepareProposal receives no transactions from CometBFT: the application builds the block from its own mempool
  6. After block commit, the application runs its own recheck logic on its own schedule

Updated (broadcast_tx_...) RPC Lifecycle (app mempool)

  1. Transaction arrives via RPC
  2. CometBFT validates size and checks the seen cache
  3. CometBFT calls CheckTx on the application (no ABCI lock, it is up to the application to perform any necessary locking here). CheckTx is used here to maintain API compatibility with existing clients. Applications implementing their own application side mempool should strongly consider implementing their own application side RPC methods to directly handle transaction ingestion, rather than relying on CometBFT’s.
  4. The application validates and stores the transaction in its own mempool
  5. The AppReactor periodically calls ReapTxs and broadcasts returned transactions to peers
  6. At block proposal time, PrepareProposal receives no transactions from CometBFT — the application builds the block from its own mempool
  7. After block commit, the application runs its own recheck logic on its own schedule

Block Building

Since the AppMempool returns nil from ReapMaxBytesMaxGas, the block executor passes no mempool transactions to PrepareProposal. The application’s PrepareProposalHandler is expected to select transactions directly from its own mempool. This gives the application full control over transaction ordering, prioritization, and inclusion.

Application Guarantees and Responsibilities

When implementing InsertTx and ReapTxs, applications should be aware of the following:

CometBFT guarantees to the application

  • InsertTx will not be called with empty transactions
  • InsertTx will not be called with transactions exceeding max_tx_bytes
  • Transactions returning a retry code will be removed from the seen cache and may be re-submitted
  • ReapTxs will be called periodically (every reap_interval, 500ms by default) regardless of whether new transactions have arrived
  • InsertTx and ReapTxs will not hold the ABCI connection lock

Application responsibilities

  • Concurrency: The application must handle concurrent InsertTx calls safely
  • Rechecking: Revalidating transactions after blocks are committed is optional, but if desired it is entirely the application’s responsibility — CometBFT no longer rechecks
  • Block building: The application must select transactions for blocks in its PrepareProposalHandler — CometBFT will not provide mempool transactions
  • Storage: The application must manage its own transaction storage and eviction

Configuration

To enable the app mempool, set the mempool type in config.toml:
[mempool]
type = "app"
The following options apply to the app mempool:

Shared options

These options are shared with other mempool types:
  • max_tx_bytes: Maximum size of a single transaction (checked before InsertTx). Default: 1048576 (1 MB)
  • max_batch_bytes: Maximum size of a broadcast batch. Default: 0 (no limit)
  • broadcast: Enable or disable P2P transaction broadcasting. Default: true

App mempool options

These options only apply when type = "app":
  • seen_cache_size: Size of the LRU cache for deduplicating seen transactions. Prevents re-inserting transactions already forwarded to the application. Default: 100000
  • check_tx_retry_delay: Delay after which a tx is removed from the seen cache after forwarding to the application via CheckTx. If a non-retryable error code is returned, the full delay is used before removing from the cache (allowing a retry). If a retryable error code is returned, 1/10th of the delay is used. Default: "500ms"
  • reap_max_bytes: Informs the application of the maximum number of bytes it should return from each call to ReapTxs. 0 means no limit. Default: 0
  • reap_max_gas: Informs the application of the maximum amount of gas it should return from each call to ReapTxs. 0 means no limit. Default: 0
  • reap_interval: Interval between ReapTxs calls. Default: "500ms"

Example

[mempool]
type = "app"
max_tx_bytes = 1048576

# App mempool options
seen_cache_size = 100000
reap_max_bytes = 0
reap_max_gas = 0
reap_interval = "500ms"
check_tx_retry_delay = "500ms"
Options specific to the flood mempool (size, max_txs_bytes, cache_size, recheck, etc.) have no effect when using the app mempool.