Originally posted to Mirror.
Thank you to Rajiv Patel-O'Connor, Tarun Chitra, getsqt, Ben Fisch, Evan Forbes, Christopher Goes, and apriori for incredibly valuable insights, feedback, and review of this post.
Thank you to Robert Miller, Hasu, Shea Ketsdever, Josh Bowen, and many others for great discussions around some of the topics here as well.
This post further analyzes SUAVE and shared sequencers as a follow-up to my last:
This is the complete guide to:
- Sequencer decentralization
- Shared sequencers
- X-chain atomicity
- MEV-awareness for rollups
- Where SUAVE fits in
- Decentralizing proving
https://t.co/WRiDrkG3BL
— Jon Charbonneau (@jon_charb) March 19, 2023
Part I - SUAVE: I dive into its contemplated architecture, uses, limitations, and interactions. I won't focus on the specific PET implementations (i.e., SGX, MPC, FHE, etc.). I'll mostly treat that as a black box here.
Part II - Anoma: Anoma is built on many of the same underlying ideas as SUAVE, but it takes a fundamentally opposite approach.
Part III - Shared Sequencers:
SUAVE will be an independent network which allows any chain to outsource two roles:
The goal is to be the mempool and block builder for any domain. Aggregating preferences in a single auction has several benefits:
Flashbots describes SUAVE as having three components:
The SUAVE protocol itself has two logical layers:
SUAVE Chain is primarily intended to process transactions for:
However, SUAVE is a permissionless EVM chain, so anything could be deployed. Native transaction activity outside of its "intended" use is unclear.
The incentives to participate are:
The exact security model of SUAVE Chain is still TBD (e.g., L1, rollup, re-staking, etc.).
TLDR - SUAVE has its own chain and mempool. Anyone can express preferences to SUAVE for any executor to fulfill. Whatever executor fulfills the preference can claim the associated bid (payment). Together, these pieces allow any chain to outsource their own mempool and block building.
I think of the typical SUAVE transaction lifecycle in four high level stages:
Now let's walk through it in detail.
You can think of users as having two kinds of transactions:
1. Smart Preferences - I have a preference I want fulfilled, and I'm willing to pay a bid to any executor that can get it done for me. I can express any arbitrary preference I want by writing a smart contract on SUAVE.
Example - I want to close an ETH/USDC arbitrage between Optimism and Arbitrum. I'll pay 3 ETH to whoever gets it done for me.
2. Dumb Preferences - I'm just a regular user who wants to buy a shitcoin without 100% slippage. I just set my wallet RPC to rpc.flashbots.net and send my transaction:
Today - Your transaction is routed to Flashbots Protect, where the centralized builder keeps your transaction private and includes it.
Future - Your transaction is conditioned for you behind the scenes such that it'd be directed to an OFA like MEV-Share which feeds into SUAVE. The conditions of the OFA can be encoded as part of your preference for you.
There isn't a clear technical distinction, but the point is that "regular" unsophisticated users wouldn't need to personally code some complex smart contract logic to express their preferences to use SUAVE in a basic way.
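The smart-preference idea above can be made concrete with a toy sketch (Python standing in for an onchain contract; `Preference`, `claimable`, and the state fields are all illustrative, not SUAVE's actual types):

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: a "preference" pairs a condition the user wants
# satisfied with a bid any executor can claim for satisfying it.
@dataclass
class Preference:
    condition: Callable[[dict], bool]  # evaluated against observed chain state
    bid_eth: float                     # payment unlocked when condition holds

# The Optimism/Arbitrum arbitrage example from above: pay 3 ETH to whoever
# closes the ETH/USDC price gap across the two rollups.
arb_pref = Preference(
    condition=lambda s: abs(s["eth_usdc_optimism"] - s["eth_usdc_arbitrum"]) < 1.0,
    bid_eth=3.0,
)

def claimable(pref: Preference, observed_state: dict) -> float:
    """An executor can claim the bid only if the user's condition is met."""
    return pref.bid_eth if pref.condition(observed_state) else 0.0

print(claimable(arb_pref, {"eth_usdc_optimism": 2000.5, "eth_usdc_arbitrum": 2000.2}))  # 3.0
print(claimable(arb_pref, {"eth_usdc_optimism": 2010.0, "eth_usdc_arbitrum": 2000.0}))  # 0.0
```

A "dumb" preference would be the same structure with the condition and bid encoded behind the scenes by the wallet/OFA rather than by the user.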
Preferences are messages that a user signs to express a particular goal. They unlock a payment (bid) if the user's conditions are met. Preferences can range from:
SUAVE is introducing a new transaction type that provides a decentralized way to pass preferences. These will build on the existing properties of bundles (e.g., pre-confirmation privacy and no reverts) and allow for richer expression.
We could see innovation around preference expression to serve users' needs. For example, executors could specialize to "pre-process" transactions in ways that make them more valuable, such as:
As more preferences are aggregated in one unified mempool, executors can better maximize all users' welfare. For example, batch clearing is more efficient when more users have offsetting preferences. This is similar to the batching that CoW Swap solvers do today. However, the batching in SUAVE here could be more decentralized and completely generalized (vs. centralized and app-specific).
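Batch clearing of offsetting preferences can be illustrated with a deliberately naive sketch (the matching rule and midpoint pricing are assumptions for illustration, not how SUAVE or CoW Swap actually clear):

```python
# Toy sketch of batch clearing (a "coincidence of wants"): two users with
# offsetting preferences can be matched directly at a uniform clearing price,
# with no external liquidity (and no AMM fee or slippage) needed.
def clear_batch(orders):
    """orders: list of (side, eth_amount, limit_price_usdc_per_eth)."""
    buys = [o for o in orders if o[0] == "buy"]
    sells = [o for o in orders if o[0] == "sell"]
    matches = []
    for b in buys:
        for s in sells:
            # A match exists when the buyer's limit >= the seller's limit.
            if b[2] >= s[2]:
                size = min(b[1], s[1])
                price = (b[2] + s[2]) / 2  # naive midpoint clearing price
                matches.append((size, price))
                sells.remove(s)
                break
    return matches

print(clear_batch([("buy", 1.0, 2010.0), ("sell", 1.0, 1990.0)]))  # [(1.0, 2000.0)]
print(clear_batch([("buy", 1.0, 1980.0), ("sell", 1.0, 1990.0)]))  # [] - no overlap
```

The point is just that the more preferences sit in one mempool, the more such overlaps an executor can find.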
Note that SUAVE can still build blocks with transactions from any publicly available mempool as well, just like any centralized builder:
Users can express whatever preference they want via smart contracts on SUAVE Chain. This requires depositing funds on SUAVE Chain and placing a bid that an executor can claim if they fulfill your preference.
SUAVE is a stateful system where:
- `s_1 → s_2 → s_3 ...` is the sequence of state transitions
- `s` is the current SUAVE state
- `S` is the set of all historical SUAVE states confirmed by the SUAVE Chain consensus
- `exec(bid, s')` outputs `b, e`
- `b` is the bid that is paid to address `e` (the executor who fulfills the preference)

Considering a few examples:
In all cases - I have something I want done. If an executor gets it done for me, they can claim the 3 ETH bid against the oracles. Note that the example bids above are flexible in who the bid specifies as the executor able to claim the reward:
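A minimal sketch of these semantics (function and field names are illustrative, not SUAVE's API): the deposit `k` is released to the executor only when the bid's predicate holds in the observed state:

```python
# exec(bid, s) pays the deposit k to the executor e(s) only if the current
# state satisfies the predicate P; otherwise state is unaltered.
def exec_bid(bid, s, balances):
    if bid["P"](s):                      # condition met in state s
        executor = bid["e"](s)           # resolve who gets paid
        balances[executor] = balances.get(executor, 0) + bid["k"]
        return executor, bid["k"]
    return None, 0                       # nothing happens otherwise

bid = {
    "k": 3.0,                                # 3 ETH deposit
    "e": lambda s: s["fulfiller"],           # whoever the oracle reports
    "P": lambda s: s["bundle_included"],     # was the preference fulfilled?
}
balances = {}
print(exec_bid(bid, {"bundle_included": True, "fulfiller": "0xExec"}, balances))
print(balances)  # {'0xExec': 3.0}
print(exec_bid(bid, {"bundle_included": False, "fulfiller": "0xExec"}, balances))  # (None, 0)
```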
With this in mind, will SUAVE Chain's block times present a bottleneck in preference expression? Consider an example:
There are two possible scenarios here:
If this is a meaningful bottleneck, then SUAVE Chain would be pressured to have ultra-fast block times in order to service preferences for other domains with low block times. You'll get dragged down to the lowest common denominator.
Let's walk through an example preference lifecycle to see the flow:
Let's take a step back and understand how searchers operate today. MEV bots deploy custom onchain smart contracts to execute their strategies:
Smart contracts cannot execute by themselves (an EOA must trigger their code to initiate a transaction). So, searchers run offchain logic and real-time monitoring (e.g., mempool transactions, CEX prices, chain state, etc.) then decide when to make transactions/bundles and trigger their onchain contract logic based on that information.
They can push bundles to them, expressing how they want something executed. For new strategies, they deploy new contracts. As opposed to just sending vanilla EOA transaction bundles on the fly, searchers often use smart contracts as well to provide finer control over execution:
Downsides include:
Let's consider a simple sandwich bot example. I'm a searcher with a smart contract `SC` deployed onchain to execute my strategy. The transaction flow is as follows:

1. I see `tx_user` (SushiSwap trade with a high slippage limit).
2. I create a bundle [`tx_frontrun`, `tx_user`, `tx_backrun`], and I offer my bid via a direct `coinbase.transfer` to the fee recipient (it's also possible to bid via gas price). Using `SC` to help manage onchain execution, I can make the `coinbase.transfer` conditional upon my validity conditions (e.g., only pay if I'm able to earn at least as much as I expected from the sandwich). I signed my transactions with my EOA; however, note that `SC` will actually execute the trade (i.e., `SC` has my funds in it, and that's what will execute the txs against SushiSwap, not my EOA). I'm just signing with my EOA to call `SC`.
3. The builder simulates the bundle against `SC`. If it reverts upon simulation (e.g., if someone gets the opportunity ahead of me, my conditions aren't met, etc.), then the builder won't include it. My transaction is successful though, and they accept my bid.
4. Onchain, my EOA calls `SC`. `SC` checks specific conditions in the bytecode around reversion (e.g., it checks that the sender interacting with it, my EOA, is authorized to do so and that other criteria for execution are fulfilled; note that adding more checks also increases my gas cost). Then it has some branching logic to execute the strategy. It now successfully executes, and the chain state is updated.

Overall, searchers already express preferences via smart contracts in a sense today, but they do not bid with them in the same manner as they would with SUAVE.
For a unified auction to interpret a wide range of preferences, there needs to be some form of unified "language" that everyone can use. In SUAVE's case:
The Turing-complete EVM provides the requisite expressivity. However, one question is whether the EVM is too expressive. Classical impossibility theorems in this area show how a bidding language that allows arbitrary expression can result in unbounded computational complexity, rendering the auction impractical. In a world where SUAVE is looking to express arbitrarily complex preferences for arbitrarily many domains, it's unclear how that combinatorial explosion would be kept in check.
There may also be concerns with griefing attacks here. For example, I might express a preference via a contract that always exceeds the gas limit unless one particular input is provided. Then nobody could simulate the actual condition for that contract until they have that input. The EVM doesn't provide great DoS prevention here.
The need for EVM smart contracts to express all preferences also seems suboptimal. If you're trying to implement complex logic such as:
Then writing custom smart contracts that implement this as a decision tree appears to be necessary. You obviously won't be writing out new smart contracts every time you want to express something to SUAVE - you'll have template contracts to execute strategies similarly to what was described earlier.
However, not every abstract preference a user could have will be serviced by existing contracts. It would appear beneficial to have a more native way to express preferences at the user intent level.
Next up, executors compete to fulfill user preferences across any domain.
You've probably heard of MEV-Share by now - Flashbots recently announced their beta version. This is the (trusted) v1 of the OFA which will eventually be worked into SUAVE. It'll take multiple iterations to improve it and reduce the trust in operators such as Flashbots.
Those details aren't the focus here though. The abstraction is that users send orders, and searchers (executors) competitively bid to give users best execution. Some variation of this would be a central part of SUAVE.
Bids `b` are composed of:

- `k` - a deposit
- `e: S → A` - an executor function
- `P(s)` - a predicate over SUAVE states
- `c` - a computation
- `Q(s)` - a predicate over SUAVE states

In which:

- `e` maps from SUAVE states `s ∈ S` to the set of SUAVE accounts `A`
- `P(s)` and `Q(s)` are likewise defined over states of SUAVE

The result of executing a bid `b`:

- If `s_cur` satisfies `P` → transfer the deposit `k` to the `e(s_cur)` account, modifying SUAVE state to `s_next`
- If `s_cur` does not satisfy `P` → SUAVE state remains unaltered

Within the bid, a user has three programmable privacy control knobs:
Programmable privacy = allowing the user to selectively decrypt and reveal as much (or as little) of their data as they wish, and under what conditions. You can partially decrypt information such that executors can fulfill your preferences while also keeping various aspects of your economic preferences private for example.
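A toy sketch of the idea (field names are illustrative, and a real system would enforce this with encryption rather than simple filtering):

```python
# "Programmable privacy": the user tags each field of their preference with a
# reveal rule, so executors see only what the user chooses to expose. Here
# the order's existence and direction are revealed so searchers can find it,
# while the size and limit price (the economic bounds) stay private.
def executor_view(preference, revealed_fields):
    return {k: v for k, v in preference.items() if k in revealed_fields}

preference = {
    "token_pair": "ETH/USDC",   # reveal: lets executors discover the order
    "direction": "buy",         # reveal
    "size": 100.0,              # private
    "limit_price": 2005.0,      # private: hides the user's economic bound
}
print(executor_view(preference, {"token_pair", "direction"}))
# {'token_pair': 'ETH/USDC', 'direction': 'buy'}
```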
A user can enforce conditions such as:
SUAVE also requires credible commitments - we need to enforce certain conditions for execution. This is already part of MEV-Share with its "validity conditions." They're passed along with the user's transactions, stipulating conditions such as "the user must be paid ≥1 ETH for this bundle to be executed."
In the initial stages of MEV-Share, these conditions will not be enforceable - they'll rely on trusting the builder who's including them. Later iterations of the OFA will look to reduce trust in the enforcement of validity conditions by leveraging SGX.
Preferences can also include fee escalators - transactions which increase their gas price over time. They can even start with a negative value (i.e., an executor must pay the user for the right to execute it). This allows users to permissionlessly conduct an implicit Dutch auction for the right to fulfill their preference.
As an example:
Fee escalators can be powerful when combined with the programmable privacy described. Users are free to decrypt and reveal as much or as little as they wish.
If you keep this fee curve and your transaction information private, searchers could try to brute force optimize this against other bundles. You get the guarantee here that the user gets optimal MEV execution while giving permissionless real-time access to their orderflow. This just relies on a competitiveness assumption around searchers and validators.
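A linear fee escalator along these lines might look like the following sketch (the linear schedule and the specific parameters are assumptions for illustration):

```python
# The fee offered to executors rises from a start value (possibly negative,
# i.e. the executor pays the user for the order) to a cap over a block range,
# implementing an implicit Dutch auction for execution rights.
def escalating_fee(start_fee, end_fee, start_block, end_block, current_block):
    if current_block <= start_block:
        return start_fee
    if current_block >= end_block:
        return end_fee
    frac = (current_block - start_block) / (end_block - start_block)
    return start_fee + frac * (end_fee - start_fee)

# Starts at -0.1 ETH (executor must pay the user 0.1 ETH for the order)
# and escalates to +0.2 ETH over 10 blocks.
print(escalating_fee(-0.1, 0.2, 100, 110, 100))  # -0.1
print(escalating_fee(-0.1, 0.2, 100, 110, 105))  # ≈ 0.05 (midpoint)
print(escalating_fee(-0.1, 0.2, 100, 110, 120))  # 0.2
```

The first executor for whom the current fee exceeds their cost of execution takes the order, so the user pays roughly the marginal price of execution.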
The DBB network aggregates preferences (many of which now have their execution paths optimized by executors) and turns them into blocks across domains.
DBB is described as one of the three logically separate components within SUAVE because not all domains even have a notion of block building and PBS. However, this DBB role is really a specialized instance of an executor within the execution marketplace.
We're used to a clear distinction shown between searchers and builders today, but in reality the lines are a bit fuzzy. Executors are an umbrella term for the actors who fulfill these preferences, and that can include the roles we see today in searchers and builders.
Today, builders build entire blocks for Ethereum. In the long-run, SUAVE's ultimate goal is to have this network of executors building blocks collaboratively, "snowballing" into building a full block. One executor can build a portion of an encrypted block, then another can add on more transactions, and so on. This collaboration is how to get fully DBB.
Now it's time for the user's destination chain validators to accept (or reject) what the SUAVE executors are trying to fulfill.
SUAVE does not replace the mechanism by which a chain selects its blocks. They retain full control. Ethereum, Arbitrum, Optimism, etc. wouldn't be changing their fork-choice rules to opt into SUAVE. There are no protocol-level changes required for SUAVE to build a block for a chain.
For example:
Destination chain proposers may or may not be "SUAVE-aware" and natively integrated. For example, an Ethereum validator could profit switch between SUAVE bids and centralized builders, or it could simply not pay attention to SUAVE. It's more efficient if these validators are themselves SUAVE executors, but it's not required. Other actors could work to fulfill these preferences regardless.
Let's consider the transaction flow:
- I want the bundle [`tx_1`, `tx_2`, `tx_3`] mined. If it is, I'll pay a 3 ETH bid `b` to the executor `e`.
- If Validator is SUAVE-aware - Validator can profit switch `b` against its best known mempool block. SUAVE will have native plugins for this where validators can directly listen to bids and automatically profit switch over bids they're able to parse and control. Other actors translate these to bids that validators can control.
- If Validator is SUAVE-unaware - Executor gets the validator to include the bundle over PGAs or some other third-party channel (e.g., executor makes best-effort PGA bids to get them included, or relays them as a bid via a third-party plug-in such as MEV-Boost).
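The SUAVE-aware path above amounts to a simple value comparison, sketched here (names are illustrative):

```python
# A SUAVE-aware validator "profit switches": it takes whichever is worth
# more, the best available SUAVE bid or the best block it can build from
# its own mempool.
def profit_switch(suave_bids, local_block_value):
    best_suave = max(suave_bids, default=0.0)
    if best_suave > local_block_value:
        return ("suave", best_suave)
    return ("local", local_block_value)

print(profit_switch([1.2, 3.0, 0.7], 2.5))  # ('suave', 3.0)
print(profit_switch([0.4], 2.5))            # ('local', 2.5)
print(profit_switch([], 2.5))               # ('local', 2.5) - no bids seen
```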
As I've written previously, this means that SUAVE cannot guarantee atomic inclusion of X-chain transactions by itself. You need the proposers of respective chains to agree on atomic inclusion for that guarantee to be enforced.
SUAVE allows you to express preferences for X-domain transactions. Let's consider an example. A user communicates their preference to SUAVE that they want two trades executed at the current block height to close an arbitrage:
- Transaction 1 (`T_1`) - Buy ETH at $2,000 on Rollup 1 (`R_1`) in Block 1 (`B_1`)
- Transaction 2 (`T_2`) - Sell ETH at $2,100 on Rollup 2 (`R_2`) in Block 2 (`B_2`)

These rollups may even have completely different, unsynchronized block times. `R_1` could have 1s blocks while `R_2` has 10s blocks. It's then perfectly reasonable that `T_1` closes its leg of the arbitrage, but then `T_2` fails.
That's why SUAVE is not prescriptive about how preferences are actually achieved. SUAVE looks to be maximally flexible such that it can coordinate for any domain and user. Executing on different domains will therefore present different challenges. Different users will accept different outcomes and levels of risk.
If the user is only willing to take execution if both legs are filled, then the executor must hold the risk for them that only one leg executes. For example, the user can require the executor to fulfill both legs with their own capital, then they can exchange the funds for the user's locked capital only if both legs are filled. If they fail to fully meet the preference, then the executor is stuck holding the risk.
This requires sophisticated executors to price the execution risk of conducting the statistical arbitrage, which means fewer executors will likely bid for the opportunity (they may not have the upfront capital or want to warehouse the risk).
SUAVE can support either desired path. Users and executors can all have their own risk tolerance and interact however they wish. SUAVE can't provide "technical X-domain atomicity" on its own, but it can provide "economic X-domain atomicity" in this sense from the user perspective (though the executor may get stuck holding the risk).
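The all-or-nothing settlement described above can be sketched as a toy escrow (illustrative, not an actual SUAVE contract):

```python
# "Economic atomicity" for the two-leg arbitrage: the user's funds sit in
# escrow, and the executor (who fronts their own capital on both rollups)
# can claim them only if oracles report that BOTH legs landed.
def settle(escrow_eth, leg1_filled, leg2_filled):
    """Returns (payout_to_executor, refund_to_user)."""
    if leg1_filled and leg2_filled:
        return escrow_eth, 0.0   # executor is made whole and earns the bid
    return 0.0, escrow_eth       # executor eats the one-sided risk

print(settle(3.0, True, True))   # (3.0, 0.0)
print(settle(3.0, True, False))  # (0.0, 3.0) - executor stuck with the risk
```

From the user's perspective the outcome is atomic; the non-atomicity risk has simply been transferred to (and priced by) the executor.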
Now that the destination chain confirmed their own block, SUAVE Chain needs to become aware of that result. As mentioned earlier, bids are only unlocked for executors who meet users' preferences. Oracles are required to prove that these preferences were fulfilled elsewhere.
These oracles can be implemented however desired. In any case, these oracle contracts are responsible for importing external events into SUAVE's state.
A simple example would be an oracle contract which allows SUAVE bids to query Ethereum's history. I might want to bid for an empty block in 10 blocks. I can submit a bid by creating a transaction, and the transaction will output a payment if the oracle tells it that the Ethereum state has transitioned such that the payment should be made.
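That oracle-gated payment might be sketched as follows (the oracle interface here is hypothetical):

```python
# A bid pays out only once an oracle contract attests to a fact about
# Ethereum's history - here, that a target block was empty.
class EthereumHistoryOracle:
    def __init__(self, blocks):
        self.blocks = blocks  # block_number -> list of txs (imported state)

    def block_is_empty(self, n):
        return len(self.blocks.get(n, [])) == 0

def settle_empty_block_bid(oracle, target_block, bid_eth):
    """Release the bid iff the oracle attests the target block was empty."""
    return bid_eth if oracle.block_is_empty(target_block) else 0.0

oracle = EthereumHistoryOracle({100: [], 101: ["tx_a", "tx_b"]})
print(settle_empty_block_bid(oracle, 100, 1.0))  # 1.0 - block was empty
print(settle_empty_block_bid(oracle, 101, 1.0))  # 0.0 - preference unmet
```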
SUAVE Chain plays an important role in preference expression, and is needed for payment settlement after the fact. Recalling the simplified transaction flow:
You might be asking a few questions here:
SUAVE has two requirements for transmitting preferences:
A chain is needed to settle payments for X-domain preferences. Consider the example earlier where a user submits a preference for X-domain arbitrage, but is only willing to pay if both legs execute. There must be some oracle verification after the fact to settle only if the entire preference was met. You can't just pay out on each leg of the trade (e.g., you could then pay for one leg of the trade, but then the other leg fails).
There needs to be some domain which provides this all-or-nothing payment option. Bridging assets to SUAVE Chain and settling everything there is one such option. That may be unacceptable to some users, but it needs to be a possibility as SUAVE is trying to support all potential models of preference expression and settlement.
This is best served as its own independent chain (versus an existing chain) to remain neutral when settling preferences across multiple domains. This will also be important as SUAVE moves towards providing services for all domains (not just Ethereum). It can have independent ownership and participation separate from any existing chain, which is likely needed to get buy-in from other domains, whether that be Cosmos, Solana, CEXs, etc.
SUAVE also needs to implement a wide variety of optimizations that would be unacceptable to existing domains. It needs to control the full stack chain to implement these. This could be as simple as relatively higher gas limits and lower block times - these are of course unacceptable to Ethereum.
Another example relates to the mempool. Flashbots has considered replacing Ethereum's current mempool within Geth with a different structure that's optimized for faster communication.
Flashbots has also considered making SUAVE Chain a rollup in the long-term. This could allow SUAVE Chain to use the rollup's derivation function to trustlessly access the state of Ethereum L1 and its other rollups. This is valuable in reporting state transitions of other domains back to SUAVE Chain so that the conditional payments from bids can be unlocked to executors.
SUAVE of course isn't live yet, and there isn't even a spec out. It's an ambitious vision which will take years to realize. The plan is to ship it in phases:
In the longer-term, Flashbots intends to look into crypto-economics, custom secure enclaves, MPC, and FHE to further reduce the trust guarantees in the system.
So, who's going to use SUAVE and why? A few high level users could be:
Some of the biggest open questions include:
Overall, SUAVE is dope. It's an ambitious and fascinating abstraction of how to express and fulfill any generic preference.
In case SUAVE wasn't confusing and fascinating enough, I've got another one:
Anoma is another protocol under development which is very reminiscent of the core ideas behind SUAVE. For starters, Anoma:
Any chain can implement the Anoma architecture. You can have an Anoma L1, Anoma L2, whatever. A chain implementing the Anoma architecture is referred to as a "fractal instance" of Anoma, and all share certain homogeneous standards.
Confusingly, one of the planned fractal instances is also currently called "Anoma." For simplicity, I'll call it "Anoma L1" when I refer to this specific instantiation. It'll be an IBC-enabled L1 PoS chain. If I just say "Anoma," I'm talking about the architecture.
Remember, Anoma and Anoma L1 are completely separate ideas. Think of "Anoma" a bit like you think of "Cosmos". You can have a Cosmos L1, Cosmos L2, whatever. It's just referring to chains that share some set of standards. The standards that Anoma chains and Cosmos chains implement aren't exactly analogous, but it's a helpful simplification.
Every blockchain today provides settlement - consensus agrees upon and finalizes some updated state of the world.
However, they're not directly optimized for counterparty discovery (CD). For simple stuff, it doesn't really matter. If I want to send you some ETH, I don't need CD. You just give me your address. I submit a transaction to the Ethereum mempool which authorizes sending ETH from my address to yours.
That's not the case for more complex interactions like trading. This requires CD to:
I know what I want (e.g., I want to swap 1 ETH for 1000 USDC), but I don't know how to get it. Just having that "intent" of what I want isn't enough to settle on-chain. I need a fully formed state transition.
You're probably thinking, "Ok cool, so what? I can just go to Uniswap or whatever other application for CD. That's what they're there for." And you're correct!
Let's consider a CLOB as a simple example. You could put every bid/ask on-chain, but this is generally prohibitively expensive (and unnecessary). As a result, you'll see constructions like dYdX:
So there's two parts to the solution here:
AMMs like Uniswap arose largely due to the constraints and gas inefficiencies of CLOBs. Everyone can just use the AMM contract as the central point of counterparty discovery. Current AMMs still have some obvious drawbacks here in terms of efficiency, and broadcasting your trade naively has problems:
The Anoma architecture attempts to alleviate this problem by unifying both:
Both of these tasks are required to move from a user wanting to do something → blockchain settles a state transition. The difference is just that Anoma blockchains vertically integrate settlement and CD. Most applications require CD, and building it into the core architecture provides them with some interesting benefits which we'll see shortly.
The key part of Anoma here is their notion of "intents." Conceptually, they try to achieve a very similar goal as SUAVE's notion of "preferences."
Overall, SUAVE appears at least somewhat constrained in its approach of preference expression vs. a system such as Anoma which is built from the ground up on the notion of "intents." Anoma doesn't plan for users to end up actually submitting "transactions."
First, let's look at how you actually interact with a blockchain today. I want to swap 1 ETH for 2000 USDC, so I send a transaction to the Ethereum mempool:
You never actually defined what you want. You defined how you want to get it.
If the code is clean, I'll send 1 ETH and receive 2000 USDC. If not and I didn't audit every opcode I signed (lol), maybe a proxy contract steals my money.
Existing protocols are designed with transactions as their fundamental unit, but transactions are completely non-intuitive. They don't match how anyone actually thinks. You always think in terms of some state change you want (e.g., I want a future state where I own 1 less ETH and 2000 more USDC).
In Anoma, intents are the fundamental unit by which users express their preferences, and intents work the exact opposite way. Intents match what users are actually thinking:
Concretely, intents are off-chain signed messages that describe a partial state transition. They authorize some state preference that I want (e.g., I want a future state where I own 1 less ETH and at least 2000 more USDC). Intents are fully programmable - you can express any arbitrarily complex state preference. Maybe I only want to swap my ETH if it's sunny in New York and the Yankees won today. You get the idea. Intents are just arbitrary code that's evaluated at runtime by the settlement layer.
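A toy sketch of an intent as a predicate over an acceptable state delta (the representation is illustrative, not Anoma's actual intent format):

```python
# An intent describes the *partial* state transition the user would accept,
# not an exact execution path. Any concrete fill either satisfies it or not.
def swap_intent(delta):
    """Accept any outcome where I lose exactly 1 ETH and gain >= 2000 USDC."""
    return delta.get("ETH", 0) == -1 and delta.get("USDC", 0) >= 2000

print(swap_intent({"ETH": -1, "USDC": 2000}))  # True
print(swap_intent({"ETH": -1, "USDC": 2051}))  # True  - better fill, still valid
print(swap_intent({"ETH": -1, "USDC": 1999}))  # False - violates my constraint
```

Note the contrast with a transaction: the intent never specifies which contract, pool, or route produces the delta.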
You can think of intents as "partial transactions", as they require some other parts to form complete transactions that satisfy users' constraints and enact a state change. Specialized middlemen called "solvers" (akin to executors) look through these intents to fulfill them. Solvers facilitate CD and route the intents to eventually be executed onchain by validators.
Users sign these binding intents via some client (e.g., wallet, dApp UI) then broadcast them to intent gossip nodes. This can be directed to certain node types, or undirected where it gets broadcast to as many nodes as possible. The intent gossip layer is just a p2p networking layer with a bunch of intents floating around (kind of like a mempool, but they're "intents" instead of "transactions").
Validators don't need to have any view of these intents - most of them probably won't ever be settled onchain. You could have billions of them floating around, and solvers can specialize in subscribing to certain "topics" they care about. Some form of specialization is likely, since general solving is an NP-hard problem and a fully generalized solver would become intractable at scale.
A solver might only subscribe to process intents for:
The intent gossip network can span over all Anoma fractal instances. One globally connected intent gossip network can handle the intents for any and all Anoma fractal instances. UIs for applications can support deployment and order fulfillment across different security models if they wish. Intents are able to specify which security models they're willing to settle to.
As noted earlier with SUAVE, a global p2p layer could have some DoS concerns. In practice, solvers could operate much as builders do today (e.g., have a reputation system for known peers, blacklist as needed, etc.). It's unclear how robust this can be under extreme scenarios.
While having a blockchain such as SUAVE Chain allows you to impose some fees in the pessimistic case, it still doesn't prevent the need to do the computational search. This may still present some attack vectors (e.g., as described earlier with the contract overflow example).
Solvers transform user intents into fully formed state transitions which meet users' desires. This achieves similar properties to what I described earlier with SUAVE batching as one generalized example:
Similar to SUAVE, they also don't need to find perfectly offsetting intents. Solvers can route intents in complex ways with many intents that end up offsetting in aggregate, or they can fill them themselves if attractive. It's completely generalized.
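A toy sketch of intents offsetting in aggregate rather than pairwise: a three-party ring where each participant gets what they want only if the whole cycle settles (the representation is illustrative):

```python
# A wants ETH->USDC, B wants USDC->ATOM, C wants ATOM->ETH. No two intents
# offset pairwise, but a solver can settle all three as one ring.
def ring_clears(intents):
    """intents: list of (give_token, take_token). Check they form a cycle."""
    gives = {give: take for give, take in intents}
    start = intents[0][0]
    token, steps = start, 0
    while steps < len(intents):
        token = gives.get(token)  # follow what each giver wants in return
        if token is None:
            return False          # chain breaks: nobody offers this token
        steps += 1
    return token == start         # back where we started -> the ring settles

print(ring_clears([("ETH", "USDC"), ("USDC", "ATOM"), ("ATOM", "ETH")]))  # True
print(ring_clears([("ETH", "USDC"), ("USDC", "ATOM"), ("ATOM", "BTC")]))  # False
```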
Intents can specify settlement-conditional fees - they're only paid out if the intent is satisfied, settled, and confirmed onchain by consensus. This can be split amongst nodes involved in the gossiping and solving processes.
Anoma is also looking to implement various PETs to coordinate trustless operation on user data, which we can again treat as a black box for the purposes of this post:
Note that users can also send intents which describe a full state transition. For example, if Alice just wants to send 1 ETH to Bob, she doesn't need CD. If she doesn't need solvers to do any work, she can skip them and submit a full transaction herself (in a sense, acting as her own solver).
Right before solvers send transactions to the transaction mempool, they can encrypt them. Validators don't actually see the transactions that solvers send to them. They receive the transaction ciphertext:
The particular scheme used is Ferveo - a distributed key generation (DKG) and threshold public key encryption protocol. Note that Ferveo is just a framework - different chains can flexibly implement different rules around it (e.g., requiring all transactions to be encrypted vs. auctioning off the top of the block in the clear). You may have heard that Osmosis also plans to use Ferveo.
Here's the basic transaction flow:
This provides temporary privacy. The privacy here is a means to an end: because the validators don't see the transactions in the clear, they shouldn't be able to front-run or censor them (assuming they're not colluding).
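The threshold idea can be illustrated with plain Shamir secret sharing. To be clear, this is a toy standing in for the concept, not Ferveo's actual DKG/encryption protocol:

```python
# A secret (standing in for a decryption key) is split so that any k of n
# validators can jointly recover it, but fewer than k learn nothing.
import random

P = 2**61 - 1  # a prime field modulus

def split(secret, k, n):
    """Shamir: embed the secret as f(0) of a random degree-(k-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = split(123456789, k=3, n=5)
print(recover(shares[:3]) == 123456789)   # True: any 3 of 5 shares suffice
print(recover(shares[2:]) == 123456789)   # True: a different 3 also work
```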
Note that implementing threshold decryption today alongside CometBFT consensus (Tendermint) would require ABCI++ to enable "vote extensions". ABCI++ isn't finalized yet, but it's expected in the near to medium term.
With Anoma, you could potentially have MMs continuously send bid/ask limit orders as intents. You have an entire order book living as public binding intents.
Obviously MMs need to update their stale prices when information changes:
With Anoma, MMs could potentially just intermittently update their intents. E.g., "I'm willing to buy X for Y, but this order is only good for block height Z." They can then continuously update their orders every block. They could become stale intra-block, as there's no method to cancel. There's a continuously updated order book of intents that users can execute on.
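Such a good-til-block order might be sketched as follows (the representation is illustrative):

```python
# A market maker's limit-order intent with built-in expiry: "I'll buy 1 ETH
# for 2000 USDC, but only through block height Z." Since intents can't be
# cancelled, the expiry bounds how stale a quote can get.
def order_valid(order, current_block):
    return current_block <= order["good_til_block"]

order = {"side": "buy", "size": 1.0, "price": 2000.0, "good_til_block": 105}
print(order_valid(order, 104))  # True  - still live
print(order_valid(order, 106))  # False - expired; the MM posts a fresh intent
```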
Additionally, this DEX isn't bounded by a single domain. You don't need to think in terms of one DEX on one chain. Intents are fully composable, programmable, and global. I can specify exactly where and how I'm willing to get filled on my order.
As a quick TLDR, CoW Swap is an Ethereum application which matches trades via batch auctions using a variety of liquidity sources. Users submit trades to CoW Swap over some predefined batch auction period. Then:
The protocol continuously runs these batch auctions. Within batches, this optimal order matching and routing is facilitated by many competing "solvers" (yes, that's actually what they're called here too). The winning solver is the one that can maximize traders' surplus either by the most optimal CoW, finding the best liquidity sources, or combining both in a single settlement.
However, you're relying on a central party to coordinate here in the case of CoW Swap today. Anoma could potentially decentralize this type of batching:
Anoma and SUAVE appear similar in their notions of generalizing user intents/preferences. A market of solvers/executors compete to fulfill them, grounded in privacy technology to facilitate trustless collaboration. However, they have fundamentally opposite approaches.
Anoma's vision is for many fractal instances with a homogeneous architecture. Many chains with shared standards get a lot of benefits as described above.
SUAVE takes a very different view - it's built in realization of the fact that many chains are likely to have completely heterogeneous architectures. It's another layer in the stack to service this role for any chain. It's agnostic to what your chain looks like. It just wants to optimize as the universal preference layer, outsourcing key components for any domain.
They're built with very different and possibly even complementary design goals.
I'll assume general familiarity with shared sequencers (SSs) such as Espresso or Astria here. If you're unfamiliar, please refer to my previous post.
As mentioned above, SUAVE can provide a best-effort "economic atomicity." This is the case with any builder. Only proposers (validators) have the power to guarantee transaction inclusion. SUAVE executors are not necessarily validators of other chains, and they can't guarantee atomic inclusion of X-chain transactions.
Conversely, SSs can guarantee the atomic inclusion and execution of transactions across rollups opted into it. SSs act as the shared "proposer" for all of their rollups. However, SSs such as Espresso and Astria do not execute transactions, so they can't guarantee that a transaction won't revert upon execution.
That's why SSs will need stateful builders to sit in front of them.
SUAVE could be such a builder. In the earlier example, a user wanted two trades executed at the next block height to close an arbitrage:

- Transaction 1 (T_1) - Buy ETH at $2,000 on Rollup 1 (R_1) in Block 1 (B_1)
- Transaction 2 (T_2) - Sell ETH at $2,100 on Rollup 2 (R_2) in Block 2 (B_2)

A SS with PBS can now get stronger economic guarantees throughout that whole process:
SUAVE anticipates that different X-chain atomicity approaches will arise, and it looks to support preference expression for all of them. Domains don't need to integrate with SUAVE directly, adopt a particular ordering model, or even have a notion of PBS. SUAVE is just a global "bulletin board" for preferences that sophisticated actors who understand the risk can compete to execute on across domains.
SUAVE can be seen as a demand-side aggregator for X-domain preferences. It exposes the tools necessary to securely express these preferences. In doing so, this incentivizes the development of solutions such as SSs. They can provide the "supply" of X-domain transactions, allowing for more efficient capture of X-domain MEV.
It's well understood that X-domain MEV exerts centralizing pressure on block producers (e.g., validators/sequencers). There's an incentive for the same actor to control X-chain block production to more efficiently internalize X-chain MEV.
As I just described above, a SS can offer exactly these guarantees. And they don't need to carry heavy state on hand or execute transactions, so they can hopefully be lightweight and decentralized.
But we've just shifted the centralizing force to the builder. We've done that on L1 Ethereum with MEV-Boost though, so what's the big deal? Keep validators decentralized, and shift the centralizing force to builders.
However, there's a much stronger centralizing force here compared to something like L1 Ethereum. Building a block for L1 Ethereum is one thing, but doing it for arbitrarily many high throughput domains at low latency is another.
Overall, this stack may drive up the resource requirements for builders in terms of software/hardware level transaction execution, X-chain inventory management, balance sheet size, and risk taking. To be a competitive block builder, it may become a requirement to fulfill these conditions.
To be clear, centralizing forces from X-domain MEV exist for builders whether for a SS or any other domain. However, there's a distinction here:
As mentioned, SSs need some form of PBS. It seems likely that this takes the simple form of one mega auction across all SS rollup blocks. If the PBS interface here is building one mega block for all SS rollups, then that solidifies the most extreme requirements.
In theory, a SS could run more granular auctions for each individual rollup it sequences for. Then it would need to interpret this arbitrarily high number of auctions, checking for conflicting preferences. For example:
- One bid for blocks B_1 & B_2 (covering R_1 & R_2)
- Another bid for blocks B_2 & B_3 (covering R_2 & R_3)
The SS proposer would have to comb through all the bids, checking for conflicts, and optimizing the merging of them. That probably sounds familiar - that's what a builder does.
So in practice, there will likely be a builder aggregating across all domains and interfacing with the SS proposer in a single large PBS-style auction.
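The merging problem the SS proposer (or builder) faces can be sketched as selecting a non-conflicting, value-maximizing set of bundle bids. This is a greedy sketch under hypothetical bid tuples — a real builder would optimize this properly (it's a weighted set-packing problem), but greedy-by-payment shows the conflict-checking step from the example above:

```python
def merge_bids(bids):
    """Greedy sketch of the proposer's merging problem: each bid covers
    a set of rollup blocks and pays some amount. Two bids conflict if
    they cover any of the same blocks. Each bid is a tuple:
    (bidder, {blocks}, payment)."""
    accepted, taken = [], set()
    for bidder, blocks, payment in sorted(bids, key=lambda b: -b[2]):
        # Accept the highest-paying bids whose blocks are still free.
        if blocks.isdisjoint(taken):
            accepted.append(bidder)
            taken |= blocks
    return accepted

# The example from the text: both bids cover R_2's block (B_2),
# so only one of them can win.
bids = [
    ("builder_a", {"B_1", "B_2"}, 5.0),  # B_1 & B_2 for R_1 & R_2
    ("builder_b", {"B_2", "B_3"}, 4.0),  # B_2 & B_3 for R_2 & R_3
]
assert merge_bids(bids) == ["builder_a"]
```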
Basically, SSs appear to be speedrunning the Endgame. There's a fair argument that this is inevitable anyway, but it's an open question still.
We'll likely see both SSs and "traditional" rollups decentralizing their sequencer (e.g., implementing a simple consensus set) prior to SUAVE. However, decentralizing sequencers (proposers) is not enough. We also need to consider rollup block building.
It seems likely that some variation of proposer-builder separation (PBS), protocol-owned building (POB), etc. is destined to arise on rollups in the near to medium-term. This should be an area of focus as they consider how to decentralize.
SUAVE is a decentralized block builder, so does it solve the problem here? Not entirely. It's still unavoidable that if you're building a mega block for all the rollups on a SS, you have to meet the high requirements. SUAVE executors would be stuffing every rollup on the SS into their SGX to end up building a full block for it (or use other forms of cryptography down the road).
The technical challenge is one obvious question, but regardless, the point is that SUAVE isn't just about lowering the hardware requirements for builders. What it could help with is making the block building process more collaborative and trustless. You could have many builders each contributing a piece to the blocks they output.
Now for the really hard part - incentives.
The real alignment problem is alignment of incentives across components of the modular stack
— jill gunter (@jillrgunter) March 31, 2023
Figuring out the economics of SSs may prove to be the most challenging component. First, the nice part - SSs should allow their rollups to capture more aggregate value from MEV.
Let's consider a simple example. We have 10 equal (isolated) rollups whose validators capture $1mm each per year. In aggregate, they capture $10mm per year.
Now those same rollups all decide to opt into a shared sequencer. There's no reason that $10mm should go away, that's still there. Maybe it's even $11mm due to better bridging and interoperability → more activity overall. But now there's even more value for them to internalize from X-chain MEV. Let's break that down.
Basic single-chain MEV strategies are commoditized. If you're running atomic arbitrage between ETH/USDC pairs on Ethereum L1 SushiSwap and Uniswap, you're bidding almost all of that value back to the Ethereum proposer. It's effectively riskless profit.
X-chain MEV is the opposite. Validators can't make X-chain atomic commitments. X-chain MEV can only be captured probabilistically by sophisticated searchers running statistical arbitrage. They have to manage inventory across chains and warehouse risk → market is riskier and less competitive → a lower % of the MEV gets bid back to validators.
Now validators (SSs) can make X-chain atomic commitments. X-chain MEV can be captured with high confidence by searchers running atomic arbitrage. Extraction becomes highly efficient and competitive → a higher % of the MEV gets bid back to validators (SSs).
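A small worked example of why atomic commitments shift value back to the sequencer, using the $2,000/$2,100 prices from earlier (the bid fractions are illustrative assumptions, not measured values): when execution is guaranteed, the trade is near-riskless and competition pushes the bid toward the full spread; without guarantees, searchers keep a larger risk premium.

```python
def arb_bid(buy_price, sell_price, size, bid_fraction):
    """Profit from buying on R_1 and selling on R_2, and the portion
    bid back to the sequencer. With atomic X-chain inclusion,
    competition pushes bid_fraction toward 1; statistical arb
    warehouses risk, so searchers keep a bigger cut."""
    spread = (sell_price - buy_price) * size
    return spread, spread * bid_fraction

# 1 ETH of size: $100 gross spread either way, but very different
# shares flow back to the sequencer (fractions are hypothetical).
spread, atomic_bid = arb_bid(2000, 2100, 1, bid_fraction=0.95)
_, statistical_bid = arb_bid(2000, 2100, 1, bid_fraction=0.50)
assert spread == 100
assert atomic_bid > statistical_bid
```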
That's great for the rollups! They were capturing $10mm in aggregate before, and now let's say they're capturing $12mm. But wait - how do we split that up?
The SS has a lot of power here. They now decide the transaction inclusion and ordering for all rollups by default → they get first dibs on MEV. So we need to decide how to divvy that up. A "perfect" allocation mechanism would need to simulate all possible outcomes, which is likely an outright impossibility.
You appear to broadly have two paths then:
I certainly don't have the answer here today. This is a fascinating area of open research.
If the above is figured out, will rollups be ok with sharing the pie in a "good enough" way? Or do they go full Game of Thrones on each other? Honestly, we've seen a bit of both, so I'm not sure. It'll be an interesting political and social experiment to watch play out regardless.
When rollups opt into a SS, they're opting into somewhat of a "one-size-fits-all" model. Some even think they look like one big rollup. It's a realization that every chain is destined to be influenced by the sequencing of other domains.
Even Ethereum is influenced by Binance. Ethereum ←→ Binance is indeed the largest MEV source in crypto today. Several of the largest builders engage in this statistical arbitrage, and this has centralizing spillover effects into Ethereum.
That's the reality of X-domain MEV. There are tradeoffs in how you choose to address this influence.
Opting into a SS allows a rollup to mitigate the centralizing force of X-domain MEV on their validator set (because they can operate X-domain). Its nodes can hopefully be lightweight relative to the number of chains they service, and they can effectively internalize the X-domain MEV. Better interoperability also reduces the friction of X-chain UX.
But, this may come with some tradeoffs:
In my view, it wouldn't make sense to implement another validator set within your rollup in many cases (as another round of processing before or after the SS) for several reasons:
The tradeoff - your rollup won't be able to implement features which require a validator set to enforce with discretion. Examples could include:
To varying degrees, the tradeoffs and associated complexities are just so high that it's not worth implementing separately as a rollup on top of a SS in my view.
Let's consider the example where you want threshold decryption for your rollup. Let's say a SS includes encrypted transactions to which it does not have the decryption key (your rollup's own nodes have the decryption keys). This becomes problematic - you have an entirely separate consensus set with the power to arbitrarily halt your rollup if they choose not to decrypt. There's also a challenging timing mismatch with the SS.
Your validators should ideally be the same group with the threshold decryption key shares, and they should have the same quorum as your consensus. Then, they can just include their key shares as part of their consensus votes, automatically decrypting as they sign off on a block.
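The key insight — quorum for consensus and threshold for decryption being the same set of signatures — can be sketched as follows. Everything here is a hypothetical stand-in (the vote format, the `combine_shares` placeholder) for a real threshold scheme such as Shamir-based reconstruction:

```python
def finalize_block(votes, quorum, threshold):
    """Sketch: each validator's consensus vote carries its decryption
    key share. If the block reaches quorum, that same set of votes
    necessarily carries enough shares to decrypt (threshold <= quorum),
    so decryption happens automatically at finalization.
    votes: list of (validator, key_share)."""
    if len(votes) < quorum:
        return None  # no consensus -> no decryption, no separate halting risk
    shares = [share for _, share in votes[:threshold]]
    return combine_shares(shares)

def combine_shares(shares):
    # Placeholder for real threshold reconstruction (e.g., Shamir).
    return f"decrypted-with-{len(shares)}-shares"

votes = [(f"val_{i}", f"share_{i}") for i in range(67)]
assert finalize_block(votes, quorum=67, threshold=67) is not None
assert finalize_block(votes[:50], quorum=67, threshold=67) is None
```

The point is structural: because decryption rides along with the consensus votes themselves, no separate committee exists that could withhold keys and halt the chain.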
For this reason, you'd want the SS itself to implement threshold decryption natively if you want it for your rollup. To be clear, a SS could do this! But, the tradeoff is that it still leans closer to a "one-size-fits-all" approach in many cases, leaving SSs room to differentiate on various features.
As an example, this seems somewhat against the grain of the Cosmos mindset. There, we see applications that want their own customized chains, and they want their own validators to enforce those rules. It could be infeasible to implement many of these customizations if opted into a SS.
While these rollups may look like one big chain in some sense, there's a key distinction here in that rollups opted into a SS can always fork away (e.g., if the SS is extracting too much value). Valuable shared state is what's hard to fork away, but SSs don't have this on their own. They're effectively just a service provider. A rollup can always just swap out their SS for some other sequencing mechanism with only a minor hardfork.
Additionally, rollups certainly aren't handing over total control. They still have complete flexibility to customize their VM as they wish. They can even enforce complex transaction ordering rules by encoding them deterministically in the state transition function (STF) of their rollup. For example, "the first transaction in my block must have oracle update XYZ to be valid" or "this type of batch must be right after it" etc. This could even include customizations such as Osmosis' ProtoRev Module to automatically internalize cyclic arbitrages.
As discussed above, you wouldn't want to have each rollup do a full round of non-deterministic reordering of transactions after the SS. So long as the VM rules are deterministic though, builders can create blocks within the bounds of these rules. Then rollup nodes can interpret the SSs output according to their rollup's own deterministic rules. This doesn't require a discretionary validator set to do anything.
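The "first transaction must be an oracle update" rule from above can be sketched as a deterministic validity check in the rollup's STF. The transaction format is hypothetical; the point is that any node can apply the rule mechanically, with no discretionary validator set:

```python
def block_is_valid(block_txs):
    """Deterministic ordering rule encoded in the rollup's STF:
    the first transaction in every block must be an oracle update.
    A block violating the rule is simply invalid — builders must
    construct blocks within these bounds, and any node can verify.
    (Hypothetical tx format: each tx is a dict with a 'type' field.)"""
    if not block_txs:
        return False
    return block_txs[0]["type"] == "oracle_update"

valid_block = [{"type": "oracle_update"}, {"type": "swap"}]
invalid_block = [{"type": "swap"}, {"type": "oracle_update"}]
assert block_is_valid(valid_block)
assert not block_is_valid(invalid_block)
```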
SUAVE is an ambitious attempt to unbundle the mempool and builder role for any chain. It looks to enable the expression and execution of any arbitrary preference in a trustless and collaborative manner.
Anoma is another idea to address some of the same underlying challenges. However, it takes a radically different approach by rebuilding the entire blockchain stack, trying to get many chains to share the same standards.
SSs could enable many chains to feel and act more like one chain again. However, they come with tradeoffs, including somewhat reduced flexibility, challenging value attribution, and potentially higher-resourced builders.
In any case, rollups are coming, and so are decentralized sequencers (hopefully). They're likely to arrive before something like SUAVE, so rollups need to think more about how to responsibly incorporate the builder role into their designs.
Otherwise, Vitalik might just be right about the Endgame after all:
Block production is centralized, but block validation is trustless and highly decentralized.
Disclaimer: The views expressed in this post are solely those of the author in their individual capacity and are not the views of DBA Crypto, LLC or its affiliates (together with its affiliates, "DBA"). The author of this report has material personal positions in ETH and Skip Protocol Inc.
This content is provided for informational purposes only, and should not be relied upon as the basis for an investment decision, and is not, and should not be assumed to be, complete. The contents herein are not to be construed as legal, business, or tax advice. References to any securities or digital assets are for illustrative purposes only, and do not constitute an investment recommendation or offer to provide investment advisory services. This post does not constitute investment advice or an offer to sell or a solicitation of an offer to purchase any limited partner interests in any investment vehicle managed by DBA.
Certain information contained within has been obtained from third-party sources. While taken from sources believed to be reliable, DBA makes no representations about the accuracy of the information.