Introduction
audience: all
This book proposes a block-building topology for EVM chains built as a composition of mosaik-native organisms. The topology is designed for:
- Anonymity at submission. Authors of transactions and intents cannot be linked to the content they publish by any single party in the pipeline.
- Sealed-bid auctions over order flow. Searchers bid on bundles without learning each other’s offers until a round commits.
- Verifiable co-building across many operators. Candidate blocks are assembled inside TDX-attested committees; no single operator can unilaterally reorder or censor.
- MEV-aware refund accounting. Value captured on a winning block is attributed back to the order-flow providers that contributed to it, under a publicly verifiable attestation.
- Cross-chain coordination. Lattices on different chains coexist on the same mosaik universe and can cross-subscribe for cross-chain bundles and shared order-flow markets.
The product of this repository is a design book, not a shipped
binary. Follow-on crates — one per organism — will land in later
pull requests. Where code samples appear, they are specifications,
not library calls you can cargo add today.
Why another topology
The block-building pipeline across EVM chains today is either a handful of centralized monoliths (one builder owns the order flow, the building, the relay, and the refund accounting) or a stitched-together mesh of services from different authors with incompatible trust models, opaque interfaces, and no shared identity story. Flashbots Writings — decentralized building: wat do? names the pattern of decentralization this topology targets: Phase 1 — replicated privacy inside TEE-attested committees, Phase 2 — co-built blocks across permissionless participants, Phase 3 — globally parallel building. The lattice proposed here is a Phase 1 to Phase 2 shape, chosen so every organism in it can decentralize at its own pace without forcing the others.
This book assumes familiarity with the block-building vocabulary (PBS, order flow, bundles, MEV-Share, TDX, relays, sequencers on L2); it does not re-derive them. It does assume the reader is new to mosaik and, if they care about submission-layer anonymity, also to zipnet — links are provided the first time each concept appears.
What a lattice is, in one paragraph
A lattice is one end-to-end block-building deployment for one
EVM chain. It is identified by a short, stable, namespaced instance
name (e.g. ethereum.mainnet, unichain.mainnet, base.testnet)
and composes six mosaik-native organisms under that name:
- zipnet — anonymous broadcast of sealed transactions and intents (the existing organism; see the zipnet book).
- unseal — a threshold-decryption committee that unseals zipnet broadcasts for the next organism without revealing cleartext to any single operator.
- offer — a sealed-bid auction committee where searchers place bundle bids over an unsealed order-flow pool.
- atelier — a TDX-attested co-building committee that assembles candidate blocks from the unsealed pool and the winning bids.
- relay — a PBS-style fanout committee that ships header + bid pairs to proposers (or, on L2, to sequencers).
- tally — a refund accounting committee that attributes MEV captured on the winning block back to the order-flow providers whose transactions and bids contributed to it.
Every organism follows the mosaik pattern zipnet shipped with: a
single Config struct that folds every signature-altering input
into a deterministic UniqueId; a narrow public surface of one or
two primitives; a TicketValidator composition gating bonds; typed
Organism::<D>::* free-function constructors that hide raw
StreamId / StoreId / GroupId values from integrators. Multiple
lattices coexist on the shared universe builder::UNIVERSE; an
integrator compiles in a LatticeConfig for each one it cares about
and binds with one Arc<Network>.
The full rationale — why six organisms, why this decomposition, why shared universe — is in Designing block-building topologies on mosaik.
What this topology provides
- Unlinkability of transactions to senders up to the trust boundary of the zipnet and unseal committees. See threat model.
- Sealed-bid auctions where searcher bids are invisible to competing searchers and to the builder until the auction commits.
- Multi-operator block assembly where the builder is a committee, not an entity. No single operator controls the winning block.
- Verifiable ordering and attribution via Raft-replicated state machines inside every organism. Every decision — which bundle wins, which transaction got refunded — is a committed command in an auditable log.
- One Arc<Network> across the whole pipeline. Integrators bind every organism they care about off one mosaik endpoint; operators run one process per organism role per host, scheduled however their ops stack prefers.
What it does not provide (yet, or by design)
- Byzantine fault tolerance. Each organism uses mosaik’s crash-fault-tolerant Raft variant. Deliberately compromised committee members can DoS liveness; anonymity, auction integrity, and block validity all retain the properties promised under the organism’s specific trust assumption (any-trust for anonymity, majority-honest for auction commit, t-of-n for unseal).
- A canonical implementation. The proposal is a shape. Operators are expected to implement — or consume implementations of — the organism crates, and teams with different operational preferences can ship distinct implementations of the same organism surface.
- An on-network lattice registry. Lattices are operator-scoped; discovery is a compile-time Config reference, same as zipnet. See topology-intro — A fingerprint convention, not a registry.
Three audiences, three entry points
Every page in this book declares its audience on the first line and respects that audience’s tone. Pick the one that matches you:
- Integrators — external devs (searchers, wallets, rollups) whose agent publishes into or reads from a lattice somebody else operates. Start at Quickstart — submit, bid, read.
- Operators — teams deploying and maintaining a lattice. Start at Lattice overview then Quickstart — stand up a lattice.
- Contributors — engineers extending the topology itself, or standing up a seventh organism that reuses the pattern. Start at Designing block-building topologies on mosaik then Architecture of a lattice.
See Who this book is for for the conventions each audience is held to.
Relationship to existing Flashbots work
- BuilderNet is the closest existing system to the lattice. The atelier organism is a mosaik-native restatement of BuilderNet’s TDX co-building pattern; tally is a restatement of its refund accounting. The lattice generalises by making the other four organisms first-class, composable, and deployable independently.
- Rollup-Boost and L2 TEE block builders map cleanly onto a single-operator lattice with offer and relay swapped for the L2’s sequencer interface. See Cross-lattice coordination.
- Flashnet / zipnet is the submission organism. This proposal does not re-specify it; it consumes it.
- Mosaik is the substrate. This proposal does not re-specify mosaik primitives; it uses them.
Layout of this book
book/src/
introduction.md this page
audiences.md tone and conventions per audience
integrators/ for external devs binding to a lattice
operators/ for teams running a lattice
contributors/ for engineers extending the topology
appendix/ glossary, env vars, metrics
Who this book is for
audience: all
This book has three audiences, the same three the zipnet book is written for, adjusted for the scope of a full block-building lattice rather than a single organism:
- Integrators — external devs whose mosaik agent binds into a running lattice.
- Operators — teams standing up and running a lattice for a chain.
- Contributors — engineers extending the topology itself, or building a new mosaik-native organism that composes with the existing six.
Every chapter declares its audience on the first line
(audience: integrators | operators | contributors | both | all) and
respects that audience’s conventions. New pages must pick one.
Mixing audiences wastes readers’ time. When content genuinely serves
more than one group, use both (integrators + operators, integrators
+ contributors, …) or all, and structure the page so each audience
gets the answer it came for in the first paragraph.
Integrators (external devs)
Who they are. External Rust developers whose mosaik agent publishes into — or reads from — a running lattice operated by somebody else. Typical roles:
- Searchers — agents that place bundle bids on offer and read the winning bid on the committed round.
- Wallets — agents that submit sealed transactions to zipnet and read refund attestations from tally.
- Rollup / sequencer teams — agents that consume candidate blocks from atelier + relay, or ship sequencer-authored transactions into zipnet.
- Analytics consumers — agents that subscribe to tally for public attribution data.
They do not run committee servers; that’s the operator’s job. They do not modify the organism crates; that’s the contributor’s job. They are integrators.
What they can assume.
- Comfortable with async Rust and the mosaik book.
- Already have a mosaik application in mind; the lattice is a dependency, not the centre of their work.
- They bring their own Arc<Network> and own its lifecycle.
- If they care about submission-layer anonymity, they have read the zipnet book.
What they do not need.
- Protocol theory for the organisms they aren’t using. A searcher integrating against offer should not be forced to read the unseal threshold-decryption spec.
- An operator’s view of keys, rotations, TDX image builds.
- A re-exposition of mosaik primitives.
What they care about.
- “Which organisms does my use case touch?”
- “What do I import? What LatticeConfig do I compile in?”
- “How do I bind to the operator’s lattice?”
- “What does the operator owe me out of band — universe, instance name, MR_TDs, the six organism Configs?”
- “What does an error actually mean when it fires?”
Tone. Code-forward and cookbook-style. Snippets are
rust,ignore, self-contained, meant to be lifted. Public API
surfaces are listed as tables. Common pitfalls are called out
inline. Second person (“you”) throughout.
Canonical integrator page. Quickstart — submit, bid, read.
Operators
Who they are. Teams deploying and maintaining a lattice. In the
common case a single operator runs every organism in a lattice; in
the Phase 2 shape multiple operators co-run the atelier organism
(each contributes committee members) while one operator drives the
rest. The book treats both cases; page headers note when a procedure
only applies to one.
What they can assume.
- Familiar with Linux ops, systemd units, cloud networking, TLS, Prometheus.
- Comfortable reading logs and dashboards.
- Not expected to read Rust source. A Rust or protocol detail that is load-bearing for an operational decision belongs in a clearly marked “dev note” aside that can be skipped.
- Familiar with the block-building vocabulary — PBS, order flow, relays, sequencers on L2, TDX, MR_TD. Not expected to have read the mosaik book; the operator pages link it when needed.
What they do not need.
- Organism internals. They care what a binary does, not which module it lives in.
- Integrator-side ergonomics. That’s the integrators’ book.
- The paper-by-paper cryptographic derivations. Link, don’t re-derive.
What they care about.
- “What do I run, on what hardware, with what env vars?”
- “How many committee members per organism? What happens if one dies?”
- “How do I know my lattice is healthy?”
- “How do I rotate secrets / retire an instance / upgrade an image?”
- “What page covers the alert that just fired?”
Tone. Calm, runbook-style. Numbered procedures, parameter tables, one-line shell snippets. Pre-empt the obvious “what if…” questions inline. Avoid “simply” and “just”. Every command should either be safe to run verbatim or clearly marked as needing adaptation.
Canonical operator page. Quickstart — stand up a lattice.
Contributors (internal devs)
Who they are. Senior Rust engineers with distributed-systems + cryptography background, extending the topology itself, implementing an organism crate, or standing up a seventh organism that composes with the existing six.
What they can assume.
- Have read the mosaik book, the zipnet book, and the zipnet CLAUDE.md conventions.
- Comfortable with async Rust, modified Raft, threshold cryptography, TDX attestation flows, and PBS-adjacent block-building vocabulary.
- Familiar with at least one existing block-building system (BuilderNet, vanilla MEV-Boost, a rollup sequencer).
What they do not need.
- Re-exposition of mosaik primitives or zipnet conventions. Link and move on.
- Integrator ergonomics unless they drive a design choice.
- Motivation for why we want decentralized block building. The Flashbots Writings cover that; we’re downstream.
What they care about.
- “Why this decomposition into six organisms and not four, or ten?”
- “What invariants must each organism hold? Where are they enforced?”
- “How does composition happen across organisms without creating cross-Group atomicity that mosaik does not support?”
- “What breaks if I change an organism’s StateMachine::signature()?”
- “Where do I extend this — which organism, which trait, which test?”
- “How does the shape generalise to an L2 sequencer? To cross-chain bundles?”
Tone. Dense, precise, design-review style. ASCII diagrams,
pseudocode, rationale. rust,ignore snippets and structural
comparisons without apology.
Canonical contributor page. Designing block-building topologies on mosaik.
Shared writing rules
- No emojis anywhere in the book.
- No exclamation marks outside explicit security warnings.
- Link the mosaik and zipnet books rather than re-explaining their primitives.
- Security-relevant facts are tagged with a visible admonition, not hidden inline.
- Keep the three quickstarts synchronised. When the lattice shape, an organism’s public surface, or the handshake model changes, update integrators, operators, and contributors quickstarts together, not “this one first, the others later”.
- Use the terms in Glossary consistently. Do not coin synonyms for “lattice”, “organism”, “universe”, “deployment” mid-page.
What a lattice gives you
audience: integrators
A lattice is one end-to-end block-building deployment for one
EVM chain. As an external developer you bind into a lattice by
pinning its LatticeConfig and opening typed handles on the
organism(s) you need. One Arc<Network> serves every organism and
every lattice you care about.
This page is the index: what lattices can do for you, which organisms you touch for which use case, and where to go next for code.
Why you might want this
The lattice gives you four things at once, each from a different organism:
- Anonymous submission — a sealed, ordered broadcast channel where nothing in the pipeline can link your transaction to you up to the trust bound of zipnet plus unseal. See Submitting transactions anonymously.
- Sealed-bid auctions — bid on a slot’s unsealed order flow without other searchers learning your bid until the auction commits. See Placing bundle bids.
- Verifiable candidate blocks — read blocks committed by a TDX-attested co-builder committee and verify the committee’s collective signature over them. See Reading built blocks.
- Auditable refund accounting — public tally attestations prove which order flow contributed to which winning block and how the refund is allocated. See Receiving refunds and attributions.
You do not have to use all four. A wallet that only wants
anonymous submission can bind to zipnet alone. A searcher that
only wants to bid on existing pools can bind to offer alone. A
rollup operator that consumes the full pipeline binds to all
six.
Who this audience page is for
External Rust developers running their own mosaik agent. You:
- Own your own Arc<Network> and its lifecycle.
- Have read the mosaik book.
- If you care about submission-layer anonymity, have also read the zipnet book.
- Do not operate a lattice; that is a different team whose runbook is For operators.
If you are the team running the lattice, you want Lattice overview instead.
Use case to organism matrix
Pick the row that matches your application; bind the organisms listed.
| Use case | Organisms you bind |
|---|---|
| Wallet / dapp submitting sealed tx | zipnet |
| Wallet / dapp tracking its refunds | zipnet, tally |
| Searcher bidding on a lattice’s order flow | offer, tally (for outcome verification) |
| Cross-chain searcher spanning N lattices | offer on each lattice, one Arc<Network> |
| Proposer / sequencer consuming candidate blocks | atelier, relay |
| Analytics consumer reading block attribution | tally |
| Audit / compliance agent | every organism, read-side only |
Each bound organism is one Organism::<D>::verb(&network, &Config)
call; you pay for one mosaik endpoint regardless.
What you never touch
- Any organism’s internal consensus — committee Raft, unseal share gossip, atelier’s TDX bundle simulation. Those are operator-managed internals. Your read-side handles give you committed facts; you never need to reason about quorum, apply order, or replica membership.
- Raw mosaik StreamId, StoreId, GroupId values. The Organism::<D>::verb constructors derive everything from the Config fingerprint you compile in.
- The LatticeConfig internals. You receive a LatticeConfig from the operator (either as a literal Rust const in a published crate, or as a hex fingerprint you Config::from_hex); you do not construct one yourself.
What the operator owes you
Three items, same handshake pattern zipnet uses:
- Universe NetworkId. Almost always the baked-in builder::UNIVERSE constant.
- Lattice Config. A full LatticeConfig struct (or its serialised hex fingerprint) covering the six organism configs. This is what defines the lattice identity.
- MR_TDs for every TDX-gated organism in the lattice (unseal, atelier, optionally relay).
Everything else — peer bootstraps, retry policy — you manage locally the same way you manage any mosaik agent. See What you need from the operator.
What happens when a lattice misbehaves
Zero-trust integrations are not currently feasible — the lattice is a permissioned block-building pipeline and integrators accept the operator’s trust model by binding. What the lattice does guarantee:
- Public commit logs. Every organism’s commits are mosaik-replicated and signed by committee members. You can replay them and detect divergence from on-chain reality.
- On-chain settlement enforcement. tally attestations are meant for on-chain verification. A lattice that commits dishonest attestations finds them rejected by the settlement contract.
- Graceful degradation. A lattice whose upstream organism is down still produces valid downstream commits where possible; see the failure table in Composition.
When those are not enough for your use case — e.g. if you need Byzantine liveness guarantees not provided by mosaik’s Raft variant — you either switch to a different lattice or wait for the BFT roadmap item.
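The replay-and-detect guarantee above can be sketched with a hypothetical std-only comparison; the (slot, hash) pairs and the first_divergence helper are illustrative stand-ins, not part of any organism crate:

```rust
// Walk the committed log next to the on-chain record and return the index of
// the first entry where they disagree (None if the overlapping window agrees).
// Assumes both slices cover the same slot window in the same order.
fn first_divergence(
    committed: &[(u64, [u8; 32])],
    onchain: &[(u64, [u8; 32])],
) -> Option<usize> {
    committed
        .iter()
        .zip(onchain.iter())
        .position(|(a, b)| a != b)
}

fn main() {
    let log = vec![(1u64, [0u8; 32]), (2u64, [1u8; 32])];
    let chain = vec![(1u64, [0u8; 32]), (2u64, [2u8; 32])];
    assert_eq!(first_divergence(&log, &chain), Some(1)); // slot 2 diverged
    assert_eq!(first_divergence(&log, &log), None);      // identical windows
}
```

An audit agent doing this for real would additionally verify the committee signatures over each committed entry before trusting the log side of the comparison.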
Ready to code
Start at Quickstart — submit, bid, read.
Quickstart — submit, bid, read
audience: integrators
You bring a mosaik::Network; the organism crates layer a lattice
on top of it as a composition of mosaik-native services on a
shared universe. Every lattice is a LatticeConfig you compile
in; every organism inside it is a typed free-function constructor.
This page assumes you have read What a lattice gives you. If you only care about anonymous submission, the zipnet quickstart is a shorter path to the same code; this page is for integrators touching two or more organisms in one agent.
One-paragraph mental model
A mosaik universe is a single shared NetworkId. Many lattices
and many other mosaik services live on it. An operator stands up
a lattice by publishing a LatticeConfig — instance name, chain
id, and the six organism configs — plus any TDX MR_TDs their
lattice pins. You compile the LatticeConfig in. Each organism
exposes a tiny typed surface: Zipnet::<D>::submit,
Offer::<B>::bid, Atelier::<Block>::read, etc. Open a handle
against whichever organisms your use case touches; your
Arc<Network> serves all of them at once, and the same handle can
serve several lattices side by side.
Cargo.toml
[dependencies]
builder = "0.1" # meta-crate; re-exports UNIVERSE and LatticeConfig
zipnet = "0.1"
offer = "0.1"
atelier = "0.1"
relay = "0.1"
tally = "0.1"
mosaik = "=0.3.17"
tokio = { version = "1", features = ["full"] }
futures = "0.3"
anyhow = "1"
Only pull the organism crates you actually use. A pure wallet
needs zipnet and tally; a pure searcher needs offer and
tally; a rollup operator consuming candidate blocks needs
atelier and relay.
builder re-exports mosaik::{Tag, UniqueId, unique_id!} and
NetworkId, so you rarely reach for mosaik directly in small
agents, but you will usually keep mosaik as a direct dep since
you own the Network.
Pin the lattice in a const
Operators publish a LatticeConfig the same way zipnet operators
publish a zipnet::Config. The canonical shape is a Rust const
in a published deployment crate; the from_hex route exists for
agents that cannot take a compile-time dep on that crate.
use builder::{LatticeConfig, UNIVERSE};
const ETH_MAINNET: LatticeConfig = LatticeConfig {
name: "ethereum.mainnet",
chain_id: 1,
// Each organism's own Config folds in its own content +
// intent + acl. Operators publish every field; integrators
// compile them in verbatim.
zipnet: zipnet::Config::new("ethereum.mainnet")
.with_window(zipnet::ShuffleWindow::interactive())
.with_init([0u8; 32]),
unseal: unseal::Config::new("ethereum.mainnet")
.with_threshold(5, 7)
.with_share_pubkeys(&[/* 7 BLS12-381 points */]),
offer: offer::Config::new("ethereum.mainnet")
.with_auction_window(std::time::Duration::from_millis(800))
.with_offer_pubkey(/* BLS12-381 point */),
atelier: atelier::Config::new("ethereum.mainnet")
.with_mrtd(/* 48-byte MR_TD */)
.with_block_template_schema(atelier::BlockSchema::L1Post4844),
relay: relay::Config::new("ethereum.mainnet")
.with_policy(relay::Policy::L1MevBoost),
tally: tally::Config::new("ethereum.mainnet")
.with_settlement_addr(/* 20-byte addr */),
};
LatticeConfig::lattice_id(&ETH_MAINNET) is a pure function
returning the 32-byte UniqueId that every organism in this
lattice derives from. Print it and compare against the
operator’s published fingerprint to verify your build matches
theirs without any wire round-trip. Mismatched configs produce
different organism GroupIds; you silently fail to bond and get
ConnectTimeout on any verb() call instead.
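That comparison can be done as a std-only pre-flight check; the to_hex and fingerprint_matches helpers below are illustrative, not part of the builder crate:

```rust
// Encode a byte slice as lowercase hex, written inline so the sketch
// stays std-only (a real agent would use the hex crate).
fn to_hex(bytes: &[u8]) -> String {
    bytes.iter().map(|b| format!("{b:02x}")).collect()
}

// Compare a locally computed 32-byte lattice id against the fingerprint
// the operator published out of band.
fn fingerprint_matches(local_id: &[u8; 32], published_hex: &str) -> bool {
    to_hex(local_id) == published_hex.trim().to_lowercase()
}

fn main() {
    let local = [0xab_u8; 32];
    let published = "ab".repeat(32);
    assert!(fingerprint_matches(&local, &published));

    // A single flipped byte (e.g. a mis-copied init salt) fails the check
    // at startup, before any verb() call can time out.
    let mut wrong = local;
    wrong[0] = 0xac;
    assert!(!fingerprint_matches(&wrong, &published));
}
```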
Build the network
use std::sync::Arc;
use mosaik::Network;
use builder::UNIVERSE;
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let network = Arc::new(Network::new(UNIVERSE).await?);
// ... open handles below ...
Ok(())
}
One Arc<Network> serves every organism and every lattice you
bind. Bring your own mosaik builder if you need specific
discovery, TLS, or Prometheus configuration — see
Connecting to a lattice.
A wallet: submit + track refunds
use futures::StreamExt;
use zipnet::Zipnet;
use tally::Tally;
let submitter = Zipnet::<Tx2718>::submit(&network, &ETH_MAINNET.zipnet).await?;
let mut refunds = Tally::<Attribution>::read(&network, &ETH_MAINNET.tally).await?;
// Fire and forget.
let receipt = submitter.send(tx).await?;
println!("submitted, tracking id = {receipt:?}");
// Watch attributions as they land.
while let Some(attr) = refunds.next().await {
if attr.concerns(&receipt) {
println!("refund {} wei on slot {}", attr.amount, attr.slot);
break;
}
}
submit returns when zipnet’s round accepts the envelope — a
few hundred ms in the interactive window. refunds may land
seconds to minutes later depending on the chain’s block cadence
and whether any of your submissions made it into a winning
block.
A searcher: bid + verify outcome
use offer::Offer;
use tally::Tally;
let bidder = Offer::<Bundle>::bid(&network, &ETH_MAINNET.offer).await?;
let mut wins = Offer::<Bundle>::outcomes(&network, &ETH_MAINNET.offer).await?;
let mut refunds = Tally::<Attribution>::read(&network, &ETH_MAINNET.tally).await?;
let bid_id = bidder.send(Bundle { slot: target_slot, bid: 1_000_000, txs: vec![/* */] }).await?;
// Wait for the auction to commit for this slot.
while let Some(outcome) = wins.next().await {
if outcome.slot == target_slot {
println!("auction for slot {} won by {:?}", outcome.slot, outcome.winner);
break;
}
}
// Later: check whether tally paid you.
while let Some(attr) = refunds.next().await {
if attr.slot == target_slot && attr.recipient == self_addr {
println!("tally credited {} wei", attr.amount);
break;
}
}
Offer::<B>::outcomes is a Stream<Item = AuctionOutcome>; every
committed auction lands on it in slot order. Reads filter by slot
on the consumer side; the organism does not push filtered
subscriptions in v0.
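The consumer-side filtering can be sketched without the offer crate; the AuctionOutcome shape below is an illustrative stand-in for the real type, not its definition:

```rust
// Illustrative stand-in for the outcome type the stream yields.
#[derive(Debug, Clone, PartialEq)]
struct AuctionOutcome {
    slot: u64,
    winner: String,
}

// Outcomes arrive in slot order; the integrator keeps only the slot it bid
// on. With a real Stream this would be a `while let` loop, as above.
fn outcome_for_slot(
    outcomes: impl IntoIterator<Item = AuctionOutcome>,
    slot: u64,
) -> Option<AuctionOutcome> {
    outcomes.into_iter().find(|o| o.slot == slot)
}

fn main() {
    let committed = vec![
        AuctionOutcome { slot: 100, winner: "searcher-a".into() },
        AuctionOutcome { slot: 101, winner: "searcher-b".into() },
    ];
    assert_eq!(outcome_for_slot(committed, 101).unwrap().winner, "searcher-b");
}
```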
A proposer / sequencer: consume candidate blocks
use atelier::Atelier;
use relay::Relay;
let mut candidates = Atelier::<Block>::read(&network, &ETH_MAINNET.atelier).await?;
let mut accepted = Relay::<Header>::watch(&network, &ETH_MAINNET.relay).await?;
while let Some(candidate) = candidates.next().await {
// Verify the atelier committee's collective signature before trusting.
if !candidate.verify_against(&ETH_MAINNET.atelier) {
eprintln!("invalid candidate signature, skipping slot {}", candidate.slot);
continue;
}
// Ship the header to your proposer / sequencer endpoint.
your_proposer_endpoint.submit(&candidate.header).await?;
}
// Relay commits the proposer ack once it lands.
while let Some(ack) = accepted.next().await {
println!("slot {} accepted by proposer {:?}", ack.slot, ack.proposer);
}
The verify_against call validates the block’s BLS aggregate
signature under the atelier committee’s published public keys.
See Reading built blocks for the full
verification path.
Share one Network across lattices
Because every constructor only takes &Arc<Network>, one handle
can serve many lattices:
const ETH_MAINNET: LatticeConfig = /* ... */;
const UNICHAIN_MAINNET: LatticeConfig = /* ... */;
let eth_offer = Offer::<Bundle>::bid(&network, &ETH_MAINNET.offer).await?;
let uni_offer = Offer::<Bundle>::bid(&network, &UNICHAIN_MAINNET.offer).await?;
// ... and any unrelated mosaik services on the same universe ...
Every lattice derives its organism IDs disjointly from its own
LatticeConfig, so they coexist on the shared peer catalog
without collision. You pay for one mosaik endpoint, one DHT
record, one gossip loop — not one per lattice. See
Cross-lattice coordination for
the full cross-chain integrator shape.
Error model
Every organism’s constructor returns the same small error set (carried over from zipnet):
pub enum Error {
WrongUniverse { expected: mosaik::NetworkId, actual: mosaik::NetworkId },
ConnectTimeout,
Attestation(String),
Shutdown,
Protocol(String),
}
ConnectTimeout is the one you will hit in development — usually
a mismatched LatticeConfig (different instance name, wrong
chain id, or stale organism config) or an operator whose lattice
is not up yet. Compare LatticeConfig::lattice_id(&YOUR_CFG)
against the fingerprint the operator published before debugging
anything else.
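One way to wire that guidance into code is a triage match over the error set. The enum is copied locally from the listing above (with String standing in for mosaik::NetworkId so the sketch compiles std-only), and the messages are suggested first responses, not crate output:

```rust
// Local copy of the organism error enum, for illustration only.
enum Error {
    WrongUniverse { expected: String, actual: String },
    ConnectTimeout,
    Attestation(String),
    Shutdown,
    Protocol(String),
}

// Hypothetical first-response mapping, following the guidance above.
fn triage(err: &Error) -> &'static str {
    match err {
        Error::WrongUniverse { .. } => "network built on the wrong NetworkId; use builder::UNIVERSE",
        Error::ConnectTimeout => "compare lattice fingerprints with the operator before debugging anything else",
        Error::Attestation(_) => "MR_TD mismatch or stale image; re-check the published measurement",
        Error::Shutdown => "handle dropped or network torn down; rebuild the handle",
        Error::Protocol(_) => "report to the operator; usually version skew",
    }
}

fn main() {
    assert!(triage(&Error::ConnectTimeout).contains("fingerprint"));
}
```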
Shutdown
drop(submitter);
drop(refunds);
// network stays up as long as any handle holds it
Handles are independent. Dropping one closes that handle; the
others stay live. The Arc<Network> stays up while any handle
holds it.
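The rule is ordinary Arc semantics. A std-only sketch, with a String standing in for the Network:

```rust
use std::sync::Arc;

fn main() {
    // Each handle holds its own clone; the shared value survives until the
    // last clone drops.
    let network = Arc::new(String::from("mosaik endpoint"));
    let submitter = Arc::clone(&network); // handle 1
    let refunds = Arc::clone(&network);   // handle 2
    assert_eq!(Arc::strong_count(&network), 3);

    drop(submitter); // closes one handle; the other stays live
    assert_eq!(Arc::strong_count(&network), 2);

    drop(refunds);
    assert_eq!(Arc::strong_count(&network), 1); // only the owner remains
}
```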
Next reading
- What you need from the operator — the exact fact sheet the operator hands you before you code.
- Submitting transactions anonymously — the zipnet side of the wallet quickstart, with cover traffic, retry, and size discipline.
- Placing bundle bids — offer auction cadence, bid sealing, and withdrawal.
- Reading built blocks — committee signature verification.
- Receiving refunds and attributions — tally::Attestations verification against a settlement contract.
- TEE-gated lattices — when the lattice requires your own agent to run inside TDX.
- Designing block-building topologies on mosaik — the underlying pattern if you are curious or if you are planning to extend the topology.
What you need from the operator
audience: integrators
Before you write code against a lattice, the operator owes you a small, finite fact sheet. This page is the complete list. If you do not have every item, you are not ready to build; push back on the operator before spending the time.
The handshake is identical in shape to zipnet’s — a Config plus
optional MR_TDs — generalised to six organisms instead of one.
The three-bullet handshake
- Universe NetworkId. In almost every case the constant builder::UNIVERSE — do not override unless the operator explicitly runs an isolated federation.
- Lattice Config. Either as a Rust const exported from a deployment crate on crates.io (preferred), or as a hex-encoded LatticeConfig::from_hex fingerprint, or as six individual organism Config fingerprints you reassemble.
- MR_TDs for every TDX-gated organism in the lattice. At minimum atelier and unseal; optionally relay. These are 48-byte hex strings the operator publishes out of band.
That is the entire handshake; there is no on-the-wire discovery
step. Every other piece of information you might think you need —
committee member count, relay endpoint URL, settlement contract
addresses — either falls out of the LatticeConfig or is not your
concern as an integrator.
LatticeConfig, in detail
A LatticeConfig is a plain struct the operator publishes:
pub struct LatticeConfig {
pub name: &'static str, // e.g. "ethereum.mainnet"
pub chain_id: u64, // EIP-155 chain id
pub zipnet: zipnet::Config,
pub unseal: unseal::Config,
pub offer: offer::Config,
pub atelier: atelier::Config,
pub relay: relay::Config,
pub tally: tally::Config,
}
You do not construct one yourself. A lattice is operator-scoped; integrators compile in the operator’s published struct.
Typos in any field surface as ConnectTimeout on the next
verb() call. Because every field folds into the
LatticeConfig::lattice_id(...) fingerprint, a mis-copied 32-byte
init salt changes every organism’s GroupId and your agent fails
to bond with the operator’s infrastructure. Compare fingerprints
out of band before opening a support ticket.
Preferred distribution: a deployment crate
The cleanest handoff is an operator-published crate whose
entire contents are a single const and a minimum of types:
// Cargo.toml of the operator's crate
[package]
name = "eth-mainnet-lattice"
version = "2026.04.0"
// lib.rs
pub const ETH_MAINNET: builder::LatticeConfig = builder::LatticeConfig { /* ... */ };
pub const ATELIER_MRTD: [u8; 48] = [ /* ... */ ];
pub const UNSEAL_MRTD: [u8; 48] = [ /* ... */ ];
You depend on it:
[dependencies]
eth-mainnet-lattice = "=2026.04.0"
and use it:
use eth_mainnet_lattice::ETH_MAINNET;
let submitter = zipnet::Zipnet::<Tx>::submit(&network, &ETH_MAINNET.zipnet).await?;
Benefits:
- Compile-time pinning; the version-number convention tells you when to expect an upgrade.
- The operator can bump the crate’s minor version when they rotate parameters without changing your own crate’s logic.
- If the operator deletes or yanks the crate, your build breaks loudly.
Fallback: hex fingerprint
When a deployment crate is not practical (closed-source integrator, external contractor, short-term agent), the operator publishes a hex string:
LATTICE_CONFIG_HEX = "7f3a9b1c..." (some hundreds of bytes)
You decode it:
let cfg: builder::LatticeConfig =
builder::LatticeConfig::from_hex(LATTICE_CONFIG_HEX)?;
Hex fingerprint is strictly less convenient than a crate dep:
- You lose compile-time types for the organism configs.
- You have to re-check the fingerprint at startup.
- Operators that rotate often burn you with stale hex strings.
Use it when you have to. Prefer a deployment crate.
MR_TDs
Every TDX-gated organism in the lattice has a reproducible image build with a precomputed MR_TD (the measurement register that binds the image to the hardware root of trust). The operator publishes:
- The MR_TD hex string — 48 bytes, lowercase — per gated organism.
- The reproducible image build instructions, so you can check the MR_TD yourself if you do not trust the operator’s published value.
In the default pattern the operator publishes MR_TDs alongside
the LatticeConfig in the deployment crate (see above). If you
compile with the tee-tdx feature on the organism crate, the
organism’s TicketValidator pins the MR_TD into the admission
ticket.
If you do not compile with tee-tdx, you are not verifying
MR_TDs yourself — you are trusting the committee’s self-
attestation via mosaik ticket validation. That is an acceptable
posture for read-only analytics; it is not an acceptable posture
for wallets whose anonymity depends on the unseal committee.
What the operator does not owe you
- Bootstrap peers. Peer discovery is universe-level, via mosaik’s standard gossip + pkarr + mDNS stack. You do not need an explicit bootstrap list to reach an operator’s lattice; any reachable peer on builder::UNIVERSE suffices. If you need an explicit bootstrap for a cold-start agent, the operator’s aggregator addresses are fine — but they are a convenience, not a requirement.
- A status endpoint. The public commit logs in every organism’s collections are the status. If you need a dashboard, build one; mosaik’s Prometheus exporter + the organism’s metrics reference covers the surface.
- A service-level agreement. Lattices are permissioned pipelines with any-trust, threshold, or majority-honest trust models spelled out in threat-model.md. Liveness guarantees, when given, are operator-level commercial commitments, not protocol-level.
Checking your handshake
Before writing any application code, verify you have everything:
let cfg: LatticeConfig = /* from the operator */;
// 1. Universe matches.
let network = Arc::new(Network::new(builder::UNIVERSE).await?);
assert_eq!(network.network_id(), builder::UNIVERSE);
// 2. Fingerprint matches the one the operator published.
assert_eq!(
hex::encode(cfg.lattice_id()),
"7f3a9b1c....", // from operator release notes
);
// 3. Per-organism fingerprints also match, if the operator
// published them individually for independent rotation.
assert_eq!(hex::encode(cfg.atelier.fingerprint()), "....");
assert_eq!(hex::encode(cfg.unseal.fingerprint()), "....");
// ... for each organism you bind ...
fingerprint() is a pure function on each organism’s Config.
It never touches the network; any divergence is a compile-time
or release-notes-time bug.
If something does not add up
A partial or inconsistent handshake is a red flag. Common patterns and their fix:
- Operator gives you only zipnet’s Config, not the full LatticeConfig. Ask for the rest; if they cannot produce it, they are running a zipnet deployment, not a lattice. The zipnet book covers that case.
- Operator gives you a LatticeConfig but says some organisms are “not deployed yet”. Acceptable for a development lattice; unacceptable for production. Clarify the schedule in writing.
- Operator will not publish MR_TDs for TDX-gated organisms. Do not compile with tee-tdx. You are implicitly trusting the operator’s committee to self-attest. Acceptable for read-only uses; not acceptable for wallets that rely on submission anonymity.
Next reading
- Connecting to a lattice — what you do with the handshake once you have it.
- TEE-gated lattices — the MR_TDs side in detail.
Submitting transactions anonymously
audience: integrators
Submission is entirely the zipnet organism’s job. The lattice
consumes zipnet unchanged; the zipnet developer book
is the authoritative reference. This page covers only the bits
that are specific to submitting into a lattice rather than a
standalone zipnet deployment.
Which Config you pass
Integrators pass the lattice’s zipnet config (ETH_MAINNET.zipnet),
not a standalone zipnet config:
use zipnet::Zipnet;
let submitter = Zipnet::<Tx2718>::submit(&network, &ETH_MAINNET.zipnet).await?;
The lattice’s zipnet config is content + intent addressed under
the lattice’s instance name (e.g. "ethereum.mainnet"). Two
lattices sharing a zipnet committee is not a supported pattern;
each lattice derives its own zipnet GroupId from its own root.
Picking a datum type
Every EVM lattice’s zipnet organism shuffles a chain-native
sealed envelope type. The reference datum is Tx2718 for EIP-
2718 encoded transactions, sealed to the lattice’s unseal
committee public key:
use zipnet::{DecodeError, ShuffleDatum, UniqueId, unique_id};
pub struct Tx2718(pub [u8; 1024]);
impl ShuffleDatum for Tx2718 {
const TYPE_TAG: UniqueId = unique_id!("builder.tx2718-v1");
const WIRE_SIZE: usize = 1024;
fn encode(&self) -> Vec<u8> { self.0.to_vec() }
fn decode(bytes: &[u8]) -> Result<Self, DecodeError> {
<[u8; 1024]>::try_from(bytes).map(Self).map_err(|e| DecodeError(e.to_string()))
}
}
TYPE_TAG folds into the zipnet instance’s fingerprint, so a
lattice that ships a Tx2718 v2 with a different size is a
different zipnet GroupId and is not cross-compatible with the
v1 lattice. This is by design; see the zipnet wire
invariants.
Sealing the payload to unseal
Zipnet’s DC-net construction provides sender anonymity; it does
not by itself provide payload confidentiality against the
zipnet committee. The lattice achieves payload confidentiality
by sealing the payload to the unseal committee’s threshold
public key before writing it into the Tx2718 buffer.
Pseudocode:
let ct = unseal::seal(&ETH_MAINNET.unseal, &tx_rlp_bytes)?;
let padded = pad_to(ct, 1024);
submitter.send(Tx2718(padded)).await?;
unseal::seal is a pure function parameterised on the lattice’s
unseal config; it encrypts the payload to the committee’s
published threshold public key, producing a ciphertext of
deterministic size (plaintext + AEAD overhead). pad_to
right-pads to Tx2718::WIRE_SIZE. Zipnet rejects non-exact sizes.
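A sketch of the padding step, assuming Tx2718’s 1024-byte wire size. pad_to here is an illustrative helper, not a published API; how the unseal committee strips the padding again is the unseal spec’s concern:

```rust
/// Illustrative padding helper: right-pad the sealed ciphertext with zeros
/// to the exact wire size. Oversized payloads are rejected, never truncated;
/// zipnet rejects any envelope that is not exactly `wire_size` bytes.
fn pad_to(ct: Vec<u8>, wire_size: usize) -> Result<Vec<u8>, String> {
    if ct.len() > wire_size {
        return Err(format!(
            "ciphertext {} bytes exceeds wire size {}",
            ct.len(),
            wire_size
        ));
    }
    let mut out = ct;
    out.resize(wire_size, 0); // zero right-padding up to the wire size
    Ok(out)
}
```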
Bytes you never put into the Tx2718 buffer:
- Your wallet address in cleartext.
- Any metadata linking your submission to prior on-chain activity.
- Unpadded payloads; a variable-size envelope leaks sender identity.
Cover traffic
A Submitter<Tx2718> with no messages in flight sends a cover
envelope per zipnet round. That is the default and you should
not turn it off:
let submitter = Zipnet::<Tx2718>::submit(&network, &ETH_MAINNET.zipnet)
.await?;
// submitter sends cover traffic every round while idle.
// drop(submitter) to stop.
The zipnet publishing page covers the tuning knobs — polling cadence, cover payload — which are unchanged in the lattice context.
Retry policy
Zipnet’s send returns SubmissionId once the envelope is
queued. The envelope may or may not land in the round’s broadcast
vector (deterministic slot assignment; collisions are possible
until footprint scheduling ships). A searcher or wallet that
needs guaranteed inclusion polls its Reader<Tx2718> for
byte-equality on future rounds until the envelope lands:
use futures::StreamExt;
use tokio::time::{Duration, Instant};
let mut reader = Zipnet::<Tx2718>::read(&network, &ETH_MAINNET.zipnet).await?;
let expected = tx.clone();
let mut deadline = Instant::now() + Duration::from_secs(30);
loop {
tokio::select! {
Some(got) = reader.next() => {
if got == expected { break; }
}
_ = tokio::time::sleep_until(deadline) => {
submitter.send(expected.clone()).await?;
deadline = tokio::time::Instant::now() + Duration::from_secs(30);
}
}
}
In production, wrap this in your agent’s retry backoff logic and surface failures to your operator.
What happens after submission
Your envelope lands in a zipnet::Broadcasts[S] entry for some
slot S. The unseal committee produces the cleartext for that
slot into unseal::UnsealedPool[S]. atelier includes
transactions from the unsealed pool in its candidate block for
slot S. If you care whether your transaction made it on-chain
under this lattice, watch tally::Refunds[S] or the chain’s own
state; see Receiving refunds and attributions.
Next reading
- Placing bundle bids — the searcher-side surface zipnet feeds into.
- Reading built blocks — checking whether your submission landed in a candidate block.
- zipnet publishing messages — the full submission reference.
Placing bundle bids
audience: integrators
Bidding is the offer organism’s job. As a searcher, you submit
sealed bids over bundles that reference the lattice’s current
UnsealedPool[S], and you read the committed auction outcome
from offer::AuctionOutcome.
Open the handles
use offer::Offer;
let bidder = Offer::<Bundle>::bid(&network, &ETH_MAINNET.offer).await?;
let mut outcomes = Offer::<Bundle>::outcomes(&network, &ETH_MAINNET.offer).await?;
bidder is a Submitter<Bundle>; outcomes is a
Stream<Item = AuctionOutcome>. You need both for a closed loop.
The Bundle datum
Each lattice pins a Bundle type its offer organism auctions
over. The reference shape:
pub struct Bundle {
/// Target slot. offer auctions one outcome per slot.
pub slot: u64,
/// Your bid in the lattice's accounting currency (wei on L1).
pub bid: u128,
/// The transactions you want included, in the order you want them.
pub txs: Vec<Tx2718>,
/// Optional reference into zipnet's UnsealedPool slot content
/// that your bundle depends on (e.g. a backrun target).
pub depends_on: Option<UnsealedRef>,
}
The exact fields are the lattice’s offer::Config
responsibility; offer::Bundle::SCHEMA_TAG folds into the
organism fingerprint, so a version bump is a new organism id,
not a silent upgrade.
Sealing the bid
Bids ride an Offer::<Bundle>::bid stream that is threshold-
encrypted to the offer committee’s published public key before
any committee member sees it. The encryption happens inside
bidder.send; from your point of view the call is just:
let bid_id = bidder.send(Bundle {
slot: target_slot,
bid: 2_500_000_000_000u128, // 2.5 gwei
txs: vec![signed_backrun_tx],
depends_on: Some(UnsealedRef::Slot(target_slot)),
}).await?;
The offer committee unseals the bid inside its state machine at auction close. Until close, no party — not the committee members, not competing searchers, not the lattice operator — sees your bid value.
Bid withdrawal
You can withdraw a bid while the auction is still open, by
sending a BundleWithdraw { bid_id } on the same stream. Once
CloseAuction commits, withdrawals for that slot are rejected.
Outcome
The auction commits one AuctionOutcome per slot. Watch the
stream until you see your target slot:
use futures::StreamExt;
while let Some(outcome) = outcomes.next().await {
if outcome.slot == target_slot {
if outcome.winner == self_addr {
println!("you won slot {} at {} wei", outcome.slot, outcome.bid);
} else {
println!("you lost slot {}; winner {:?}", outcome.slot, outcome.winner);
}
break;
}
}
AuctionOutcome is a committed fact; every offer committee member
has applied the same decision. You do not need to vote or ack;
you only read.
Verifying the auction’s honesty
Offer’s state machine enforces:
- Monotonic slots. AuctionOutcome[S+1] cannot commit before AuctionOutcome[S].
- Unique winner per slot. Exactly one outcome per slot.
- Bid decryption inside apply. Losing bids are never materialised outside the apply step; they are discarded after the winner is picked.
A majority-malicious offer committee can still commit a
non-max-bid winner (see
threat-model — offer).
To detect it, watch tally::Refunds[S]: a legitimate winner
receives a refund proportional to their bid; an offer-committee-
chosen non-winner would receive nothing, and a searcher who
expected to win can escalate. The lattice does not automate
escalation; tally is an audit surface.
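The audit check described above reduces to a pure predicate over your own bid and the committed outcome. This is illustrative only; escalation itself remains manual:

```rust
/// Illustrative audit predicate. If you lost the slot but the committed
/// winning bid is below your own, either your bid never reached the auction
/// or the committee picked a non-max winner; both are grounds to escalate.
fn suspicious_outcome(my_bid: u128, winning_bid: u128, i_won: bool) -> bool {
    !i_won && winning_bid < my_bid
}
```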
Dependencies on unsealed order flow
Backrun-style bundles reference transactions the lattice has
already unsealed for the same slot. UnsealedRef::Slot(S)
instructs the atelier to include your bundle’s txs after any
transactions in UnsealedPool[S]. If the pool is empty for that
slot, the bundle still applies; if your bundle depends on a
specific tx in the pool that is not present, offer’s state
machine rejects the bid at apply time and your send returns
Error::Protocol.
Next reading
- Reading built blocks — verifying the block your winning bid landed in.
- Receiving refunds and attributions — claiming your share of the block’s MEV.
- threat-model — offer — what majority-malicious committees can and cannot do to your bids.
Reading built blocks
audience: integrators
Reading candidate blocks is the atelier organism’s job; reading
which of those blocks a proposer accepted is the relay
organism’s. This page shows what each surface gives you and
how to verify it.
Who reads from atelier and relay
- Proposers / sequencers that consume candidate blocks from the lattice and ship them on-chain. They read atelier for the block body and use relay to track whether the proposer (or the sequencer internally) accepted it.
- Analytics agents that replay or index the lattice’s output. They read both organisms read-only.
- Audit agents that cross-check the lattice’s public commit logs against on-chain state. Same.
Searchers who want to confirm their bundle landed typically do
not need atelier directly — they read tally::Refunds for the
slot and trust the tally committee’s signature. atelier is for
consumers that need the block body itself.
Open the handles
use atelier::Atelier;
use relay::Relay;
let mut candidates = Atelier::<Block>::read(&network, &ETH_MAINNET.atelier).await?;
let mut accepted = Relay::<Header>::watch(&network, &ETH_MAINNET.relay).await?;
Both are read-only Streams. You do not need committee
membership to open them; a ticket admitting you to the lattice’s
read-side surface is sufficient.
What an atelier Block carries
pub struct Block {
pub slot: u64,
pub header: Header, // standard chain header
pub body: Vec<Tx2718>, // canonical tx list
pub builder_sig: BlsAggSig, // see below
pub committee_roster: Vec<BlsPub>, // at-commit-time committee pubkeys
pub hints_applied: Vec<HintId>, // co-builder hints folded in
}
builder_sig is the atelier committee’s BLS aggregate signature
over blake3(slot ‖ header ‖ body ‖ hints_applied).
committee_roster is the set of committee public keys that
signed, as of the commit moment.
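The signed preimage can be sketched as follows. The exact field encodings (length prefixes, endianness) are assumptions made for this sketch; the atelier organism spec is authoritative:

```rust
/// Sketch of the preimage layout for `builder_sig`, before hashing with
/// blake3. Each variable-size field is length-prefixed so that different
/// (header, body, hints) splits cannot produce the same byte string.
fn sig_preimage(slot: u64, header: &[u8], body: &[u8], hints: &[u8]) -> Vec<u8> {
    let mut out = Vec::new();
    out.extend_from_slice(&slot.to_le_bytes());
    for part in [header, body, hints] {
        out.extend_from_slice(&(part.len() as u64).to_le_bytes()); // length prefix
        out.extend_from_slice(part);
    }
    out
}
```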
Verifying the committee signature
atelier::Config carries the expected committee public keys and
the expected TDX MR_TD. The verification function is pure:
if !atelier::verify(&candidate, &ETH_MAINNET.atelier) {
eprintln!("invalid atelier sig on slot {}", candidate.slot);
continue;
}
verify checks:
- builder_sig aggregates to a valid signature under committee_roster’s keys over the hash above.
- committee_roster is a majority subset of the pubkey list pinned in ETH_MAINNET.atelier.
- The hash in builder_sig matches the actual slot / header / body / hints_applied fields.
If you additionally compile with tee-tdx, the organism’s
ticket validator has already checked TDX quotes on every
committee member’s PeerEntry before the bond was formed; you do
not verify MR_TDs in verify itself (they are enforced at
admission time).
A verify failure means either the committee rotated its
roster and you are on stale pubkeys, or the block was committed
by a committee that does not match the pinned config. Both are
cause to abort the consuming action.
Reading relay acknowledgements
Relay::<Header>::watch gives you the committed
AcceptedHeaders collection as a stream:
pub struct AcceptedHeader {
pub slot: u64,
pub header: Header,
pub bid: u128,
pub proposer: ProposerId,
pub ack_evidence: Vec<u8>, // proposer-signed payload
}
while let Some(ack) = accepted.next().await {
println!("slot {}: proposer {:?} accepted at bid {}",
ack.slot, ack.proposer, ack.bid);
}
ack_evidence is what the relay committee recorded from the
proposer. On L1 with MEV-Boost, it is a proposer-signed payload
the relay committee collected over the standard MEV-Boost
submission API; on an L2, it is the sequencer’s equivalent.
Relay commits AcceptedHeaders[S] when a majority of its
committee agrees the proposer acknowledged. A majority-malicious
relay committee can forge an acceptance; tally’s on-chain
inclusion watcher is the ground truth that cross-checks.
What a “slot” means on L1 vs L2
- L1 PBS. Slot is the proposer’s slot number; one header per slot; AcceptedHeader corresponds to the proposer’s MEV-Boost acceptance for that slot.
- L2 with centralized sequencer. Slot is the sequencer’s block number (or sub-block if the lattice ships at a finer cadence); AcceptedHeader is the sequencer’s internal accept of the candidate.
- L2 with decentralized sequencer. Slot is the chain’s leader-rotation index; AcceptedHeader is the elected leader’s accept for that slot.
atelier::Block::header is the chain’s native header type in
every case; your code does not need to branch on chain type
unless you are decoding ack_evidence directly.
Handling gaps
A lattice that fails to commit a Candidates[S] for some slot —
because the slot’s window expired with insufficient hints, or
because the committee was degraded — simply does not emit an
entry. You will see a gap in the stream. Do not assume the
lattice has retried; the next slot’s entry is the next item in
the stream. Fill gaps from the chain itself when your use case
requires a dense sequence.
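The gap-filling decision above starts with knowing which slots are missing from the stream. A sketch over the slot numbers you have received so far (assumed to arrive in ascending order, as the stream delivers them):

```rust
/// Sketch: given the slots received from the candidates stream, list the
/// gaps that must be filled from the chain itself when a dense sequence
/// is required. Input is assumed sorted ascending.
fn gap_slots(received: &[u64]) -> Vec<u64> {
    match (received.first(), received.last()) {
        (Some(&lo), Some(&hi)) => {
            (lo..=hi).filter(|s| !received.contains(s)).collect()
        }
        _ => Vec::new(), // no entries yet means no observable gaps
    }
}
```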
Next reading
- Receiving refunds and attributions — where tally gives you the “which submissions contributed to this block” side.
- atelier organism spec — the full public-surface + state-machine spec.
- threat-model — atelier — what TDX attestation gets you and what it does not.
Receiving refunds and attributions
audience: integrators
Refund accounting is the tally organism’s job. Tally commits
one Refunds[S] entry per slot once the chain has included the
lattice’s winning block, and publishes an Attestations[S] entry
that is presentable to an on-chain settlement contract for
independent verification.
Open the handles
use tally::Tally;
let mut refunds = Tally::<Attribution>::read(&network, &ETH_MAINNET.tally).await?;
let mut attestations = Tally::<Attestation>::attestations(&network, &ETH_MAINNET.tally).await?;
Both are read-only Streams and world-readable by any lattice
ticket holder. Attestations are specifically designed to be
safe to publish — they carry no cleartext sender identity, only
on-chain-presentable signatures over commitment-hash inputs.
What an Attribution carries
pub struct Attribution {
pub slot: u64,
pub block_hash: [u8; 32], // included on chain
pub recipients: Vec<Recipient>, // who gets paid what
pub evidence: Evidence, // refs into upstream commits
}
pub struct Recipient {
pub addr: [u8; 20],
pub amount: u128, // in chain native units
pub kind: RecipientKind,
}
pub enum RecipientKind {
/// Wallet whose zipnet submission made it into the block.
OrderflowProvider { submission: SubmissionRef },
/// Searcher whose offer bid won the slot's auction.
BidWinner { bid: BidRef },
/// Co-builder operator whose atelier hint made it into the block.
CoBuilder { member: BlsPub },
}
SubmissionRef, BidRef, and the BlsPub committee member are
all opaque handles into the lattice’s upstream commit logs.
Recipients prove their claim by matching one of these references
against their local record (your wallet’s SubmissionId from
zipnet; your searcher’s BidId from offer; your co-builder’s
public key).
Filtering attributions concerning you
A wallet or searcher typically only cares about attributions referencing their own prior activity:
use futures::StreamExt;
while let Some(attr) = refunds.next().await {
for recipient in &attr.recipients {
if recipient.addr == self_addr {
println!("slot {} block {:x?}: {} wei for {:?}",
attr.slot, attr.block_hash, recipient.amount, recipient.kind);
}
}
}
There is no server-side filter in v0; agents scan every attribution and match locally. For a high-volume analytics use case, index attributions into your own store keyed by recipient address.
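A sketch of that local index, keyed by recipient address. Recipients are modelled here as plain (address, amount) pairs for brevity rather than the full Recipient struct:

```rust
use std::collections::HashMap;

/// Sketch of a local attribution index keyed by recipient address.
/// Each attribution appends its (slot, amount) pairs to the matching
/// address entries as the stream is consumed.
fn index_recipients(
    index: &mut HashMap<[u8; 20], Vec<(u64, u128)>>,
    slot: u64,
    recipients: &[([u8; 20], u128)],
) {
    for (addr, amount) in recipients {
        index.entry(*addr).or_default().push((slot, *amount));
    }
}
```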
Claiming on-chain
An Attestation is the cryptographic proof you present to the
lattice’s settlement contract to claim the refund:
pub struct Attestation {
pub slot: u64,
pub block_hash: [u8; 32],
pub recipient: [u8; 20],
pub amount: u128,
pub kind_digest: [u8; 32], // blake3 of the RecipientKind payload
pub signatures: Vec<(TallyMemberId, Secp256k1Sig)>,
}
Submit the Attestation to the settlement contract’s claim
function. The contract verifies:
- The signature set is at least t-of-n of the lattice’s tally committee (where t is pinned in the contract at deployment).
- The signatures cover blake3(slot ‖ block_hash ‖ recipient ‖ amount ‖ kind_digest).
- The block_hash matches an on-chain block at the given slot.
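The first check reduces to counting distinct signers. A sketch, with TallyMemberId modelled as a plain u16 purely for illustration:

```rust
use std::collections::HashSet;

/// Sketch of the t-of-n signer check: signatures from at least `t`
/// distinct tally committee members. A duplicate signer must not count
/// twice toward the threshold.
fn meets_threshold(signer_ids: &[u16], t: usize) -> bool {
    signer_ids.iter().collect::<HashSet<_>>().len() >= t
}
```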
A tally committee that attempts to mis-attest — signs a claim
for a block that was never included, or to a recipient that has
no upstream reference — is rejected by the contract. The
contract is the ground truth for claim validity; tally’s
Refunds collection is the authoritative history.
Verifying the upstream evidence yourself
For wallets or searchers that do not want to trust tally’s
attestation at face value, the evidence field in Attribution
names the upstream commits. You read them out of the upstream
organisms and reconstruct the attribution yourself:
use zipnet::Zipnet;
use offer::Offer;
let mut zipnet_reader = Zipnet::<Tx2718>::read(&network, &ETH_MAINNET.zipnet).await?;
let mut offer_outcomes = Offer::<Bundle>::outcomes(&network, &ETH_MAINNET.offer).await?;
// For each Attribution's evidence, go read the referenced slot in
// the upstream organism and check it matches.
This is work you only need to do if tally’s trust assumption
(majority-honest) is inadequate for your use case. Most
integrators treat Attestations as authoritative.
If your submission did not produce a refund
Not every submission produces a refund. Reasons:
- Your zipnet envelope was a cover packet or a collision — nothing was in the winning block.
- The winning block did not include your transaction (fee market, gas limit, etc.).
- The lattice had a liveness failure for the slot (tally did not commit); the chain used a different builder.
- The tally committee is majority-malicious and dropped your attribution. You escalate to the lattice operator; the on-chain record is your ground truth.
tally::Refunds[S] is an append-only collection. The absence of
a recipient record for your address in a given slot is a
negative fact — the lattice is saying “no refund for you on
this slot”. Lattice operators offer SLAs around liveness (the
percentage of slots where Refunds commits within a deadline);
missed slots beyond the SLA are an operator concern, not a
protocol one.
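The SLA arithmetic is straightforward; a sketch of the percentage computation over a slot window:

```rust
/// Sketch of the liveness SLA metric: the percentage of slots in a window
/// for which a Refunds entry committed within the deadline.
fn refund_liveness_pct(slots_with_refunds: &[u64], window: std::ops::Range<u64>) -> f64 {
    let total = (window.end - window.start) as f64;
    if total == 0.0 {
        return 100.0; // empty window: vacuously live
    }
    let committed = slots_with_refunds
        .iter()
        .filter(|s| window.contains(*s))
        .count() as f64;
    100.0 * committed / total
}
```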
Next reading
- tally organism spec — the full state-machine contract.
- Reading built blocks — the upstream block whose inclusion tally verifies.
- threat-model — tally — what the majority-honest assumption buys.
Connecting to a lattice
audience: integrators
You connect to a lattice the same way you connect to any mosaik
service: build a Network against builder::UNIVERSE, bring
your own discovery + transport config, and open organism handles
against your compiled-in LatticeConfig.
This page is the reference for the connection side. The application side lives in Quickstart — submit, bid, read.
Default shape
use std::sync::Arc;
use mosaik::Network;
use builder::UNIVERSE;
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let network = Arc::new(Network::new(UNIVERSE).await?);
// ... open organism handles ...
Ok(())
}
Network::new(UNIVERSE) applies the mosaik defaults: pkarr /
Mainline DHT bootstrap, /mosaik/announce gossip, iroh QUIC
transport, mDNS off, no explicit bootstrap peers. It works for
most production agents on the open internet.
Bring-your-own-config
When you need specific discovery, transport, or metrics
configuration, use the Network::builder:
use std::{net::SocketAddr, sync::Arc};
use mosaik::{Network, discovery};
use builder::UNIVERSE;
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let network = Arc::new(
Network::builder(UNIVERSE)
// Bootstrap off a known peer for cold starts (optional).
.with_discovery(
discovery::Config::builder()
.with_bootstrap(vec![ /* known peer ids */ ])
)
// Expose prometheus metrics for your agent.
.with_prometheus_addr("127.0.0.1:9100".parse::<SocketAddr>()?)
// Turn mDNS on for local development.
.with_mdns_discovery(cfg!(debug_assertions))
.build()
.await?,
);
// ... open organism handles ...
Ok(())
}
Bootstrap peers are universe-level. Any reachable peer on
builder::UNIVERSE — not specific to any lattice — is a valid
bootstrap. Once bonded, the organism’s own discovery locates the
specific committee peers via the shared catalog.
Identity
Your agent’s PeerId is derived from a secret key you control.
The default — Network::new — generates an ephemeral key per
process. That is fine for short-lived agents; long-running ones
(trading bots, indexers) should pin a stable key so your peer
catalog entries survive restarts:
use std::env;
use mosaik::SecretKey;
let secret = SecretKey::from_hex(&env::var("BUILDER_AGENT_SECRET")?)?;
let network = Arc::new(
Network::builder(UNIVERSE)
.with_secret_key(secret)
.build()
.await?,
);
See the mosaik getting-started for key rotation and secret-management guidance.
Ticketed admission
Some organisms in some lattices gate their read-side on a
ticket. atelier is the common case on the read side
(integrators reading block bodies); offer gates its write
side on a searcher ticket. When the lattice operator issues you
a ticket, install it on the network:
use mosaik::TicketValidator;
let network = Arc::new(
Network::builder(UNIVERSE)
.with_ticket(operator_issued_searcher_ticket)
.build()
.await?,
);
Ticket formats are per-organism and per-lattice; follow the
operator’s handoff. Missing tickets surface as Attestation
errors on the relevant organism’s verb() call, not as
ConnectTimeout.
Local development
A lattice’s integration test harness is the deterministic path
for iteration. If a lattice publishes a --test workspace
(equivalent to zipnet’s e2e), prefer running that harness for
integration tests rather than hitting a live lattice. Live
lattices have the usual P2P cold-start latencies.
For agent-level development against a live lattice, expect:
- First bond to any committee member: up to a minute on a cold agent over fresh iroh relays.
- Subsequent verb() calls: sub-second once bonded.
- ConnectTimeout after roughly 60 seconds if no committee member is reachable.
Troubleshooting
| Symptom | Likely cause | Check |
|---|---|---|
| ConnectTimeout | Mismatched LatticeConfig or the organism is not running | Compare lattice_id() hex; ask the operator if the organism is up |
| WrongUniverse | You built Network against a non-builder universe | network.network_id() == builder::UNIVERSE |
| Attestation | Missing or stale ticket | Verify with the operator; rotate if expired |
| Protocol("deferred") | You called an organism verb that is not implemented in this version | Check the organism’s own Roadmap |
Next reading
- TEE-gated lattices — compiling with tee-tdx to pin MR_TDs.
- What you need from the operator — the fact sheet that determines whether any of the above actually works.
TEE-gated lattices
audience: integrators
Some lattices require integrators to run their agent inside a TDX enclave; most do not. This page explains the two postures and when each applies.
Two postures
Open read, TDX-gated write. The common case. Reading the
lattice’s public surface (every organism’s read / watch /
outcomes stream) requires no TEE; writing to it
(Zipnet::submit, Offer::bid) is gated by a lattice-issued
ticket. Integrator agents run on ordinary hardware; the ticket
is the proof the lattice operator has vetted the agent.
TDX-gated write with attestation. For anonymity-sensitive
deployments the lattice operator requires the submitting agent
itself to run inside TDX. The organism’s write-side ticket
validator additionally requires a valid TDX quote on the
agent’s PeerEntry. This is the posture the zipnet v2 TDX path
takes; lattices that want to extend the same property to
bundles adopt it for offer too.
Which posture your lattice uses
The operator’s fact sheet (see handshake-with-operator.md) names the posture. If the operator publishes MR_TDs for any organism’s write-side, that organism is TDX-gated on writes. If MR_TDs are published only for the organism’s committee, writes are still open.
Compiling for TDX
When the lattice requires you to run in TDX, compile your agent
with the tee-tdx feature enabled on the relevant organism
crates and on mosaik itself:
[dependencies]
mosaik = { version = "=0.3.17", features = ["tee-tdx"] }
zipnet = { version = "0.1", features = ["tee-tdx"] }
offer = { version = "0.1", features = ["tee-tdx"] }
# ... as needed ...
[features]
tdx-builder-ubuntu = ["mosaik/tdx-builder-ubuntu"]
In your build.rs:
fn main() {
#[cfg(feature = "tdx-builder-ubuntu")]
mosaik::tee::tdx::build::ubuntu()
.with_default_memory_size("4G")
.build();
}
cargo build --release --features tdx-builder-ubuntu produces a
reproducible initramfs, OVMF, kernel, and a MR_TD hex file.
You publish your MR_TD to your operator (so they can pin it in
the write-side ticket validator) and boot your agent from the
resulting image.
See the mosaik TDX tutorial for the full walk-through; the flow is identical to zipnet’s v2 TDX path.
What MR_TD pinning gets you
When the write-side ticket validator pins your MR_TD:
- Your agent cannot be silently swapped for a different binary without the lattice operator updating the ticket.
- A compromised host that is not running your published image cannot submit as your agent.
- The lattice operator can revoke your ticket by rotating the pinned MR_TD without touching your agent’s secret key.
What it does not get you:
- Protection against your own bugs. TDX measures the image, not the correctness of the code inside it.
- Privacy of your code from the operator — they see the MR_TD, and reproducible builds mean they could build the same image themselves. TDX’s property is integrity, not code confidentiality.
- Cross-organism attestation. Each organism’s ticket pins its own MR_TD; there is no notion of “the agent as a whole is attested”. If you need coupling (e.g. “the same TDX image submits to zipnet and offer”), publish one MR_TD and pin it on both organisms.
When MR_TDs rotate
Lattice operators rotate MR_TDs when they upgrade their
committee image or revoke a compromised key. When your organism
crate’s pinned MR_TD differs from the committee’s current one,
your next verb() call returns an Attestation error. Rebuild
and republish your own image if your published MR_TD
changed; if only the operator’s changed, wait for their new
LatticeConfig release.
You are not required to pin anything
If the lattice is not TDX-gated on your write side (the
default), do not compile with tee-tdx. Compiling with it
when the lattice does not require it is a no-op — the ticket
validator chain simply ignores the unused TDX ticket — but it
adds build-time overhead you do not need.
Next reading
- Connecting to a lattice — where the TDX-enabled build slots in.
- mosaik TDX — the full image-builder reference.
- zipnet TEE-gated deployments — a worked example of the pattern on one organism.
Lattice overview
audience: operators
A lattice is one end-to-end block-building deployment for one EVM
chain. You — the operator — stand up the six organisms that make
up the lattice, publish a LatticeConfig for integrators to
compile in, and keep the whole thing running against the
chain’s cadence. This page is the architectural orientation; the
runbook is Quickstart — stand up a lattice.
One-paragraph mental model
A mosaik universe is a single shared NetworkId
(builder::UNIVERSE = unique_id!("mosaik.universe")) that hosts
every lattice and every other mosaik service. Your job as the
lattice operator is to stand up an instance under a name you
pick (e.g. ethereum.mainnet, base.mainnet) and keep it running.
A lattice is a composition of six organisms — zipnet, unseal,
offer, atelier, relay, tally — each of which is itself a
mosaik-native service. External integrators bind to your lattice
by pinning a LatticeConfig (the six organism configs under one
name) and opening typed handles against it — they compile the
fingerprint in from their side, so there is no on-network
registry to publish to and nothing to advertise. Your servers
simply need to be reachable.
Who runs a lattice
Typical operator shapes:
- A rollup team wanting a builder pipeline for their own L2. One operator runs every organism for one lattice.
- An MEV coalition hosting a shared builder for a group of rollups or L1. Multiple operators each run committee members in the atelier organism (and optionally offer / relay); one of them is the lattice's authoritative steward of the LatticeConfig.
- A chain foundation running a reference lattice for their chain. Same single-operator shape as the rollup team, with public participation in the co-builder role.
- A research / testnet lattice. Small, loose, development-grade. Single operator, all organisms on one or two hosts.
The default pattern in this book is one lattice operator who runs every organism. Multi-operator co-building is covered in Roadmap — Phase 2.
Six organisms, one pipeline
Each organism is a distinct piece of infrastructure with its own trust model, hardware profile, and rotation cadence. The table below is the shortcut reference; per-organism runbooks live on their own pages.
| Organism | Committee size (v1) | TDX required | Hardware | Rotation cadence |
|---|---|---|---|---|
| zipnet | 3–7 servers + 1 agg | v2 | modest cloud | quarterly |
| unseal | 3–7 members | yes | TDX-enabled hosts | quarterly |
| offer | 3–5 members | optional | modest cloud | monthly |
| atelier | 3–7 members | yes | TDX hosts, high RAM | monthly |
| relay | 3–5 members | optional | well-connected cloud | weekly |
| tally | 3–5 members | no | modest cloud | monthly |
“v1” means the first shipped version of the organism crate;
sizes are recommended, not enforced by protocol. A lattice
running 5 unseal members with t=3 threshold is a different
fingerprint from one running 7 with t=4.
What every host in your lattice needs
Regardless of organism role:
- Outbound UDP to the internet (iroh/QUIC transport) and to mosaik relays.
- A few MB of RAM beyond whatever the organism itself consumes.
- A clock within a few seconds of the universe consensus (Raft tolerates skew but not arbitrary drift).
- LATTICE_INSTANCE=<name> set to the same instance name on every node in that lattice (e.g. ethereum.mainnet).
- LATTICE_CHAIN_ID=<id> set to the EIP-155 chain id the lattice services.
See Environment variables for the complete list.
What defines your lattice
Your lattice is identified by a LatticeConfig that folds every
signature-altering input for every organism into one on-wire
fingerprint. When integrators bind to your lattice they compare
LatticeConfig::lattice_id() against the hex you publish;
mismatches produce ConnectTimeout on their side, not silent
disagreement.
The LatticeConfig has:
| Field | Responsibility |
|---|---|
| name | Short stable namespaced string you pick. Examples: ethereum.mainnet, base.testnet. |
| chain_id | The EIP-155 chain id. Folded into the fingerprint so cross-chain mis-binds surface as ConnectTimeout. |
| zipnet | The zipnet config: shuffle window, init salt, ACL. |
| unseal | Threshold parameters (t, n), committee share pubkeys, acl. |
| offer | Auction window, committee offer pubkey, acl. |
| atelier | TDX MR_TD pin, committee pubkeys, block-template schema. |
| relay | Policy selector (L1 MEV-Boost, L2 sequencer endpoint), committee pubkeys. |
| tally | Settlement contract address, committee secp256k1 pubkeys, refund policy. |
You change any field and the whole lattice fingerprint changes. That is the content + intent addressing discipline from zipnet’s design intro applied to six organisms at once. See topology-intro — Within-lattice derivation for the mathematical layout.
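The all-or-nothing fingerprint rule is easy to sketch. The struct below is a stand-in for the real LatticeConfig (field names follow the table above; the hash scheme, std's DefaultHasher, is a placeholder for whatever the shipped crate uses):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Illustrative stand-in for the real LatticeConfig: every
// signature-altering input is a field, and the fingerprint is a
// hash over all of them.
#[derive(Hash)]
struct LatticeConfig {
    name: String,
    chain_id: u64,
    // Serialized per-organism config fragments.
    zipnet: Vec<u8>,
    unseal: Vec<u8>,
    offer: Vec<u8>,
    atelier: Vec<u8>,
    relay: Vec<u8>,
    tally: Vec<u8>,
}

impl LatticeConfig {
    fn lattice_id(&self) -> u64 {
        let mut h = DefaultHasher::new();
        self.hash(&mut h);
        h.finish()
    }
}

fn config(chain_id: u64) -> LatticeConfig {
    LatticeConfig {
        name: "acme.ethereum.mainnet".into(),
        chain_id,
        zipnet: vec![0],
        unseal: vec![1],
        offer: vec![2],
        atelier: vec![3],
        relay: vec![4],
        tally: vec![5],
    }
}

fn main() {
    // Same fields, same fingerprint: integrators can compile it in.
    assert_eq!(config(1).lattice_id(), config(1).lattice_id());
    // Change any one field (here chain_id) and the whole lattice
    // fingerprint changes, so a cross-chain mis-bind cannot bond.
    assert_ne!(config(1).lattice_id(), config(8453).lattice_id());
}
```

The point of the sketch is the discipline, not the hash: every signature-altering input sits inside the hashed struct, so there is no field an operator can change "quietly".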
Minimum viable lattice
A minimum instance runs six organism committees. In the common “one operator, one lattice” shape, that is:
- 3 zipnet committee server processes + 1 aggregator.
- 3 unseal committee member processes (TDX required).
- 3 offer committee member processes.
- 3 atelier committee member processes (TDX required).
- 3 relay committee member processes.
- 3 tally committee member processes.
That is 19 processes across however many hosts you choose. A tight layout packs committee members from different organisms onto the same host (one process per organism role, distinct systemd units); a paranoid layout gives each organism its own hosts.
How your nodes find each other
Mosaik’s standard peer discovery — /mosaik/announce gossip
plus the Mainline DHT via pkarr plus optional mDNS for local
development — handles everything. You do not configure streams,
groups, or IDs by hand. Every process starts with
LATTICE_INSTANCE=<name>, derives the organism’s own
GroupId/StreamId/StoreId from the lattice fingerprint, and
bonds to its peer set automatically.
This means you pay no DevOps cost to scale a lattice horizontally
within a single operator (add a host, start the systemd units,
it joins). It also means a typo in LATTICE_INSTANCE on one
host produces a process that does not bond — the process runs,
it does not break anything, it simply does not join. Check
lattice_id() in metrics before concluding a host is joined.
What your nodes do not do
- They do not configure each other. Every organism derives its identity from the LatticeConfig you pin at each host's environment; no inter-organism handshake discovers config at runtime.
- They do not share a database. Each organism holds its own Raft state independently. State machine snapshots are per organism.
- They do not cross-authorise. An atelier committee member does not get to join an offer committee just because they are in the same operator's fleet. Each organism's TicketValidator composition controls admission independently.
Running many lattices side by side
One operator can run several lattices on the same universe — production, testnet, internal dev, per-chain variants. Each has its own instance name, its own committees, its own MR_TDs, its own ACL. Hosts can run one or many; one systemd unit per (lattice, organism, role) is the standard layout:
systemctl start builder@ethereum.mainnet-zipnet-server
systemctl start builder@ethereum.mainnet-atelier-member
systemctl start builder@base.testnet-zipnet-server
systemctl start builder@base.testnet-atelier-member
Unit names are operator-chosen; each wraps an invocation of the
appropriate organism binary with a distinct LATTICE_INSTANCE.
The lattices share the universe and the discovery layer, and
appear to integrators as distinct LatticeConfig fingerprints.
See also
- Quickstart — stand up a lattice
- Wiring the organisms together
- Rotations and upgrades
- Monitoring and alerts
- Incident response
- Designing block-building topologies on mosaik — the rationale for this decomposition, if you want to understand why the lattice is shaped this way before standing one up.
Quickstart — stand up a lattice
audience: operators
This page walks you from a fresh checkout to a live lattice that
external integrators can bind to with a LatticeConfig you
publish. Read Lattice overview first for the
architectural background; this page assumes it.
Status. This proposal's organism crates are not yet shipped. The commands below are the target shape; when the crates land they will be runnable verbatim. Invocations whose exact shape is still subject to change are flagged with // proposal: in the text below.
What you will run
Six organism roles, each as its own systemd unit (or k8s deployment, or whatever orchestration you prefer). The reference minimum:
| Role | Processes | TDX required | Hardware profile |
|---|---|---|---|
| zipnet server | 3 | in v2 | small cloud |
| zipnet aggregator | 1 | no | medium cloud |
| unseal member | 3–7 | yes | TDX-enabled hosts |
| offer member | 3–5 | optional | small cloud |
| atelier member | 3–7 | yes | TDX hosts, higher RAM |
| relay member | 3–5 | optional | well-connected cloud |
| tally member | 3–5 | no | small cloud |
Total: 19–33 processes across however many hosts you choose.
Fast path — builder lattice up
If you already have SSH access to a set of hosts that can cover the table above, one command brings up the whole lattice end-to-end. The long-form step-by-step below is what this command automates — read it when you need to diverge from the defaults (split operator responsibilities, partial deployments, bespoke orchestration).
Write a manifest that pairs each role in the table with an SSH target, plus the lattice’s two identity inputs (name, chain id) and each organism’s content parameters:
# lattice.toml
[lattice]
name = "acme.ethereum.mainnet"
chain_id = 1
[organisms.zipnet]
window = "interactive" # or "archival" / explicit tuple
[organisms.unseal]
threshold = "5-of-7"
[organisms.offer]
auction_window_ms = 800
[organisms.atelier]
block_schema = "l1-post-4844"
chain_rpc = "https://eth-mainnet.g.alchemy.com/v2/..."
[organisms.relay]
policy = "l1-mev-boost"
proposer_endpoints = ["https://mev-boost.relay-a.example"]
[organisms.tally]
settlement_addr = "0x1234..."
chain_rpc = "https://eth-mainnet.g.alchemy.com/v2/..."
# One entry per host. `roles` is drawn from the table above.
# `tdx = true` is required for hosts that carry unseal-member or
# atelier-member roles; the tool fails closed otherwise.
[[hosts]]
ssh = "ubuntu@tdx-01.acme.com"
tdx = true
roles = ["unseal-member", "atelier-member"]
[[hosts]]
ssh = "ubuntu@tdx-02.acme.com"
tdx = true
roles = ["unseal-member", "atelier-member"]
[[hosts]]
ssh = "ubuntu@tdx-03.acme.com"
tdx = true
roles = ["unseal-member", "atelier-member"]
[[hosts]]
ssh = "ubuntu@cloud-01.acme.com"
roles = ["zipnet-server", "offer-member", "relay-member", "tally-member"]
[[hosts]]
ssh = "ubuntu@cloud-02.acme.com"
roles = ["zipnet-server", "offer-member", "relay-member", "tally-member"]
[[hosts]]
ssh = "ubuntu@cloud-03.acme.com"
roles = ["zipnet-server", "offer-member", "relay-member", "tally-member"]
[[hosts]]
ssh = "ubuntu@agg-01.acme.com"
roles = ["zipnet-aggregator"]
Then:
# proposal: ships as a subcommand of the `builder` meta-crate
builder lattice up --manifest ./lattice.toml
The command performs — in order — exactly the steps documented further down this page:
- Validates the manifest against the role table (minimum counts per role, tdx = true where required, one aggregator maximum).
- Generates the six per-organism admission secrets, the DKG shares for unseal and offer, the BLS keys for atelier, the ECDSA keys for tally, and stable peer identity secrets per member. All secrets land locally under ./secrets/<lattice-name>/ with 0400 permissions.
- Builds the reproducible TDX images for unseal and atelier and records their MR_TDs.
- Runs the DKG ceremonies (in-process coordination over SSH).
- Assembles the LatticeConfig, hashes it, and stamps every organism's fingerprint.
- SSHes to each host, installs the organism binary, drops the per-unit env file, and enables the systemd unit.
- Waits for the end-to-end pipeline to commit one test slot — same loop as tests/e2e.rs, but against the live fleet.
- Prints the lattice's handshake kit:
lattice: acme.ethereum.mainnet
chain_id: 1
lattice_id: 7f3a9b1c...
LatticeConfig: <hex to publish to integrators>
atelier_mrtd: <48-byte hex>
unseal_mrtd: <48-byte hex>
hosts up: 7 / 7
pipeline: one slot committed end-to-end in 42.1s
If any step fails the command exits non-zero and leaves the fleet in its last known state; re-run after fixing the underlying problem and the tool resumes from the step that broke (idempotent per (lattice, host, role) tuple).
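The first step, manifest validation, can be sketched as a standalone check. The Host type below is a hypothetical simplification of the [[hosts]] entries in lattice.toml; role names and minimum counts follow the role table above:

```rust
use std::collections::HashMap;

// Hypothetical, simplified view of lattice.toml's [[hosts]] entries.
struct Host {
    tdx: bool,
    roles: Vec<&'static str>,
}

// Mirrors the checks `builder lattice up` is described as running:
// minimum counts per role, tdx = true where required, one aggregator.
fn validate(hosts: &[Host]) -> Result<(), String> {
    let minimums: &[(&str, usize)] = &[
        ("zipnet-server", 3),
        ("zipnet-aggregator", 1),
        ("unseal-member", 3),
        ("offer-member", 3),
        ("atelier-member", 3),
        ("relay-member", 3),
        ("tally-member", 3),
    ];
    let tdx_roles = ["unseal-member", "atelier-member"];

    let mut counts: HashMap<&str, usize> = HashMap::new();
    for host in hosts {
        for role in &host.roles {
            *counts.entry(*role).or_insert(0) += 1;
            // Fail closed: a TDX-gated role on a non-TDX host is an error.
            if tdx_roles.contains(role) && !host.tdx {
                return Err(format!("{role} requires tdx = true"));
            }
        }
    }
    for (role, min) in minimums {
        if counts.get(role).copied().unwrap_or(0) < *min {
            return Err(format!("need at least {min} {role} processes"));
        }
    }
    if counts.get("zipnet-aggregator").copied().unwrap_or(0) > 1 {
        return Err("one aggregator maximum".into());
    }
    Ok(())
}

fn main() {
    // A fleet mirroring the example manifest: 3 TDX + 3 cloud + 1 agg.
    let mut hosts: Vec<Host> = (0..3)
        .map(|_| Host { tdx: true, roles: vec!["unseal-member", "atelier-member"] })
        .collect();
    hosts.extend((0..3).map(|_| Host {
        tdx: false,
        roles: vec!["zipnet-server", "offer-member", "relay-member", "tally-member"],
    }));
    hosts.push(Host { tdx: false, roles: vec!["zipnet-aggregator"] });
    assert!(validate(&hosts).is_ok());

    // Flip one TDX flag off and the tool fails closed.
    hosts[0].tdx = false;
    assert!(validate(&hosts).is_err());
}
```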
Day-2 operations
builder lattice up is also the update command. Re-run it
against a manifest whose identity fields have not changed to
roll updated binaries host-by-host; against one whose identity
has changed, it refuses and points you at
Rotations and upgrades — Lattice retirement.
Companion subcommands in the same tool:
| Command | Purpose |
|---|---|
builder lattice status | Print the health of every host and every organism. |
builder lattice down | Stop every organism in reverse pipeline order. |
builder lattice publish | Re-print the integrator handshake kit. |
builder lattice add-host | Add a host to an existing lattice (non-FP change). |
builder lattice rotate-peer | Rotate a single member’s peer identity (§Rotations). |
The subcommands read the same lattice.toml manifest.
When to skip the fast path
- You are running on Kubernetes, Nomad, or another orchestration layer that already owns systemd-equivalent lifecycle. Generate the LatticeConfig with the long-form flow below, hand the env files to your orchestrator, and skip the SSH-based host management.
- You are a co-builder joining an existing lattice rather than operating your own. You are an atelier host operator only; the lattice's owning operator runs the tool.
- You want to split the organism runs across several operators (unseal by one team, offer by another). The tool assumes a single operator; you coordinate per-organism bring-ups out of band.
Everything below is the step-by-step the fast path automates. Read it either because you fall into one of the cases above or because you want to understand what the tool does before you trust it with a production lattice.
Prerequisites
- A Rust toolchain (pinned by each organism crate; expect >=1.93).
- For TDX-gated organisms (unseal, atelier): Intel TDX hosts. The reference build is Ubuntu 24.04 under Intel TDX; see mosaik TDX subsystem.
- Outbound UDP from every host; inbound UDP is recommended but not required when iroh relays are available.
- A chain RPC endpoint for the tally inclusion watcher and — on L2 — for relay's sequencer handoff.
- A configuration management system you already use (systemd, ansible, kubernetes, nomad — any).
Step 1: pick your instance identity
The instance name is the operator-chosen string that folds into every organism’s on-wire identity. Pick one that:
- Is namespaced by your organisation (acme.ethereum.mainnet, not ethereum.mainnet).
- Is stable across minor rotations (rotate secrets without changing the name).
- Changes only when you retire the whole lattice identity (major version bump).
Write it down. You will set LATTICE_INSTANCE=<name> on every
process in the lattice.
Step 2: generate root secrets
Every organism has its own committee-admission secret. Generate one per organism and store in your secret manager:
# proposal: replace with per-organism `cargo run -p builder-<org> -- gen-secret`
for org in zipnet unseal offer atelier relay tally; do
  openssl rand -hex 32 > "secrets/$LATTICE_INSTANCE.$org.secret"
  chmod 0400 "secrets/$LATTICE_INSTANCE.$org.secret"
done
These are the organism-level equivalents of zipnet’s
ZIPNET_COMMITTEE_SECRET. Distribute to the hosts that run
each organism’s committee members; treat them like root
credentials.
Step 3: generate the LatticeConfig fingerprint
Each organism has a gen-config subcommand (proposal:
builder-<org> gen-config --instance <name>) that produces the
organism’s share of the LatticeConfig. Combine them into one
LatticeConfig:
# proposal
cargo run -p builder -- gen-config \
--instance $LATTICE_INSTANCE \
--chain-id 1 \
--zipnet-window interactive \
--unseal-threshold 5-of-7 \
--offer-window 800ms \
--atelier-image ./atelier.tdx.img \
--relay-policy l1-mev-boost \
--tally-settlement-addr 0x....\
> secrets/$LATTICE_INSTANCE.lattice-config.hex
The output is the hex-encoded LatticeConfig you will publish
to integrators (see Step 7).
Step 4: build the TDX images
For every TDX-gated organism:
# proposal
cargo build --release --features tdx-builder-ubuntu -p builder-unseal
cargo build --release --features tdx-builder-ubuntu -p builder-atelier
# ship the resulting images + MR_TDs out-of-band
cat target/release/tdx-artifacts/unseal/mrtd.hex
cat target/release/tdx-artifacts/atelier/mrtd.hex
Both commands are borrowed verbatim from zipnet’s operator
runbook; the pattern is identical. Publish the MR_TDs in your
release notes — integrators compile them in via the tee-tdx feature.
Step 5: smoke-test on one host
Before you touch committee hosts, confirm the six organisms run
end-to-end on your laptop. The reference test (proposal:
cargo test -p builder --test e2e lattice_end_to_end)
spins up six in-process committees, submits one envelope
through zipnet, and asserts a Refunds[0] commit on tally.
A green run in roughly 30 seconds tells you the organism crates are sound in your checkout. If it fails, nothing else on this page is going to work — investigate before touching production hosts.
Step 6: bring up the committee hosts
Provision 3–7 hosts per TDX organism; 3–5 per non-TDX. Suggested layout:
- 3 TDX hosts running both unseal-member and atelier-member (sharing the TDX host fleet).
- 3 general-purpose hosts running zipnet-server, offer-member, relay-member, tally-member (one process per organism, same host).
- 1 general-purpose host running zipnet-aggregator.
Example systemd unit (proposal):
# /etc/systemd/system/builder@.service
[Unit]
Description=Builder lattice role %i
After=network-online.target
Wants=network-online.target
[Service]
EnvironmentFile=/etc/builder/common.env
EnvironmentFile=/etc/builder/%i.env
ExecStart=/usr/local/bin/%i
Restart=on-failure
RestartSec=5s
[Install]
WantedBy=multi-user.target
One unit per (lattice, organism, role) tuple. Example enable-and-start:
systemctl enable --now builder@zipnet-server
systemctl enable --now builder@zipnet-aggregator
systemctl enable --now builder@unseal-member
systemctl enable --now builder@offer-member
systemctl enable --now builder@atelier-member
systemctl enable --now builder@relay-member
systemctl enable --now builder@tally-member
Per-organism env files set the per-role secrets and the common
LATTICE_INSTANCE / LATTICE_CHAIN_ID / LATTICE_CONFIG_HEX
values. See
Environment variables.
Step 7: publish to integrators
Ship integrators three things:
- The LatticeConfig — as a deployment crate (eth-mainnet-lattice = "..." on crates.io) or as a hex fingerprint in your release notes.
- The MR_TDs for every TDX-gated organism, hex-encoded.
- The universe NetworkId if (and only if) you diverge from builder::UNIVERSE. Otherwise omit — integrators default to builder::UNIVERSE.
No bootstrap peers are required; mosaik discovery handles them.
If you want to give integrators a cold-start bootstrap hint,
publish your aggregator’s PeerId.
Step 8: cut integrators a test submission
Pick one of the integrator use cases (wallet, searcher,
proposer) and walk through
Quickstart — submit, bid, read
against your lattice. Treat this as an end-to-end smoke test:
if an external agent can bind, submit, and see a tally refund,
you are live.
Running many lattices on one fleet
Run one systemd unit per (lattice, organism, role). Each unit
reads a different LATTICE_INSTANCE / LATTICE_CONFIG_HEX from
its own env file. Mosaik’s peer discovery handles the rest; the
lattices share the universe without colliding on organism ids.
What to do next
- Wiring the organisms together — operator-level view of the subscription graph from composition.md.
- Running a committee server — per-organism runbook, one page per organism.
- Rotations and upgrades — how to rotate committee secrets and upgrade TDX images without losing the lattice identity.
- Monitoring and alerts — what to watch.
- Incident response — when things go wrong.
Running a zipnet committee
audience: operators
Zipnet is the submission organism. The lattice consumes the existing zipnet operator book unchanged; this page is a pointer plus the lattice-specific overrides.
Read the zipnet operator book first
Every procedure in the zipnet book — running a committee server, running an aggregator, running a client, rotations, incident response, security posture — applies to the zipnet organism inside a lattice. Treat it as authoritative for operations on this organism.
What this page covers is only the places the lattice wraps zipnet in a different envelope.
What changes in a lattice context
Three things:
- Instance name. Zipnet's ZIPNET_INSTANCE is derived from the lattice's own LATTICE_INSTANCE rather than chosen independently. The reference systemd unit passes ZIPNET_INSTANCE="${LATTICE_INSTANCE}" verbatim; the zipnet config derivation lives inside the lattice's LatticeConfig.
- Committee secret. ZIPNET_COMMITTEE_SECRET is one of the six organism secrets you generated in Quickstart — Step 2. Same semantics as documented in the zipnet book; only the provenance changes.
- Shared universe. Zipnet is already designed to run on zipnet::UNIVERSE = unique_id!("mosaik.universe"), which is identical to builder::UNIVERSE. You do not override ZIPNET_UNIVERSE; the lattice expects zipnet on the shared universe.
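The shared-universe point follows from unique_id! being a pure function of its string literal. The derivation below is a stand-in, not the real macro, but the property it demonstrates is the one the lattice relies on:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for mosaik's unique_id! macro: a stable id derived from
// the string literal. The real derivation may differ; what matters
// is that it is deterministic in the string.
fn unique_id(s: &str) -> u64 {
    let mut h = DefaultHasher::new();
    s.hash(&mut h);
    h.finish()
}

fn main() {
    // zipnet::UNIVERSE and builder::UNIVERSE are both declared as
    // unique_id!("mosaik.universe"), so they name the same network.
    let zipnet_universe = unique_id("mosaik.universe");
    let builder_universe = unique_id("mosaik.universe");
    assert_eq!(zipnet_universe, builder_universe);

    // A lattice declared on any other universe string would be a
    // different network, and its organisms would never bond here.
    assert_ne!(zipnet_universe, unique_id("other.universe"));
}
```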
systemd unit example
# /etc/builder/zipnet-server.env
LATTICE_INSTANCE=acme.ethereum.mainnet
LATTICE_CHAIN_ID=1
# The same LATTICE_CONFIG_HEX the other organisms consume.
LATTICE_CONFIG_HEX=7f3a9b1c...
# Plus the zipnet-specific admission secret.
ZIPNET_COMMITTEE_SECRET_FILE=/etc/builder/secrets/acme.ethereum.mainnet.zipnet.secret
# Stable peer identity for the server; rotate only on incident.
ZIPNET_SECRET_FILE=/etc/builder/secrets/acme.ethereum.mainnet.zipnet.server-01.peer
The binary reads LATTICE_CONFIG_HEX, decodes the
zipnet::Config fragment, and boots under the derived
GroupId / StreamIds. The zipnet book’s env var reference
describes ZIPNET_* knobs (round period, fold deadline, etc.)
that are now fixed by the LatticeConfig — operators who want
non-default windows change the LatticeConfig and rebuild, not
the env.
Running the aggregator
No changes from the zipnet book. Run one zipnet aggregator
process per lattice on a well-connected host and point it at the
same LATTICE_CONFIG_HEX.
TDX-gated zipnet
If your LatticeConfig’s zipnet config has a pinned committee
MR_TD (the zipnet v2 TDX posture), build the zipnet image with
--features tee-tdx,tdx-builder-ubuntu and boot the server
processes from the resulting image. Procedure identical to the
zipnet TDX operator example.
What you do not do here
- Do not manage the lattice’s other organisms in the same unit. One systemd unit per (organism, role).
- Do not rotate ZIPNET_COMMITTEE_SECRET without the lattice operator's coordinated rotation plan (see Rotations and upgrades).
- Do not host the zipnet committee on hosts shared with unrelated tenants. Same hygiene as the zipnet book requires.
Observing
Zipnet’s Prometheus metrics are emitted under the zipnet crate’s
namespace and labelled with the instance = LATTICE_INSTANCE.
See Metrics reference for which
zipnet metrics the lattice operator watches (committee size,
round commit latency, agg dropouts).
Related
Running an unseal committee
audience: operators
Unseal is the threshold-decryption organism that unwraps
zipnet::Broadcasts into UnsealedPool for the downstream
organisms. It is TDX-gated in every production deployment and is
the organism whose threshold t-of-n parameter anchors the
lattice’s anonymity budget.
Role in the lattice
Unseal watches zipnet::Broadcasts and commits one
UnsealedRound per zipnet-finalized slot once t of n
committee members contribute their threshold share. See
unseal organism spec
for the cryptographic details.
Committee sizing
The t-of-n threshold pinned in the LatticeConfig’s
unseal::Config is the hard parameter. Recommended sizes:
| n | t | Anonymity posture |
|---|---|---|
| 3 | 2 | Minimum. Anonymity survives a single rogue member (any two can jointly decrypt early). Liveness brittle. |
| 5 | 3 | Sensible dev / testnet shape. |
| 7 | 5 | Recommended production posture. |
| 9 | 7 | For lattices where anonymity matters most; pay the latency overhead. |
Changing t-of-n after deployment changes the unseal
fingerprint, which changes the GroupId, which means no bond.
Treat sizing as a deployment-time decision; retirements and
replacements instead of in-place changes.
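The trade-off in the sizing table reduces to two numbers per (n, t) pair; a minimal sketch (helper names mine, not the crate's):

```rust
// For a t-of-n threshold scheme:
// - anonymity survives up to t-1 colluding members (t shares are
//   needed to decrypt a round early);
// - liveness survives up to n-t offline members (t contributions
//   still arrive).
fn colluders_tolerated(t: u32) -> u32 {
    t - 1
}

fn offline_tolerated(n: u32, t: u32) -> u32 {
    n - t
}

fn main() {
    // Recommended production posture from the table: n = 7, t = 5.
    assert_eq!(colluders_tolerated(5), 4);
    assert_eq!(offline_tolerated(7, 5), 2);

    // Minimum shape: n = 3, t = 2. Liveness is brittle because a
    // single offline member leaves no slack at all.
    assert_eq!(colluders_tolerated(2), 1);
    assert_eq!(offline_tolerated(3, 2), 1);
}
```

Raising t buys anonymity at the direct cost of liveness slack, which is why the table pays for larger n as t grows.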
Hardware
- TDX-enabled hosts. Reference build: Ubuntu 24.04 on Intel TDX. See mosaik TDX subsystem.
- Moderate RAM. 4 GiB default memory size in the TDX image build is enough; the organism’s state machine is small.
- Stable peer identity. Like zipnet committee servers,
unseal members need stable
PeerIds across restarts. Each member’s secret key is pinned in the operator’s secret store.
systemd unit example
# /etc/builder/unseal-member.env
LATTICE_INSTANCE=acme.ethereum.mainnet
LATTICE_CHAIN_ID=1
LATTICE_CONFIG_HEX=7f3a9b1c...
UNSEAL_COMMITTEE_SECRET_FILE=/etc/builder/secrets/acme.ethereum.mainnet.unseal.secret
# Each member owns a distinct DKG share secret.
UNSEAL_SHARE_SECRET_FILE=/etc/builder/secrets/acme.ethereum.mainnet.unseal.member-03.share
# Stable peer identity.
UNSEAL_SECRET_FILE=/etc/builder/secrets/acme.ethereum.mainnet.unseal.member-03.peer
Distributed key generation
At lattice bring-up time the n committee members run a DKG
ceremony. The resulting aggregate public key is what
integrators seal their payloads against (see
integrators/submitting.md).
The ceremony is a one-off: once complete, each member holds a
share and the aggregate public key is pinned into the
unseal::Config and shipped in the lattice’s
LatticeConfig. Lose a share and you lose a committee member
until the lattice retires and redeploys.
A DKG rerun happens:
- At a scheduled rotation (see Rotations and upgrades).
- When a committee member’s TDX image is compromised.
- When the lattice operator retires the lattice identity.
What this organism does not do
- It does not decide what is in a block. atelier does.
- It does not auction. offer does.
- It does not attest to any single committee member's identity beyond the TDX quote on their PeerEntry.
Observing
Metrics to watch per member:
- unseal_shares_submitted_total{slot=...} — rate of share contributions.
- unseal_decrypt_latency_seconds — time from zipnet::Broadcasts append to UnsealedPool commit.
- unseal_committee_size — how many committee members are bonded to this member.
If any member’s share rate drops to zero, that member is contributing nothing; rotate or investigate before the lattice’s anonymity posture weakens.
Related
- Running an atelier builder — the immediate downstream.
- Rotations and upgrades — DKG rerun procedure.
- unseal organism spec
- threat-model — unseal
Running an offer auction
audience: operators
Offer is the sealed-bid auction organism. It admits searcher
bids over the slot's UnsealedPool content, runs a threshold-encrypted auction inside its own state machine, and commits
one AuctionOutcome per slot.
Role in the lattice
Offer subscribes to unseal::UnsealedPool to know when a slot’s
order flow is ready for bidding, accepts sealed bids on its
Bid<Bundle> stream during the auction window, and commits the
winning bundle to AuctionOutcome before the atelier organism
picks up the pool for building.
Committee sizing
Majority-honest trust model: a malicious majority of committee members could commit a wrong winner. Bid confidentiality is threshold-protected by a separate DKG (the offer DKG, distinct from unseal's).
Recommended sizes:
- 3 members for development / testnet lattices.
- 5 members for production.
Changing size after deployment changes the offer fingerprint; retire-and-replace, not in-place.
Hardware
- Modest cloud. 2 vCPU / 4 GiB RAM per committee member is enough.
- TDX optional in v1; a TDX-gated variant lands when the lattice’s threat model requires it.
- Stable peer identity per member.
systemd unit example
# /etc/builder/offer-member.env
LATTICE_INSTANCE=acme.ethereum.mainnet
LATTICE_CHAIN_ID=1
LATTICE_CONFIG_HEX=7f3a9b1c...
OFFER_COMMITTEE_SECRET_FILE=/etc/builder/secrets/acme.ethereum.mainnet.offer.secret
OFFER_SHARE_SECRET_FILE=/etc/builder/secrets/acme.ethereum.mainnet.offer.member-02.share
OFFER_SECRET_FILE=/etc/builder/secrets/acme.ethereum.mainnet.offer.member-02.peer
# Auction window size — soft ceiling before the organism closes
# the auction regardless of outstanding bids.
OFFER_AUCTION_WINDOW_MS=800
OFFER_AUCTION_WINDOW_MS is a per-lattice parameter pinned in
the offer::Config. Changing it changes the fingerprint.
Searcher admission
Searchers write to Bid<Bundle> via a ticket. Three options:
- Open permissionless: any peer on the universe can bid. Not recommended for production — it exposes the lattice to trivial bid spam.
- JWT-gated: issue JWTs out of band to onboarded searchers; the ticket validator pins the issuer key. Standard mosaik ticket shape.
- TDX-gated: require attested searchers only. Rarely appropriate for offer (searchers are usually trusted under a commercial agreement rather than technically attested).
The choice folds into the offer::Config.acl field and into
the fingerprint.
What this organism does not do
- It does not decrypt the unsealed pool. unseal does.
- It does not order the winning bundle's txs inside the block. atelier does.
- It does not pay out refunds to the losing searchers. tally does, according to the lattice's refund policy.
Observing
- offer_bids_received_total{slot=...} — bid rate per slot.
- offer_auction_commit_latency_seconds — time from auction window open to commit.
- offer_winner_bid_wei — distribution of winning bids.
Related
Running an atelier builder
audience: operators
Atelier is the TDX-attested co-building organism. It reads the
slot’s UnsealedPool and AuctionOutcome, assembles a
candidate block body inside a TDX enclave, and commits it to
Candidates under a BLS aggregate signature.
Role in the lattice
Atelier is the place where the actual block gets built. In a Phase 1 single-operator lattice, every committee member belongs to one operator’s fleet; in Phase 2 multiple operators contribute members (co-building). In both cases the organism’s public surface is the same.
Committee sizing
- 3 members: minimum functional committee.
- 5 members: recommended production baseline.
- 7+ members: multi-operator Phase 2; each member is a distinct operator’s TDX image.
The n is pinned in atelier::Config and folds into the
fingerprint. A lattice adding a co-builder is a lattice
retirement + replacement, not an in-place n change — see
Rotations and upgrades.
Hardware
- TDX-enabled hosts, higher RAM. 8 GiB default memory in the TDX image; block simulation is the heaviest workload.
- Low-latency networking. Atelier members talk to each other many times per slot on the derived private network.
- Chain RPC access from inside the TDX image, for bundle simulation. The RPC endpoint is pinned in atelier::Config (or derivable from LATTICE_CHAIN_ID).
systemd unit example
# /etc/builder/atelier-member.env
LATTICE_INSTANCE=acme.ethereum.mainnet
LATTICE_CHAIN_ID=1
LATTICE_CONFIG_HEX=7f3a9b1c...
ATELIER_COMMITTEE_SECRET_FILE=/etc/builder/secrets/acme.ethereum.mainnet.atelier.secret
ATELIER_BLS_SECRET_FILE=/etc/builder/secrets/acme.ethereum.mainnet.atelier.member-01.bls
ATELIER_SECRET_FILE=/etc/builder/secrets/acme.ethereum.mainnet.atelier.member-01.peer
# Chain RPC for simulation.
ATELIER_CHAIN_RPC=https://eth-mainnet.g.alchemy.com/v2/...
Building the TDX image
cargo build --release --features tdx-builder-ubuntu -p builder-atelier
# target/release/tdx-artifacts/atelier/ contains:
# initramfs.img
# OVMF.fd
# kernel
# mrtd.hex <-- publish this
Publish mrtd.hex in the lattice’s release notes; pin it in
the atelier::Config.mrtd field. Integrators compiling with
tee-tdx verify it on bond.
Co-building (Phase 2)
To bring a new co-builder operator into the lattice:
- New operator builds the reference atelier TDX image and publishes their MR_TD.
- The lattice's atelier::Config.mrtd_acl is updated to include the new MR_TD. This changes the fingerprint.
- Lattice retirement + replacement: publish a new LatticeConfig version; existing integrators migrate on their schedule.
An in-place add-a-co-builder-without-changing-fingerprint is explicitly not supported. It would silently change the trust model under integrators’ feet.
What this organism does not do
- It does not authenticate searchers (offer's job).
- It does not ship the block to proposers (relay's job).
- It does not attribute refunds (tally's job).
- It does not decide the consensus among simulations — the BLS aggregate signature is after-the-fact proof that a majority of committee members agreed on the final body. The actual agreement is reached in the Raft log.
Observing
- atelier_candidates_committed_total — one per successful slot.
- atelier_candidate_build_latency_seconds — per-slot build time; alert when > slot period.
- atelier_simulation_divergence_total — when committee members disagree on a simulation. Investigate any non-zero rate.
- atelier_member_tdx_attested{member=...} — 0/1 per committee member's attestation status.
Related
Running a relay
audience: operators
Relay is the PBS-style fanout organism. It ships
atelier::Candidates to the chain’s proposer or sequencer and
records the proposer’s acknowledgement in AcceptedHeaders.
Role in the lattice
Relay members subscribe to atelier::Candidates, each forwards
the header + bid pair to the proposers they are configured to
talk to, and commits an AcceptedHeaders entry when a majority
agrees the proposer acknowledged.
Committee sizing
- 3 members minimum; 5 members for production.
Relay is any-trust on liveness: one honest member suffices to
ship a header. Integrity of AcceptedHeaders is majority-honest;
a majority-malicious relay can forge a proposer-ack record, but
tally’s on-chain inclusion watcher is the ground truth.
Hardware
- Well-connected cloud. Relay members are the organism with the most external connectivity — one socket per proposer in the proposer set, kept warm.
- Modest CPU / RAM. 2 vCPU / 4 GiB is enough.
- Chain-type-appropriate TLS setup. L1 MEV-Boost requires specific TLS ciphers; L2 sequencer endpoints vary.
Policy selection
Relay’s Config carries a Policy enum:
| Policy | Target |
|---|---|
| L1MevBoost | Ethereum mainnet MEV-Boost relay endpoints |
| L2Sequencer | A single L2 sequencer endpoint (centralized) |
| L2LeaderRotation | The L2’s elected leader set per slot |
The policy folds into the relay fingerprint. Switching policy is a lattice retirement; you do not switch in place.
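How a policy variant folds into the fingerprint can be sketched as follows. Everything here is illustrative: `relay_fingerprint`, the `u64` ids, and `DefaultHasher` (a stand-in for blake3) are not the relay crate’s API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// The three proposer-side targets from the table above.
#[allow(dead_code)]
#[derive(Hash)]
enum Policy {
    L1MevBoost,
    L2Sequencer,
    L2LeaderRotation,
}

/// Illustrative: the policy discriminant is hashed next to the rest
/// of the relay config, so switching policy changes the fingerprint.
fn relay_fingerprint(config_root: u64, policy: &Policy) -> u64 {
    let mut h = DefaultHasher::new();
    config_root.hash(&mut h);
    policy.hash(&mut h);
    h.finish()
}
```

Two relays that differ only in policy derive disjoint identities, which is why switching policy in place is not possible: existing integrators would simply get ConnectTimeout.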
systemd unit example
# /etc/builder/relay-member.env
LATTICE_INSTANCE=acme.ethereum.mainnet
LATTICE_CHAIN_ID=1
LATTICE_CONFIG_HEX=7f3a9b1c...
RELAY_COMMITTEE_SECRET_FILE=/etc/builder/secrets/acme.ethereum.mainnet.relay.secret
RELAY_SECRET_FILE=/etc/builder/secrets/acme.ethereum.mainnet.relay.member-01.peer
# Proposer-side endpoints. Comma-separated; each relay member
# can target a different subset.
RELAY_PROPOSER_ENDPOINTS=https://mev-boost.relay-a.example,https://mev-boost.relay-b.example
What this organism does not do
- It does not choose which candidate wins. Atelier commits one candidate per slot; relay ships that one.
- It does not sign the block. Atelier’s BLS aggregate signature is what the proposer verifies.
- It does not pay out MEV. Tally does.
Observing
- relay_headers_sent_total{member=...,proposer=...}.
- relay_proposer_ack_latency_seconds — round-trip to the proposer endpoint.
- relay_committee_agreement_rate — fraction of slots where committee members agreed on the ack. Investigate dips.
- relay_on_chain_mismatches_total — incremented when tally reports an AcceptedHeaders[S] that does not match on-chain. Any non-zero value is an incident; see Incident response.
Related
Running a tally
audience: operators
Tally is the refund accounting organism. It watches the chain
for inclusion of the lattice’s winning blocks, attributes the
captured MEV back to the order-flow providers and searchers that
contributed, and signs an Attestation that integrators can
present to an on-chain settlement contract.
Role in the lattice
Tally is the last organism in the pipeline. It subscribes to
relay::AcceptedHeaders, watches the chain RPC for inclusion,
joins with zipnet::Broadcasts / unseal::UnsealedPool /
offer::AuctionOutcome / atelier::Candidates to compute
attribution evidence, and commits one Refunds entry per
included block.
Committee sizing
- 3 members minimum; 5 members recommended.
Majority-honest trust model. A majority can misattribute, but
the on-chain settlement contract is the ultimate arbiter — an
Attestation whose evidence does not verify is simply not paid
out.
Hardware
- Modest cloud. 2 vCPU / 4 GiB RAM suffices.
- No TDX required in v1. The settlement contract, not the committee’s hardware, is the ground truth.
- Chain RPC access from every committee member, for the inclusion watcher. Pick a provider with good historical block coverage — missed blocks mean missed attributions.
systemd unit example
# /etc/builder/tally-member.env
LATTICE_INSTANCE=acme.ethereum.mainnet
LATTICE_CHAIN_ID=1
LATTICE_CONFIG_HEX=7f3a9b1c...
TALLY_COMMITTEE_SECRET_FILE=/etc/builder/secrets/acme.ethereum.mainnet.tally.secret
TALLY_ECDSA_SECRET_FILE=/etc/builder/secrets/acme.ethereum.mainnet.tally.member-03.ecdsa
TALLY_SECRET_FILE=/etc/builder/secrets/acme.ethereum.mainnet.tally.member-03.peer
TALLY_CHAIN_RPC=https://eth-mainnet.g.alchemy.com/v2/...
TALLY_SETTLEMENT_ADDR=0x1234...
The settlement contract
Tally’s attestations are only useful if a settlement contract on-chain accepts them. The contract:
- Verifies t-of-n ECDSA signatures from the tally committee’s published public keys.
- Checks that the block_hash in the attestation matches a real on-chain block at the given slot.
- Pays out the claimed amount to the recipient.
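The contract’s acceptance logic has roughly this shape. A minimal sketch, not Solidity and not the real contract: signature verification is abstracted behind a caller-supplied predicate, and `accept_attestation` is a hypothetical name.

```rust
/// Illustrative t-of-n acceptance check for a tally attestation.
/// `verify` stands in for real ECDSA verification over the
/// attestation bytes; payout itself is elided.
fn accept_attestation(
    committee_keys: &[&str],
    threshold: usize,
    signatures: &[(&str, &str)], // (claimed pubkey, signature)
    verify: impl Fn(&str, &str) -> bool,
    attested_block_hash: &str,
    on_chain_block_hash: &str,
) -> bool {
    // 1. The attested block_hash must match a real on-chain block.
    if attested_block_hash != on_chain_block_hash {
        return false;
    }
    // 2. Count valid signatures from distinct, known committee keys.
    let mut seen: Vec<&str> = Vec::new();
    for &(key, sig) in signatures {
        if committee_keys.contains(&key) && !seen.contains(&key) && verify(key, sig) {
            seen.push(key);
        }
    }
    // 3. Pay out only when the t-of-n threshold is met.
    seen.len() >= threshold
}
```

Note that duplicate signatures from the same key count once; a majority-honest tally committee therefore cannot be impersonated by one compromised member replaying its own signature.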
The contract address is pinned in tally::Config.settlement_addr
and ships in the lattice’s LatticeConfig. The contract’s code
is the operator’s responsibility to deploy and audit; the tally
organism does not deploy it for you.
What this organism does not do
- It does not change the winning block. Atelier committed it; the chain included it or not.
- It does not collect funds. On-chain balances belong to the block builder / proposer; the settlement contract is the mechanism by which some of those funds route to order-flow providers.
- It does not judge whether a refund is “fair”. The attribution is a deterministic function of the upstream commits; the policy lives in the committee’s state machine, and the same policy is applied to every block.
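“Deterministic function of the upstream commits” means every committee member derives the identical split from the same inputs. A minimal sketch of one such policy, pro-rata over contributor weights; `attribute` and all names are hypothetical, not tally’s actual policy.

```rust
/// Illustrative attribution: split captured value pro-rata over
/// contributor weights, deterministically. The integer-division
/// remainder goes to the largest contributor so every committee
/// member computes the identical result.
fn attribute(total_wei: u64, contributors: &[(&str, u64)]) -> Vec<(String, u64)> {
    let weight_sum: u64 = contributors.iter().map(|(_, w)| *w).sum();
    assert!(weight_sum > 0, "attribution needs at least one weighted contributor");
    let mut out: Vec<(String, u64)> = contributors
        .iter()
        .map(|(who, w)| (who.to_string(), total_wei * *w / weight_sum))
        .collect();
    // Assign the rounding remainder deterministically.
    let paid: u64 = out.iter().map(|(_, amt)| *amt).sum();
    if let Some(top) = out.iter_mut().max_by_key(|(_, amt)| *amt) {
        top.1 += total_wei - paid;
    }
    out
}
```

The point is not this particular split but the shape: no randomness, no member-local state, so a t-of-n committee can co-sign the same attestation without coordination beyond the shared upstream commits.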
Observing
- tally_blocks_attributed_total — rate of successful attributions.
- tally_attestation_latency_seconds — time from on-chain inclusion to Attestations commit.
- tally_evidence_failures_total — attribution attempts where the upstream evidence did not line up. Any sustained rate is a cross-organism integration bug and an incident.
- tally_chain_rpc_lag_seconds — how far behind the head the committee’s RPC feed is.
Related
- Running a relay — immediate upstream.
- tally organism spec
- integrators/refunds.md
- threat-model — tally
Wiring the organisms together
audience: operators
Each organism is standalone; the lattice emerges from how they subscribe to each other’s public surfaces. This page is the operator-level view of that wiring: what flows where, what the per-organism driver expects to see upstream, and what happens operationally when something is not flowing.
The contributor-level version of this page is composition; the subscription graph is identical. This page drops the state-machine jargon and names the knobs you actually tune.
The pipeline at a glance
integrators ─► zipnet ─► unseal ─► offer ─► atelier ─► relay ─► tally ─► integrators
                                     ▲                            ▲
                                     │                            │
integrators (searchers) ── bids ─────┘                            │
                                                                  │
                                             on-chain inclusion watcher
Two integrator inputs (zipnet for wallets, offer for
searchers), one integrator output (tally, for everyone).
Every internal arrow is a mosaik collection subscription.
Who subscribes to whom
| Organism | Subscribes to (reads) | Written by this organism |
|---|---|---|
| zipnet | - | Broadcasts, LiveRoundCell |
| unseal | zipnet::Broadcasts | UnsealedPool |
| offer | unseal::UnsealedPool | AuctionOutcome |
| atelier | unseal::UnsealedPool, offer::AuctionOutcome | Candidates |
| relay | atelier::Candidates | AcceptedHeaders |
| tally | relay::AcceptedHeaders, atelier::Candidates, offer::AuctionOutcome, zipnet::Broadcasts, chain RPC | Refunds, Attestations |
All subscriptions are on the shared universe. You do not
configure subscription endpoints; the organism crates derive the
upstream StreamId / StoreId from the lattice fingerprint
each binary sees via LATTICE_CONFIG_HEX.
What “wired” looks like at runtime
For each organism, watch its *_upstream_peers metric. It
reports how many peers the organism’s driver currently has a
bond with on each upstream subscription. Values to expect:
- unseal_upstream_peers{source="zipnet::Broadcasts"} — at least 3 (a zipnet committee member).
- offer_upstream_peers{source="unseal::UnsealedPool"} — at least 3.
- atelier_upstream_peers{source=...} — same for each of its two upstreams.
- relay_upstream_peers{source="atelier::Candidates"} — at least 3.
- tally_upstream_peers{source=...} — at least 3 per source.
A zero on any of these means the organism is not yet bonded upstream; it is a transient state during bring-up and an incident during steady state. See Incident response — missing upstream.
The slot is the foreign key
Every commit in every organism is keyed by the chain’s slot number. Operators rely on this to debug the pipeline end to end: pick a slot, query each organism’s collection for that slot, and check the chain one stops at.
slot 21_000_000
zipnet::Broadcasts[21_000_000] committed ✓
unseal::UnsealedPool[21_000_000] committed ✓
offer::AuctionOutcome[21_000_000] committed ✓
atelier::Candidates[21_000_000] committed ✓
relay::AcceptedHeaders[21_000_000] committed ✓
tally::Refunds[21_000_000] pending (awaiting on-chain inclusion)
Walking the pipeline slot-by-slot is the first debug step for every “something is stuck” incident.
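The slot walk reduces to one question per organism, in pipeline order: did it commit for this slot? A minimal sketch of that reduction; `first_stall` is a hypothetical helper, not a shipped CLI.

```rust
/// Illustrative slot walk: given each organism's commit status for
/// one slot, in pipeline order, return the first organism that has
/// not committed — that is where the pipeline stalled. `None` means
/// the slot flowed end to end.
fn first_stall<'a>(commits: &[(&'a str, bool)]) -> Option<&'a str> {
    commits
        .iter()
        .find(|&&(_, committed)| !committed)
        .map(|&(organism, _)| organism)
}
```

Everything downstream of the returned organism is paused, not broken, which is why the walk comes before any mitigation.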
Cross-organism secrets
Nothing about one organism’s committee secret lets a holder
join another organism’s committee. The secrets are disjoint.
This is by design: a compromised offer member does not taint
atelier.
The one shared piece of identity is the lattice’s
LATTICE_CONFIG_HEX, which every process reads; it is a
public fingerprint, not a secret.
Running fewer than six organisms
Not a supported shape. A partial lattice is not a lattice. If you want to run zipnet only, deploy the zipnet organism standalone (that is what the zipnet book covers). If you want the full pipeline, run all six.
The only “skip an organism” posture that makes sense is to
adopt someone else’s deployment of that organism — e.g. use the
foundation’s reference unseal instead of running your own.
This is a commercial agreement between you and that operator,
not a protocol feature. In a lattice, every organism’s identity
is compiled into the LatticeConfig; there is no runtime
routing that picks one provider over another.
Cross-references
- Lattice overview
- Quickstart — stand up a lattice
- composition.md — the same graph with state-machine semantics.
Rotations and upgrades
audience: operators
A lattice is a long-lived identity; the six organisms inside it need to rotate secrets, replace committee members, and upgrade TDX images on their own cadences. This page is the procedure checklist.
What rotates without a fingerprint change
Changes that do not touch the lattice’s on-wire identity. Integrators do not need to rebuild; existing bonds survive.
- Peer identity secrets per committee member (<ORG>_SECRET_FILE in env files). Rotating this changes the PeerId of that one member; peer catalogs re-converge without intervention. Do one member at a time.
- TDX image version for an organism, provided the new image produces the same MR_TD (reproducible build over a bit-for-bit identical codebase). Rolling restart.
- Committee-admission secrets (<ORG>_COMMITTEE_SECRET_FILE) within the same fingerprint, provided you distribute the new secret to every member in lockstep. Rare; usually part of a coordinated incident response.
What rotates with a fingerprint change
Changes that change the lattice’s on-wire identity. Integrators
must rebuild against the new LatticeConfig; mid-flight bonds
break.
- Any LatticeConfig field. Instance name, chain id, any organism’s schema, window, threshold, MR_TD.
- An organism’s StateMachine::signature() change (usually coincident with a crate version bump).
- DKG output for unseal or offer — new aggregate pubkey, new fingerprint.
When these change, the lattice effectively becomes a new lattice. See Lattice retirement.
Rotating a committee member’s peer identity
Per organism; repeat for each. Zero-downtime if n >= 3.
- Generate a new secret file on the host.
- Stop that member’s systemd unit.
- Swap <ORG>_SECRET_FILE to point at the new file.
- Start the unit. It joins with a new PeerId.
- Monitor <org>_committee_size on the other members; when they have re-bonded (within a minute in the common case), the rotation is complete.
Repeat for the next member. Never rotate two members’ identities simultaneously in a committee of 3, or the remaining single member cannot achieve quorum.
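The arithmetic behind that rule is Raft’s majority quorum. A minimal sketch, with `quorum` and `can_commit` as illustrative helper names:

```rust
/// Raft majority quorum: floor(n/2) + 1 live members are required
/// for the committee to commit.
fn quorum(n: usize) -> usize {
    n / 2 + 1
}

/// Can a committee of `n` with `live` reachable members commit?
fn can_commit(n: usize, live: usize) -> bool {
    live >= quorum(n)
}
```

With n = 3 the quorum is 2, so one member mid-rotation leaves the committee live; two members mid-rotation leaves a single member below quorum.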
Rolling a TDX image with the same MR_TD
This is the common case for TDX-gated organisms. The MR_TD is reproducible; the image’s kernel or user-space just got a patch whose bits are identical on reproducible builds.
- Build the new image.
- Verify mrtd.hex matches the pinned atelier::Config.mrtd. If not, the change requires a fingerprint-changing rotation.
- Stop one member at a time. Swap the VM image. Start.
- Confirm TDX attestation metrics green on the rotated member before rotating the next.
Rolling a TDX image with a new MR_TD
This is a fingerprint change. Follow Lattice retirement.
DKG rerun (unseal or offer)
Run at quarterly cadence by default, or on demand when a committee member is compromised.
- Announce the retirement date in your release notes.
- Run the DKG ceremony — every committee member participates online; the output is a new aggregate public key.
- Produce a new LatticeConfig with the new public key.
- Proceed to Lattice retirement.
You cannot do an in-place DKG re-run that preserves the lattice fingerprint — the aggregate public key is in the fingerprint.
Lattice retirement
Procedure when the LatticeConfig fingerprint changes:
- Announce the retirement and the new lattice name (typically <current>.v<next>). Publish a timeline — four weeks is generous, two is tight.
- Publish the new LatticeConfig to your operator-crate or release-notes channel alongside the old one. Both lattices coexist on the universe during the transition.
- Stand up the new lattice per Quickstart — stand up a lattice.
- Notify integrators with the new LatticeConfig and migration deadline.
- After the deadline, stop the old lattice’s processes. Integrators still bonded to the old lattice get ConnectTimeout.
Both lattices share the same universe and the same discovery layer; they appear to the mosaik layer as two unrelated deployments. This is the same migration pattern zipnet documents; see rotations in the zipnet book.
Schedule recommendations
| Item | Default cadence |
|---|---|
| Committee member peer identity | Quarterly |
| TDX image patch (same MR_TD) | As CVEs land |
| DKG rerun (unseal, offer) | Quarterly |
| Committee admission secret rotation | On incident only |
| Lattice retirement (.v<N>) | On breaking change |
| LatticeConfig minor bump (non-FP) | Not a thing — there is no non-FP LatticeConfig bump |
“Non-FP” = non-fingerprint. Every LatticeConfig change is a
fingerprint change by construction; non-fingerprint changes
happen at the systemd-env layer (peer identity, TLS certs,
etc.) and do not touch the LatticeConfig at all.
Cross-references
- Incident response — rotations driven by security events.
- Running an atelier builder — the co-building Phase 2 onboarding procedure lives there; it is a lattice retirement in operational terms.
- zipnet rotations — the per-organism procedure for the zipnet organism, detailed.
Monitoring and alerts
audience: operators
Every organism exposes Prometheus metrics via mosaik’s built-in exporter. This page is the short list of what to watch, broken out per organism plus a lattice-wide section. The full metrics catalogue lives in Metrics reference.
Enable the exporter
Each organism binary accepts a PROMETHEUS_ADDR env var
(proposal). The reference systemd units expose it at
0.0.0.0:909x with the last digit picked per organism so one
host can run several side by side.
# /etc/builder/common.env
PROMETHEUS_ADDR=0.0.0.0:9090 # plus 9091 for unseal, 9092 offer, ...
Scrape with your Prometheus stack; the labels include
lattice, organism, and role so you can aggregate across
lattices.
Lattice-wide dashboards
Two dashboards every lattice operator should have.
End-to-end slot health
One row per slot in the past hour, one column per organism, green / yellow / red per cell. Green if the organism committed for that slot, yellow if it is expected and overdue, red if a deadline has passed without commit. Walk the row left-to-right to spot where the pipeline stalled.
Data sources: each organism’s <org>_commits_per_slot counter.
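The green/yellow/red decision per cell is simple enough to state as code. A minimal sketch under the assumption that the dashboard knows each organism’s per-slot commit deadline; `Cell` and `cell` are hypothetical names.

```rust
/// Illustrative cell colouring for the end-to-end slot-health
/// dashboard.
#[derive(Debug, PartialEq)]
enum Cell {
    Green,  // organism committed for the slot
    Yellow, // commit expected and overdue, deadline not yet passed
    Red,    // deadline passed without a commit
}

fn cell(committed: bool, now_secs: u64, deadline_secs: u64) -> Cell {
    if committed {
        Cell::Green
    } else if now_secs <= deadline_secs {
        Cell::Yellow
    } else {
        Cell::Red
    }
}
```

Walking a slot’s row left-to-right, the first non-green cell is the stall point; everything to its right is waiting on it.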
Discovery health
discovery_peers_total across all your hosts, filtered by
lattice. A sudden drop in peer count indicates network
partition or a discovery subsystem failure; diagnose before
any organism-level alert fires.
Data source: mosaik’s discovery metrics; see the mosaik metrics reference.
Per-organism red lines
Conservative defaults; tune to your lattice’s slot cadence.
zipnet
- zipnet_server_up — 0 on any server beyond 1 minute = page.
- zipnet_round_commit_latency_seconds P95 > round_period — page.
- zipnet_broadcasts_appended_total rate drops to zero for three consecutive rounds — page.
unseal
- unseal_decrypt_latency_seconds P95 > slot period — page. Unseal is the first organism whose latency directly delays downstream organisms.
- unseal_member_tdx_attested = 0 for any member — page.
offer
- offer_auction_commit_latency_seconds P95 > auction window — investigate.
- offer_bids_received_total rate drops to zero beyond 10 consecutive slots — investigate (could be legitimate: quiet mempool).
atelier
- atelier_candidates_committed_total rate drops below slot rate = page. This is the lattice’s primary output.
- atelier_simulation_divergence_total any non-zero rate = investigate immediately; a divergence means the co-building committee disagrees, which should be a rarity.
- atelier_member_tdx_attested = 0 for any member = page.
relay
- relay_proposer_ack_latency_seconds P95 > slot deadline = page.
- relay_on_chain_mismatches_total any non-zero = page. An AcceptedHeaders that did not land on-chain is either a malicious relay member or a proposer that changed its mind; both require immediate inspection.
tally
- tally_attestation_latency_seconds P95 > 2 × block time = investigate.
- tally_evidence_failures_total any sustained rate = page. A sustained mismatch between upstream evidence and tally’s attribution is a cross-organism integration bug.
- tally_chain_rpc_lag_seconds > 30 seconds = investigate.
Alerts you do not need
- Raft leader change alerts. Mosaik’s Raft variant churns leaders on normal network turbulence. Alerts for individual leader changes are noise; alert on “no leader for 30 seconds” instead.
- Per-commit-latency alerts at the fastest cadence you can measure. P95 over 1-minute windows is enough; sub-second jitter is not actionable.
Dashboards to hand integrators
A public status page:
- Per-organism “is up” indicator.
- Per-slot end-to-end pipeline health for the past hour.
- Lattice lattice_id() hex so integrators can eyeball their handshake.
Integrators should not need access to your full metrics stack; they need to know whether the lattice is up. Publish that rolled-up signal; keep the rest internal.
Cross-references
- Appendix — Metrics reference
- Incident response — what to do when any of the red lines above trip.
- mosaik metrics
Incident response
audience: operators
Runbook entries for the common failure modes. One entry per named alert; each entry is a cause, a verification step, and a mitigation. Keep this page current; on-call engineers read it half-asleep.
General principle
Before any mitigation, run the slot walk (see Wiring the organisms together — the slot as foreign key). Pick the current slot, query each organism’s collection, and identify the last organism to have committed. That is where the pipeline stalled. Everything downstream of that point is paused, not broken.
Missing upstream
Alert. <org>_upstream_peers{source="<upstream>"} = 0 for
more than 60 seconds.
Cause. The organism’s driver cannot bond to any peer of
its upstream organism. Common reasons: upstream committee is
down; network partition; mismatched LatticeConfig (rare, but
possible after a botched rotation).
Verify.
- On the upstream organism’s hosts, is the process up? Check the <upstream>_up metric.
- Is the lattice fingerprint consistent? Print lattice_id() on both organism hosts and compare.
- Is discovery healthy? Check discovery_peers_total{lattice=...}.
Mitigate.
- Upstream is down: bring it back. Do not try to work around upstream absence by re-configuring downstream.
- Fingerprint mismatch: emergency retire the lattice per Rotations and upgrades — Lattice retirement.
- Network partition: wait the mosaik discovery back-off; if not resolved in five minutes, escalate to the upstream’s operator.
Committee size below quorum
Alert. <org>_committee_size < floor(n/2) + 1 on any member for more than 60 seconds.
Cause. Enough committee members have gone offline that Raft cannot commit.
Verify. <org>_member_up per member identifies which
members are down.
Mitigate.
- Restart the downed members. If a host is unrecoverable, rotate in a replacement per Rotations and upgrades — Rotating a committee member’s peer identity.
- Do not shrink n to route around dead members; n changes the fingerprint.
TDX attestation failure
Alert. <org>_member_tdx_attested{member=...} = 0.
Cause. The member’s TDX image is not producing a valid quote. The image has booted without TDX, the hardware’s attestation service is reachable but returning errors, or the MR_TD does not match the pinned value.
Verify.
- systemctl status builder@<org>-member for logs.
- The TDX quote provider’s own logs on the host.
- <ORG>_MRTD env vs the pinned LATTICE_CONFIG_HEX fingerprint.
Mitigate.
- Transient quote failure: wait for the attestation service to recover, restart the unit.
- MR_TD mismatch: if you changed the image intentionally, this is a fingerprint change — see Rotations and upgrades — Rolling a TDX image with a new MR_TD. If you did not change the image, rebuild from a clean source and compare MR_TDs.
atelier simulation divergence
Alert. atelier_simulation_divergence_total > 0.
Cause. Atelier committee members disagree on a simulation output. Either the chain RPC differs across members (one host is pinning an old block; another is on the tip), an external input to the simulation has been compromised, or a committee member is malicious.
Verify.
- Chain RPC head block per member — any laggards?
- atelier_simulation_input_hash — committee members should have matching hashes for matching slots.
- Cross-check the divergent member’s output against an independent node running the same chain client.
Mitigate.
- Lagging RPC: rotate the member onto a healthier RPC endpoint.
- Malicious member: rotate them out (lose one committee member, reduce trust to n-1, retire-and-replace at the next scheduled window).
- Repeated divergence: pause the lattice per Pausing the lattice.
relay on-chain mismatch
Alert. relay_on_chain_mismatches_total > 0.
Cause. A header relay committed as accepted does not match what landed on-chain. Either the relay’s committee is majority-malicious, the proposer rotated the block after ack (MEV-Boost allows this in some configurations), or the relay committed an ack that was itself forged.
Verify.
- Cross-check the on-chain block’s builder vs the AcceptedHeaders[S].proposer.
- tally_evidence_failures_total — does tally agree with relay?
- Check each relay member’s log for the raw proposer ack payload.
Mitigate.
- Proposer-side rotation: the chain’s reality is what happened; tally will not issue an attestation for the mis-accepted block. Alert is noise if this is a one-off.
- Majority-malicious relay: escalate immediately. Pause the lattice and coordinate with the atelier committee to determine whether to rotate relay or retire the lattice.
tally evidence failure
Alert. tally_evidence_failures_total > 0 sustained.
Cause. Tally cannot reconcile an upstream AcceptedHeaders
commit with atelier, offer, or zipnet data. Either an
upstream organism committed a fact that does not line up, or
tally’s state machine has a bug.
Verify.
- Walk the slot. Dump every organism’s commit for the slot tally is stumbling on.
- Compare tally’s expected attribution against what the state machine ought to derive from the dumped commits.
Mitigate.
- Upstream bug: escalate to the relevant organism’s on-call. Tally pauses attribution for that slot automatically; do not try to force it through.
- Tally bug: file against tally; in the interim, attestations for that slot are missing, integrators claim through the on-chain settlement contract’s dispute mechanism (if any).
Pausing the lattice
When the pipeline must stop — compromised committee, serious integration bug — the lattice-wide kill switch is “stop every organism’s systemd units in reverse pipeline order”:
systemctl stop builder@tally-member
systemctl stop builder@relay-member
systemctl stop builder@atelier-member
systemctl stop builder@offer-member
systemctl stop builder@unseal-member
systemctl stop builder@zipnet-server builder@zipnet-aggregator
Reverse order so outputs drain before inputs stop. Once every
organism is down, integrators see ConnectTimeout until you
decide whether to restart, rotate, or retire.
There is no protocol-level “pause” primitive. The lattice is its processes.
Public communication
Every incident should be accompanied by a status-page update. Integrators rely on your public signals (see Monitoring). Be explicit about scope:
- Which organism is affected?
- Is submission still accepted (zipnet up) or not?
- Is tally still paying (tally up) or not?
Do not post internal debugging detail publicly.
Cross-references
- Monitoring and alerts — the signals that page on-call.
- Rotations and upgrades — rotation procedures the incident playbooks reference.
- Wiring the organisms together — slot-walk debugging technique.
Designing block-building topologies on mosaik
audience: contributors
This is the design-intro chapter. It extends the pattern in the zipnet book’s Designing coexisting systems on mosaik from a single organism (anonymous broadcast) to a composition of organisms that together form a block-building pipeline for an EVM chain.
The reader is assumed to have read the mosaik book, the zipnet book, and in particular the zipnet design-intro. This page does not re-derive content + intent addressing, the narrow-public-surface discipline, or the shared-universe model. It uses them.
The problem, restated for a pipeline
Zipnet is one organism. A block-building pipeline is not — it’s a half-dozen services that must agree on who submitted what, what got auctioned, what got built, who won, and how value flows back to the order-flow providers.
The naïve way to build this on mosaik: one giant Group with one giant state machine whose commands cover every stage of the pipeline. This fails in the first hour of design review. The auction has a different trust model than the builder; the builder has different TEE posture than the relay; the submission layer has stricter rate-limiting than the refund accounting. A single state machine collapses five trust boundaries into one, with the worst of each.
The right decomposition is one organism per trust boundary, each following the zipnet pattern individually, with the whole set yoked together under one lattice identity.
Two axes of choice, revisited
Same axes as zipnet picked among. Both inherit the zipnet conclusion, adjusted for composition.
- Network topology. Does a lattice live on its own
NetworkId, or share a universe with every other mosaik service? - Composition. How do the six organisms inside one lattice reference each other without creating cross-Group atomicity mosaik doesn’t support?
This proposal picks shared universe + within-lattice derivation + no cross-Group atomicity. The three choices are independent and each has a narrow, defensible rationale.
Shared universe
builder::UNIVERSE = unique_id!("mosaik.universe") — the same
constant zipnet uses. Every lattice, every organism, every
integrator agent lives on it. Different lattices coexist as overlays
of Groups, Streams, and Collections distinguished by their
content + intent addressed IDs. An integrator that cares about
three lattices holds one Network handle and three LatticeConfigs.
The alternative — one NetworkId per lattice, as Shape A in
the zipnet book laid it out — was rejected for the same reason it was
rejected there: operators already run many services (zipnet alone
might host three deployments; searchers bid across chains; tally
aggregates across multiple lattices). Paying for one mosaik endpoint
per lattice is a bad trade when the services want to compose.
What the shared universe costs us: noisier discovery gossip and
larger peer catalogs. The escape hatch for genuinely high-frequency
internal traffic (aggregator fan-in inside atelier, threshold-share
chatter inside unseal) is a derived private network keyed off the
organism’s Config. Public surfaces stay on the universe.
Within-lattice derivation
A lattice Config is a parent struct. Each of the six organisms
has its own nested Config that derives from the lattice’s root
UniqueId. A contributor writing a new organism derives its IDs
like this:
LATTICE = blake3("builder|" || instance_name || "|chain=" || chain_id)
ZIPNET = LATTICE.derive("zipnet") // root for zipnet's Config
UNSEAL = LATTICE.derive("unseal")
OFFER = LATTICE.derive("offer")
ATELIER = LATTICE.derive("atelier")
RELAY = LATTICE.derive("relay")
TALLY = LATTICE.derive("tally")
Each organism’s own Config — when hashed to produce its
GroupId / StreamId / StoreId — folds in the organism root
above plus the organism’s own content parameters plus the organism’s
ACL. The full identity for, say, atelier’s committee group in the
ethereum.mainnet lattice is:
atelier_root = LATTICE(ethereum.mainnet).derive("atelier")
atelier_committee = blake3(
atelier_root
|| atelier_content_fingerprint // tx batch size, block template
// schema, gas-limit window, etc.
|| atelier_acl_fingerprint // TDX MR_TDs pinned
).derive("committee")
Two lattices with the same atelier parameters but different
instance names derive disjoint committee groups. Two atelier
deployments under the same lattice name but with different
parameters also derive disjoint groups. The failure mode is
ConnectTimeout, not split-brain Raft — same as zipnet.
No cross-Group atomicity
Mosaik does not provide multi-Group transactions and this topology
does not try to invent them. Concretely: “the same command
atomically commits to offer and atelier” is not something this
proposal supports. The organisms coordinate through the same pattern
zipnet’s own internal primitives coordinate: one organism writes to
its public surface (a stream or a collection), the next organism
subscribes and reacts.
This is a load-bearing constraint and the biggest single difference from a monolithic builder:
- offer commits a winning bundle bid. atelier subscribes to offer’s outcome stream and reacts by including the bid’s transactions in a candidate block. There is no atomic “win + build” transaction.
- atelier commits a winning candidate block. relay subscribes and ships the header. There is no atomic “build + broadcast” span.
- relay observes a proposer accepting the header. tally subscribes to that event and commits refund attributions. There is no atomic “propose + refund” transaction.
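The write-then-subscribe pattern can be sketched with an ordinary channel standing in for a mosaik collection subscription. Everything here is illustrative; `main_sketch` and the string payloads are hypothetical.

```rust
use std::sync::mpsc;

/// Illustrative subscribe-and-react wiring: "offer" publishes
/// auction outcomes to its public surface (a channel here), and
/// "atelier" reacts to each with one commit of its own. No shared
/// transaction spans the two steps; if the publisher disappears,
/// the reader's loop simply ends.
fn main_sketch() -> Vec<String> {
    let (outcomes_tx, outcomes_rx) = mpsc::channel::<u64>(); // winning bid ids

    // "offer" side: commit winning bids to its public surface.
    for bid in [7, 8] {
        outcomes_tx.send(bid).unwrap();
    }
    drop(outcomes_tx); // offer stops publishing; atelier drains and exits

    // "atelier" side: subscribe and react, one commit per input.
    outcomes_rx
        .iter()
        .map(|bid| format!("candidate-including-bid-{bid}"))
        .collect()
}
```

The failure semantics match the prose that follows: a stalled upstream starves the downstream rather than corrupting it.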
What we lose: the strongest-possible consistency across the
pipeline. If atelier commits a block that relay never manages to
broadcast (liveness failure in relay), the block simply doesn’t
reach the proposer; tally sees no successful-broadcast event and
no refund is issued. That is a clean, debuggable failure. A
monolithic atomic pipeline would either have to resolve that failure
inside consensus (expensive and complicating the state machine) or
silently paper over it (which is worse).
What we gain: each organism’s state machine is simple enough to
reason about in isolation. atelier doesn’t need to understand
refund math. tally doesn’t need to understand TDX image builds.
Each organism can decentralize at its own pace.
The lattice identity
A lattice is identified by a LatticeConfig that folds every
root input into one deterministic fingerprint. Operators publish
the LatticeConfig the same way zipnet operators publish a
zipnet::Config; integrators compile it in.
```rust
pub struct LatticeConfig {
    /// Short, stable, namespaced name chosen by the operator.
    /// e.g. "ethereum.mainnet", "unichain.mainnet", "base.testnet".
    pub name: &'static str,
    /// EVM chain id. Folded into the fingerprint so a "mainnet"
    /// instance on the wrong chain is a `ConnectTimeout`, not a
    /// silent cross-chain mis-bond.
    pub chain_id: u64,
    /// Each organism's own Config. All six MUST be present; a
    /// partial lattice is not a lattice.
    pub zipnet: zipnet_organism::Config,
    pub unseal: unseal::Config,
    pub offer: offer::Config,
    pub atelier: atelier::Config,
    pub relay: relay::Config,
    pub tally: tally::Config,
}

impl LatticeConfig {
    pub const fn lattice_id(&self) -> UniqueId { /* blake3 of the above */ }
}
```
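The determinism property of the fold can be sketched in a few lines. This is a self-contained illustration only: it uses std’s `DefaultHasher` as a stand-in for blake3 and folds just two of the fingerprint inputs (`name`, `chain_id`); everything beyond those field names is hypothetical:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for the proposed LatticeConfig fingerprint fold. The design
// specifies blake3 over every organism Config; DefaultHasher over two
// inputs is enough to show the determinism property.
fn lattice_id(name: &str, chain_id: u64) -> u64 {
    let mut h = DefaultHasher::new();
    "builder|".hash(&mut h); // domain separator, as in the LATTICE derivation
    name.hash(&mut h);
    chain_id.hash(&mut h);
    h.finish()
}

fn main() {
    // Same inputs produce the same lattice identity on every machine.
    assert_eq!(lattice_id("ethereum.mainnet", 1), lattice_id("ethereum.mainnet", 1));
    // A "mainnet" instance on the wrong chain id gets a different
    // fingerprint — a ConnectTimeout, never a silent cross-chain mis-bond.
    assert_ne!(lattice_id("ethereum.mainnet", 1), lattice_id("ethereum.mainnet", 10));
}
```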
An integrator binds to the lattice by passing the LatticeConfig
into whichever organism handles they need:
```rust
const ETH_MAINNET: LatticeConfig = LatticeConfig { /* operator-published */ };

let network = Arc::new(Network::new(builder::UNIVERSE).await?);
let submit  = zipnet::Zipnet::<Tx2718>::submit (&network, &ETH_MAINNET.zipnet ).await?;
let bid     = offer::Offer::<Bundle>::bid     (&network, &ETH_MAINNET.offer  ).await?;
let blocks  = relay::Relay::<Block>::watch    (&network, &ETH_MAINNET.relay  ).await?;
let refunds = tally::Tally::<Attribution>::read(&network, &ETH_MAINNET.tally ).await?;
```
Each organism exposes typed free-function constructors in the same
shape zipnet ships (Organism::<D>::verb(&network, &Config)). Raw
mosaik IDs never cross the organism crate boundary.
A fingerprint convention, not a registry
Same discipline as zipnet:
- The operator publishes the `LatticeConfig` struct (or a serialised fingerprint) as the handshake.
- Consumers compile it in.
- If TDX-gated, the operator also publishes the committee MR_TDs for every organism that gates its admission on TDX attestation (`unseal`, `atelier`, optionally `relay`).
- There is no on-network lattice registry. A directory collection listing known lattices may exist as a devops convenience for humans; it is never part of the binding path.
Typos in the instance name, the chain id, or any organism parameter
surface as Error::ConnectTimeout, not “lattice not found”. The
library cannot distinguish “nobody runs this” from “operator isn’t
up yet” without a registry, and adding one would be lying with an
error enum.
The six organisms
This topology ships six organisms. Each is specified on a dedicated page:
| Organism | Role | Trust shape |
|---|---|---|
| `zipnet` | anonymous submission of sealed tx / intents | any-trust (one honest server) |
| `unseal` | threshold decryption of zipnet broadcasts | t-of-n threshold |
| `offer` | sealed-bid bundle auction for searchers | majority-honest committee |
| `atelier` | TDX-attested candidate block assembly | TDX attestation + majority-honest |
| `relay` | PBS-style header fanout to proposers / sequencers | any operator suffices for liveness |
| `tally` | order-flow attribution and refund accounting | majority-honest committee |
Why six and not four or ten:
- Submission, auction, building, relay, accounting. That’s the
natural PBS decomposition.
- `zipnet` and `unseal` split the submission layer because anonymous broadcast and threshold decryption are different trust models and should not share a Group.
- `relay` is not folded into `atelier` because relay liveness matters under different failure modes (proposer-side connectivity) than atelier liveness (builder-side compute). Folding them would couple the trust models unnecessarily.
- `tally` is not folded into `atelier` or `relay` because refund accounting is a public-verifiable audit trail and its readers (searchers, order-flow providers, chain explorers) should not need to bond into a TEE-gated building committee to read it.
Further decomposition — e.g. splitting atelier into a bundle
sorter + a block executor — is an implementation concern for the
atelier crate, not a new organism. The organism boundary is drawn
at trust and interface seams, not at internal function.
See The six organisms for each organism’s public surface in detail, and Composition: how organisms wire together for the flow diagrams.
The three conventions (inherited)
Every organism in this proposal reproduces the three zipnet conventions verbatim. Briefly, so contributors writing a seventh organism have the list to hand:
- Identifier derivation from the organism’s Config fingerprint. Every public ID descends from one root (the organism’s piece of the lattice) and from the organism’s own content + intent + acl hash.
- Typed free-function constructors. `Organism::<D>::verb(&network, &Config)` returning typed handles. Raw IDs never leak across the crate boundary.
- Fingerprint, not registry. The `Config` (plus the datum schema where applicable) is the complete handshake. Consumers compile it in; there is no on-wire discovery of the organism.
Details and rationale — exactly as in the zipnet design-intro — apply without change.
What the lattice pattern buys
- An integrator’s mental model collapses to: one `Network`, one `LatticeConfig` per lattice, typed handles on each organism they consume.
- Operators decentralize at their own pace. A lattice can run with one operator for every organism (Phase 1), then move `atelier` to a multi-operator committee without touching the other five (Phase 2), then peel off cross-chain `offer` subscriptions without touching anything (Phase 3).
- Each organism can be replaced without touching the rest. The contract between organisms is the public stream/collection surface, not shared state. A team unhappy with `offer`’s auction rules can ship an alternative `offer` implementation that reads from the same `unseal` outcome and writes to the same surface `atelier` subscribes to.
- Multiple lattices coexist trivially. Same argument zipnet made, scaled up: each lattice derives disjoint IDs on every organism; the shared universe’s peer catalog grows but does not fragment.
- ACL is per-organism, per-lattice. `unseal` can pin one MR_TD while `atelier` pins another, under the same lattice name, without either organism having to know about the other’s image.
Where the pattern strains
Three pain points a contributor extending this should be honest about up front. The first two carry over from zipnet; the third is specific to composing multiple organisms.
Cross-organism atomicity is out of scope
As stated above: there is no way to atomically commit across two organisms’ Raft groups. If a use case genuinely needs that — rare, but real for some coordination-heavy cases — the right answer is a seventh primitive that is itself a deployment providing atomic composition across two specific organisms, not an ad-hoc cross-Group protocol. “Cross-chain atomic bundle” is the motivating candidate and is explicitly in the v2 column in Roadmap.
Versioning under stable lattice names
Same issue zipnet flagged: if an organism’s
StateMachine::signature() changes, its GroupId in that lattice
changes, and integrators compiled against the old code silently
split-brain. With six organisms in a lattice, the blast radius of a
single signature() bump is six times larger than zipnet’s.
The two reconciliation strategies zipnet’s roadmap listed apply here
too: version-in-name (ethereum.mainnet-v2) or lockstep releases
of a shared lattice crate. This proposal recommends lockstep for
each organism, version-in-name for the lattice, on the grounds
that a lattice-wide version bump is rare (it only happens when the
operator decides to retire the lattice identity) while organism
bumps happen every time an organism ships a breaking change. See
Roadmap — Versioning.
Noisy neighbours across lattices
Six organisms per lattice times N lattices per universe is a
multiplier on catalog size and /mosaik/announce volume. Zipnet’s
escape hatch — derived private networks for internal chatter —
applies per-organism, but the public surfaces (one or two
primitives per organism) stay on UNIVERSE. If a specific lattice’s
traffic would dominate the shared universe, it belongs behind its
own NetworkId — Shape A in the zipnet design-intro — not on the
shared universe. This is the correct call for an isolated
federation; it is not the default.
Checklist for a new organism
When adding a seventh organism to the lattice — or when standing up an organism that composes with the lattice without being part of it — use this list:
- Identify the one or two public primitives. If you cannot, the interface is not yet designed.
- Pick an organism root: `unique_id!("your-organism")`, chained under `LATTICE.derive("your-organism")` if you intend to be part of a lattice.
- Define the `Config` fingerprint inputs: what `instance_name` means in the context of your organism, what content parameters affect the state machine signature, what ACL composition you pin.
- Write typed constructors (`Organism::<D>::verb(&network, &Config)`) that every integrator uses. Never export raw `StreamId` / `StoreId` / `GroupId` values across the crate boundary.
- Decide which internal channels, if any, move to a derived private `Network`. Default: only high-churn ones (aggregation, gossip).
- Specify `TicketValidator` composition on the public primitives. ACL lives there.
- Document which other organisms read from or write to your organism’s public surface. This is the composition contract; changes to it touch composition.md.
- Call out your versioning story before shipping. If you cannot answer “what happens when my state machine signature bumps?”, you will regret it.
- Answer: does your organism add meaningfully to the lattice, or is it an implementation detail of an existing organism in disguise? If the latter, fold it in.
Cross-references
- Architecture of a lattice — the concrete instantiation of the pattern for the six organisms.
- The six organisms — per-organism surfaces.
- Composition — flow diagrams and the apply order across organisms.
- Cross-lattice coordination — how lattices on different chains cross-subscribe without cross-Group atomicity.
- Threat model — per-organism trust assumptions and how they compose.
- Roadmap — versioning, cross-lattice atomicity, L2-sequencer specialization.
Architecture of a lattice
audience: contributors
This chapter is the concrete instantiation of the pattern described in Designing block-building topologies on mosaik. It maps the six organisms onto a single lattice for one EVM chain, identifies the public surface each organism exposes on the shared universe, and traces the data flow from submission through refund attribution.
The reader is assumed to have read the topology intro and the zipnet architecture chapter.
Lattice identity recap
```text
LATTICE      = blake3("builder|" || instance_name || "|chain=" || chain_id)

ZIPNET_ROOT  = LATTICE.derive("zipnet")
UNSEAL_ROOT  = LATTICE.derive("unseal")
OFFER_ROOT   = LATTICE.derive("offer")
ATELIER_ROOT = LATTICE.derive("atelier")
RELAY_ROOT   = LATTICE.derive("relay")
TALLY_ROOT   = LATTICE.derive("tally")
```
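The disjointness property this derivation chain is for can be sketched concretely. A minimal illustration, assuming nothing about the real `derive`: std’s `DefaultHasher` stands in for blake3, and the lattice seed string is a hypothetical example:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// Stand-in for the proposed `derive`: a domain-separated hash chain.
fn derive(parent: u64, label: &str) -> u64 {
    let mut h = DefaultHasher::new();
    parent.hash(&mut h);
    label.hash(&mut h);
    h.finish()
}

fn main() {
    // Stand-in for LATTICE on one chain.
    let lattice = derive(0, "builder|ethereum.mainnet|chain=1");
    let roots: Vec<u64> = ["zipnet", "unseal", "offer", "atelier", "relay", "tally"]
        .iter()
        .map(|label| derive(lattice, label))
        .collect();
    // Six organisms, six disjoint roots: no two organisms in a lattice
    // can collide on the IDs they hash beneath their root.
    assert_eq!(roots.iter().collect::<HashSet<_>>().len(), 6);

    // A lattice on another chain derives a disjoint set of roots too,
    // so multiple lattices coexist on the shared universe.
    let other = derive(0, "builder|base.mainnet|chain=8453");
    assert_ne!(derive(lattice, "zipnet"), derive(other, "zipnet"));
}
```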
Each *_ROOT is the seed an organism hashes its own Config
against to produce the organism’s committee GroupId, public
StreamIds, and public StoreIds. See
topology-intro — Within-lattice derivation.
Integrators bind via the LatticeConfig they compile in from
operator-published release notes:
```rust
const ETH_MAINNET: LatticeConfig = /* operator-published */ ;

let network = Arc::new(Network::new(builder::UNIVERSE).await?);

// Six organism handles, each following the zipnet pattern.
let submit  = zipnet::Zipnet::<Tx2718>::submit (&network, &ETH_MAINNET.zipnet ).await?;
let read    = zipnet::Zipnet::<Tx2718>::read   (&network, &ETH_MAINNET.zipnet ).await?;
let unseal  = unseal::Unseal::<Tx2718>::watch  (&network, &ETH_MAINNET.unseal ).await?;
let bid     = offer::Offer::<Bundle>::bid      (&network, &ETH_MAINNET.offer  ).await?;
let blocks  = atelier::Atelier::<Block>::read  (&network, &ETH_MAINNET.atelier).await?;
let headers = relay::Relay::<Header>::watch    (&network, &ETH_MAINNET.relay  ).await?;
let refunds = tally::Tally::<Attribution>::read(&network, &ETH_MAINNET.tally  ).await?;
```
Integrators only open the handles they need. A searcher agent
typically holds bid, blocks, refunds. A wallet typically holds
submit, refunds. A rollup sequencer consuming a lattice
typically holds headers only.
Public surface summary
The lattice’s outward-facing primitives decompose cleanly into the six organisms’ own public surfaces. Full per-organism detail is in The six organisms; this table is the index.
| Organism | Write-side primitives | Read-side primitives |
|---|---|---|
| `zipnet` | `Submit<Tx>` stream | `Broadcasts`, `LiveRoundCell` collections |
| `unseal` | `SharesStream` (internal) | `UnsealedPool` collection |
| `offer` | `Bid<Bundle>` stream | `AuctionOutcome` collection |
| `atelier` | `Hint<Template>` stream (internal) | `Candidates` collection |
| `relay` | `Ship<Header>` (internal) | `AcceptedHeaders` collection |
| `tally` | `Attribution` (internal commit) | `Refunds`, `Attestations` collections |
“Internal” here means the primitive is ticket-gated to peers with a
specific role tag (e.g. atelier.member, relay.member) and is
not consumed by generic integrators. It still lives on the shared
universe; the ticket gate does the access control.
Data flow across one proposal slot
This is the canonical happy-path flow for one slot on a lattice. Each step is a commit into one organism’s state machine; transitions between steps are stream/collection subscriptions, not cross-Group commands.
```text
integrator              organism     commit / effect
----------              --------     --------------------------
wallet / searcher ───►  zipnet  ──►  Broadcasts grows by one
                                     (sealed envelopes for slot S)
                        unseal  ──►  UnsealedPool[S] populated
                                     (clear txs + intents for S)
searcher ─────────►     offer   ──►  AuctionOutcome[S] committed
                                     (winning bundle for S)
                        atelier ──►  Candidates[S] committed
                                     (candidate block for slot S)
                        relay   ──►  AcceptedHeaders[S] committed
                                     (header shipped to proposer)
          proposer chooses a header; slot S is included on chain
                        tally   ──►  Refunds[S] committed
                                     (MEV captured, routed back)
wallet / searcher ◄───  tally   ──►  refunds + attestations stream
```
Every [S] index is the chain’s slot number (or block number on L1,
or sequencer slot number on L2). It is the foreign key that glues
the six organisms’ commits together without requiring cross-Group
atomicity. Each organism commits at its own cadence; downstream
organisms react when they see the upstream commit.
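The foreign-key join can be sketched directly. A minimal illustration, assuming hypothetical types (`AcceptedHeader`, `refund_for`) in place of the real mosaik collections:

```rust
use std::collections::HashMap;

// Stand-in for relay's AcceptedHeaders collection entry, keyed by slot.
struct AcceptedHeader { proposer_ack: bool, captured_wei: u64 }

/// tally's reaction rule: commit a refund for slot S only if relay's
/// AcceptedHeaders[S] exists and carries a proposer acknowledgement.
fn refund_for(slot: u64, accepted: &HashMap<u64, AcceptedHeader>) -> Option<u64> {
    accepted
        .get(&slot)
        .filter(|h| h.proposer_ack)
        .map(|h| h.captured_wei)
}

fn main() {
    let mut accepted = HashMap::new();
    accepted.insert(100, AcceptedHeader { proposer_ack: true, captured_wei: 5_000 });

    assert_eq!(refund_for(100, &accepted), Some(5_000));
    // Slot 101's candidate block was never broadcast: no upstream commit
    // exists, so no refund is issued — the clean, debuggable failure mode,
    // with no cross-Group rollback required.
    assert_eq!(refund_for(101, &accepted), None);
}
```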
Participants
Not every organism is operated by the same entity. The lattice distinguishes three classes of participant.
- Lattice operator — the single team responsible for the `LatticeConfig` identity, the instance name, the chain-id mapping, and the `unseal` / `offer` / `tally` committees in the default topology. In Phase 1 this is one org.
- Co-builder operator — a team contributing committee members to `atelier` and, optionally, `relay`. Multiple co-builder operators per lattice is the Phase 2 shape; in Phase 1 the lattice operator runs both roles.
- Integrator — external dev, runs no committee members, binds handles against the lattice from their own mosaik agent.
An operator-run process typically hosts committee members for more
than one organism: one binary, several Group memberships. That is
fine and encouraged — the process pays for one Arc<Network> and
carries several organisms’ Raft roles on top.
Where each organism commits
In a non-atomic pipeline the question is not “what commits when” but “what decision is each organism actually making at commit time”. Clarifying this per organism:
- `zipnet` commits a finalized round’s broadcast vector. That is, the set of sealed envelopes for the round’s slot, in deterministic slot order. Integrator semantics: “one ordered log of sealed envelopes per slot, opaque to anyone without the matching unsealing key material”.
- `unseal` commits the clear-text recovery of a given zipnet round, once `t` of the threshold committee members have contributed their decryption shares. Commit is into the `UnsealedPool` collection, keyed by slot. Integrators that bond to `unseal` are typically downstream organisms (`offer`, `atelier`) and a small number of audit-capable integrators with the ACL.
- `offer` commits the winning bid per slot. The state machine runs a sealed-bid auction over bundles keyed to a given slot and commits exactly one `AuctionOutcome[S]` per slot.
- `atelier` commits a candidate block per slot. The state machine consumes the `UnsealedPool` content and the `AuctionOutcome` for the slot and commits a `Candidates[S]` block body signed by the TDX committee’s collective attestation.
- `relay` commits that a specific `AcceptedHeaders[S]` header was shipped to the proposer and that the proposer acknowledgement was received (Phase 1) or witnessed on-chain (Phase 2+).
- `tally` commits the slot’s refund attribution: which `ClientId` from `zipnet` and which searcher from `offer` contributed value to the winning block, and what share they receive. Commit is into the `Refunds[S]` collection.
Every commit is a normal mosaik group.execute(...). The
orchestration across organisms is built from when() conditions
and collection subscriptions in each organism’s role driver, not
from a global scheduler.
L1 vs L2 specialisation
The six-organism decomposition above targets L1 PBS as the reference. Two specializations are anticipated and accommodated without changing the organism count:
- L2 rollup with centralized sequencer. `relay` is replaced by a `relay` configuration that ships headers to a single sequencer endpoint rather than a validator set. The organism identity and public surface are unchanged; only the external endpoint changes. Integrators see the same `Relay::<Header>::watch` surface.
- L2 rollup with decentralized sequencer. Same as L1 PBS except the proposer set is the sequencer set. `relay`’s state machine needs to understand the sequencer’s handoff protocol, which is an organism-internal concern.
See Cross-lattice coordination for the cross- chain cases (bundles spanning an L1 and an L2, L2-to-L2 intent routing).
Internal plumbing (optional derived private networks)
Same pattern zipnet established: the public surface lives on UNIVERSE; high-churn internal plumbing may move to a derived private network keyed off the organism’s root.
Candidates for derived private networks in v1:
- `unseal`’s share gossip. Threshold-decryption shares per slot are a high-frequency internal channel; the `UnsealedPool` on UNIVERSE is the public result.
- `atelier`’s bundle-simulation chatter. Candidate block simulation traffic between committee members.
- `relay`’s proposer-side socket pool. Per-slot long-poll connections the relay maintains with proposer endpoints.
Committee Groups themselves stay on UNIVERSE. Bridging a Group’s backing state across networks is worse than the catalog noise; the zipnet design-intro argument applies unchanged.
Identity under operator handover
A lattice’s instance name is an operator-level identity that
outlives specific organism parameter choices. An operator who wants
to retune atelier’s block-template schema retires the existing
atelier deployment and stands up a new one under the same lattice
name. Integrators compile against the lattice’s new
atelier::Config, not against a new lattice name — provided the
other five organisms’ configs are unchanged.
If the lattice operator wants to hand over to a new operator, the new operator either:
- Keeps the `LatticeConfig` byte-for-byte and rotates secrets in place — an operator-level rotation, not a version bump.
- Or stands up a new lattice under a new instance name (`ethereum.mainnet-v2`), and integrators migrate over time.
See operators/rotations-and-upgrades.md.
Concrete sizing for a Phase 1 lattice
Order-of-magnitude targets for a lattice at slot cadence (12s on L1, 2s on an L2):
| Organism | Committee members (v1) | Stream bytes/slot | Bond count per node |
|---|---|---|---|
| `zipnet` | 3–7 servers + 1 agg | ~16 KiB | member-to-member |
| `unseal` | 3–7 members | a few KiB | member-to-member |
| `offer` | 3–5 members | O(N_bundles) × bid size | member + searchers |
| `atelier` | 3–7 members (TDX) | O(block body) | member + co-builder |
| `relay` | 3–5 members | O(header + bid) | member + proposer |
| `tally` | 3–5 members | O(num_attributions) | member + consumers |
“v1” here means the Phase 1 shape. Phase 2 adds more committee
members to atelier as co-builders onboard; Phase 3 elasticises
the committee sets via mosaik’s peer discovery without changing the
ACL. See Roadmap.
What this chapter deliberately does not cover
- Per-organism state machines. Each organism owns its own spec. See organisms.md and the organism’s own contributors/ pages when the crates land.
- Wire formats. Same.
- Chain-specific transaction encoding. The lattice is chain-parameterised but organism internals are chain-generic. Chain-specific pieces (EIP-2718 vs OP-stack sequencer envelopes, etc.) are organism Config parameters, not new organisms.
- TDX image builds. Deferred to the operator-side runbook.
The six organisms
audience: contributors
This page is the index: six short specs, one per organism, each giving enough shape for a crate author to start writing code and a reviewer to challenge the surface. Each spec ends with a link to the organism’s own detailed page, where the state machine, wire types, and invariants are laid out in full.
Per-organism deep dives:
- zipnet — anonymous submission
- unseal — threshold decryption
- offer — sealed-bid auction
- atelier — TDX co-building
- relay — PBS fanout
- tally — refund accounting
Every organism follows the same template:
- Role. One sentence on what it does in the lattice.
- Public surface. The one or two public primitives it exposes.
- ACL. The
TicketValidatorcomposition that gates bonds. - State machine hook. The decision the Raft log actually commits.
- Trust assumption. What the adversary must achieve to break the organism’s guarantee.
- Reads from / Writes to. The organisms immediately upstream and downstream in the lattice.
Reading order matches the data flow: submission → unsealing → auction → building → relay → tally.
zipnet (submission)
Role. Anonymous, authenticated broadcast of sealed transactions and intents. The existing organism — this proposal consumes it unchanged.
Public surface.
- `Submit<Tx>` — ticket-gated write stream. External wallets and searchers send sealed envelopes here.
- `Broadcasts` — append-only collection of finalized round broadcast vectors.
- `LiveRoundCell` — current round header so submitters know what to seal for.
- `ClientRegistry`, `ServerRegistry` — public X25519 bundles.
See zipnet book — architecture for the full surface. Lattice-specific wrapping: zipnet spec.
ACL. TDX attestation on the committee (in the v2 TDX path);
ticket-gated client admission via ClientBundle.
State machine hook. CommitteeMachine::apply(SubmitAggregate)
and apply(SubmitPartial) finalize a round’s broadcast vector per
the ZIPNet paper’s Algorithm 3. See the zipnet book’s
committee-state-machine page.
Trust assumption. Any-trust on anonymity (one honest committee server suffices). Majority-honest on liveness (v1; v2 relaxes).
Reads from. Integrators (external wallets/searchers).
Writes to. unseal (subscribes to Broadcasts).
unseal (threshold decryption)
Role. Recover cleartext of a zipnet round’s broadcast vector
by collecting t of n threshold-decryption shares from the
unseal committee, without any single committee member learning the
cleartext.
Public surface.
- `UnsealedPool` — append-only collection of `UnsealedRound { slot, cleartext }` entries, keyed by slot. The cleartext is the decrypted broadcast vector for that slot; every downstream organism reads from this collection.
- `ShareRegistry` — public PKs for every unseal committee member. Integrators that need to verify a share’s authenticity read this.
An internal share-gossip stream runs on a derived private network
keyed off UNSEAL_ROOT.derive("private").
ACL. TDX attestation required on every committee member
(.require_ticket(Tdx::new().require_mrtd(unseal_mrtd))). The
share registry is writable only by committee members; the
UnsealedPool is readable by any peer that holds a ticket from
the lattice operator.
State machine hook. UnsealMachine::apply(SubmitShare) tracks
shares per slot; when t shares arrive, the state machine combines
them in apply, pushes the resulting cleartext to UnsealedPool,
and discards the shares. No share is ever materialised outside the
committee’s in-memory set during combination.
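The t-of-n combine step can be illustrated with toy Shamir secret sharing over a small prime field. This is a sketch of the threshold property only: real `unseal` shares are threshold-decryption shares of a ciphertext, the field and scheme here are illustrative, and the polynomial coefficients are fixed constants for reproducibility where a real implementation draws them from a CSPRNG:

```rust
const P: u64 = 2_147_483_647; // 2^31 - 1, a Mersenne prime; toy field modulus

fn mod_pow(mut b: u64, mut e: u64) -> u64 {
    let mut acc = 1u64;
    b %= P;
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    acc
}

fn inv(x: u64) -> u64 { mod_pow(x, P - 2) } // Fermat inverse; P is prime

/// Evaluate f(x) = c0 + c1*x + c2*x^2 ... mod P (Horner's rule).
fn eval_poly(coeffs: &[u64], x: u64) -> u64 {
    coeffs.iter().rev().fold(0u64, |acc, &c| (acc * x + c) % P)
}

/// Lagrange-combine exactly t shares (x_i, y_i) at x = 0 to recover f(0).
fn combine(shares: &[(u64, u64)]) -> u64 {
    let mut secret = 0u64;
    for (i, &(xi, yi)) in shares.iter().enumerate() {
        let (mut num, mut den) = (1u64, 1u64);
        for (j, &(xj, _)) in shares.iter().enumerate() {
            if i == j { continue; }
            num = num * xj % P;
            den = den * ((P + xj - xi) % P) % P;
        }
        secret = (secret + yi * (num * inv(den) % P)) % P;
    }
    secret
}

fn main() {
    let secret = 123_456_789u64;
    // t = 3: degree-2 polynomial. Coefficients fixed for the example.
    let coeffs = [secret, 11, 22];
    // n = 5 committee members hold shares (x, f(x)) for x = 1..=5.
    let shares: Vec<(u64, u64)> = (1..=5u64).map(|x| (x, eval_poly(&coeffs, x))).collect();

    // Any t = 3 shares recover the secret...
    assert_eq!(combine(&shares[0..3]), secret);
    assert_eq!(combine(&shares[2..5]), secret);
    // ...while t - 1 shares interpolate the wrong value.
    assert_ne!(combine(&shares[0..2]), secret);
}
```

The organism-level point is the last assertion: fewer than `t` colluding members hold no usable information about the cleartext, which is exactly the trust shape stated below.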
Trust assumption. t-of-n threshold: fewer than t colluding
committee members learn nothing; t or more colluding members can
decrypt at will. Picking t is a per-lattice parameter that folds
into the organism’s Config fingerprint.
Reads from. zipnet::Broadcasts.
Writes to. Consumed by offer, atelier, and any integrator
authorised to see unsealed order flow.
Full spec. unseal.
Why not fold into zipnet
Zipnet’s committee is any-trust; unseal’s is t-of-n threshold.
Different trust shapes, different admission policies, different
rotation cadences. Keeping them as one organism would have forced
the strictest trust model on both.
offer (sealed-bid bundle auction)
Role. Run a sealed-bid auction, per slot, over bundles that searchers submit against the unsealed order-flow pool.
Public surface.
- `Bid<Bundle>` — ticket-gated write stream. Searchers publish sealed bids keyed to a specific slot `S`. Sealing uses the same `unseal`-style threshold encryption so that competing searchers do not learn each other’s bids until the auction commits.
- `AuctionOutcome` — append-only collection of `{ slot, winner, bundle }` committed once per slot. Integrators and downstream organisms read from this.
- `SearcherRegistry` — public searcher bundles with their auction-encryption public keys.
ACL. Ticket-gated on the Bid<Bundle> stream: only attested
searchers admitted. AuctionOutcome is world-readable by lattice
ticket holders.
State machine hook. OfferMachine::apply(OpenAuction),
apply(SubmitBid), apply(CloseAuction) — the committee commits a
round-opening at slot boundary, accumulates sealed bids during the
round window, and commits the winner at close time. The bid
decryption is a threshold combine inside apply, same pattern as
unseal.
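The open/bid/close lifecycle can be sketched as a deterministic apply loop. Command and type names mirror the `OfferMachine` hooks above but the bodies are assumptions; in particular, bids arrive sealed in the real design and are threshold-decrypted inside `apply`, where here they are already clear:

```rust
#[derive(Debug, Clone, PartialEq)]
struct Bid { searcher: &'static str, amount: u64 }

enum Command {
    OpenAuction { slot: u64 },
    SubmitBid { slot: u64, bid: Bid }, // sealed in the real design
    CloseAuction { slot: u64 },
}

#[derive(Default)]
struct OfferMachine {
    open_slot: Option<u64>,
    bids: Vec<Bid>,
    outcomes: Vec<(u64, Bid)>, // AuctionOutcome[S]: exactly one per slot
}

impl OfferMachine {
    /// Deterministic apply over the replicated log: every committee
    /// replica that applies the same commands reaches the same outcome.
    fn apply(&mut self, cmd: Command) {
        match cmd {
            Command::OpenAuction { slot } => {
                self.open_slot = Some(slot);
                self.bids.clear();
            }
            Command::SubmitBid { slot, bid } => {
                // Bids for a slot other than the open one are ignored.
                if self.open_slot == Some(slot) {
                    self.bids.push(bid);
                }
            }
            Command::CloseAuction { slot } => {
                if self.open_slot == Some(slot) {
                    self.open_slot = None;
                    // Highest bid wins; ties resolve to the later bid in
                    // log order, which is still deterministic.
                    if let Some(w) = self.bids.iter().max_by_key(|b| b.amount).cloned() {
                        self.outcomes.push((slot, w));
                    }
                }
            }
        }
    }
}

fn main() {
    let mut m = OfferMachine::default();
    m.apply(Command::OpenAuction { slot: 9 });
    m.apply(Command::SubmitBid { slot: 9, bid: Bid { searcher: "a", amount: 10 } });
    m.apply(Command::SubmitBid { slot: 9, bid: Bid { searcher: "b", amount: 25 } });
    m.apply(Command::SubmitBid { slot: 8, bid: Bid { searcher: "c", amount: 99 } }); // stale slot
    m.apply(Command::CloseAuction { slot: 9 });
    assert_eq!(m.outcomes, vec![(9, Bid { searcher: "b", amount: 25 })]);
}
```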
Trust assumption. Majority-honest committee. A malicious majority can pick a non-max-bid winner; anonymity of losing bids holds under threshold assumption.
Reads from. unseal::UnsealedPool (so the auction winner
binds to a specific unsealed slot).
Writes to. atelier (subscribes to AuctionOutcome).
Full spec. offer.
Why not fold into atelier
offer runs a cryptographic sealed-bid auction; atelier runs a
TDX-attested block-assembly protocol. Those are different kinds of
computation with different operational cadences. A single Group
would force searchers who only care about bidding to bond into a
TDX-gated committee they don’t need to trust.
atelier (TDX co-building)
Role. Assemble a candidate block per slot, inside a TDX-
attested committee of co-builders, from the unseal cleartext
pool and the offer winning bundle. Commit the candidate block
body as the lattice’s proposal for that slot.
Public surface.
- `Candidates` — append-only collection of `{ slot, block_body, builder_attestation }` committed once per slot. The attestation is a collective signature from the TDX committee covering the block body. Integrators (proposers, sequencers, analytics) read from this.
- `Hint<Template>` — ticket-gated write stream for co-builder operators to submit partial block templates during the assembly window. Internal to the atelier committee in effect (the ticket is `atelier.member`) but lives on UNIVERSE.
Internal plumbing — per-slot bundle simulation gossip, fee-sorting
traffic — runs on a derived private network keyed off
ATELIER_ROOT.derive("private").
ACL. TDX attestation required on every committee member
(.require_ticket(Tdx::new().require_mrtd(atelier_mrtd))). The
co-builder-contributed Hint<Template> stream is ticket-gated on
atelier.member.
State machine hook.
AtelierMachine::apply(OpenSlot),
apply(SubmitHint),
apply(SealCandidate) — per-slot state transitions that accumulate
hints, pick the final ordering from unseal + offer + hint
input, and seal the block body. The TDX quote on each committee
member’s PeerEntry is what makes the resulting commit attestable
off-lattice.
Trust assumption. TDX attestation (hardware root of trust) plus majority-honest committee. A minority of compromised TDX images is insufficient to break block-body integrity; a majority can commit an arbitrary block.
Reads from. unseal::UnsealedPool, offer::AuctionOutcome.
Writes to. relay (subscribes to Candidates).
Full spec. atelier.
Why this is a restatement of BuilderNet
The atelier organism is a mosaik-native restatement of the
BuilderNet co-building pattern: TDX-attested builders contributing
to one candidate block per slot, refund accounting tracked in a
peer collection. Differences:
- Identity — BuilderNet’s node identity is a BuilderHub registration; `atelier`’s is a content + intent addressed `GroupId` derived from the lattice fingerprint.
- Composition — `atelier` does not own order-flow ingestion or refund accounting; those are `zipnet` + `unseal` + `tally`.
- Substrate — BuilderNet uses bespoke peer-to-peer wiring; `atelier` uses mosaik `Group`s and `Collection`s.
An operator familiar with BuilderNet can map BuilderNet’s roles
onto the lattice by reading atelier as the building node and
tally as the refund role.
relay (PBS fanout)
Role. Ship candidate block headers + bids from atelier to
the proposer (on L1) or sequencer (on L2) and commit the proposer
acknowledgement.
Public surface.
- `AcceptedHeaders` — append-only collection of `{ slot, header, bid, proposer_ack }` committed once per slot when the proposer acknowledges a header. Integrators read from this to follow proposer-side acceptance.
- `Ship<Header>` — ticket-gated write stream on which relay committee members publish the header they’ve sent to their assigned proposer. Internal to the committee.
ACL. Ticket-gated on relay.member for the write stream.
AcceptedHeaders is world-readable by lattice ticket holders.
State machine hook.
RelayMachine::apply(RecordSend),
apply(RecordAck),
apply(RecordTimeout) — per-slot tracking of which relay member
sent what header to whom, and whether the proposer acknowledged.
Trust assumption. A single honest committee member suffices
for liveness on L1 (any-trust on liveness). Integrity of
AcceptedHeaders is majority-honest: a malicious majority can
commit a lie about a proposer ack.
Reads from. atelier::Candidates.
Writes to. tally (subscribes to AcceptedHeaders).
Full spec. relay.
Why relay is not folded into atelier
Relay liveness is proposer-side connectivity, which is a different failure domain from TDX-builder compute. Folding them would force the TDX image to hold proposer socket state, which expands the TCB unnecessarily. The commit that actually matters for downstream refund accounting is “a proposer accepted this header”, which is a different fact from “the TDX committee signed this block”. Both facts want their own log.
tally (refund accounting)
Role. Attribute MEV captured on a winning block back to the order-flow providers and searchers whose input contributed to it; commit the attribution as a public-verifiable record; stream refund attestations integrators can prove against on-chain settlement layers.
Public surface.
- `Refunds` — append-only collection of `{ slot, recipients[], amounts[], evidence }` committed once per slot after the on-chain inclusion of the winning block is observed. Evidence is the set of references back to `zipnet`, `unseal`, `offer`, `atelier`, `relay` commits that justify the attribution.
- `Attestations` — ECDSA-signed attestations from tally committee members over each `Refunds` entry. Integrators that want to claim a refund on-chain present an `Attestation` to the chain’s settlement contract.
ACL. Tally committee is ticket-gated. Refunds is
world-readable by lattice ticket holders; Attestations is
world-readable unconditionally (they are meant to be carried to
on-chain settlement).
State machine hook.
TallyMachine::apply(ObserveInclusion),
apply(ComputeAttribution),
apply(CommitRefund) — per-slot state transitions triggered by
the on-chain inclusion of an atelier block. Attribution is a
deterministic function of the commits across the other five
organisms.
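The determinism requirement can be made concrete with a toy attribution rule. This is an assumption-laden sketch: the pro-rata split, the `(provider, weight)` inputs, and the function name `attribute` are all illustrative, since the real attribution function is part of each lattice’s spec:

```rust
/// Toy deterministic attribution: split captured MEV pro rata across
/// contribution weights. Because it is a pure function of committed
/// upstream state, every tally replica computes the same refunds.
fn attribute(captured_wei: u64, weights: &[(&'static str, u64)]) -> Vec<(&'static str, u64)> {
    let total: u64 = weights.iter().map(|(_, w)| w).sum();
    if total == 0 {
        return Vec::new(); // nothing contributed, nothing to refund
    }
    weights
        .iter()
        .map(|&(who, w)| (who, captured_wei * w / total))
        .collect()
}

fn main() {
    let refunds = attribute(1_000, &[("wallet-flow", 3), ("searcher-bid", 1)]);
    assert_eq!(refunds, vec![("wallet-flow", 750), ("searcher-bid", 250)]);

    // Floor division never over-distributes the captured value.
    let paid: u64 = refunds.iter().map(|(_, v)| v).sum();
    assert!(paid <= 1_000);
}
```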
Trust assumption. Majority-honest tally committee. A malicious majority can mis-attribute; the on-chain settlement contract is the ultimate arbiter and can reject malformed attestations.
Reads from.
relay::AcceptedHeaders, an on-chain inclusion watcher,
atelier::Candidates, offer::AuctionOutcome,
zipnet::Broadcasts.
Writes to. Integrators (searchers, wallets) and on-chain
settlement contracts.
Full spec. tally.
Why tally is the last organism
The refund / attribution commit is deliberately the last
non-reversible step in the pipeline. By the time tally commits,
every upstream organism has committed its piece, the winning block
is on-chain, and the attribution is a pure function of public
state. If earlier organisms had written tally’s data, any
failure upstream would have to be rolled back in tally —
re-introducing cross-organism atomicity we explicitly rejected.
Summary table
| Organism | Raft committee size (v1) | Trust shape | Key reads | Key writes |
|---|---|---|---|---|
| zipnet | 3–7 servers | any-trust | integrator submitters | Broadcasts |
| unseal | 3–7 TDX members | t-of-n threshold | Broadcasts | UnsealedPool |
| offer | 3–5 members | majority-honest | UnsealedPool | AuctionOutcome |
| atelier | 3–7 TDX members | TDX + majority-honest | UnsealedPool, AuctionOutcome | Candidates |
| relay | 3–5 members | any-trust liveness, majority-honest integrity | Candidates | AcceptedHeaders |
| tally | 3–5 members | majority-honest | AcceptedHeaders, on-chain | Refunds, Attestations |
See composition.md for the flow diagrams and the apply order; threat-model.md for how the per-organism trust assumptions compose.
zipnet — anonymous submission
audience: contributors
Proposed source: not in this repo. The lattice consumes the existing flashbots/zipnet crate unchanged at whatever version the lattice pins. This page documents only the lattice-specific wrapping.
What the lattice consumes
- zipnet::Zipnet::<D> as the external SDK surface for wallets and searchers submitting into the lattice.
- zipnet::UNIVERSE, which is identical to builder::UNIVERSE (both resolve to unique_id!("mosaik.universe")).
- zipnet::Config as one field of the LatticeConfig.
- zipnet::Broadcasts as the only upstream input to unseal (see unseal).
Authoritative docs for everything above live in the zipnet book.
Lattice wrapping
Three wrapping facts that matter at the builder level and do not appear in the zipnet book because zipnet does not know it is in a lattice.
1. Instance name derivation
A zipnet deployment that is part of a lattice takes its
zipnet::Config.name directly from the lattice’s instance
name. The builder meta-crate’s LatticeConfig constructor
enforces this:
impl LatticeConfig {
pub const fn new(name: &'static str, chain_id: u64) -> Self {
Self {
name,
chain_id,
// zipnet's own content + intent addressing folds the
// lattice name in as the zipnet instance name.
zipnet: zipnet::Config::new(name),
unseal: unseal::Config::new(name),
offer: offer::Config::new(name),
atelier: atelier::Config::new(name),
relay: relay::Config::new(name),
tally: tally::Config::new(name),
}
}
}
Two lattices under different names get disjoint zipnet
GroupIds by the zipnet content + intent addressing rules,
exactly as intended. A lattice cannot share a zipnet committee
with another lattice; a zipnet committee belongs to exactly one
lattice.
2. Sealed payload convention
Zipnet shuffles opaque D: ShuffleDatum values. The lattice’s
convention is that D carries an unseal-sealed payload:
the application-level transaction (e.g. an EIP-2718 RLP) is
first encrypted to the unseal committee’s threshold public
key, then right-padded to D::WIRE_SIZE, then submitted.
The reference datum is Tx2718; its WIRE_SIZE is set by the
lattice such that
unseal::seal(max_app_payload).len() <= Tx2718::WIRE_SIZE
with enough slack for the AEAD overhead. The exact constant
ships in the lattice’s datum crate.
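The encrypt-then-pad step can be sketched as a pure function. A minimal sketch under stated assumptions: the function name and zero-fill padding are illustrative, not part of the zipnet or unseal API, and it assumes the sealed ciphertext is self-delimiting (e.g. length-prefixed inside the AEAD plaintext) so the pad can be stripped after decryption.

```rust
/// Right-pad a sealed payload to the datum's fixed wire size, rejecting
/// payloads that do not fit. Models the lattice convention that
/// unseal::seal(max_app_payload).len() <= WIRE_SIZE must hold, with the
/// AEAD overhead already included in `sealed`.
pub fn pad_to_wire_size(sealed: &[u8], wire_size: usize) -> Option<Vec<u8>> {
    if sealed.len() > wire_size {
        return None; // oversize ciphertext: the datum constant was mis-sized
    }
    let mut wire = sealed.to_vec();
    wire.resize(wire_size, 0); // zero-fill up to the fixed wire size
    Some(wire)
}
```

The fixed wire size is what keeps every submitted datum indistinguishable on the wire, which is the property zipnet's shuffle depends on.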
This convention is enforced at the integrator layer (see
integrators/submitting.md);
zipnet itself does not know or care what is inside D. A
lattice that fails to seal payloads before submission is still
a valid zipnet deployment; it is just not a lattice whose
anonymity holds beyond zipnet’s own guarantee.
3. Consumer: the unseal organism
The lattice’s only downstream consumer of
zipnet::Broadcasts is the unseal organism. Every other
organism reads from unseal::UnsealedPool instead — the
cleartext side — because they need to reason about the actual
transactions, not the ciphertext.
This means in operational terms: if unseal is down but
zipnet is up, zipnet continues to commit Broadcasts (no
back-pressure from unseal); those broadcasts simply accumulate
without producing downstream effect until unseal recovers.
See composition.md — failure table.
State machine
Unchanged from zipnet. CommitteeMachine as specified in the
zipnet book’s committee state
machine page. signature() folds in the zipnet
wire version plus its round parameters; the lattice does not
add further inputs.
Cryptography
Unchanged from zipnet. X25519 ECDH + HKDF-SHA256 + AES-128-CTR for pads, keyed blake3 for the falsification tag. See Cryptography — zipnet in this book for the summary and zipnet cryptography for the full derivation.
ACL composition
zipnet::Config carries its own TicketValidator composition.
In a TDX-gated lattice it stacks with the lattice-level
tee-tdx feature:
// In the integrator agent's Cargo.toml, when tee-tdx is enabled:
zipnet = { version = "...", features = ["tee-tdx"] }
The validator chain pins atelier’s MR_TD for committee
admission (so zipnet committee members are TDX-attested
peers). Writer-side admission of external submitters is gated
by the lattice operator’s JWT issuer key — same mechanism
zipnet ships.
Trust shape
Any-trust on anonymity; crash-fault on liveness. See threat-model.md — zipnet. No change from the zipnet book.
Open questions specific to the lattice
- v2 receipts stream. Zipnet defers Receipts<D> to its v2. The lattice’s tally organism is a partial replacement at the attribution layer but does not restore the per-submitter receipt shape zipnet’s design contemplates. Does the lattice want to push for zipnet’s receipts to land, or is tally enough? Open.
- Cover traffic rate. Lattices targeting public L1 PBS have different anonymity-set cadences from lattices targeting a fast L2 sequencer. The zipnet ShuffleWindow presets (interactive, archival) cover the common cases; lattices with unusual chain cadences ship a custom window, which folds into the zipnet fingerprint as usual.
- Multi-lattice zipnet sharing? Not supported. A zipnet committee belongs to exactly one lattice. Proposals to share one zipnet across lattices would need a new zipnet shape (higher-dimensional Config) and are out of scope.
Cross-references
- The six organisms — the index that lives one level up.
- unseal spec — the immediate downstream.
- composition.md — the subscription graph zipnet feeds into.
- threat-model.md — trust composition.
unseal — threshold decryption
audience: contributors
Proposed source: crates/unseal/.
The threshold-decryption organism that unwraps
zipnet::Broadcasts into cleartext UnsealedPool for the
downstream lattice. Any t of n TDX-attested committee
members can combine their shares to recover a slot’s cleartext;
fewer than t colluding members learn nothing.
Crate layout
Following the zipnet purity rule. Three layers:
- unseal::proto — wire types, threshold-crypto primitives. No I/O, no mosaik, no tokio.
- unseal::core — pure functions: seal, partial_decrypt, combine. No I/O.
- unseal::node — the only module that imports mosaik. UnsealMachine, declare! items, role event loops, ticket validators.
Public facade:
- unseal::Unseal::<D>::watch(&network, &Config) -> Watch<D> — read-side for organisms that subscribe to cleartext (offer, atelier, authorised audit integrators).
- unseal::seal(&Config, plaintext: &[u8]) -> Sealed<D> — pure function used by integrators to encrypt a payload before writing it into a zipnet envelope.
- unseal::Config — const-constructible fingerprint input.
No submit verb. The only write into UnsealedPool is via
the committee’s state machine apply; there is no external
submit primitive.
Public surface
Two collections on the shared universe.
UnsealedPool
declare::collection! {
pub UnsealedPool = Vec<UnsealedRound>,
derive_id: UNSEAL_ROOT.derive("pool"),
consumer require_ticket: LATTICE_READ_TICKET,
writer require_ticket: UnsealMember,
}
pub struct UnsealedRound {
pub slot: u64,
pub round: zipnet::RoundId,
pub cleartext: Vec<Cleartext>,
}
pub struct Cleartext {
pub slot_index: usize, // slot index inside the zipnet round
pub payload: Vec<u8>, // decrypted, AEAD-authenticated
}
One entry per zipnet-finalized slot. Appended in slot order.
ShareRegistry
declare::collection! {
pub ShareRegistry = Map<UnsealMemberId, ShareBundle>,
derive_id: UNSEAL_ROOT.derive("share-registry"),
consumer require_ticket: LATTICE_READ_TICKET,
writer require_ticket: UnsealMember,
}
pub struct ShareBundle {
pub member: UnsealMemberId,
pub dh_pub: [u8; 32], // X25519 pubkey for share-gossip encryption
pub ts_pub: [u8; 48], // BLS12-381 threshold-share pubkey
}
Static after DKG; republished only on DKG rerun. Downstream
organisms consult this for cryptographic verification of
UnsealedPool commits.
Internal plumbing
A derived private network keyed off UNSEAL_ROOT.derive("private")
carries one stream:
- Shares — per-slot threshold shares gossiped between committee members, encrypted pairwise via member X25519 pubkeys. Never surfaced to the public universe.
The committee Group itself stays on the public universe
(UNSEAL_ROOT.derive("committee") derives its GroupId)
because UnsealedPool and ShareRegistry are backed by it.
State machine
impl StateMachine for UnsealMachine {
type Command = Command;
type Query = Query;
type QueryResult = QueryResult;
type StateSync = Snapshot;
fn signature(&self) -> UniqueId { ... }
fn apply(&mut self, cmd: Command, ctx: &dyn ApplyContext) { ... }
fn query(&self, q: Query) -> QueryResult { ... }
fn state_sync(&self) -> Snapshot { ... }
}
pub enum Command {
SubmitShare(ShareCommit),
SealSlot(SealCommit),
MarkDone(u64),
}
pub enum Query {
SharesFor(u64), // how many shares landed for slot S
UnsealedSince(u64), // slots from cursor to head
Member(UnsealMemberId), // roster lookup
}
Command semantics
- SubmitShare. A committee member commits its share for slot S. Validation:
  - ShareCommit.slot == some observed zipnet::Broadcasts[S] — the upstream must have committed; shares for a slot that does not exist upstream are rejected.
  - ShareCommit.member is in the committee roster as of the slot’s effective-at.
  - Exactly one share per (slot, member); duplicates are silently dropped.
  - The share verifies against the member’s ts_pub under the threshold scheme (BLS12-381 pairing check).
- SealSlot. Idempotent per slot. When the apply handler observes SharesFor(S) >= t, it runs unseal::core::combine inside apply, produces UnsealedRound { slot: S, ... }, appends it to UnsealedPool, and discards the shares. This command is issued by every committee member once they see t shares; the first one to apply wins, the others are silent no-ops.
- MarkDone. Garbage collection. After slot S has been unsealed and the downstream tally has committed Refunds[S], any committee member can issue MarkDone(S) to drop the slot’s state. Idempotent.
Apply invariants
- Shares are never materialised outside apply. The combine step runs in apply’s synchronous body; once combined, the in-memory shares for that slot are zeroed before apply returns.
- At most one UnsealedPool entry per slot. Enforced by the SealSlot handler checking for an existing entry before combining.
- Deterministic cleartext. Given the same set of t shares, combine always produces the same cleartext — every committee member’s replica of UnsealedPool converges.
- No share admission after finalize. Once SealSlot has run for slot S, subsequent SubmitShare commands for slot S are rejected. This prevents a laggard member from inadvertently keeping the share state alive.
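The share-admission and seal-once invariants reduce to simple bookkeeping. A minimal in-memory sketch, with real BLS12-381 partial decryptions reduced to member ids — the type and method names are illustrative, not the `UnsealMachine` API:

```rust
use std::collections::{HashMap, HashSet};

/// Models the SubmitShare / SealSlot invariants: duplicate shares drop,
/// sealing is idempotent, and no share is admitted after finalize.
pub struct SlotShares {
    t: usize,
    shares: HashMap<u64, HashSet<u64>>, // slot -> members who submitted
    sealed: HashSet<u64>,               // slots already combined
}

impl SlotShares {
    pub fn new(t: usize) -> Self {
        Self { t, shares: HashMap::new(), sealed: HashSet::new() }
    }

    /// Returns true if the share was admitted. Duplicates and shares for
    /// already-sealed slots are rejected.
    pub fn submit_share(&mut self, slot: u64, member: u64) -> bool {
        if self.sealed.contains(&slot) {
            return false; // no share admission after finalize
        }
        self.shares.entry(slot).or_default().insert(member)
    }

    /// Idempotent: the first call at or past the threshold seals the slot
    /// and discards its shares; every later call is a silent no-op.
    pub fn seal_slot(&mut self, slot: u64) -> bool {
        if self.sealed.contains(&slot) {
            return false; // already sealed: no-op
        }
        if self.shares.get(&slot).map_or(0, |s| s.len()) >= self.t {
            self.shares.remove(&slot); // shares discarded once combined
            self.sealed.insert(slot)
        } else {
            false // below threshold: nothing to combine yet
        }
    }
}
```

In the real organism the combine and zeroization happen inside apply's synchronous body; this sketch shows only why replaying the same command sequence converges.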
Signature versioning
fn signature(&self) -> UniqueId {
let tag = format!(
"unseal.v{WIRE_VERSION}.t={}.n={}.scheme={}",
self.config.threshold.t,
self.config.threshold.n,
self.config.scheme, // e.g. "bls12381-threshold-v1"
);
UniqueId::from(tag.as_str())
}
Bumping WIRE_VERSION, changing t, changing n, or
switching the threshold scheme produces a different
GroupId. Two lattices that accidentally picked the same
threshold but different schemes do not bond.
DKG ceremony
A one-off at lattice bring-up; rerun on rotation. The ceremony
is not a state machine command — it happens before the
UnsealMachine Group exists. Shape:
- Every prospective committee member generates an X25519 keypair and a BLS12-381 share secret locally.
- The operator’s builder lattice up driver runs a Pedersen DKG over authenticated SSH: each member publishes its commitment polynomial, exchanges shares pairwise, and collects verification shares.
- The output is the aggregate public key that seals payloads, plus each member’s secret share.
- The aggregate public key and every member’s ShareBundle land in the unseal::Config, which folds into the lattice fingerprint.
Losing a member’s share before DKG rerun reduces the effective
committee by one; if this brings it below t, the lattice
can no longer unseal and must retire + re-DKG. Rotation
procedure lives in operators/rotations-and-upgrades.md.
ACL composition
impl Config {
pub fn ticket_validator(&self) -> Validators {
Validators::stacked()
.with(JwtIssuer::from(self.operator_jwt_key))
.with(Tdx::new().require_mrtd(self.mrtd))
}
}
JwtIssuer gates on lattice-level membership (separating
members of one lattice’s unseal from another’s, even when the
MR_TD image is the same). Tdx::require_mrtd gates on the
committee image’s reproducible build measurement. Both fold
into the ShareRegistry ACL and into the committee’s admission.
Trust shape
t-of-n threshold on anonymity; majority-honest is
sufficient for liveness of UnsealedPool commits (since
SealSlot is idempotent and any member can trigger it).
See threat-model.md — unseal for the composition argument.
Open questions
- Trial-decrypt vs deterministic recipient? unseal seals to the committee aggregate public key; recovery is unambiguous once t shares land. No trial-decrypt cost per recipient. But this also means unseal cannot mark a payload for selective decryption (e.g. “only unseal if the on-chain block at slot S was mined by the lattice’s relay”). Conditional decryption is research-open.
- Post-quantum migration. BLS12-381 is not post-quantum. Migration path is a new scheme value in the signature plus a second DKG ceremony. The organism surface does not change. See roadmap.md — post-quantum unseal.
- Re-randomised shares? Currently shares are deterministic per (slot, member). A member who recovers their share secret can compute past shares. Forward-secure rotation is deferred until zipnet’s own ratcheting lands.
Cross-references
- The six organisms
- zipnet — anonymous submission — upstream.
- offer — sealed-bid auction — the parallel organism that reuses the threshold pattern.
- cryptography.md — unseal
- threat-model.md — unseal
offer — sealed-bid auction
audience: contributors
Proposed source: crates/offer/.
The sealed-bid auction organism. Searchers submit bundle bids
per slot, threshold-encrypted to the offer committee’s
DKG-produced public key. At auction close the committee runs a
threshold combine inside apply to decrypt bids, picks a
winner according to the lattice’s auction rule, and commits a
single AuctionOutcome entry for the slot. No committee
member — and no searcher other than the winner — ever sees a
losing bid’s cleartext.
Crate layout
- offer::proto — wire types, serialization of sealed bids and auction outcomes. No I/O.
- offer::core — pure functions for sealing, combine-decrypt at close time, and winner selection.
- offer::node — OfferMachine, declare! items, role event loops, ticket validators.
Public facade:
- offer::Offer::<B>::bid(&network, &Config) -> Bidder<B> — searcher-side writer of sealed bids.
- offer::Offer::<B>::outcomes(&network, &Config) -> Outcomes<B> — stream of committed AuctionOutcomes.
- offer::Config — const-constructible fingerprint input.
B: BundleDatum is a trait searchers implement for their
bundle type (analogous to zipnet’s ShuffleDatum). Carries
TYPE_TAG: UniqueId and MAX_BID_WIRE_SIZE: usize — bids
are ciphertexts of bounded size to preserve
threshold-encryption properties at the wire layer. See constant size
argument.
Public surface
Bid<B> stream
declare::stream! {
pub Bid<B: BundleDatum> = SealedBid<B>,
derive_id: OFFER_ROOT.derive("bid"),
producer require_ticket: SearcherJwt,
consumer require_ticket: OfferMember,
}
pub struct SealedBid<B: BundleDatum> {
pub nonce: [u8; 24], // unique per (searcher, slot)
pub slot: u64,
pub searcher: SearcherId, // from the JWT
pub ciphertext: Vec<u8>, // threshold-encrypted bundle + bid
pub _phantom: PhantomData<B>,
}
The searcher field is authenticated by the writer-side JWT
and not encrypted — this is what gets paired with
AuctionOutcome for attribution at refund time.
ciphertext decrypts to a Bundle<B> struct that carries
the bid value, the bundle contents, and the
UnsealedRef dependency:
pub struct Bundle<B: BundleDatum> {
pub bid: u128, // in chain native units
pub payload: B, // application-level bundle
pub depends_on: Option<UnsealedRef>, // reference into unseal::UnsealedPool
}
AuctionOutcome collection
declare::collection! {
pub AuctionOutcome = Vec<CommittedOutcome>,
derive_id: OFFER_ROOT.derive("outcome"),
consumer require_ticket: LATTICE_READ_TICKET,
writer require_ticket: OfferMember,
}
pub struct CommittedOutcome {
pub slot: u64,
pub winner: SearcherId,
pub bid: u128,
pub bundle: EncodedBundle, // cleartext of winning bundle only
pub evidence: CommitEvidence, // hashes of losing-bid ciphertexts
}
bundle is cleartext (the winner has no anonymity to preserve
against the builder — they want their txs included).
evidence carries hashes of every losing bid’s ciphertext so
that a post-hoc observer can check the committee considered
all bids without needing the losing plaintexts.
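That post-hoc check is a set comparison. A sketch of what an auditor would compute, with `blake3` of each ciphertext stood in by an opaque 32-byte hash and all names illustrative:

```rust
use std::collections::BTreeSet;

/// Audit sketch: an outcome's evidence should cover every bid ciphertext
/// observed on the public Bid<B> stream for the slot, minus the winner
/// (whose cleartext is committed instead). Hashes stand in for
/// blake3(ciphertext).
pub fn evidence_complete(
    observed_bid_hashes: &[[u8; 32]],
    winner_hash: [u8; 32],
    evidence: &[[u8; 32]],
) -> bool {
    // Every observed losing bid must appear in the evidence, and nothing else.
    let expected: BTreeSet<_> = observed_bid_hashes
        .iter()
        .filter(|h| **h != winner_hash)
        .collect();
    let committed: BTreeSet<_> = evidence.iter().collect();
    expected == committed
}
```

An observer who sees a bid on the public stream that is absent from the evidence set has proof the committee ignored it, without ever learning the losing plaintext.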
SearcherRegistry
declare::collection! {
pub SearcherRegistry = Map<SearcherId, SearcherBundle>,
derive_id: OFFER_ROOT.derive("searcher-registry"),
consumer require_ticket: LATTICE_READ_TICKET,
writer require_ticket: OfferMember,
}
Maintained by the committee; entries land when a searcher’s
JWT authenticates against a new SearcherId.
Internal plumbing
Derived private network keyed off OFFER_ROOT.derive("private"):
- DecryptShares — per-slot threshold-decryption shares exchanged at auction close. Same shape as unseal share gossip.
State machine
pub enum Command {
OpenAuction { slot: u64, opened_at: UnixSecs },
AcceptBid(BidAccept),
SubmitShare(DecryptShare),
CloseAuction(u64),
MarkDone(u64),
}
pub enum Query {
OpenAuctions,
BidsFor(u64),
OutcomeFor(u64),
Searcher(SearcherId),
}
Apply semantics
- OpenAuction. Issued by the state-machine leader when unseal::UnsealedPool[S] is observed. Creates an open auction window for slot S with the configured auction_window. Idempotent per slot.
- AcceptBid. Committed when a committee member observes a fresh SealedBid on the public Bid<B> stream and proposes its inclusion in slot S’s auction. Validation:
  - slot matches an open auction.
  - nonce is unique for this (searcher, slot) pair — duplicate nonces reject.
  - The ciphertext length is within B::MAX_BID_WIRE_SIZE.
  - searcher is admitted by the searcher JWT. First-committee-member-to-propose wins; duplicates are silent.
- SubmitShare. Committee member’s threshold share for auction-close decryption of one specific bid ciphertext. Shares are indexed by (slot, bid_hash). Same validation shape as unseal::SubmitShare.
- CloseAuction. Idempotent per slot. When the leader observes the auction window elapsed, it issues this command. The apply handler:
  - Collects every AcceptBid for the slot.
  - For each bid, if t shares have landed, combines them and decrypts the bid.
  - If a bid’s shares are insufficient, its ciphertext is considered non-admissible and is excluded.
  - Among the admissible bids, picks the winner by the lattice’s auction rule (default: highest bid, with deterministic tie-break by blake3 of the bid ciphertext).
  - Constructs CommittedOutcome and appends to AuctionOutcome.
- MarkDone. GC after the slot is fully attributed by downstream tally.
Invariants
- One outcome per slot. Enforced in CloseAuction by reject-if-already-closed.
- Monotonic slots. AuctionOutcome[S+1] cannot commit before AuctionOutcome[S].
- Losing bids stay encrypted. CloseAuction apply decrypts only the winning bid; losing-bid shares are discarded post-close without their cleartext ever leaving apply.
- Bid admissibility is a pure function of the commit log. A replica re-applying the same commit sequence reaches the same winner.
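The default winner-selection rule can be written as a pure function over the admissible bids. A sketch under stated assumptions: the struct is illustrative, and the spec's blake3 tie-break is stood in by std's `DefaultHasher` (any fixed hash makes the selection deterministic for the sketch):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// One decrypted, admissible bid at auction close (names illustrative).
pub struct AdmissibleBid {
    pub searcher: u64,
    pub bid: u128,
    pub ciphertext: Vec<u8>,
}

/// Stand-in for blake3(ciphertext) as the deterministic tie-break key.
fn tie_break(ct: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    ct.hash(&mut h);
    h.finish()
}

/// Default auction rule: highest bid wins; ties are broken by the hash of
/// the bid ciphertext, so every replica re-applying the same commit log
/// selects the same winner.
pub fn pick_winner(bids: &[AdmissibleBid]) -> Option<&AdmissibleBid> {
    bids.iter().max_by_key(|b| (b.bid, tie_break(&b.ciphertext)))
}
```

Because the rule consumes nothing outside the committed bids, it satisfies the "pure function of the commit log" invariant above by construction.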
Signature versioning
fn signature(&self) -> UniqueId {
let tag = format!(
"offer.v{WIRE_VERSION}.window={}ms.t={}.n={}.scheme={}.rule={}",
self.config.auction_window.as_millis(),
self.config.threshold.t,
self.config.threshold.n,
self.config.scheme,
self.config.auction_rule.tag(), // "highest-bid", etc.
);
UniqueId::from(tag.as_str())
}
Every knob folds in. A lattice swapping auction rule from
highest-bid to highest-profit-by-sim is a fingerprint
change.
DKG ceremony
Separate from unseal’s. Offer’s DKG produces an aggregate
public key whose secret shares are held by the offer
committee. Integrators (searchers) encrypt bids to this key
before submitting.
Rerun independently from unseal DKG — a compromised
offer-committee member does not force an unseal rerun.
ACL composition
impl Config {
pub fn bid_validator(&self) -> Validators {
// Writers: searchers authenticated by the operator's searcher JWT.
Validators::stacked().with(JwtIssuer::from(self.searcher_jwt_key))
}
pub fn member_validator(&self) -> Validators {
// Committee members: lattice-level JWT, optional TDX.
let mut v = Validators::stacked().with(JwtIssuer::from(self.operator_jwt_key));
if let Some(mrtd) = self.member_mrtd {
v = v.with(Tdx::new().require_mrtd(mrtd));
}
v
}
}
TDX on offer members is optional in v1. A lattice prioritising bid confidentiality under a stronger trust model adds TDX; most lattices run it with JWT-only admission.
Trust shape
Majority-honest committee for winner integrity; threshold cryptography for bid confidentiality against a minority of compromised members. See threat-model.md — offer.
Open questions
- Auction rule extensibility. The default rule is
highest-bid. Some lattices will want “highest-profit after
simulation against the unsealed pool”, which requires
running simulation inside offer’s committee. That is
conceptually a cross-organism call between offer and
atelier; the clean pattern is to have offer commit a
PendingOutcome with the top-K bids and let atelier compute the actual profit. Not specified yet.
- Withdrawal semantics. Searchers may want to withdraw a bid once the slot’s UnsealedPool is revealed (if the revealed order flow makes the bid uneconomical). The BundleWithdraw command is in the spec but withdrawal deadlines vs auction close need tightening.
- Cross-chain bids. A bid targeting ethereum.mainnet slot S1 and unichain.mainnet slot S2 is a single atomic desire from the searcher’s perspective but two independent AuctionOutcome commits. Out of scope here; see cross-chain — Shape 3.
Cross-references
- The six organisms
- unseal — upstream.
- atelier — downstream.
- cryptography.md — offer
- threat-model.md — offer
atelier — TDX co-building
audience: contributors
Proposed source: crates/atelier/.
The block-assembly organism. Every atelier committee member
runs a reproducible TDX image whose MR_TD is pinned in the
lattice’s atelier::Config. Members subscribe to
unseal::UnsealedPool and offer::AuctionOutcome, simulate
bundles inside the enclave, and commit a single Candidates
entry per slot signed under a BLS aggregate signature across
the committee.
Crate layout
- atelier::proto — wire types for block templates, hints, candidate-block payload, aggregate signature. No I/O.
- atelier::core — pure simulation + ordering. Inputs: unsealed transactions, auction winner, parent state root. Output: deterministic canonical tx list + gas estimate.
- atelier::node — mosaik integration. AtelierMachine, role event loops, TDX ticket validator, chain-RPC bridge.
The chain-RPC bridge is the one place atelier reaches outside
the lattice universe: simulation needs parent state. The
bridge is a sealed in-enclave HTTP client; the RPC endpoint is
pinned in atelier::Config so that the bridge’s target is part
of the fingerprint (and so that a rogue operator cannot swap
it out of an attested image).
Public facade:
- atelier::Atelier::<Block>::read(&network, &Config) -> Reader<Block> — read-side for proposers, sequencers, analytics agents.
- atelier::Atelier::<Block>::verify(&Block, &Config) -> bool — pure verification of the BLS aggregate signature against the pinned committee roster.
- atelier::Config.
Public surface
Candidates collection
declare::collection! {
pub Candidates<B> = Vec<Candidate<B>>,
derive_id: ATELIER_ROOT.derive("candidates"),
consumer require_ticket: LATTICE_READ_TICKET,
writer require_ticket: AtelierMember,
}
pub struct Candidate<B: BlockDatum> {
pub slot: u64,
pub parent_hash: [u8; 32],
pub header: B::Header,
pub body: Vec<B::Tx>,
pub hints_applied: Vec<HintId>,
pub builder_sig: BlsAggSig,
pub committee_roster: Vec<BlsPub>,
}
B: BlockDatum is the lattice’s chain-specific block schema
(e.g. L1Post4844, OpStackBedrock).
Hint<Template> stream
declare::stream! {
pub Hint<T: HintDatum> = HintBundle<T>,
derive_id: ATELIER_ROOT.derive("hint"),
producer require_ticket: CoBuilder,
consumer require_ticket: AtelierMember,
}
pub struct HintBundle<T: HintDatum> {
pub slot: u64,
pub co_builder: CoBuilderId,
pub template: T,
pub signature: Secp256k1Sig,
}
Hints are how co-builder operators (Phase 2) contribute
partial block templates. In a Phase 1 single-operator lattice
the Hint stream has no external writers; the committee’s
own members may still emit hints internally for intra-member
proposal exchange.
Internal plumbing
Derived private network keyed off
ATELIER_ROOT.derive("private"). Two streams:
- Simulations — pairwise exchange of simulation outputs for cross-member agreement. Not on the public universe because it carries pre-commit tentative block bodies at high frequency.
- RosterGossip — BLS roster synchronization used during committee churn.
State machine
pub enum Command {
OpenSlot { slot: u64, parent: [u8; 32] },
AcceptHint(HintAccept),
SubmitSimulation(SimOutput),
SealCandidate(SealCommit),
MarkDone(u64),
}
pub enum Query {
OpenSlots,
HintsFor(u64),
SimulationAgreement(u64),
CandidateFor(u64),
}
Apply semantics
- OpenSlot. Leader-issued when the proposer slot boundary is observed on-chain. Initializes per-slot state.
- AcceptHint. Committee member proposes a hint for inclusion. Validation: hint’s slot matches an open slot; co_builder ACL ticket matches; signature verifies.
- SubmitSimulation. Each committee member runs the simulation of (UnsealedPool[S] + AuctionOutcome[S] + hints) inside its TDX enclave and commits the resulting output hash. The state machine cross-checks member outputs: if a majority agrees on a single hash, that hash wins; divergence increments a counter the operator watches.
- SealCandidate. Issued once a majority agrees. Apply handler:
  - Reconstructs the candidate block body from the deterministic ordering function using the agreed inputs.
  - Aggregates each member’s BLS signature over the body hash (signatures arrive via the private Simulations stream alongside the simulation output).
  - Constructs Candidate<B> with builder_sig = BlsAggregate(valid_member_sigs) and committee_roster = members_who_signed.
  - Appends to Candidates.
- MarkDone. GC after tally commits Refunds[S].
Invariants
- One candidate per slot. SealCandidate rejects if CandidateFor(S) is already populated.
- Candidate body is a deterministic function of committed inputs. Two committee members re-applying the same sequence arrive at a byte-identical Candidate<B>.body.
- Minority divergence cannot commit. A minority of TDX images that simulate differently cannot force their output into a Candidate; apply requires majority agreement on the simulation hash.
- Chain RPC is inside the TCB. The RPC endpoint URL is pinned in the fingerprint; a compromised RPC is a compromised atelier.
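The majority-agreement precondition for SealCandidate reduces to counting identical simulation hashes against the roster size. A minimal sketch — member ids and the tuple shape are illustrative, and it assumes duplicate submissions per member were already deduplicated by apply:

```rust
use std::collections::HashMap;

/// Given each member's committed simulation output hash for a slot,
/// return the hash a strict majority of the roster agrees on, if any.
/// At most one hash can hold a strict majority, so the pick is unique.
pub fn majority_hash(outputs: &[(u64, [u8; 32])], roster_size: usize) -> Option<[u8; 32]> {
    let mut counts: HashMap<[u8; 32], usize> = HashMap::new();
    for (_member, hash) in outputs {
        *counts.entry(*hash).or_insert(0) += 1;
    }
    counts
        .into_iter()
        .find(|(_, n)| *n > roster_size / 2) // strict majority of the roster
        .map(|(hash, _)| hash)
}
```

Note the count is taken against the full roster, not against the outputs that happened to land — members that have not yet submitted count against agreement, which is what keeps a fast minority from sealing early.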
Signature versioning
fn signature(&self) -> UniqueId {
let tag = format!(
"atelier.v{WIRE_VERSION}.schema={}.mrtd={:x}.rpc={}.committee={}",
self.config.block_schema.tag(),
blake3(self.config.mrtd_acl.sorted_concat()),
self.config.chain_rpc_hash(),
self.config.committee_size,
);
UniqueId::from(tag.as_str())
}
Adding an MR_TD to the acl (onboarding a co-builder) changes the signature — this is intentional. See Rotations and upgrades — Co-building.
Committee key ceremony
At bring-up time every member generates a BLS12-381 keypair
inside its TDX image. The public keys are collected via the
operator’s driver and pinned as the committee_roster_pubkeys
in atelier::Config. No DKG is required — atelier’s
cryptography is aggregate signature, not threshold secret,
so keys are independent.
Rotation is per-member: generate new BLS key, publish, update
atelier::Config.committee_roster_pubkeys, retire + replace
the lattice. Rotating one key changes the fingerprint.
ACL composition
impl Config {
pub fn member_validator(&self) -> Validators {
let mut v = Validators::stacked()
.with(JwtIssuer::from(self.operator_jwt_key));
for mrtd in &self.mrtd_acl {
v = v.with(Tdx::new().require_mrtd(*mrtd));
}
v
}
pub fn co_builder_validator(&self) -> Validators {
Validators::stacked().with(JwtIssuer::from(self.co_builder_jwt_key))
}
}
mrtd_acl is a set (not a single value) to support co-building:
two operators contributing atelier members each publish their
own MR_TD; both land in the acl; admission succeeds for either.
Adding an MR_TD is a fingerprint change; see
phase2-spec.
Trust shape
TDX attestation + majority-honest. See threat-model.md — atelier.
Open questions
- Bundle simulation determinism across TDX images.
Simulation at byte-identical output requires a deterministic
EVM implementation. The reference atelier ships a vetted
revm build inside the image; divergence-in-practice between
images is what the
simulation_divergence_total metric surfaces. What level of determinism is actually achievable on shared chain RPC is an open research question.
- Pre-confirmation support. Flashblocks-style pre-confirmations (fast-confirm a partial ordering before the full slot) would need atelier to commit partial Candidates entries per sub-slot. Not in v1; likely a v2 extension that adds a second collection keyed by (slot, sub_slot).
- Revert protection. Integrator-level revert protection requires atelier to simulate each bundle in a sandbox and exclude any that revert. The logic is in atelier::core; the question is whether to expose revert-filtered bundles in the canonical ordering or only at integrator request. Open.
Cross-references
- The six organisms
- offer — upstream.
- unseal — upstream.
- relay — downstream.
- cryptography.md — atelier
- threat-model.md — atelier
- Operator runbook
relay — PBS fanout
audience: contributors
Proposed source: crates/relay/.
The relay organism ships atelier::Candidates to the chain’s
proposer (L1) or sequencer (L2) and records the proposer’s
acknowledgement in AcceptedHeaders. Unlike every other
organism in the lattice, relay’s work is substantially outside
the mosaik universe — it speaks proposer-side protocols like
MEV-Boost — but its commit surface is mosaik-native.
Crate layout
- relay::proto — wire types for headers, bid envelopes, send-records, ack-records. No I/O.
- relay::core — pure deduplication, rate-limiting, and policy evaluation.
- relay::endpoints — one module per supported policy:
  - l1_mev_boost.rs — MEV-Boost submitter client.
  - l2_sequencer.rs — L2 sequencer endpoint client.
  - l2_leader_rotation.rs — leader-rotation-aware client.
- relay::node — mosaik integration. RelayMachine, role event loops, policy dispatch.
Public facade:
- relay::Relay::<H>::watch(&network, &Config) -> Watch<H> — proposer / sequencer consumers that want to follow the lattice’s view of per-slot acceptance.
- relay::Config — pins the policy and the proposer endpoint specifiers.
Unlike other organisms, the relay binary is the intended process even for proposers that want the lattice’s view of acceptance — there is no “submit a header” integrator verb (relay is a downstream of atelier, not a submit endpoint).
Public surface
AcceptedHeaders collection
declare::collection! {
pub AcceptedHeaders<H> = Vec<AcceptedHeader<H>>,
derive_id: RELAY_ROOT.derive("accepted"),
consumer require_ticket: LATTICE_READ_TICKET,
writer require_ticket: RelayMember,
}
pub struct AcceptedHeader<H: HeaderDatum> {
pub slot: u64,
pub header: H,
pub bid: u128,
pub proposer: ProposerId,
pub ack_evidence: Vec<u8>, // proposer-signed payload
pub committed_at: UnixSecs,
}
One entry per slot whose header was ack’d by its proposer.
Slots where no proposer acknowledged produce no entry —
operators monitor the gap via
relay_committee_agreement_rate.
Internal plumbing
Derived private network (RELAY_ROOT.derive("private")):
- Sends — per-slot records of each committee member’s proposer-side submission (what header, to which proposer, at what time). Used by other members to cross-check and by the state machine to drive RecordSend commands.
State machine
pub enum Command {
OpenSlot { slot: u64 },
RecordSend(SendRecord),
RecordAck(AckRecord),
RecordTimeout(u64),
CommitAck(u64),
MarkDone(u64),
}
pub enum Query {
OpenSlots,
SendsFor(u64),
AcksFor(u64),
AcceptedFor(u64),
}
Apply semantics
- OpenSlot. Leader-issued when atelier::Candidates[S] appears. Opens per-slot bookkeeping.
- RecordSend. Committee member registers that it sent the candidate’s header to a specific proposer. Validation: slot matches an open slot; proposer is in the policy’s expected set; member is a distinct role (dedupe).
- RecordAck. Committee member registers a proposer ack for slot S. Validation: slot matches; ack signature verifies under the proposer’s published key; the proposer Send record exists for this member.
- RecordTimeout. Slot deadline elapsed without ack. Idempotent; prevents stuck slots.
- CommitAck. When a majority of committee members have RecordAck’d for slot S with consistent ack evidence, the leader issues CommitAck. The apply handler cross-verifies the ack evidence (same proposer, same header hash, similar timestamps) and appends AcceptedHeader to AcceptedHeaders.
- MarkDone. GC post-tally.
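The consistency check CommitAck performs can be sketched as a pure function. This is an illustrative sketch, not the relay::proto wire types: AckEvidence, the field shapes, and acks_consistent are hypothetical names, and "similar timestamps" is modeled as a configurable skew bound.

```rust
// Hypothetical sketch of the CommitAck cross-verification: all names and
// field shapes are illustrative, not the shipped relay::proto types.
#[derive(Clone, PartialEq, Eq, Debug)]
pub struct AckEvidence {
    pub proposer: [u8; 20],
    pub header_hash: [u8; 32],
    pub timestamp_secs: u64,
}

/// True when at least `majority` members reported evidence agreeing on
/// proposer and header hash, with timestamps within `skew_secs` of each other.
pub fn acks_consistent(acks: &[AckEvidence], majority: usize, skew_secs: u64) -> bool {
    if acks.is_empty() || acks.len() < majority {
        return false;
    }
    let first = &acks[0];
    let same_target = acks
        .iter()
        .all(|a| a.proposer == first.proposer && a.header_hash == first.header_hash);
    let min_ts = acks.iter().map(|a| a.timestamp_secs).min().unwrap();
    let max_ts = acks.iter().map(|a| a.timestamp_secs).max().unwrap();
    same_target && max_ts - min_ts <= skew_secs
}
```

A leader would call this over the collected RecordAck evidence before issuing CommitAck; a `false` result leaves the slot open until more acks or a timeout arrive.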
Invariants
- One AcceptedHeader per slot. Enforced in CommitAck.
- Committed ack evidence is consistent across the majority. CommitAck apply rejects if majority RecordAcks disagree on header hash or proposer.
- Monotonic slots. Appends strictly in slot order.
- Timeouts never override a previously committed ack. Once CommitAck(S) has applied, RecordTimeout(S) is a no-op.
Signature versioning
fn signature(&self) -> UniqueId {
let tag = format!(
"relay.v{WIRE_VERSION}.policy={}.endpoints={:x}.committee={}",
self.config.policy.tag(),
blake3(self.config.endpoints.sorted_concat()),
self.config.committee_size,
);
UniqueId::from(tag.as_str())
}
Switching policy (e.g. L1 MEV-Boost to L2 sequencer) changes the fingerprint. Swapping endpoints within the same policy also changes the fingerprint; endpoint rotation is a lattice retirement, not an in-place configuration change.
Proposer-side specialization
relay::endpoints is the boundary between the lattice and the
chain. Each policy implements one trait:
pub trait Endpoint: Send + Sync {
async fn submit(&self, header: &Header, bid: u128) -> Result<AckFuture>;
async fn verify_ack(&self, ack: &AckEvidence) -> Result<ProposerId>;
fn expected_proposer_set(&self, slot: u64) -> Result<Vec<ProposerId>>;
}
- L1MevBoost implements this over the standard MEV-Boost HTTP API, pinning the validator set from the beacon chain via an embedded beacon-node RPC (configured in Config).
- L2Sequencer pairs with one sequencer endpoint; the expected_proposer_set degenerates to a singleton.
- L2LeaderRotation queries the chain’s leader schedule and targets the rotated leader per slot.
Adding a new policy is a relay::endpoints::newpolicy.rs
module plus a Policy::NewPolicy variant plus a signature
format bump.
ACL composition
impl Config {
pub fn member_validator(&self) -> Validators {
let mut v = Validators::stacked()
.with(JwtIssuer::from(self.operator_jwt_key));
if let Some(mrtd) = self.member_mrtd {
v = v.with(Tdx::new().require_mrtd(mrtd));
}
v
}
}
TDX is optional on relay in v1; the rationale is that the
organism’s integrity claim rides on atelier’s aggregate
signature, not on the relay’s own attestation. The proposer
verifies the atelier sig directly.
Trust shape
Any-trust liveness; majority-honest AcceptedHeaders
integrity. See threat-model.md — relay.
Open questions
- Proposer equivocation. The proposer may ack two conflicting headers from different builders. Our AcceptedHeaders captures what we saw; but if the proposer equivocates and we ack a header that is later replaced by another builder’s, tally may misattribute. Detecting equivocation requires cross-builder cooperation; in v1 the lattice trusts the chain’s eventual head and treats our AcceptedHeaders as a local view.
- Flashblocks / pre-confirmations. For lattices targeting L2s with sub-slot cadence, relay needs to commit multiple AcceptedHeaders per slot. Same shape, higher rate. Not yet specced.
- Relay incentives. Nothing in the current shape compensates relay members for their connectivity costs. Refund attribution happens entirely in tally. A relay member share in the refund is a lattice policy decision, not a protocol one.
Cross-references
- The six organisms
- atelier — upstream.
- tally — downstream.
- threat-model.md — relay
- cross-chain.md — L1 / L2 specialization
- Operator runbook
tally — refund accounting
audience: contributors
Proposed source: crates/tally/.
The refund-accounting organism. Watches the chain for
inclusion of the lattice’s winning blocks, joins the upstream
organisms’ public commit logs to compute attribution, commits
one Refunds entry per included block, and publishes ECDSA
Attestations that integrators present to an on-chain
settlement contract.
Tally is the last, non-reversible step in the lattice pipeline; everything upstream of it can fail or degrade gracefully, in which case tally simply does not commit for that slot.
Crate layout
- tally::proto — wire types for attributions, attestations, on-chain evidence. No I/O.
- tally::core — pure attribution computation: given a winning block and all upstream commits, who gets what.
- tally::chain — chain-RPC watcher. One module per supported chain backend.
- tally::node — mosaik integration. TallyMachine, role event loops, attestation signing.
Public facade:
- tally::Tally::<A>::read(&network, &Config) -> Reader<A> — read-side for integrators claiming refunds.
- tally::Tally::<A>::attestations(&network, &Config) -> Attestations — presentable attestations for on-chain settlement.
- tally::Config — pins committee pubkeys, settlement contract address, chain RPC.
Public surface
Refunds collection
declare::collection! {
pub Refunds<A> = Vec<Attribution<A>>,
derive_id: TALLY_ROOT.derive("refunds"),
consumer require_ticket: LATTICE_READ_TICKET,
writer require_ticket: TallyMember,
}
pub struct Attribution<A: AttributionDatum> {
pub slot: u64,
pub block_hash: [u8; 32],
pub recipients: Vec<Recipient>,
pub evidence: Evidence,
pub committed_at: UnixSecs,
}
pub struct Recipient {
pub addr: [u8; 20],
pub amount: u128,
pub kind: RecipientKind,
}
pub enum RecipientKind {
OrderflowProvider { submission: SubmissionRef },
BidWinner { bid: BidRef },
CoBuilder { member: BlsPub },
Proposer { share: ProposerShare },
}
pub struct Evidence {
pub zipnet_broadcasts: Vec<BroadcastRef>,
pub unseal_pool: Vec<UnsealedRef>,
pub offer_outcome: OutcomeRef,
pub atelier_candidate: CandidateRef,
pub relay_accepted: AcceptedRef,
pub on_chain_inclusion: OnChainRef,
}
Attestations collection
declare::collection! {
pub Attestations = Vec<Attestation>,
derive_id: TALLY_ROOT.derive("attestations"),
consumer require_ticket: Open, // world-readable
writer require_ticket: TallyMember,
}
pub struct Attestation {
pub slot: u64,
pub block_hash: [u8; 32],
pub recipient: [u8; 20],
pub amount: u128,
pub kind_digest: [u8; 32],
pub signatures: Vec<(TallyMemberId, Secp256k1Sig)>,
}
Attestations are deliberately world-readable: on-chain settlement contracts verify signatures against published tally member pubkeys, and the attestation’s recipient is already public information (it is the payout address).
Internal plumbing
Derived private network (TALLY_ROOT.derive("private")):
- ChainWatchGossip — per-member observation of on-chain inclusion, reconciled before ObserveInclusion is proposed.
State machine
pub enum Command {
ObserveInclusion(InclusionReport),
ComputeAttribution(AttributionDraft),
CommitRefund(CommitRefund),
SubmitAttestationSignature(AttestationShare),
MarkDone(u64),
}
pub enum Query {
RefundFor(u64),
AttestationsFor(u64),
PendingInclusions,
ChainHeadLag,
}
Apply semantics
- ObserveInclusion. Committee member reports that block block_hash at slot S has been observed on-chain. Validation: the atelier::Candidates[S] with matching hash exists; the relay::AcceptedHeaders[S] references that candidate. First-report-per-slot wins.
- ComputeAttribution. Any committee member computes the deterministic attribution for an observed slot and proposes it. Validation: the draft matches what tally::core::compute produces from the upstream evidence. Duplicate drafts from other members either match (silent dedupe) or reject (integration bug — this is the tally_evidence_failures metric).
- CommitRefund. Leader-issued after ComputeAttribution has majority agreement. The apply handler appends Attribution to Refunds.
- SubmitAttestationSignature. Each committee member signs the committed attribution with its ECDSA key; the signatures land here. When t signatures are present (t from the settlement contract’s threshold, not the Raft majority), the aggregate is appended to Attestations.
- MarkDone. GC after a slot is past the settlement contract’s claim window.
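The SubmitAttestationSignature accumulation step — collect shares per member, deduplicate, release the aggregate once the settlement threshold t is reached — can be sketched as a small accumulator. This is a hedged sketch: TallyMemberId, Sig, and AttestationShares are stand-in names, and no real ECDSA verification happens here.

```rust
use std::collections::BTreeMap;

// Illustrative accumulator for attestation shares. Type names are
// stand-ins; real shares would carry verified secp256k1 signatures.
type TallyMemberId = u32;
type Sig = [u8; 64];

pub struct AttestationShares {
    threshold_t: usize,
    shares: BTreeMap<TallyMemberId, Sig>,
}

impl AttestationShares {
    pub fn new(threshold_t: usize) -> Self {
        Self { threshold_t, shares: BTreeMap::new() }
    }

    /// Record a member's signature. A duplicate submission from the same
    /// member is idempotent. Returns Some(aggregate) once t distinct
    /// member shares are present — t being the settlement contract's
    /// threshold, not the Raft majority.
    pub fn submit(&mut self, member: TallyMemberId, sig: Sig) -> Option<Vec<(TallyMemberId, Sig)>> {
        self.shares.entry(member).or_insert(sig);
        if self.shares.len() >= self.threshold_t {
            Some(self.shares.iter().map(|(m, s)| (*m, *s)).collect())
        } else {
            None
        }
    }
}
```

The BTreeMap keeps the aggregate deterministically ordered by member id, so every committee member assembles byte-identical Attestations entries.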
Attribution algorithm (tally::core::compute)
Deterministic function signature:
pub fn compute(
block: &Candidate<B>,
auction: &CommittedOutcome,
unsealed: &UnsealedRound,
broadcasts: &Broadcasts,
inclusion: &OnChainRef,
policy: &AttributionPolicy,
) -> Attribution<A> { ... }
AttributionPolicy is a per-lattice parameter that folds into
the tally fingerprint. The reference policy (AttributionPolicy::Default)
is:
- Total MEV = block’s coinbase_transfer - baseline_reward.
- Winning searcher gets auction.bid * searcher_share_pct.
- Each wallet whose zipnet submission landed in the block gets (tx_value / total_tx_value) * orderflow_share_pct of the remaining MEV.
- Co-builders (Phase 2) split cobuilder_share_pct equally.
- The proposer gets whatever is left (via the on-chain coinbase transfer; no explicit recipient in Refunds).
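The arithmetic of the default policy can be sketched in a few lines. This is an illustrative reduction of tally::core::compute to the searcher and wallet splits only: DefaultPolicy, split, and the integer-percentage math are assumptions of this sketch, not the shipped API.

```rust
// Hedged sketch of the AttributionPolicy::Default arithmetic; the real
// tally::core::compute consumes the full upstream evidence bundle.
pub struct DefaultPolicy {
    pub searcher_share_pct: u128,  // e.g. 50 for 50%
    pub orderflow_share_pct: u128, // share of the remainder, split by tx value
}

/// Returns (searcher_amount, per-wallet amounts) for one included block.
pub fn split(
    coinbase_transfer: u128,
    baseline_reward: u128,
    winning_bid: u128,
    wallet_tx_values: &[u128],
    policy: &DefaultPolicy,
) -> (u128, Vec<u128>) {
    let total_mev = coinbase_transfer.saturating_sub(baseline_reward);
    let searcher = winning_bid * policy.searcher_share_pct / 100;
    let remaining = total_mev.saturating_sub(searcher);
    let orderflow_pot = remaining * policy.orderflow_share_pct / 100;
    let total_tx_value: u128 = wallet_tx_values.iter().sum();
    let wallets = wallet_tx_values
        .iter()
        .map(|v| if total_tx_value == 0 { 0 } else { orderflow_pot * v / total_tx_value })
        .collect();
    // Whatever is left implicitly stays with the proposer via the
    // on-chain coinbase transfer, matching the reference policy.
    (searcher, wallets)
}
```

Because the function is deterministic over its inputs, every committee member producing a ComputeAttribution draft arrives at the same numbers — the property the invariants below rely on.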
Shares are pinned in the policy and fold into the signature.
Invariants
- One Refunds entry per included slot. Enforced in CommitRefund.
- Attribution is a deterministic function of the upstream evidence plus the policy. Every committee member computes the same draft.
- Attestations are idempotent under committee rotation. An attestation issued by committee set C_k is still valid after C_k retires, as long as the settlement contract’s pubkey list includes C_k’s keys (rotation is a pubkey-set extension, not a replacement).
- Nothing tally commits can be rolled back. Once an Attestation is in the collection, it is meant to land on-chain; post-commit corrections happen via the settlement contract’s dispute mechanism if any, not via a Refunds rewrite.
Signature versioning
fn signature(&self) -> UniqueId {
let tag = format!(
"tally.v{WIRE_VERSION}.policy={}.settlement={:x}.chain_backend={}.committee={}",
self.config.policy.tag(),
self.config.settlement_addr,
self.config.chain_backend.tag(),
self.config.committee_size,
);
UniqueId::from(tag.as_str())
}
Changing the attribution policy shares is a fingerprint change. Changing the settlement contract address is a fingerprint change. Swapping chain backends (e.g. from a full-node RPC to an indexer) is a fingerprint change.
Chain backend
tally::chain abstracts the inclusion watcher:
pub trait ChainBackend: Send + Sync {
async fn head(&self) -> Result<(u64, [u8; 32])>;
async fn block(&self, slot: u64) -> Result<Option<BlockInfo>>;
async fn coinbase_transfer(&self, block: &BlockInfo) -> Result<u128>;
fn tag(&self) -> &'static str;
}
Implementations:
- L1FullNode — Ethereum full-node JSON-RPC.
- L2Rollup — op-node / op-geth dual-source reader.
- Archival — historical indexer for slow back-fill.
A lattice pinning chain_backend: L1FullNode and pointing at
an archival indexer is a configuration mismatch that the
fingerprint catches at deploy time.
ACL composition
impl Config {
pub fn member_validator(&self) -> Validators {
Validators::stacked().with(JwtIssuer::from(self.operator_jwt_key))
}
}
Tally has no TDX requirement in v1. The settlement contract is the ground truth; mis-attestations cannot be paid out. Integrators who distrust tally’s committee can compute attribution themselves from the upstream organisms.
Trust shape
Majority-honest for Refunds and Attestations integrity;
settlement contract is the ultimate arbiter. See
threat-model.md — tally.
Open questions
- Settlement contract interface standardization. Different lattices running different chains will have different contract conventions. Do we ship a reference ABI? Open; the v1 target is one contract per lattice, bespoke to the chain.
- Refund policy extensibility. The AttributionPolicy enum covers a small set of reference splits. Custom policies (e.g. per-bundle cap, per-wallet throttle) need a more expressive policy DSL. Not specced; a lattice with custom needs ships a fork of tally with its own policy variant.
- Cross-lattice attribution. When a bundle spans two lattices (cross-chain backrun), whose tally attributes? v1 answer: each tally attributes the local slice; searchers integrate across lattices in their own agent. The bridge organism shape in cross-chain.md — Shape 3 is the right answer for tighter coupling.
- Reorg handling. If the block we attributed reorgs out, the attestations we emitted are invalid. v1 commits optimistically and relies on the settlement contract’s finality check. A delayed commit (after k confirmations) is safer and trivially added — but changes the SLA tally exposes to integrators.
Cross-references
- The six organisms
- relay — upstream.
- atelier — upstream.
- offer — upstream.
- zipnet — upstream.
- cryptography.md — tally
- threat-model.md — tally
- integrators/refunds.md
- Operator runbook
Composition: how organisms wire together
audience: contributors
Architecture maps the six organisms onto a lattice. Organisms specifies each organism’s public surface. This chapter shows the wiring: which stream and collection subscription drives which organism’s state machine, and where the happy path splits when something upstream fails.
The wiring is intentionally weak. No cross-Group atomicity, no
shared state, no global scheduler. Each organism reacts to its
upstream’s public commits via mosaik’s when() DSL and Collection
/ Stream subscriptions. That is the whole composition model.
The slot as foreign key
Every commit in every organism is keyed by a slot — the chain’s slot number (on L1, the proposer slot; on L2, the sequencer’s block or sub-block index). The slot is the only shared identifier across organisms’ state machines. This is deliberate:
- It is a small, dense, monotonic integer. Cheap to carry in every commit.
- It is picked by the chain, not by any lattice organism. No organism can unilaterally fabricate or reorder slots.
- It is a natural synchronization point: every organism has a well-defined answer to “what slot are we working on?”
Contributors writing new organisms should reuse the slot as the composition key. Composing on anything else — transaction hashes, bundle IDs, block numbers — breaks when chains reorg, re-propose, or split.
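Composing on the slot means a per-slot view of the lattice is just a key lookup in each organism's commit log. A minimal sketch, assuming stand-in string records for the three collections an integrator most often joins:

```rust
use std::collections::BTreeMap;

// Illustrative join of per-organism commits on the slot number. The
// String records stand in for Candidate, AcceptedHeader, and Attribution.
pub fn join_slot(
    slot: u64,
    candidates: &BTreeMap<u64, String>,
    accepted: &BTreeMap<u64, String>,
    refunds: &BTreeMap<u64, String>,
) -> Option<(String, Option<String>, Option<String>)> {
    // A slot with no candidate has no lattice decision at all; everything
    // downstream of the candidate (acceptance, refund) is optional,
    // because a partial path through the pipeline is a valid state.
    Some((
        candidates.get(&slot)?.clone(),
        accepted.get(&slot).cloned(),
        refunds.get(&slot).cloned(),
    ))
}
```

The same lookup would break for transaction hashes or block numbers: a re-proposed slot keeps its slot number but changes both of those, which is exactly why the slot is the foreign key.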
The subscription graph
Each arrow is a subscription: the downstream organism watches the upstream organism’s public collection or stream and reacts in its own state machine. No arrow represents atomic cross-Group commit.
integrators
(wallets, ┌─► zipnet:Broadcasts ────► unseal:UnsealedPool ──┐
searchers) │ │
│ │ ├─► atelier:Candidates ──► relay:AcceptedHeaders ──► tally:Refunds
├──► zipnet:Submit │ │
│ │ │
└──► offer:Bid ────────► offer:AuctionOutcome ─────────────┘ │
│
on-chain inclusion watcher ──────────────────────────────────────────────┘
│
tally:Attestations ──► integrators
Read top to bottom, left to right:
- Wallets submit to zipnet:Submit; searchers submit to offer:Bid. Both are ticket-gated writes onto the lattice’s public surface.
- zipnet finalizes a round and appends to Broadcasts.
- unseal watches zipnet:Broadcasts, combines threshold shares, and writes cleartext into UnsealedPool.
- offer watches UnsealedPool for slot boundaries, runs its sealed-bid auction over bundles that reference the pool, and commits AuctionOutcome.
- atelier watches both UnsealedPool and AuctionOutcome, assembles a candidate block in the TDX committee, and commits Candidates.
- relay watches Candidates, ships headers to the proposer, and commits AcceptedHeaders when the proposer acknowledges.
- An on-chain inclusion watcher watches the chain; when the winning block lands, tally commits Refunds and publishes Attestations.
- Integrators read Refunds and Attestations to claim their share.
Subscription code shape
A contributor implementing one organism writes a role driver in
that organism’s crate. The driver is just a mosaik event loop that
watches an upstream primitive and calls group.execute(...) on
its own Group. The pattern is identical across organisms; the
unseal driver is the simplest and serves as the template:
// Inside unseal::node::roles::server
loop {
// Watch zipnet's public Broadcasts collection for the next
// finalized round.
let next = broadcasts.when().appended().await;
let round = broadcasts.get(next).expect("appended implies present");
// Compute this node's threshold share for the round.
let share = compute_share(&self.share_secret, &round);
// Commit into unseal's own Group.
self.unseal_group
.execute(UnsealCommand::SubmitShare { slot: round.slot, share })
.await?;
}
atelier’s driver is the same shape over UnsealedPool + AuctionOutcome.
tally’s is the same shape over AcceptedHeaders + an on-chain
watcher. In every case the driver ends at group.execute(...) on
its own state machine; the upstream is read-only.
Apply order across organisms
Within one organism, mosaik’s Raft variant guarantees that every committee member applies commands in the same order. Across organisms, no such guarantee exists — each organism’s state machine runs independently. The lattice relies on two properties instead:
- Monotonicity by slot. Within any organism, commits for slot S+1 are never applied before commits for slot S. The organism’s own state machine enforces this in its apply handler by rejecting out-of-order slots.
- Eventual consistency by subscription. A downstream organism’s driver will observe every upstream commit eventually, because mosaik collections are append-only and readers converge. It may observe them out of wall-clock order across organisms, but each organism’s state machine processes them in slot order regardless.
The two properties together are enough to reconstruct a globally consistent view per slot without global atomicity. Integrators wanting “the canonical decision for slot S” read each organism’s per-slot commit independently and join on the slot number.
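The monotonicity property reduces to a small guard in each organism's apply handler. A sketch under assumed names (SlotGate and admit are illustrative, not mosaik API):

```rust
// Hypothetical apply-handler guard enforcing monotonicity by slot.
pub struct SlotGate {
    last_applied: Option<u64>,
}

impl SlotGate {
    pub fn new() -> Self {
        Self { last_applied: None }
    }

    /// Admit a commit for `slot` only if it does not precede a slot we
    /// have already applied. Later slots are admitted even when earlier
    /// ones never committed, so a failed slot never blocks its successors
    /// — the "drainable pipeline" property described below.
    pub fn admit(&mut self, slot: u64) -> bool {
        match self.last_applied {
            Some(prev) if slot < prev => false,
            _ => {
                self.last_applied = Some(slot);
                true
            }
        }
    }
}
```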
What happens when an upstream organism fails
Each row below covers one organism failing or stalling. “Fails” means its committee cannot commit within a slot’s deadline. The columns are the downstream organisms’ observable behaviour.
| Upstream fails | zipnet | unseal | offer | atelier | relay | tally |
|---|---|---|---|---|---|---|
| zipnet | - | no UnsealedPool[S] | AuctionOutcome[S] still commits (bids still valid) | Candidates[S] degrades (no order flow, searcher bids only) | AcceptedHeaders[S] still possible | Refunds[S] attribution set may be empty for slot |
| unseal | unaffected | - | AuctionOutcome[S] still commits | Candidates[S] degrades (bids only) | AcceptedHeaders[S] still possible | Refunds[S] attribution set missing wallet contributions |
| offer | unaffected | unaffected | - | Candidates[S] degrades (no bid included) | AcceptedHeaders[S] still possible | Refunds[S] missing searcher attributions |
| atelier | unaffected | unaffected | unaffected | - | no Candidates[S] to ship; proposer falls back to another builder | no Refunds[S] committed |
| relay | unaffected | unaffected | unaffected | Candidates[S] commits fine | - | no Refunds[S] if block never reaches chain |
| tally | unaffected | unaffected | unaffected | unaffected | unaffected | - |
Two patterns fall out:
- Upstream failures degrade downstream outputs; they do not corrupt them. A missing unseal for slot S produces a Candidates[S] built from searcher bids alone, which is still a well-formed block, just with less order-flow content. The chain still progresses via the proposer’s fallback builder.
- The pipeline is drainable. Failures during slot S do not block slot S+1 — every organism’s state machine accepts new slots without waiting for earlier ones to finalise.
What the composition contract guarantees
Given that every organism commits its own decision for slot S,
the lattice guarantees:
- Deterministic replay. Given the full commit logs of all six organisms for slot S, anyone can recompute every per-organism decision and cross-check tally’s attribution.
- Independent auditability. A consumer that trusts the chain can verify tally’s attestations against on-chain inclusion without trusting any other organism’s commit log directly — the attestation carries the evidence.
- No silent corruption across organisms. The derived id discipline means mis-configured organisms in a lattice produce disjoint IDs and cannot cross-subscribe at all. A half-upgraded lattice is always a ConnectTimeout, never a subtle mis-attribution.
What the contract does not guarantee
- Atomic all-or-nothing inclusion across organisms. Already discussed. A partial path through the pipeline is a valid state.
- Bounded end-to-end latency in the presence of failures. If relay stalls indefinitely, tally’s commit for that slot never happens; no organism-level timeout triggers it. Operators who need bounded end-to-end latency add per-slot deadlines at the tally level (commit an empty Refunds[S] after a timeout rather than blocking).
- Cross-lattice coordination. Out of scope for this chapter; see Cross-lattice coordination.
Contributors implementing a new organism
If you are adding a seventh organism to the lattice, your wiring checklist is:
- Identify the upstream organism(s) you subscribe to. If none — you are a root organism like zipnet, triggered only by integrator input.
- Identify the downstream organism(s) that will subscribe to you. If none — you are a leaf like tally, triggered only by external observers.
- Key every commit by slot. If your organism has a natural sub-slot cadence (per-round inside a slot, per-bundle), commit at the sub-slot cadence but always stamp the owning slot.
- Write one role driver per Group member role. Keep it as a single tokio::select! over the upstream subscriptions and your local timers.
- Write unit tests against in-memory collections (mosaik ships test helpers); integration tests against an in-process lattice of two or three organism Groups.
- Document the subscription contract in your organism’s contributors/composition-hooks.md. Update organisms.md and this page’s subscription graph to include the new organism.
Cross-references
- Architecture — the lattice shape these subscriptions run on.
- The six organisms — each organism’s own public surface that this page’s arrows point at.
- Threat model — how trust assumptions compose across the subscription graph.
- Cross-lattice coordination — what happens when the subscription graph crosses a lattice boundary.
Cross-lattice coordination
audience: contributors
A lattice is one end-to-end block-building deployment for one EVM chain. In practice operators and integrators will want more than one: a mainnet lattice and a testnet lattice, an L1 lattice and an L2 lattice, lattices for sibling rollups that share searchers. This chapter describes how lattices on the same mosaik universe coordinate without giving up the per-lattice trust boundaries.
The reader is assumed to have read topology-intro, architecture, organisms, and composition.
What cross-lattice means
Two lattices on the same mosaik universe are simply two
LatticeConfigs whose instance names differ and whose per-organism
Configs therefore derive disjoint GroupIds, StreamIds, and
StoreIds. Everything the topology intro — Shared
universe page says about zipnet deployments coexisting
applies unchanged to full lattices.
“Cross-lattice coordination” means something stronger: an integrator agent, or an organism inside one lattice, reads from or writes to a second lattice’s public surface, coordinated by slot or by intent.
Three use cases motivate this section:
- Cross-chain searchers. One searcher agent bids on L1 ethereum.mainnet and L2 unichain.mainnet simultaneously. Their bundles may span both chains (sell on L1, buy on L2 in a coordinated pair).
- Cross-chain order flow. A wallet submits an intent on base.mainnet that resolves on multiple chains (swap X on Base, receive Y on OP). The intent needs to reach several lattices’ zipnet / unseal pools.
- Shared tally. Attribution for an MEV-Share-style refund spans multiple lattices (a backrun on L1 attributed partially to an L2-originating order).
None of these require a “cross-lattice Group” or a seventh
organism. They are all implementable as integrators holding
multiple LatticeConfigs, or as organisms reading from adjacent
lattices’ public surfaces. The mosaik universe is shared; the
work is in specifying the integrator / organism’s driver shape.
Shape 1: integrator spans multiple lattices
The simplest shape. The integrator compiles in N LatticeConfigs
and binds organism handles against each from one Arc<Network>.
use std::sync::Arc;
use mosaik::Network;
use builder::{LatticeConfig, UNIVERSE};
const ETH_MAINNET: LatticeConfig = /* ... */;
const UNICHAIN_MAINNET: LatticeConfig = /* ... */;
const BASE_MAINNET: LatticeConfig = /* ... */;
#[tokio::main]
async fn main() -> anyhow::Result<()> {
let network = Arc::new(Network::new(UNIVERSE).await?);
// One searcher, three lattices.
let eth_offer = offer::Offer::<Bundle>::bid(&network, &ETH_MAINNET.offer ).await?;
let uni_offer = offer::Offer::<Bundle>::bid(&network, &UNICHAIN_MAINNET.offer).await?;
let base_offer = offer::Offer::<Bundle>::bid(&network, &BASE_MAINNET.offer ).await?;
// Read outcomes to close the loop.
let eth_wins = offer::Offer::<Bundle>::outcomes(&network, &ETH_MAINNET.offer ).await?;
let uni_wins = offer::Offer::<Bundle>::outcomes(&network, &UNICHAIN_MAINNET.offer).await?;
// The driver pairs cross-chain bundles by correlator and
// submits to both lattices in wall-clock — but NOT in
// consensus. A partial win (one chain accepts, the other
// does not) is a normal outcome the driver must handle.
// ...
Ok(())
}
What integrators get out of this:
- One mosaik endpoint, one DHT record, one gossip loop. Peer discovery is shared across lattices; adding a third lattice costs no additional network resources.
- Independent bidding per lattice. Each lattice’s offer auction commits on its own cadence; the searcher agent can withdraw a partial bid on one lattice if the paired lattice rejects.
- No cross-chain atomic guarantee. The integrator must be willing to tolerate partial outcomes — sell-leg fills on one chain, buy-leg does not on the other. This is a searcher-level risk-management problem, not a protocol-level one. The alternative — cross-lattice atomic commit — is explicitly out of scope for this topology. See topology-intro — No cross-Group atomicity.
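The partial-outcome handling the driver owes its operator can be sketched as a tiny state classifier. Everything here is hypothetical: Leg, PairOutcome, and the u64 correlator are illustrative driver-side bookkeeping, not lattice types.

```rust
use std::collections::HashMap;

// Hypothetical driver-side pairing of cross-chain legs by a
// searcher-chosen correlator id.
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum Leg {
    Won,
    Lost,
}

#[derive(Debug, PartialEq)]
pub enum PairOutcome {
    BothWon,
    /// One chain accepted, the other did not: a normal outcome the
    /// driver must handle (hedge, unwind, or eat the exposure).
    Partial,
    BothLost,
}

pub fn pair(
    l1: &HashMap<u64, Leg>,
    l2: &HashMap<u64, Leg>,
    correlator: u64,
) -> Option<PairOutcome> {
    // None until both lattices have committed an outcome for this pair;
    // the two auctions run on independent cadences.
    match (l1.get(&correlator)?, l2.get(&correlator)?) {
        (Leg::Won, Leg::Won) => Some(PairOutcome::BothWon),
        (Leg::Lost, Leg::Lost) => Some(PairOutcome::BothLost),
        _ => Some(PairOutcome::Partial),
    }
}
```

Note the classifier only reports; what to do on Partial is the searcher's risk-management policy, exactly as the bullet above says.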
Shape 2: organism reads across the lattice boundary
An organism in lattice A subscribes to an organism’s public surface
in lattice B. Concretely: tally in unichain.mainnet reads
zipnet::Broadcasts from ethereum.mainnet so that a refund for
an L2 block can credit an L1-originating transaction.
This is legal under the topology because:
- Both lattices share the same universe, so the subscription just works as a mosaik Collection read against a StoreId derived from the other lattice’s Config.
- The cross-lattice read is authenticated by the other lattice’s ACL. If ethereum.mainnet’s zipnet::Broadcasts is readable by any lattice ticket holder, the unichain.mainnet::tally committee members hold tickets for both lattices and the read succeeds.
- The tally state machine in unichain.mainnet still commits its attribution inside its own Group. The cross-lattice read is input, not commit. No cross-Group atomicity is introduced.
What the organism operator has to wire up:
- unichain.mainnet::tally committee members are admitted to ethereum.mainnet’s zipnet::Broadcasts via a ticket. (Ticket granting is an operator-level agreement between the two lattices’ operators, out of band.)
- unichain.mainnet::tally’s Config folds in the ethereum.mainnet::LatticeConfig fingerprint it reads from. This makes a mis-paired unichain.mainnet::tally — one compiled against a different ethereum.mainnet — derive a different GroupId and fail to bond with its own peers, not silently misattribute.
This is the machinery that lets MEV-Share across chains work without a central coordinator.
Shape 3: a “bridge” organism
When the coordination is symmetric (both lattices read from and write to a shared fact), the right answer is a seventh organism — one that lives outside any lattice and whose role is to provide the shared fact.
Example: a cross-chain intent router organism that sits above
multiple lattices, reads from each one’s zipnet::UnsealedPool,
and commits an IntentRouting collection that tells each
lattice’s offer which cross-chain bundles are admissible.
A bridge organism is exactly one more organism, following exactly the same pattern:
- Its own Config folds in the set of lattices it spans (by LatticeConfig fingerprint).
- Its own GroupId is derived from that.
- Its own public surface is one or two primitives (an Intents stream, an IntentRouting collection).
- Its own committee is an operator-level deployment.
This proposal does not ship a bridge organism. It identifies the shape so that contributors extending the topology do not reinvent a second kind of cross-lattice coordination on top of the ones above.
What is explicitly not supported
Cross-lattice atomic commit
No primitive in this topology offers “commit both slot S on lattice A and slot S’ on lattice B or neither”. Two reasons:
- Mosaik’s Raft variant does not support multi-Group transactions within a lattice, let alone across lattices. Inventing it here would reinvent the protocol we chose not to use at a level that would force every organism’s state machine to participate.
- The business justification is thin. Cross-chain MEV in practice relies on searcher risk management (post-bond, posted collateral, on-chain HTLCs), not on builder-level atomic commit. Lattices provide a faithful pipeline; searchers structure their bundles to tolerate partial fills.
If a concrete use case for cross-lattice atomicity emerges, the right answer is a bridge organism whose state machine implements the atomic commit semantics the use case requires. See the roadmap.
Lattice registries
A cross-lattice directory collection — a Map<LatticeName, LatticeCard>
listing known lattices — is explicitly not a core feature. The
topology intro argument applies unchanged: lattices
are operator-scoped, discovery is a compile-time LatticeConfig
reference, and a registry would add its own ACL problem without
buying anything an out-of-band handshake does not already provide.
A devops-convenience directory may exist; it is never part of the
binding path.
Cross-universe lattices
If two lattices live on different NetworkIds, they are on
different mosaik universes and this chapter does not apply. The
integrator has to hold two Network handles, pay for two
discovery loops, and the shared-universe arguments all invert.
The topology takes no position on whether that is desirable. If
it is, do it; this book does not describe it.
L1 / L2 / rollup specialization
Three chain archetypes the lattice supports without new organisms:
- L1 PBS (Ethereum mainnet). The reference shape. `relay` talks to the validator set's MEV-Boost-compatible endpoints.
- L2 rollup with centralized sequencer. `relay` talks to a single sequencer endpoint instead of a validator set. Operationally, the `relay` organism has one output target; its state machine is otherwise unchanged. `tally` watches the sequencer's canonical chain rather than the L1 for inclusion.
- L2 rollup with decentralized sequencer. `relay` talks to the sequencer's leader rotation. `tally` still watches the canonical chain. The organism interfaces are the same.
In every case the lattice’s LatticeConfig carries a chain_id
and the relay::Config carries a proposer_endpoint_policy
parameter selecting the mode. That is the whole customization
surface.
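A sketch of what that single customization point could look like. The enum name, variants, and `targets` helper are illustrative assumptions, not the shipped `relay::Config` API:

```rust
// Illustrative sketch of proposer_endpoint_policy; names are assumptions.
enum ProposerEndpointPolicy {
    // L1 PBS: fan out to every MEV-Boost-compatible endpoint.
    L1ValidatorSet { endpoints: Vec<String> },
    // L2 centralized sequencer: exactly one output target.
    L2SingleSequencer { endpoint: String },
    // L2 decentralized sequencer: follow the leader rotation per slot.
    L2LeaderRotation { endpoints: Vec<String> },
}

impl ProposerEndpointPolicy {
    // The endpoints a relay member would ship slot `slot`'s header to.
    fn targets(&self, slot: u64) -> Vec<String> {
        match self {
            Self::L1ValidatorSet { endpoints } => endpoints.clone(),
            Self::L2SingleSequencer { endpoint } => vec![endpoint.clone()],
            // Toy leader schedule: round-robin by slot number.
            Self::L2LeaderRotation { endpoints } => {
                vec![endpoints[slot as usize % endpoints.len()].clone()]
            }
        }
    }
}

fn main() {
    let rotation = ProposerEndpointPolicy::L2LeaderRotation {
        endpoints: vec!["seq-a".into(), "seq-b".into()],
    };
    assert_eq!(rotation.targets(4), vec!["seq-a".to_string()]);
    assert_eq!(rotation.targets(5), vec!["seq-b".to_string()]);
}
```

The point of the sketch is the shape: one parameter swaps the output target set, and everything else in the organism stays chain-agnostic.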
Cross-references
- topology-intro — Shared universe
- topology-intro — No cross-Group atomicity
- Roadmap — Cross-lattice atomicity
- Composition — the subscription pattern cross-lattice reads reuse.
Cryptography overview
audience: contributors
This chapter names the cryptographic primitives each organism relies on, names the specific schemes the reference implementation will use, and points at the literature. It does not re-derive any of them.
The reader is assumed to be comfortable with threshold cryptography, authenticated encryption, elliptic-curve DH, and zero-knowledge proofs at a practitioner level.
Shared primitives
The lattice inherits mosaik’s baseline and layers per-organism schemes on top.
- Peer identity. Ed25519 (iroh) for peer-to-peer TLS cert signing and bond authentication. Out of scope here — see the mosaik book.
- Hashing for identifier derivation. blake3 for every `UniqueId`, `GroupId`, `StreamId`, `StoreId` fingerprint. Same discipline zipnet uses; see topology-intro — Within-lattice derivation.
- Serialisation for state-machine commits. postcard (mosaik default). Stable, deterministic, fits in-flight sizing.
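The derivation discipline (id = blake3(intent ‖ content ‖ acl), per the glossary) can be sketched as below. std's `DefaultHasher` stands in for blake3 purely so the example is self-contained; it is not suitable for real identifier derivation:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::Hasher;

// Content + intent addressing sketch. DefaultHasher is a stand-in for
// blake3 (assumption for self-containment), and the 64-bit output is a
// stand-in for the real fingerprint width.
fn derive_id(intent: &[u8], content: &[u8], acl: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    // Length-prefix each field so (a‖b, c) never collides with (a, b‖c).
    for field in [intent, content, acl] {
        h.write_u64(field.len() as u64);
        h.write(field);
    }
    h.finish()
}

fn main() {
    let a = derive_id(b"unseal.v1", b"group", b"committee-acl");
    let b = derive_id(b"unseal.v1", b"group", b"committee-acl");
    let c = derive_id(b"offer.v1", b"group", b"committee-acl");
    assert_eq!(a, b); // deterministic: same inputs, same id
    assert_ne!(a, c); // changing the intent changes the id
}
```

The length prefix is the load-bearing detail: without field framing, two different (intent, content, acl) triples could concatenate to the same byte string.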
zipnet
Inherits zipnet v1 exactly. Summary:
- Pad derivation. Pairwise X25519 ECDH + HKDF-SHA256 + AES-128-CTR, per server per client per round.
- Falsification tag. Keyed blake3 over plaintext + slot.
- Client attestation (v2 TDX path). Intel TDX quote verified via `mosaik::tee::tdx`.
See the zipnet cryptography chapter for the full derivation.
unseal (threshold decryption)
Candidate scheme. Threshold BLS12-381 encryption.
- DKG. Each committee member runs a Pedersen-style distributed key generation at deployment time. The resulting public key is published in `UnsealConfig` and folds into the organism's fingerprint.
- Encryption. Zipnet's client-side seal layer — applied on top of the DC-net payload — encrypts to the unseal committee's public key. Ciphertext is constant-size, same discipline zipnet maintains on wire.
- Share generation. Each committee member computes its share on incoming `zipnet::Broadcast`s, gossips it on the unseal private network, and commits it to `UnsealMachine` once it has seen `t-1` peer shares too (to avoid committing a share that can never be completed on the other side).
- Combine. `UnsealMachine::apply(SubmitShare)` runs the threshold combine when the `t`-th share lands; the cleartext is appended to `UnsealedPool` and the shares are discarded.
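The t-of-n semantics of the combine step can be illustrated with toy Shamir secret sharing: `t` shares reconstruct via Lagrange interpolation at zero, `t-1` do not. This is a teaching sketch over a tiny prime field, not the BLS12-381 threshold scheme the reference implementation will use:

```rust
// Toy Shamir t-of-n reconstruction illustrating the combine semantics.
const P: u128 = 2_147_483_647; // 2^31 - 1, prime

fn pow_mod(mut b: u128, mut e: u128) -> u128 {
    let mut acc = 1;
    b %= P;
    while e > 0 {
        if e & 1 == 1 { acc = acc * b % P; }
        b = b * b % P;
        e >>= 1;
    }
    acc
}
fn inv(a: u128) -> u128 { pow_mod(a, P - 2) } // Fermat inverse, P prime

/// Evaluate the sharing polynomial at x (coeffs[0] is the secret).
fn share(coeffs: &[u128], x: u128) -> u128 {
    coeffs.iter().rev().fold(0, |acc, c| (acc * x + c) % P)
}

/// Combine t shares (x_i, y_i) by Lagrange interpolation at x = 0.
fn combine(shares: &[(u128, u128)]) -> u128 {
    let mut secret = 0;
    for (i, &(xi, yi)) in shares.iter().enumerate() {
        let mut num = 1;
        let mut den = 1;
        for (j, &(xj, _)) in shares.iter().enumerate() {
            if i != j {
                num = num * (P - xj) % P;            // (0 - x_j)
                den = den * ((P + xi - xj) % P) % P; // (x_i - x_j)
            }
        }
        secret = (secret + yi * num % P * inv(den)) % P;
    }
    secret
}

fn main() {
    let secret = 424_242;
    let coeffs = [secret, 17, 91]; // degree 2 polynomial -> t = 3
    let all: Vec<(u128, u128)> = (1..=5).map(|x| (x, share(&coeffs, x))).collect();
    assert_eq!(combine(&all[0..3]), secret); // any 3 of 5 shares suffice
    assert_eq!(combine(&all[2..5]), secret);
    assert_ne!(combine(&all[0..2]), secret); // 2 shares are not enough
}
```

This is also why a member waits for `t-1` peer shares before committing its own: a committed share below the threshold is a fact that can never complete into a cleartext.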
Why BLS12-381. Widely available, small shares, combine is pairing-free on the committee side, standard cryptographic libraries have audited implementations.
Why not post-quantum. Out of scope for v0.3; see Roadmap — Post-quantum unseal.
Literature.
- Shoup, “Practical Threshold Signatures” (EUROCRYPT 2000).
- Boneh, Lynn, Shacham, “Short Signatures from the Weil Pairing” (ASIACRYPT 2001) — for the signing variant that `tally` borrows.
offer (sealed-bid auction)
Candidate scheme. Same threshold encryption primitive as
unseal, parameterised with an auction-specific DKG.
- Sealed bid. Searcher encrypts to the offer committee's threshold public key before submitting. The public key is part of `OfferConfig`.
- Auction close. `OfferMachine::apply(CloseAuction)` triggers a round of share exchange identical to `unseal`'s; the winning bid is recovered inside apply, and no committee member ever sees the losing bids in the clear.
- Fairness. Because bids are threshold-encrypted, a minority of compromised committee members cannot learn losing bids. A majority still can; see threat-model.md — offer.
Why not a commit-reveal scheme. Commit-reveal leaks a linkable commitment on the wire, which a colluding adversary could correlate with an on-chain bid identity. Threshold encryption collapses the reveal step into the committee’s state-machine apply, preserving the anonymity set through the full auction.
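The auction-close semantics can be sketched as follows: the winner is chosen inside the state machine's apply, and only the winning bid leaves it. The struct and function names are illustrative, and the "decryption" is a stand-in closure for the threshold combine the chapter describes:

```rust
// Sketch of auction close; names are assumptions, not the shipped API.
struct SealedBid {
    searcher: &'static str,
    ciphertext: Vec<u8>,
}

struct AuctionOutcome {
    winner: &'static str,
    winning_amount: u64,
    // Losing bids are deliberately absent: they never leave apply.
}

fn apply_close_auction(
    bids: &[SealedBid],
    decrypt: impl Fn(&[u8]) -> u64, // stand-in for the threshold combine
) -> Option<AuctionOutcome> {
    bids.iter()
        .map(|b| (b.searcher, decrypt(&b.ciphertext)))
        .max_by_key(|&(_, amount)| amount)
        .map(|(winner, winning_amount)| AuctionOutcome { winner, winning_amount })
}

fn main() {
    // Toy "encryption": little-endian bytes of the bid amount.
    let seal = |v: u64| v.to_le_bytes().to_vec();
    let open = |c: &[u8]| {
        let mut b = [0u8; 8];
        b.copy_from_slice(c);
        u64::from_le_bytes(b)
    };
    let bids = [
        SealedBid { searcher: "a", ciphertext: seal(70) },
        SealedBid { searcher: "b", ciphertext: seal(120) },
        SealedBid { searcher: "c", ciphertext: seal(95) },
    ];
    let out = apply_close_auction(&bids, open).unwrap();
    assert_eq!(out.winner, "b");
    assert_eq!(out.winning_amount, 120);
}
```

Note that the outcome type carries no losing-bid field at all; the "no reveal step" property is structural, not a matter of committee discipline.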
atelier (co-building)
Candidate scheme. TDX attestation + BLS aggregate signature.
- Admission. Every committee member's `PeerEntry` carries a TDX quote (Intel DCAP). Admission tickets validate the quote's MR_TD against the pinned expected value in `AtelierConfig`.
- Collective signature on `Candidates`. Each committee member signs the candidate block body with their BLS key after the block passes the state machine's validation; the aggregated signature is stored alongside the body in `Candidates`. Proposers can verify this signature to confirm that the lattice committee attested to the block.
- Bundle simulation integrity. Simulations run inside the TDX enclave; committee members compare simulation outputs via `SubmitHint` commits. Divergent simulations are visible in the commit log and trigger alerts.
Why TDX and not SGX / other. Operational availability. TDX
is the hardware mosaik already supports; SGX support lives in
a hypothetical mosaik extension. A future switch to another TEE
changes which require_ticket(...) validator is used, not the
organism’s public surface.
relay (PBS fanout)
Cryptography. Minimal.
- Proposer-side authentication. MEV-Boost-compatible TLS (L1 PBS) or the sequencer’s native auth (L2).
- Member-to-member integrity on `Ship<Header>`. Standard mosaik ticket-gated stream.
No bespoke cryptography. Relay’s integrity claim rides on the
atelier aggregate signature; the proposer verifies that
signature directly.
tally (refund attribution)
Candidate scheme. ECDSA over secp256k1 (the curve Ethereum settlement contracts expect).
- Committee key management. Each tally member holds an ECDSA keypair whose public key is published in `TallyConfig`.
- Attestation. Each `Refunds[S]` commit triggers each committee member to sign the commit; the collection of signatures (or a t-of-n aggregate — TBD at v0.3) is the `Attestations[S]` entry.
- On-chain verification. Settlement contracts verify the attestation signature set against the published committee public keys. A minority of compromised members cannot forge an attestation; a majority can, but the contract can pin a higher threshold than a simple majority if needed.
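The settlement-side check can be sketched as counting signatures that verify against the published committee keys and requiring a pinned threshold. `verify` is a stand-in for the real secp256k1 check, and all names are illustrative:

```rust
// Sketch of threshold verification of an attestation signature set.
// Keys and signatures are modeled as u64s; `verify` stands in for the
// settlement contract's ecrecover-based check (assumption).
fn attestation_ok(
    committee: &[u64],                 // published committee public keys
    signatures: &[(u64, u64)],         // (claimed key, signature)
    verify: impl Fn(u64, u64) -> bool, // stand-in signature check
    threshold: usize,                  // e.g. strictly more than n/2
) -> bool {
    let mut valid = 0;
    let mut seen: Vec<u64> = Vec::new();
    for &(key, sig) in signatures {
        // Count each committee key at most once, and only if it verifies.
        if committee.contains(&key) && !seen.contains(&key) && verify(key, sig) {
            seen.push(key);
            valid += 1;
        }
    }
    valid >= threshold
}

fn main() {
    // Toy rule: a "signature" is valid iff it equals the key plus one.
    let verify = |key: u64, sig: u64| sig == key + 1;
    let committee = [10, 20, 30, 40, 50];
    // One bad signature (30) and one foreign key (60).
    let sigs = [(10, 11), (20, 21), (30, 99), (60, 61)];
    assert!(attestation_ok(&committee, &sigs, verify, 2));
    assert!(!attestation_ok(&committee, &sigs, verify, 3));
}
```

The deduplication line is what makes "pin a higher threshold" meaningful: a single compromised member cannot inflate the valid count by submitting twice.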
Why secp256k1 and not BLS. Settlement contracts on EVM
chains have native ecrecover support; BLS verification is
possible via precompiles (BLS12-381) but more expensive. For
cross-chain settlement to L2s without BLS precompiles, secp256k1
is the safer default. A future tally may ship a BLS variant
parametric in the settlement contract’s verification primitive.
Summary table
| Organism | Primitive | Reference scheme |
|---|---|---|
| zipnet | DC-net pads | X25519 + HKDF-SHA256 + AES-128-CTR |
| zipnet | falsification tag | keyed blake3 |
| unseal | threshold decryption | BLS12-381 threshold |
| offer | sealed-bid encryption | BLS12-381 threshold (offer DKG) |
| atelier | admission | Intel TDX (DCAP) via mosaik |
| atelier | collective attestation | BLS12-381 aggregate signature |
| relay | member auth | mosaik ticket |
| tally | committee attestation | ECDSA over secp256k1 |
Deferred items
- Post-quantum unseal. See Roadmap.
- Re-randomisable bids in `offer` (so that a searcher cannot prove after-the-fact to an external party which bid was theirs). Research-open; not blocking v1.
- Zero-knowledge proof of block validity in `atelier`. Would remove the need for `relay` to carry the collective signature to the proposer; would add a ZK proving cost per slot. Deferred to post-v1 once proving systems are cheaper.
Cross-references
- Threat model — what each scheme protects against and what it does not.
- Organisms — each organism’s public surface that these primitives live behind.
- Roadmap — where the deferred schemes are scheduled.
Threat model
audience: contributors
This chapter restates the per-organism trust assumptions named in Organisms and describes how they compose across a lattice. It is scoped to one lattice; cross-lattice threats live in Cross-lattice coordination.
Mosaik’s own security posture — crash-fault-tolerant Raft, QUIC peer authentication via iroh, ticket-gated bonding — is assumed and not re-derived. Where a claim depends on mosaik, the mosaik book is linked.
Goals and non-goals
Goals.
- Sender-to-transaction unlinkability at the submission and auction layers, up to the trust boundary of `zipnet` + `unseal` + `offer`.
- Non-equivocating candidate blocks: `atelier` commits one body per slot, signed under a TDX-attested collective, so no committee minority can smuggle an alternate block past the other members.
- Auditable attribution: `tally`'s `Refunds` commit is a deterministic function of the upstream commits and on-chain inclusion, and its `Attestations` are presentable to an on-chain settlement contract for independent verification.
Non-goals.
- Byzantine fault tolerance of Raft. mosaik’s variant is crash-fault tolerant. A deliberately compromised committee member within an organism can DoS liveness but cannot forge a commit that does not pass the state machine’s own validation. Each organism spells out what “validation” means in its own crate docs.
- Confidentiality of on-chain effects. Once a `Candidates[S]` block is executed on-chain, the chain reveals whatever that execution reveals. The lattice does not try to hide the economic outcome; it hides the sender-to-transaction linkage up to the moment the chain reveals it.
- Resistance to message-length side channels beyond what `zipnet` already enforces. Integrators that encode variable-size payloads directly into `zipnet`'s fixed-size slots leak size metadata at the application layer. This is the integrator's problem; see zipnet's security checklist.
- Cross-lattice guarantees. See cross-chain.md.
Per-organism trust shapes
Short recap of the assumption each organism operates under. Each organism’s own docs carry the full derivation; this page ties them together.
zipnet
Assumption. Any-trust — anonymity holds as long as at least one committee server is honest. Liveness in v1 requires all servers honest (the zipnet roadmap’s v2 item relaxes this).
Attacker power ceiling. The adversary can control the TEE of every client except one; the aggregator; the network; all but one committee server. Under that ceiling, sender-to-envelope linkability is PRF-indistinguishable.
Breaks when. Every committee server colludes. Every client TEE is compromised (then attestation — the v2 TDX path — no longer admits any client).
See zipnet threat model for the full argument.
unseal
Assumption. t-of-n threshold. Fewer than t committee
members colluding learn nothing about the cleartext of any zipnet
round; t or more colluding members can decrypt at will.
Attacker power ceiling. Control the TEE of up to t - 1
committee members.
Breaks when. An adversary obtains t share secrets. The TDX
attestation on every committee member raises the bar from “I
compromised t operator hosts” to “I compromised t TDX images
and their attestation chains”.
Composition note. Because unseal feeds offer and
atelier, compromising unseal beyond threshold also breaks the
anonymity of order flow to those two organisms. A lattice
operator picking t for unseal is picking the anonymity budget
for the whole lattice.
offer
Assumption. Majority-honest committee. A majority can commit
an AuctionOutcome[S] whose winner is not the highest bid.
Attacker power ceiling. Up to floor(n/2) committee members
compromised. Searchers’ bid anonymity is additionally protected by
the auction’s threshold encryption: a minority of compromised
committee members cannot decrypt losing bids.
Breaks when. A majority of the offer committee colludes
against the searcher set. The fallback is that the on-chain
settlement contract can reject an AuctionOutcome whose evidence
set does not verify.
atelier
Assumption. TDX attestation (hardware root of trust) plus
majority-honest committee. A committee member without a valid
TDX quote on their PeerEntry is not admitted; a minority of
compromised TDX images cannot commit a block body the majority
rejects; a majority can commit an arbitrary block body.
Attacker power ceiling. Compromise fewer than floor(n/2) + 1
TDX images and their attestation chain (including Azure / cloud
provider attestation services for rented TDX hosts).
Breaks when. A majority of the atelier committee’s TDX images
are compromised. At that point the adversary can choose blocks;
downstream relay still surfaces the block body to the proposer,
but the proposer (or the chain) is the last line of defense.
Composition note. A malicious atelier majority cannot
retroactively change offer’s outcome or zipnet’s broadcast;
those logs are authoritative in their own Groups. A malicious
majority can simply choose to exclude a winning bid or include
transactions outside the unsealed pool. Downstream tally
attribution will then reflect the misbehavior — a malicious
atelier cannot hide its actions from the audit log, only choose
what to commit.
relay
Assumption. Any-trust on liveness; majority-honest on the
integrity of AcceptedHeaders. A single honest relay committee
member suffices to ship a header; a majority of dishonest members
can commit an AcceptedHeaders[S] that lies about the proposer’s
acknowledgement.
Breaks when. A majority of the relay committee conspires to
forge a proposer-ack record. The on-chain inclusion watcher in
tally is the ground truth — an AcceptedHeaders[S] that does
not correspond to an included block is a visible discrepancy and
is not used for attribution.
tally
Assumption. Majority-honest committee. A majority can misattribute a refund.
Breaks when. A majority of the tally committee conspires. The on-chain settlement contract can reject attestations whose evidence does not verify, which bounds the attack to “a majority commits a refund attestation the contract later rejects”, not “the refund is paid out”.
How the assumptions compose
The lattice does not require every organism’s trust assumption to hold simultaneously for every property. Different properties depend on different subsets:
| Property | Depends on |
|---|---|
| Sender-to-tx unlinkability | zipnet any-trust AND unseal t-of-n |
| Bid confidentiality until auction ends | offer threshold crypto |
| Winning-bid integrity per slot | offer majority-honest |
| Block-body non-equivocation | atelier TDX + majority-honest |
| Header delivery to proposer | relay any-trust liveness |
| Faithful refund attestation | tally majority-honest AND on-chain inclusion |
| Lattice liveness | Every organism’s liveness condition |
Concretely:
- If `zipnet` is fully honest but `unseal` crosses threshold, order flow is deanonymized to the unseal adversary. `offer` bidding is still confidential to competing searchers.
- If `offer` goes majority-malicious, wrong bundles win, but `atelier` still builds legally and `tally` can attribute from the committed (wrong) `AuctionOutcome`. The integrator can tell from the public logs that `offer` misbehaved.
- If `atelier` goes TDX-compromised, the lattice commits bad blocks but cannot hide the fact of doing so. Proposers can choose a different builder for the next slot; operators retire the `atelier` deployment.
- If `tally` goes majority-malicious, attestations are refused by on-chain settlement contracts; refunds simply do not flow.
This decomposition is the point. A monolithic pipeline forces every user to trust every component’s worst-case failure mode; the lattice lets each property depend only on the organisms that actually produce it.
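The dependency table above can be sketched as predicates over a per-organism compromise state. Field and method names are illustrative, and the on-chain-inclusion input to refunds is elided for brevity:

```rust
// Sketch of the property-dependency table; the table itself is the
// authoritative source, this just makes the decomposition executable.
#[derive(Default)]
struct Compromise {
    zipnet_all_servers: bool, // every zipnet committee server colluding
    unseal_threshold: bool,   // t or more unseal members colluding
    offer_majority: bool,
    atelier_majority: bool,
    tally_majority: bool,
}

impl Compromise {
    fn sender_unlinkability(&self) -> bool {
        // zipnet any-trust AND unseal t-of-n must both hold.
        !self.zipnet_all_servers && !self.unseal_threshold
    }
    fn winning_bid_integrity(&self) -> bool {
        !self.offer_majority
    }
    fn block_non_equivocation(&self) -> bool {
        !self.atelier_majority
    }
    fn faithful_refunds(&self) -> bool {
        // Plus on-chain inclusion, assumed honest here.
        !self.tally_majority
    }
}

fn main() {
    // unseal crosses threshold: unlinkability falls, everything else holds.
    let c = Compromise { unseal_threshold: true, ..Default::default() };
    assert!(!c.sender_unlinkability());
    assert!(c.winning_bid_integrity());
    assert!(c.block_non_equivocation());
    assert!(c.faithful_refunds());
}
```

Each property is a conjunction over only the organisms that produce it, which is exactly the contrast with a monolithic pipeline, where every property would depend on every field.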
What a compromised lattice cannot do
Regardless of how many organisms go bad (short of all of them):
- Cannot change the chain’s head. The lattice produces candidate blocks; the chain’s proposer accepts them or not. Compromising the lattice does not override chain-level validity.
- Cannot mint money. `tally`'s attestations only route MEV that was actually captured on a block; they cannot manufacture payouts. On-chain settlement contracts enforce this.
- Cannot forge an organism's commit log. Mosaik-native collections are append-only and signed by their Group members. A post-hoc rewrite is detectable by any integrator replaying the log.
What a compromised lattice can do
Conservatively:
- Deny service. Every organism’s liveness is a failure mode. A lattice that DoSes itself is possible.
- Bias block content within the atelier trust boundary. A majority-compromised atelier + majority-compromised offer can commit a block whose bundle ordering favours the adversary. The public commit logs record this and it is visible to any integrator; the chain executes it.
- Refuse to refund. A majority-compromised tally can withhold attestations. Searchers with no on-chain recourse get no refund for that slot.
The observability is load-bearing. Integrators — and chain explorers — can monitor the lattice’s public logs and raise objections, switch lattices, or escalate through on-chain governance. A lattice that misbehaves is a lattice whose integrators stop using it.
Operator responsibilities
The trust assumptions above are protocol-level; they hold only if operators run the software the assumptions describe. Every operator running a committee member in any organism is responsible for:
- Running the attested image for organisms with TDX admission (`unseal`, `atelier`, optionally `relay`).
- Protecting committee-admission secrets — the `GroupKey`-equivalent for each organism. Loss of this secret means an adversary can join the committee.
- Rotating on schedule per Rotations and upgrades.
- Reporting anomalies — divergences between organism logs and on-chain reality — via whatever channel the lattice operator has established.
See operators/security-posture.md when that page lands.
Cross-references
- Organisms — per-organism trust sketch.
- Composition — subscription graph used above.
- Roadmap — items that tighten the assumptions (BFT liveness, post-quantum unseal, etc.).
Roadmap
audience: contributors
This roadmap is scoped to the topology, not to any specific organism’s own v2 list. Each organism crate, when it lands, ships its own internal roadmap; the zipnet crate’s Roadmap to v2 is the template.
Items below are ordered by how visibly they affect the external behaviour of a lattice. Items are not engineering-ordered; pick what to implement based on what the first production lattice forces.
v0.1 — Proposal (this document)
Specification complete. No crates shipped. Audience: contributors who want to challenge the shape before implementation starts.
v0.2 — Walking skeleton
Goal: one lattice stands up end-to-end in a single-host integration test and commits one slot across all six organisms.
Minimum content:
- Crate layout. One crate per organism (`builder-zipnet`, …, `builder-tally`) plus a `builder` meta-crate that owns `LatticeConfig`, `UNIVERSE`, and re-exports. Each organism crate follows the zipnet layered shape: `proto/` + role-specific `logic/` + `node/` + `lib.rs` facade + optional `main.rs` operator binary.
- State machines. One `StateMachine` impl per organism, with just enough commands to commit the per-organism facts in Organisms. Implementations are minimum-viable; the threshold-decrypt math in `unseal`, the auction math in `offer`, and the block-assembly logic in `atelier` are all placeholders that write well-typed facts without doing the real cryptography.
- Integration test. `tests/e2e.rs` stands up six in-process Groups, submits one envelope through `zipnet`, and watches it drop out the other end as a `Refunds` entry in `tally`. Modeled on zipnet's `one_round_end_to_end`.
- Book pages for each organism's own contributor doc set.
Non-goals for v0.2:
- Real cryptography in `unseal` and `offer`. The real threshold crypto lands in v0.3.
- TDX admission in `atelier`. Mock attestation is fine for v0.2.
- L2 specialization. L1 PBS only.
v0.3 — Real crypto and TDX admission
Goal: every organism whose security assumption depends on cryptography actually runs that cryptography.
- `unseal` runs a real distributed key generation + threshold-decryption scheme (BLS12-381 threshold BLS is the candidate; CRYPTOGRAPHY will pin one).
- `offer`'s sealed-bid auction is encrypted to the offer committee's threshold key, decrypted in apply at auction close.
- `atelier` committee admission requires valid TDX quotes via `.require_ticket(Tdx::new().require_mrtd(atelier_mrtd))`, same pattern zipnet's v2 enables.
- `tally` attestations are real ECDSA signatures verifiable by an on-chain settlement contract.
v0.4 — L2 specialization and the first sequencer lattice
Goal: stand up a lattice whose relay targets an L2
sequencer endpoint.
- `relay::Config` grows a `proposer_endpoint_policy` parameter selecting { L1 validator set, L2 single sequencer, L2 leader rotation }.
- Reference deployment: a `unichain.testnet` lattice wired to a testnet sequencer. This is the pattern described in Cross-lattice coordination — L1 / L2 / rollup specialization.
- `tally` grows a chain-watcher backend per chain type.
v0.5 — Multi-operator atelier (Phase 2)
Goal: the first lattice where atelier committee members
span more than one operator.
- Operator-onboarding runbook for adding a co-builder. Same pattern BuilderNet uses: MR_TD pin, public roster, cut-over ceremony.
- `atelier::Config` grows a `committee_admission_policy` parameter that lists the admission tickets instead of implying a single-operator default.
- `tally`'s refund attribution carries a per-co-builder share.
v1.0 — First production lattice
Goal: one lattice runs on mainnet continuously for a month with no protocol-level incidents.
- Rolling upgrades. Until this point, “stop the lattice, upgrade every binary, restart” is an acceptable release strategy; at v1.0 it is not.
- Snapshot + state-sync for every organism’s state machine. No mosaik Group is allowed to need a full log replay after a restart in production.
- Full operator runbooks under operators/, graded by severity and audited externally.
- Stable `LatticeConfig` serialization format. `Config::from_hex` is safe to use as the handshake after v1.0.
Longer-term items (no version assigned)
Cross-lattice atomicity
The bridge-organism path described in cross-chain.md — Shape 3. Research-complete, engineering-deferred. The concrete candidate is a cross-chain intent router whose state machine implements HTLC- style atomic commit semantics. Needs a specific use case to force the design.
Byzantine liveness for committee organisms
Mosaik’s Raft variant is crash-fault tolerant. The Roadmap — Liveness resilience item in zipnet applies to every committee organism in the lattice. If mosaik eventually ships a BFT variant, every organism inherits it for free; until then, a deliberately compromised committee member can DoS liveness in its own organism.
Post-quantum unseal
Threshold BLS is not post-quantum. A lattice whose anonymity
matters into the post-quantum era replaces unseal’s primitive
with a lattice-based threshold scheme. The organism surface does
not change; only the crypto inside UnsealMachine does.
Research-complete for several schemes in the literature;
engineering-deferred.
Globally parallel building
Phase 3 in the Flashbots decentralization writeup. The lattice already supports many lattices on one universe; going to “a lattice per region, dynamically joined” is an ops story (peer rotation, dynamic committee rebalance) more than a protocol change. Picked up when the first production lattice wants it.
Optional lattice directory
Not a core feature, same argument as zipnet. A shared
Map<LatticeName, LatticeCard> listing known lattices may
ship as a devops convenience. If built, it must:
- be documented as a convenience, not a binding path;
- be independently bindable — the organism crates never consult it;
- not become load-bearing for ACL or attestation.
Flag in-source as `// CONVENIENCE:` if it lands.
Versioning
Carrying the zipnet policy into the multi-organism world with one adjustment.
- Per organism: lockstep. Operators and integrators cut releases against matching organism crate versions. Organism `StateMachine::signature()` bumps are uncommon in steady state and coordinated.
- Per lattice: version-in-name for the lattice identity. A lattice retirement is an operator-level decision and produces a new instance name (`ethereum.mainnet-v2`). Individual organisms inside a stable lattice name can still upgrade lockstep without retiring the lattice.
Neither policy is load-bearing for v0.2 / v0.3. The first v1.0 release forces the call.
What this roadmap does not include
- Specific crate-versioning schedules. Each organism crate owns its own version cadence.
- Cryptographic parameter choices beyond naming candidates. Those are the crate authors’ calls, made visible via cryptography.md.
- Operator-onboarding contractual detail. Lattice operators run different organisations under different legal regimes; the book does not try to standardise that.
- Marketing. The book’s job is to specify and document; the adoption path is the operators’.
Glossary
audience: all
Domain terms as they are used in this book and in the proposed organism crates. Kept terse; where a term is inherited from mosaik, zipnet, or the block-building literature, this entry points at the authoritative source rather than re-deriving.
Atelier. The TDX-attested co-building organism. Assembles a
candidate block per slot from UnsealedPool cleartext and
AuctionOutcome winner. See
atelier organism spec.
AcceptedHeaders. The append-only collection relay
commits once per slot when a proposer has acknowledged the
candidate’s header.
AuctionOutcome. The append-only collection offer
commits per slot naming the winning bundle of the sealed-bid
auction.
Bridge organism. A hypothetical seventh organism, outside any lattice, that provides a cross-lattice coordination fact (e.g. cross-chain intent routing). See Cross-lattice coordination — Shape 3.
Candidates. The append-only collection atelier commits
per slot containing the assembled block body and the BLS
aggregate signature.
Chain id. EIP-155 chain id; folded into the lattice
fingerprint so a mis-bound cross-chain agent produces
ConnectTimeout rather than a silent wrong-chain operation.
Co-builder. An operator contributing committee members to
atelier (and optionally relay). Multi-operator co-building
is the Phase 2 shape; see
Roadmap — Phase 2.
Content + intent addressing. The discipline every
consensus-critical id in the lattice obeys:
id = blake3(intent ‖ content ‖ acl). Inherited verbatim from
the zipnet design-intro; see
topology-intro — Within-lattice derivation.
Deployment. A single instance of one organism, identified
by its own Config fingerprint. A lattice is a composition of
six deployments under one name.
DKG. Distributed key generation. A one-off ceremony at
lattice bring-up (and at each rotation) that produces an
aggregate public key plus per-committee-member shares. Used by
unseal and offer. See
Cryptography.
Fingerprint. A synonym for the content + intent addressed
id of a Config (lattice or organism). Mismatched fingerprints
are the lattice’s debuggable failure mode.
Integrator. External developer consuming a lattice. See audiences.
Lattice. One end-to-end block-building deployment for one
EVM chain, identified by an instance name. Wires the six
organisms under one LatticeConfig.
LatticeConfig. The parent struct the operator publishes
and the integrator compiles in. Contains the six organisms’
configs plus name + chain id. See
topology-intro — The lattice identity.
MR_TD. Intel TDX measurement register binding a boot image
to the hardware root of trust. Pinned per TDX-gated organism in
the lattice’s LatticeConfig.
Narrow public surface. The discipline of exposing one or two primitives per organism on the shared universe. Inherited from the zipnet design-intro.
Offer. The sealed-bid bundle-auction organism. See offer organism spec.
Operator. Team running a lattice, or one or more organisms inside a lattice. See audiences.
Organism. A mosaik-native service with a narrow public
surface. The six named organisms (zipnet, unseal, offer,
atelier, relay, tally) are the building blocks of a
lattice.
Phase 1 / Phase 2 / Phase 3. The decentralization-progression vocabulary from Flashbots — decentralized building: wat do?. This proposal targets Phase 1 to Phase 2.
Relay. The PBS-fanout organism. Ships atelier::Candidates
to the chain’s proposer or sequencer. See
relay organism spec.
Refunds. The append-only collection tally commits per
slot detailing MEV attribution back to contributing order-flow
providers and searchers.
Slot. The chain’s slot number (L1) or sub-slot index (L2); the foreign key every organism commits against.
StateMachine. Mosaik’s term for the deterministic
command processor inside a Group<M>. Every organism in a
lattice has its own state machine. See
mosaik groups.
Tally. The refund-accounting organism. See tally organism spec.
TDX. Intel Trust Domain Extensions. The TEE technology
mosaik ships first-class support for and which this topology
uses for unseal and atelier admission. See
mosaik TDX.
Unseal. The threshold-decryption organism that unwraps
zipnet::Broadcasts into UnsealedPool. See
unseal organism spec.
UnsealedPool. The append-only collection unseal
commits per zipnet-finalized slot containing the cleartext
order flow for that slot.
Universe. The shared mosaik NetworkId
(builder::UNIVERSE = unique_id!("mosaik.universe")) that
hosts every lattice and every mosaik service that composes
with them.
Zipnet. The anonymous-submission organism. Inherited from the existing zipnet book; this topology consumes it unchanged.
Environment variables
audience: operators
Complete list of environment variables consumed by the organism binaries. Every organism binary also accepts a small set of mosaik-level knobs (discovery, Prometheus bind address); see the mosaik configuration reference.
Status. The names below are the proposed convention; the organism crates pin them when they ship. Treat as the target shape.
Common — every organism
| Variable | Required | Purpose |
|---|---|---|
| `LATTICE_INSTANCE` | yes | The lattice instance name (e.g. `ethereum.mainnet`). |
| `LATTICE_CHAIN_ID` | yes | EIP-155 chain id. |
| `LATTICE_CONFIG_HEX` | yes | Hex-encoded `LatticeConfig`. Folds in every organism's config. |
| `LATTICE_UNIVERSE` | no | Override `builder::UNIVERSE`. Set only for an isolated federation. |
| `PROMETHEUS_ADDR` | no | Bind address for the metrics exporter. Default unbound. |
| `RUST_LOG` | no | Log filter. Default `info`. |
zipnet
Inherits zipnet’s own env var reference; see the zipnet environment variables page. The lattice-specific subset:
| Variable | Required | Purpose |
|---|---|---|
| `ZIPNET_COMMITTEE_SECRET_FILE` | server role | Committee admission secret, per-lattice. |
| `ZIPNET_SECRET_FILE` | server role | Stable peer identity secret, per-member. |
unseal
| Variable | Required | Purpose |
|---|---|---|
| `UNSEAL_COMMITTEE_SECRET_FILE` | yes | Committee admission secret. |
| `UNSEAL_SHARE_SECRET_FILE` | yes | This member's threshold-decryption share. |
| `UNSEAL_SECRET_FILE` | yes | Stable peer identity. |
offer
| Variable | Required | Purpose |
|---|---|---|
| `OFFER_COMMITTEE_SECRET_FILE` | yes | Committee admission secret. |
| `OFFER_SHARE_SECRET_FILE` | yes | This member's offer-DKG share. |
| `OFFER_SECRET_FILE` | yes | Stable peer identity. |
atelier
| Variable | Required | Purpose |
|---|---|---|
| `ATELIER_COMMITTEE_SECRET_FILE` | yes | Committee admission secret. |
| `ATELIER_BLS_SECRET_FILE` | yes | This member's BLS key for collective signature. |
| `ATELIER_SECRET_FILE` | yes | Stable peer identity. |
| `ATELIER_CHAIN_RPC` | yes | Chain RPC for bundle simulation. |
relay
| Variable | Required | Purpose |
|---|---|---|
| `RELAY_COMMITTEE_SECRET_FILE` | yes | Committee admission secret. |
| `RELAY_SECRET_FILE` | yes | Stable peer identity. |
| `RELAY_PROPOSER_ENDPOINTS` | yes | Comma-separated proposer/sequencer endpoints. |
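Since `RELAY_PROPOSER_ENDPOINTS` is the one list-valued variable in the catalogue, here is a sketch of the parsing a relay binary might apply. The `parse_endpoints` helper is hypothetical; a tolerant parse (trimming whitespace, dropping empty entries) keeps a trailing comma from becoming a phantom proposer:

```rust
/// Split a comma-separated endpoints value into individual endpoints,
/// trimming stray whitespace and ignoring empty entries.
/// Illustrative helper, not a shipped API.
fn parse_endpoints(raw: &str) -> Vec<String> {
    raw.split(',')
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .map(str::to_owned)
        .collect()
}
```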
tally
| Variable | Required | Purpose |
|---|---|---|
| `TALLY_COMMITTEE_SECRET_FILE` | yes | Committee admission secret. |
| `TALLY_ECDSA_SECRET_FILE` | yes | This member’s secp256k1 attestation key. |
| `TALLY_SECRET_FILE` | yes | Stable peer identity. |
| `TALLY_CHAIN_RPC` | yes | Chain RPC for on-chain inclusion watcher. |
| `TALLY_SETTLEMENT_ADDR` | yes | The settlement contract attestations target. |
Secrets — always use *_FILE pointers
Every secret variable above uses the `*_FILE` convention: the value is a path to a file containing the secret, not the secret itself. This keeps secrets out of the process environment (where they leak via `/proc/*/environ`) and lets you mount secrets as tmpfs files under your orchestration layer.

Do not set the non-`_FILE` form unless you are running a development smoke test.
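The convention reduces every secret load to the same two steps: read the path from the environment, then read the secret from that path. A minimal sketch, assuming a hypothetical `read_secret` helper (not a shipped API):

```rust
use std::{env, fs, io};

/// Resolve a `*_FILE` secret: the environment variable holds a path,
/// and the file at that path holds the secret.
/// Illustrative helper, not part of any organism crate.
fn read_secret(var: &str) -> io::Result<String> {
    let path = env::var(var)
        .map_err(|_| io::Error::new(io::ErrorKind::NotFound, format!("{var} not set")))?;
    // Trim the trailing newline most secret files carry.
    Ok(fs::read_to_string(path)?.trim_end().to_owned())
}
```

Because the secret only ever lives in process memory and in the (ideally tmpfs-backed) file, nothing secret is visible to tools that dump the environment.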
Cross-references
- Rotations and upgrades — which secrets rotate on which cadence.
- zipnet env reference
- mosaik config reference
Metrics reference
audience: operators
Complete per-organism metrics catalogue. Every metric is emitted via mosaik’s Prometheus exporter with the following common labels:
- `lattice` — `LATTICE_INSTANCE` value.
- `chain_id` — `LATTICE_CHAIN_ID`.
- `organism` — one of `zipnet | unseal | offer | atelier | relay | tally`.
- `role` — the organism-specific role (e.g. `server`, `member`).
- `member` — per-member id, on metrics emitted by a specific committee member.
Status. Names are the proposed convention; organism crates pin them when they ship. The cross-organism labelling scheme is load-bearing and will not change.
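For reference, a sample carrying the common labels renders in the Prometheus text exposition format as sketched below. The `render_sample` helper is illustrative only; in practice the organism crates emit metrics through mosaik’s exporter rather than by hand:

```rust
/// Render one sample in the Prometheus text exposition format with
/// the common lattice labels. Illustrative helper, not a shipped API.
fn render_sample(name: &str, lattice: &str, chain_id: u64, organism: &str, value: f64) -> String {
    // Label values are double-quoted; label pairs are comma-separated.
    format!(
        "{name}{{lattice=\"{lattice}\",chain_id=\"{chain_id}\",organism=\"{organism}\"}} {value}"
    )
}
```

Because every organism stamps the same `lattice`, `chain_id`, and `organism` labels, dashboards and alerts can join series across organisms without per-crate special cases; that is why the labelling scheme is the load-bearing part.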
Lattice-wide
| Metric | Type | Description |
|---|---|---|
| `lattice_id` | gauge | `1`; label `lattice_id_hex` carries the fingerprint. Used as a sanity check that a host is in the intended lattice. |
| `lattice_up` | gauge | `1` when the process is up. |
| `discovery_peers_total` | gauge | Total peers in the discovery catalogue, from mosaik. |
zipnet
Inherits the zipnet metrics reference. Key metrics to cross-reference at the lattice level:
- `zipnet_server_up{member=...}`
- `zipnet_aggregator_up`
- `zipnet_round_commit_latency_seconds`
- `zipnet_broadcasts_appended_total`
- `zipnet_client_submissions_total`
unseal
| Metric | Type | Description |
|---|---|---|
| `unseal_member_up` | gauge | `1` when the unseal member is up. |
| `unseal_member_tdx_attested` | gauge | `1` when the member’s TDX quote is valid. |
| `unseal_committee_size` | gauge | Bonded committee member count, from this member’s view. |
| `unseal_shares_submitted_total` | counter | Rate of threshold-share submissions. |
| `unseal_decrypt_latency_seconds` | histogram | Time from `zipnet::Broadcasts` append to `UnsealedPool` commit. |
| `unseal_pool_committed_total` | counter | Rate of `UnsealedPool` commits. |
| `unseal_upstream_peers{source=...}` | gauge | Bonded peers on upstream subscriptions. |
offer
| Metric | Type | Description |
|---|---|---|
| `offer_member_up` | gauge | `1` when the offer member is up. |
| `offer_committee_size` | gauge | Bonded committee size. |
| `offer_bids_received_total` | counter | Rate of sealed bids accepted. |
| `offer_bids_rejected_total{reason=...}` | counter | Bids rejected (bad size, wrong slot, bad ACL). |
| `offer_auction_commit_latency_seconds` | histogram | Time from slot open to auction-commit. |
| `offer_winner_bid_wei` | histogram | Distribution of winning bid amounts. |
| `offer_upstream_peers{source=...}` | gauge | Bonded peers on upstream subscriptions. |
atelier
| Metric | Type | Description |
|---|---|---|
| `atelier_member_up` | gauge | `1` when the atelier member is up. |
| `atelier_member_tdx_attested` | gauge | `1` when the member’s TDX quote is valid. |
| `atelier_committee_size` | gauge | Bonded committee size. |
| `atelier_candidates_committed_total` | counter | Rate of `Candidates` commits. |
| `atelier_candidate_build_latency_seconds` | histogram | Per-slot build latency. |
| `atelier_simulation_divergence_total` | counter | Committee simulation disagreements. |
| `atelier_simulation_input_hash` | gauge | Label-only: per-slot simulation input hash. |
| `atelier_upstream_peers{source=...}` | gauge | Bonded peers on upstream subscriptions. |
relay
| Metric | Type | Description |
|---|---|---|
| `relay_member_up` | gauge | `1` when the relay member is up. |
| `relay_committee_size` | gauge | Bonded committee size. |
| `relay_headers_sent_total{proposer=...}` | counter | Rate of headers shipped to each proposer. |
| `relay_proposer_ack_latency_seconds` | histogram | Round-trip to proposer. |
| `relay_accepted_headers_committed_total` | counter | Rate of `AcceptedHeaders` commits. |
| `relay_committee_agreement_rate` | gauge | Fraction of slots the committee agreed on ack. |
| `relay_on_chain_mismatches_total` | counter | Incremented by downstream tally when an ack did not land on chain. |
| `relay_upstream_peers{source=...}` | gauge | Bonded peers on upstream subscriptions. |
tally
| Metric | Type | Description |
|---|---|---|
| `tally_member_up` | gauge | `1` when the tally member is up. |
| `tally_committee_size` | gauge | Bonded committee size. |
| `tally_blocks_attributed_total` | counter | Rate of `Refunds` commits. |
| `tally_attestation_latency_seconds` | histogram | Time from on-chain inclusion to `Attestations` commit. |
| `tally_evidence_failures_total` | counter | Attribution attempts that failed upstream-evidence join. |
| `tally_chain_rpc_lag_seconds` | gauge | Lag between chain head and this member’s RPC feed. |
| `tally_refund_amount_wei` | histogram | Distribution of per-recipient refund amounts. |
| `tally_upstream_peers{source=...}` | gauge | Bonded peers on upstream subscriptions. |
Cross-references
- Monitoring and alerts — the red-line subset of the above.
- Incident response — runbooks keyed to the specific alerts.
- zipnet metrics
- mosaik metrics