Designing block-building topologies on mosaik
audience: contributors
This is the design-intro chapter. It extends the pattern in the zipnet book’s Designing coexisting systems on mosaik from a single organism (anonymous broadcast) to a composition of organisms that together form a block-building pipeline for an EVM chain.
The reader is assumed to have read the mosaik book, the zipnet book, and in particular the zipnet design-intro. This page does not re-derive content + intent addressing, the narrow-public-surface discipline, or the shared-universe model. It uses them.
The problem, restated for a pipeline
Zipnet is one organism. A block-building pipeline is not — it’s a half-dozen services that must agree on who submitted what, what got auctioned, what got built, who won, and how value flows back to the order-flow providers.
The naïve way to build this on mosaik: one giant Group with one giant state machine whose commands cover every stage of the pipeline. This fails in the first hour of design review. The auction has a different trust model than the builder; the builder has different TEE posture than the relay; the submission layer has stricter rate-limiting than the refund accounting. A single state machine collapses five trust boundaries into one, with the worst of each.
The right decomposition is one organism per trust boundary, each following the zipnet pattern individually, with the whole set yoked together under one lattice identity.
Two axes of choice, revisited
These are the same two axes zipnet picked among. Both answers inherit the zipnet conclusion, adjusted for composition.
- Network topology. Does a lattice live on its own NetworkId, or share a universe with every other mosaik service?
- Composition. How do the six organisms inside one lattice reference each other without relying on cross-Group atomicity that mosaik doesn’t support?
This proposal picks shared universe + within-lattice derivation + no cross-Group atomicity. The three choices are independent and each has a narrow, defensible rationale.
Shared universe
builder::UNIVERSE = unique_id!("mosaik.universe") — the same
constant zipnet uses. Every lattice, every organism, every
integrator agent lives on it. Different lattices coexist as overlays
of Groups, Streams, and Collections distinguished by their
content + intent addressed IDs. An integrator that cares about
three lattices holds one Network handle and three LatticeConfigs.
The alternative — one NetworkId per lattice, the way Shape A in
the zipnet book laid out — was rejected for the same reason it was
rejected there: operators already run many services (zipnet alone
might host three deployments; searchers bid across chains; tally
aggregates across multiple lattices). Paying for one mosaik endpoint
per lattice is a bad trade when the services want to compose.
What the shared universe costs us: noisier discovery gossip and
larger peer catalogs. The escape hatch for genuinely high-frequency
internal traffic (aggregator fan-in inside atelier, threshold-share
chatter inside unseal) is a derived private network keyed off the
organism’s Config. Public surfaces stay on the universe.
Within-lattice derivation
A lattice Config is a parent struct. Each of the six organisms
has its own nested Config that derives from the lattice’s root
UniqueId. A contributor writing a new organism derives its IDs
like this:
LATTICE = blake3("builder|" || instance_name || "|chain=" || chain_id)
ZIPNET = LATTICE.derive("zipnet") // root for zipnet's Config
UNSEAL = LATTICE.derive("unseal")
OFFER = LATTICE.derive("offer")
ATELIER = LATTICE.derive("atelier")
RELAY = LATTICE.derive("relay")
TALLY = LATTICE.derive("tally")
Each organism’s own Config — when hashed to produce its
GroupId / StreamId / StoreId — folds in the organism root
above plus the organism’s own content parameters plus the organism’s
ACL. The full identity for, say, atelier’s committee group in the
ethereum.mainnet lattice is:
atelier_root = LATTICE(ethereum.mainnet).derive("atelier")
atelier_committee = blake3(
atelier_root
|| atelier_content_fingerprint // tx batch size, block template
// schema, gas-limit window, etc.
|| atelier_acl_fingerprint // TDX MR_TDs pinned
).derive("committee")
Two lattices with the same atelier parameters but different
instance names derive disjoint committee groups. Two atelier
deployments under the same lattice name but with different
parameters also derive disjoint groups. The failure mode is
ConnectTimeout, not split-brain Raft — same as zipnet.
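The disjointness argument can be sketched in a few lines. This is a toy model only: std’s DefaultHasher stands in for blake3, and the UniqueId / derive names are simplified stand-ins for the real derivation scheme, not the mosaik API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy stand-in for the real blake3-based id. Illustrative only.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
struct UniqueId(u64);

impl UniqueId {
    /// Derive a child id from this root and a label, mirroring
    /// `LATTICE.derive("atelier")` in the scheme above.
    fn derive(&self, label: &str) -> UniqueId {
        let mut h = DefaultHasher::new();
        self.0.hash(&mut h);
        label.hash(&mut h);
        UniqueId(h.finish())
    }
}

/// Root id from instance name and chain id, mirroring
/// blake3("builder|" || instance_name || "|chain=" || chain_id).
fn lattice_root(instance_name: &str, chain_id: u64) -> UniqueId {
    let mut h = DefaultHasher::new();
    "builder|".hash(&mut h);
    instance_name.hash(&mut h);
    "|chain=".hash(&mut h);
    chain_id.hash(&mut h);
    UniqueId(h.finish())
}
```

Two lattices differing in any root input derive disjoint ids for every organism underneath, so a misconfigured peer simply never finds the group it dials.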
No cross-Group atomicity
Mosaik does not provide multi-Group transactions and this topology
does not try to invent them. Concretely: “the same command
atomically commits to offer and atelier” is not something this
proposal supports. The organisms coordinate through the same pattern
zipnet’s own internal primitives coordinate: one organism writes to
its public surface (a stream or a collection), the next organism
subscribes and reacts.
This is a load-bearing constraint and the biggest single difference from a monolithic builder:
- offer commits a winning bundle bid. atelier subscribes to offer’s outcome stream and reacts by including the bid’s transactions in a candidate block. There is no atomic “win + build” transaction.
- atelier commits a winning candidate block. relay subscribes and ships the header. There is no atomic “build + broadcast” span.
- relay observes a proposer accepting the header. tally subscribes to that event and commits refund attributions. There is no atomic “propose + refund” transaction.
What we lose: the strongest-possible consistency across the
pipeline. If atelier commits a block that relay never manages to
broadcast (liveness failure in relay), the block simply doesn’t
reach the proposer; tally sees no successful-broadcast event and
no refund is issued. That is a clean, debuggable failure. A
monolithic atomic pipeline would either have to resolve that failure
inside consensus (expensive and complicating the state machine) or
silently paper over it (which is worse).
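The subscribe-and-react contract can be illustrated with ordinary channels standing in for mosaik streams. Everything here is hypothetical scaffolding (WinningBid, CandidateBlock, the mpsc wiring); it only demonstrates the shape: the downstream organism reacts to committed upstream events, and a dead downstream is a clean liveness failure, not a broken transaction.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical event types; the real organisms carry richer payloads.
#[derive(Debug, Clone, PartialEq)]
struct WinningBid { bundle: &'static str }

#[derive(Debug, Clone, PartialEq)]
struct CandidateBlock { includes: &'static str }

/// atelier's reaction to offer's outcome stream: build a candidate
/// block from each winning bid. There is no atomic "win + build" span;
/// if the downstream channel (stream) is gone, the bid simply never
/// becomes a block, and no partial state is left behind.
fn atelier_stage(
    outcomes: mpsc::Receiver<WinningBid>,
    blocks: mpsc::Sender<CandidateBlock>,
) {
    for bid in outcomes {
        let block = CandidateBlock { includes: bid.bundle };
        if blocks.send(block).is_err() {
            break; // downstream (relay) gone: clean liveness failure
        }
    }
}
```

Each stage of the pipeline has this same shape; only the event types and the reaction differ.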
What we gain: each organism’s state machine is simple enough to
reason about in isolation. atelier doesn’t need to understand
refund math. tally doesn’t need to understand TDX image builds.
Each organism can decentralize at its own pace.
The lattice identity
A lattice is identified by a LatticeConfig that folds every
root input into one deterministic fingerprint. Operators publish
the LatticeConfig the same way zipnet operators publish a
zipnet::Config; integrators compile it in.
pub struct LatticeConfig {
/// Short, stable, namespaced name chosen by the operator.
/// e.g. "ethereum.mainnet", "unichain.mainnet", "base.testnet".
pub name: &'static str,
/// EVM chain id. Folded into the fingerprint so a "mainnet"
/// instance on the wrong chain is a `ConnectTimeout`, not a
/// silent cross-chain mis-bond.
pub chain_id: u64,
/// Each organism's own Config. All six MUST be present; a
/// partial lattice is not a lattice.
pub zipnet: zipnet_organism::Config,
pub unseal: unseal::Config,
pub offer: offer::Config,
pub atelier: atelier::Config,
pub relay: relay::Config,
pub tally: tally::Config,
}
impl LatticeConfig {
pub const fn lattice_id(&self) -> UniqueId { /* blake3 of the above */ }
}
An integrator binds to the lattice by passing the LatticeConfig
into whichever organism handles they need:
const ETH_MAINNET: LatticeConfig = LatticeConfig { /* operator-published */ };
let network = Arc::new(Network::new(builder::UNIVERSE).await?);
let submit  = zipnet::Zipnet::<Tx2718>::submit(&network, &ETH_MAINNET.zipnet).await?;
let bid     = offer::Offer::<Bundle>::bid(&network, &ETH_MAINNET.offer).await?;
let blocks  = relay::Relay::<Block>::watch(&network, &ETH_MAINNET.relay).await?;
let refunds = tally::Tally::<Attribution>::read(&network, &ETH_MAINNET.tally).await?;
Each organism exposes typed free-function constructors in the same
shape zipnet ships (Organism::<D>::verb(&network, &Config)). Raw
mosaik IDs never cross the organism crate boundary.
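A sketch of that discipline, with hypothetical names (the real offer crate’s API is richer, and the ID derivation here is a DefaultHasher toy, not blake3): the raw stream ID is derived inside the crate and never crosses the boundary; integrators only ever hold the typed handle.

```rust
// Hypothetical sketch of the constructor discipline; names are
// illustrative, not the real offer crate API.
mod offer {
    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    pub struct Config {
        pub instance: &'static str,
    }

    // Raw mosaik-style ID: private to the crate, never exported.
    #[allow(dead_code)]
    struct StreamId(u64);

    // Typed handle integrators hold. It exposes behaviour, not IDs.
    pub struct BidHandle {
        #[allow(dead_code)]
        stream: StreamId,
        instance: &'static str,
    }

    impl BidHandle {
        pub fn instance(&self) -> &'static str {
            self.instance
        }
    }

    /// Typed free-function constructor in the Organism::verb shape.
    /// The stream ID is folded from the Config and stays private.
    pub fn bid(config: &Config) -> BidHandle {
        let mut h = DefaultHasher::new();
        config.instance.hash(&mut h);
        "offer|bids".hash(&mut h);
        BidHandle {
            stream: StreamId(h.finish()),
            instance: config.instance,
        }
    }
}
```

Because StreamId is private, an integrator cannot construct or exfiltrate a raw ID even by accident; the crate boundary enforces the convention.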
A fingerprint convention, not a registry
Same discipline as zipnet:
- The operator publishes the LatticeConfig struct (or a serialised fingerprint) as the handshake.
- Consumers compile it in.
- If TDX-gated, the operator also publishes the committee MR_TDs for every organism that gates its admission on TDX attestation (unseal, atelier, optionally relay).
- There is no on-network lattice registry. A directory collection listing known lattices may exist as a devops convenience for humans; it is never part of the binding path.
Typos in the instance name, the chain id, or any organism parameter
surface as Error::ConnectTimeout, not “lattice not found”. The
library cannot distinguish “nobody runs this” from “operator isn’t
up yet” without a registry, and adding one would be lying with an
error enum.
The six organisms
This topology ships six organisms. Each is specified on a dedicated page:
| Organism | Role | Trust shape |
|---|---|---|
| zipnet | anonymous submission of sealed tx / intents | any-trust (one honest server) |
| unseal | threshold decryption of zipnet broadcasts | t-of-n threshold |
| offer | sealed-bid bundle auction for searchers | majority-honest committee |
| atelier | TDX-attested candidate block assembly | TDX attestation + majority-honest |
| relay | PBS-style header fanout to proposers / sequencers | any operator suffices for liveness |
| tally | order-flow attribution and refund accounting | majority-honest committee |
Why six and not four or ten:
- Submission, auction, building, relay, accounting. That’s the natural PBS decomposition.
- zipnet and unseal split the submission layer because anonymous broadcast and threshold decryption are different trust models and should not share a Group.
- relay is not folded into atelier because relay liveness matters under different failure modes (proposer-side connectivity) than atelier liveness (builder-side compute). Folding them would couple the trust models unnecessarily.
- tally is not folded into atelier or relay because refund accounting is a publicly verifiable audit trail, and its readers (searchers, order-flow providers, chain explorers) should not need to bond into a TEE-gated building committee to read it.
Further decomposition — e.g. splitting atelier into a bundle
sorter + a block executor — is an implementation concern for the
atelier crate, not a new organism. The organism boundary is drawn
at trust and interface seams, not at internal function.
See The six organisms for each organism’s public surface in detail, and Composition: how organisms wire together for the flow diagrams.
The three conventions (inherited)
Every organism in this proposal reproduces the three zipnet conventions verbatim. Briefly, so contributors writing a seventh organism have the list to hand:
- Identifier derivation from the organism’s Config fingerprint. Every public ID descends from one root (the organism’s piece of the lattice) and from the organism’s own content + intent + ACL hash.
- Typed free-function constructors. Organism::<D>::verb(&network, &Config) returning typed handles. Raw IDs never leak across the crate boundary.
- Fingerprint, not registry. The Config (plus the datum schema where applicable) is the complete handshake. Consumers compile it in; there is no on-wire discovery of the organism.
Details and rationale — exactly as in the zipnet design-intro — apply without change.
What the lattice pattern buys
- An integrator’s mental model collapses to: one Network, one LatticeConfig per lattice, typed handles on each organism they consume.
- Operators decentralize at their own pace. A lattice can run with one operator for every organism (Phase 1), then move atelier to a multi-operator committee without touching the other five (Phase 2), then peel off cross-chain offer subscriptions without touching anything (Phase 3).
- Each organism can be replaced without touching the rest. The contract between organisms is the public stream/collection surface, not shared state. A team unhappy with offer’s auction rules can ship an alternative offer implementation that reads from the same unseal outcome and writes to the same surface atelier subscribes to.
- Multiple lattices coexist trivially. Same argument zipnet made, scaled up: each lattice derives disjoint IDs on every organism; the shared universe’s peer catalog grows but does not fragment.
- ACL is per-organism, per-lattice. unseal can pin one MR_TD while atelier pins another, under the same lattice name, without either organism having to know about the other’s image.
Where the pattern strains
Three pain points a contributor extending this should be honest about up front. The first two carry over from zipnet; the third is specific to composing multiple organisms.
Cross-organism atomicity is out of scope
As stated above: there is no way to atomically commit across two organisms’ Raft groups. If a use case genuinely needs that — rare, but real for some coordination-heavy cases — the right answer is a seventh primitive that is itself a deployment providing atomic composition across two specific organisms, not an ad-hoc cross-Group protocol. “Cross-chain atomic bundle” is the motivating candidate and is explicitly in the v2 column in Roadmap.
Versioning under stable lattice names
Same issue zipnet flagged: if an organism’s
StateMachine::signature() changes, its GroupId in that lattice
changes, and integrators compiled against the old code silently
split-brain. With six organisms in a lattice, the blast radius of a
single signature() bump is six times larger than zipnet’s.
The two reconciliation strategies zipnet’s roadmap listed apply here
too: version-in-name (ethereum.mainnet-v2) or lockstep releases
of a shared lattice crate. This proposal recommends lockstep for
each organism, version-in-name for the lattice, on the grounds
that a lattice-wide version bump is rare (it only happens when the
operator decides to retire the lattice identity) while organism
bumps happen every time an organism ships a breaking change. See
Roadmap — Versioning.
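The version-in-name strategy amounts to folding the version into the instance name, so a retired lattice identity derives a wholly disjoint id and old integrators fail with ConnectTimeout rather than split-braining. A toy sketch, with DefaultHasher standing in for blake3 and the helper names invented for illustration:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy fingerprint stand-in for the blake3-based lattice id.
fn lattice_id(instance_name: &str, chain_id: u64) -> u64 {
    let mut h = DefaultHasher::new();
    "builder|".hash(&mut h);
    instance_name.hash(&mut h);
    "|chain=".hash(&mut h);
    chain_id.hash(&mut h);
    h.finish()
}

/// Version-in-name: retiring a lattice identity is just a rename,
/// e.g. "ethereum.mainnet" -> "ethereum.mainnet-v2".
fn versioned(name: &str, version: u32) -> String {
    format!("{name}-v{version}")
}
```

Organism-level bumps, by contrast, change the organism’s signature and hence its GroupId under the same lattice name, which is why lockstep releases are the recommendation at that layer.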
Noisy neighbours across lattices
Six organisms per lattice times N lattices per universe is a
multiplier on catalog size and /mosaik/announce volume. Zipnet’s
escape hatch — derived private networks for internal chatter —
applies per-organism, but the public surfaces (one or two
primitives per organism) stay on UNIVERSE. If a specific lattice’s
traffic would dominate the shared universe, it belongs behind its
own NetworkId — Shape A in the zipnet design-intro — not on the
shared universe. This is the correct call for an isolated
federation; it is not the default.
Checklist for a new organism
When adding a seventh organism to the lattice — or when standing up an organism that composes with the lattice without being part of it — use this list:
- Identify the one or two public primitives. If you cannot, the interface is not yet designed.
- Pick an organism root: unique_id!("your-organism"), chained under LATTICE.derive("your-organism") if you intend to be part of a lattice.
- Define the Config fingerprint inputs: what instance_name means in the context of your organism, what content parameters affect the state machine signature, what ACL composition you pin.
- Write typed constructors (Organism::<D>::verb(&network, &Config)) that every integrator uses. Never export raw StreamId / StoreId / GroupId values across the crate boundary.
- Decide which internal channels, if any, move to a derived private Network. Default: only high-churn ones (aggregation, gossip).
- Specify TicketValidator composition on the public primitives. ACL lives there.
- Document which other organisms read from or write to your organism’s public surface. This is the composition contract; changes to it touch composition.md.
- Call out your versioning story before shipping. If you cannot answer “what happens when my state machine signature bumps?”, you will regret it.
- Answer: does your organism add meaningfully to the lattice, or is it an implementation detail of an existing organism in disguise? If the latter, fold it in.
Cross-references
- Architecture of a lattice — the concrete instantiation of the pattern for the six organisms.
- The six organisms — per-organism surfaces.
- Composition — flow diagrams and the apply order across organisms.
- Cross-lattice coordination — how lattices on different chains cross-subscribe without cross-Group atomicity.
- Threat model — per-organism trust assumptions and how they compose.
- Roadmap — versioning, cross-lattice atomicity, L2-sequencer specialization.