# Lattice overview

*Audience: operators*
A lattice is one end-to-end block-building deployment for one EVM
chain. You — the operator — stand up the six organisms that make
up the lattice, publish a LatticeConfig for integrators to
compile in, and keep the whole thing running against the
chain’s cadence. This page is the architectural orientation; the
runbook is Quickstart — stand up a lattice.
## One-paragraph mental model
A mosaik universe is a single shared `NetworkId` (`builder::UNIVERSE = unique_id!("mosaik.universe")`) that hosts every lattice and every other mosaik service. Your job as the lattice operator is to stand up an instance under a name you pick (e.g. `ethereum.mainnet`, `base.mainnet`) and keep it running. A lattice is a composition of six organisms — zipnet, unseal, offer, atelier, relay, tally — each of which is itself a mosaik-native service. External integrators bind to your lattice by pinning a `LatticeConfig` (the six organism configs under one name) and opening typed handles against it — they compile the fingerprint in from their side, so there is no on-network registry to publish to and nothing to advertise. Your servers simply need to be reachable.
## Who runs a lattice

Typical operator shapes:

- A rollup team wanting a builder pipeline for their own L2. One operator runs every organism for one lattice.
- An MEV coalition hosting a shared builder for a group of rollups or an L1. Multiple operators each run committee members in the `atelier` organism (and optionally `offer`/`relay`); one of them is the lattice’s authoritative steward of the `LatticeConfig`.
- A chain foundation running a reference lattice for their chain. Same single-operator shape as the rollup team, with public participation in the co-builder role.
- A research / testnet lattice. Small, loose, development-grade. Single operator, all organisms on one or two hosts.
The default pattern in this book is one lattice operator who runs every organism. Multi-operator co-building is covered in Roadmap — Phase 2.
## Six organisms, one pipeline
Each organism is a distinct piece of infrastructure with its own trust model, hardware profile, and rotation cadence. The table below is the shortcut reference; per-organism runbooks live on their own pages.
| Organism | Committee size (v1) | TDX required | Hardware | Rotation cadence |
|---|---|---|---|---|
| zipnet | 3–7 servers + 1 agg | v2 | modest cloud | quarterly |
| unseal | 3–7 members | yes | TDX-enabled hosts | quarterly |
| offer | 3–5 members | optional | modest cloud | monthly |
| atelier | 3–7 members | yes | TDX hosts, high RAM | monthly |
| relay | 3–5 members | optional | well-connected cloud | weekly |
| tally | 3–5 members | no | modest cloud | monthly |
“v1” means the first shipped version of the organism crate;
sizes are recommended, not enforced by protocol. A lattice
running 5 unseal members with t=3 threshold is a different
fingerprint from one running 7 with t=4.
## What every host in your lattice needs
Regardless of organism role:
- Outbound UDP to the internet (iroh/QUIC transport) and to mosaik relays.
- A few MB of RAM beyond whatever the organism itself consumes.
- A clock within a few seconds of the universe consensus (Raft tolerates skew but not arbitrary drift).
- `LATTICE_INSTANCE=<name>` set to the same instance name on every node in that lattice (e.g. `ethereum.mainnet`).
- `LATTICE_CHAIN_ID=<id>` set to the EIP-155 chain id the lattice services.
See Environment variables for the complete list.
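The boot-time discipline these two variables imply can be sketched in a few lines. This is illustrative only, not the mosaik boot path: `parse_lattice_env` is a hypothetical helper, assumed here so the check is testable without touching process-global state.

```rust
use std::env;

/// Hypothetical boot-time check: every process in a lattice must carry
/// the same LATTICE_INSTANCE and a parseable EIP-155 LATTICE_CHAIN_ID.
fn parse_lattice_env(
    instance: Option<String>,
    chain_id: Option<String>,
) -> Result<(String, u64), String> {
    let instance = instance.ok_or("LATTICE_INSTANCE is not set")?;
    let chain_id = chain_id
        .ok_or("LATTICE_CHAIN_ID is not set")?
        .parse::<u64>()
        .map_err(|_| "LATTICE_CHAIN_ID is not a valid EIP-155 chain id")?;
    Ok((instance, chain_id))
}

fn main() {
    // Fail fast before touching the network, rather than starting a
    // process that silently never bonds.
    match parse_lattice_env(
        env::var("LATTICE_INSTANCE").ok(),
        env::var("LATTICE_CHAIN_ID").ok(),
    ) {
        Ok((instance, chain)) => println!("lattice {instance}, chain id {chain}"),
        Err(e) => {
            eprintln!("refusing to start: {e}");
            std::process::exit(1);
        }
    }
}
```

Failing fast here is cheap insurance: a misconfigured process that exits at boot is visible to systemd, while one that starts and never bonds is not.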
## What defines your lattice

Your lattice is identified by a `LatticeConfig` that folds every signature-altering input for every organism into one on-wire fingerprint. When integrators bind to your lattice they compare `LatticeConfig::lattice_id()` against the hex you publish; mismatches produce `ConnectTimeout` on their side, not silent disagreement.
The `LatticeConfig` has:

| Field | Responsibility |
|---|---|
| `name` | Short stable namespaced string you pick. Examples: `ethereum.mainnet`, `base.testnet`. |
| `chain_id` | The EIP-155 chain id. Folded into the fingerprint so cross-chain mis-binds surface as `ConnectTimeout`. |
| `zipnet` | The zipnet config: shuffle window, init salt, ACL. |
| `unseal` | Threshold parameters (t, n), committee share pubkeys, ACL. |
| `offer` | Auction window, committee offer pubkey, ACL. |
| `atelier` | TDX `MR_TD` pin, committee pubkeys, block-template schema. |
| `relay` | Policy selector (L1 MEV-Boost, L2 sequencer endpoint), committee pubkeys. |
| `tally` | Settlement contract address, committee secp256k1 pubkeys, refund policy. |
You change any field and the whole lattice fingerprint changes. That is the content + intent addressing discipline from zipnet’s design intro applied to six organisms at once. See topology-intro — Within-lattice derivation for the mathematical layout.
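The fold can be illustrated with a toy config. This is a sketch only: the field set is trimmed to three entries, and `std`'s `DefaultHasher` stands in for the real fingerprint construction, which this sketch does not reproduce.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Trimmed stand-in for LatticeConfig: just enough fields to show that
/// every signature-altering input lands in one fingerprint.
#[derive(Hash, Clone)]
struct LatticeConfigSketch {
    name: String,
    chain_id: u64,
    unseal_threshold: (u8, u8), // (t, n)
}

impl LatticeConfigSketch {
    /// Toy lattice_id: any field change changes the id.
    fn lattice_id(&self) -> u64 {
        let mut h = DefaultHasher::new();
        self.hash(&mut h);
        h.finish()
    }
}

fn main() {
    let a = LatticeConfigSketch {
        name: "ethereum.mainnet".into(),
        chain_id: 1,
        unseal_threshold: (3, 5),
    };
    // Same name, same chain, different unseal threshold: a different
    // fingerprint, hence a different lattice from an integrator's view.
    let b = LatticeConfigSketch { unseal_threshold: (4, 7), ..a.clone() };
    assert_ne!(a.lattice_id(), b.lattice_id());
    println!("{:016x} != {:016x}", a.lattice_id(), b.lattice_id());
}
```

This is the 5-of-t=3 versus 7-of-t=4 point from the committee-size table made concrete: threshold parameters are signature-altering inputs, so changing them produces a lattice integrators will refuse to bind to under the old hex.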
## Minimum viable lattice
A minimum instance runs six organism committees. In the common “one operator, one lattice” shape, that is:
- 3 zipnet committee server processes + 1 aggregator.
- 3 unseal committee member processes (TDX required).
- 3 offer committee member processes.
- 3 atelier committee member processes (TDX required).
- 3 relay committee member processes.
- 3 tally committee member processes.
That is 19 processes across however many hosts you choose. A tight layout packs committee members from different organisms onto the same host (one process per organism role, distinct systemd units); a paranoid layout gives each organism its own hosts.
## How your nodes find each other
Mosaik’s standard peer discovery — `/mosaik/announce` gossip plus the Mainline DHT via pkarr plus optional mDNS for local development — handles everything. You do not configure streams, groups, or IDs by hand. Every process starts with `LATTICE_INSTANCE=<name>`, derives the organism’s own `GroupId`/`StreamId`/`StoreId` from the lattice fingerprint, and bonds to its peer set automatically.
This means you pay no DevOps cost to scale a lattice horizontally within a single operator (add a host, start the systemd units, it joins). It also means a typo in `LATTICE_INSTANCE` on one host produces a process that does not bond — the process runs, it does not break anything, it simply does not join. Check `lattice_id()` in metrics before concluding a host is joined.
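Why a typo fails silently rather than loudly can be seen from the derivation itself. A minimal sketch, assuming a hypothetical `derive_group_id` helper; the real derivation is from the full lattice fingerprint, not a `DefaultHasher` over two strings.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative derivation only: each process derives its ids from
/// (instance name, organism role), so a typo in LATTICE_INSTANCE
/// yields ids that no healthy peer shares.
fn derive_group_id(lattice_instance: &str, organism: &str) -> u64 {
    let mut h = DefaultHasher::new();
    (lattice_instance, organism).hash(&mut h);
    h.finish()
}

fn main() {
    let good = derive_group_id("ethereum.mainnet", "atelier");
    let typo = derive_group_id("etherum.mainnet", "atelier"); // one dropped letter
    // The mistyped process computes a different group id, bonds to
    // nothing, and sits idle instead of crashing.
    assert_ne!(good, typo);
    println!("good={good:016x} typo={typo:016x}");
}
```

There is no "wrong password" moment: the mistyped process is simply a correct member of a lattice that has no other members, which is why the metrics check is the reliable signal.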
## What your nodes do not do

- They do not configure each other. Every organism derives its identity from the `LatticeConfig` you pin in each host’s environment; no inter-organism handshake discovers config at runtime.
- They do not share a database. Each organism holds its own Raft state independently. State machine snapshots are per organism.
- They do not cross-authorise. An `atelier` committee member does not get to join an `offer` committee just because they are in the same operator’s fleet. Each organism’s `TicketValidator` composition controls admission independently.
## Running many lattices side by side
One operator can run several lattices on the same universe — production, testnet, internal dev, per-chain variants. Each has its own instance name, its own committees, its own MR_TDs, its own ACL. Hosts can run one or many; one systemd unit per (lattice, organism, role) is the standard layout:
```shell
systemctl start builder@ethereum.mainnet-zipnet-server
systemctl start builder@ethereum.mainnet-atelier-member
systemctl start builder@base.testnet-zipnet-server
systemctl start builder@base.testnet-atelier-member
```
Unit names are operator-chosen; each wraps an invocation of the appropriate organism binary with a distinct `LATTICE_INSTANCE`. The lattices share the universe and the discovery layer, and appear to integrators as distinct `LatticeConfig` fingerprints.
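One way to realise that layout is a single templated unit whose instance string encodes the (lattice, organism, role) triple. The sketch below is hypothetical: the unit name, the `builder-launch` wrapper path, and the splitting convention are all operator choices, not anything mosaik mandates.

```ini
# /etc/systemd/system/builder@.service (hypothetical template unit)
# %i carries the (lattice, organism, role) triple, e.g.
# "ethereum.mainnet-zipnet-server".
[Unit]
Description=mosaik builder instance %i
After=network-online.target
Wants=network-online.target

[Service]
# builder-launch is an assumed wrapper script: it splits %i, exports
# LATTICE_INSTANCE and LATTICE_CHAIN_ID, and execs the organism binary.
ExecStart=/usr/local/bin/builder-launch %i
Restart=on-failure

[Install]
WantedBy=multi-user.target
```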
## See also
- Quickstart — stand up a lattice
- Wiring the organisms together
- Rotations and upgrades
- Monitoring and alerts
- Incident response
- Designing block-building topologies on mosaik — the rationale for this decomposition, if you want to understand why the lattice is shaped this way before standing one up.