Quickstart — stand up a lattice
audience: operators
This page walks you from a fresh checkout to a live lattice that external integrators can bind to with a `LatticeConfig` you publish. Read Lattice overview first for the architectural background; this page assumes it.
Status. This proposal’s organism crates are not yet shipped. The commands below are the target shape; when the crates land they will be runnable verbatim. Invocations whose exact shape may still change are flagged with `proposal:` markers in the text below.
What you will run
Six organism roles, each as its own systemd unit (or k8s deployment, or whatever orchestration you prefer). The reference minimum:
| Role | Processes | TDX required | Hardware profile |
|---|---|---|---|
| zipnet server | 3 | in v2 | small cloud |
| zipnet aggregator | 1 | no | medium cloud |
| unseal member | 3–7 | yes | TDX-enabled hosts |
| offer member | 3–5 | optional | small cloud |
| atelier member | 3–7 | yes | TDX hosts, higher RAM |
| relay member | 3–5 | optional | well-connected cloud |
| tally member | 3–5 | no | small cloud |
Total: 19–33 processes across however many hosts you choose.
Fast path — builder lattice up
If you already have SSH access to a set of hosts that can cover the table above, one command brings up the whole lattice end-to-end. The long-form step-by-step below is what this command automates — read it when you need to diverge from the defaults (split operator responsibilities, partial deployments, bespoke orchestration).
Write a manifest that pairs each role in the table with an SSH target, plus the lattice’s two identity inputs (name, chain id) and each organism’s content parameters:
```toml
# lattice.toml
[lattice]
name = "acme.ethereum.mainnet"
chain_id = 1

[organisms.zipnet]
window = "interactive" # or "archival" / explicit tuple

[organisms.unseal]
threshold = "5-of-7"

[organisms.offer]
auction_window_ms = 800

[organisms.atelier]
block_schema = "l1-post-4844"
chain_rpc = "https://eth-mainnet.g.alchemy.com/v2/..."

[organisms.relay]
policy = "l1-mev-boost"
proposer_endpoints = ["https://mev-boost.relay-a.example"]

[organisms.tally]
settlement_addr = "0x1234..."
chain_rpc = "https://eth-mainnet.g.alchemy.com/v2/..."

# One entry per host. `roles` is drawn from the table above.
# `tdx = true` is required for hosts that carry unseal-member or
# atelier-member roles; the tool fails closed otherwise.
[[hosts]]
ssh = "ubuntu@tdx-01.acme.com"
tdx = true
roles = ["unseal-member", "atelier-member"]

[[hosts]]
ssh = "ubuntu@tdx-02.acme.com"
tdx = true
roles = ["unseal-member", "atelier-member"]

[[hosts]]
ssh = "ubuntu@tdx-03.acme.com"
tdx = true
roles = ["unseal-member", "atelier-member"]

[[hosts]]
ssh = "ubuntu@cloud-01.acme.com"
roles = ["zipnet-server", "offer-member", "relay-member", "tally-member"]

[[hosts]]
ssh = "ubuntu@cloud-02.acme.com"
roles = ["zipnet-server", "offer-member", "relay-member", "tally-member"]

[[hosts]]
ssh = "ubuntu@cloud-03.acme.com"
roles = ["zipnet-server", "offer-member", "relay-member", "tally-member"]

[[hosts]]
ssh = "ubuntu@agg-01.acme.com"
roles = ["zipnet-aggregator"]
```
Then:
```shell
# proposal: ships as a subcommand of the `builder` meta-crate
builder lattice up --manifest ./lattice.toml
```
The command performs — in order — exactly the steps documented further down this page:
- Validates the manifest against the role table (minimum counts per role, `tdx = true` where required, one aggregator maximum).
- Generates the six per-organism admission secrets, the DKG shares for `unseal` and `offer`, the BLS keys for `atelier`, the ECDSA keys for `tally`, and stable peer identity secrets per member. All secrets land locally under `./secrets/<lattice-name>/` with 0400 permissions.
- Builds the reproducible TDX images for `unseal` and `atelier` and records their MR_TDs.
- Runs the DKG ceremonies (in-process coordination over SSH).
- Assembles the `LatticeConfig`, hashes it, and stamps every organism’s fingerprint.
- SSHes to each host, installs the organism binary, drops the per-unit env file, and enables the systemd unit.
- Waits for the end-to-end pipeline to commit one test slot (the same loop as `tests/e2e.rs`, but against the live fleet).
- Prints the lattice’s handshake kit:
```
lattice: acme.ethereum.mainnet
chain_id: 1
lattice_id: 7f3a9b1c...
LatticeConfig: <hex to publish to integrators>
atelier_mrtd: <48-byte hex>
unseal_mrtd: <48-byte hex>
hosts up: 7 / 7
pipeline: one slot committed end-to-end in 42.1s
```
If any step fails the command exits non-zero and leaves the fleet in its last known state; re-run after fixing the underlying problem and the tool resumes from the step that broke (idempotent per (lattice, host, role) tuple).
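Before the first run you can roughly pre-check the manifest yourself. A sketch only, assuming the one-`roles`-line-per-host layout of the example manifest; the tool's own validation is authoritative:

```shell
# check_lattice_manifest <path>: rough pre-flight for lattice.toml.
# Counts role assignments by grepping `roles = [...]` lines; assumes each
# host's roles sit on a single line, as in the example manifest.
check_lattice_manifest() {
  manifest="$1"
  warn=0
  for role in zipnet-server unseal-member offer-member \
              atelier-member relay-member tally-member; do
    n=$(grep -c "\"$role\"" "$manifest")
    echo "$role: $n host(s)"
    [ "$n" -ge 3 ] || { echo "WARN: $role below the 3-host minimum"; warn=1; }
  done
  agg=$(grep -c '"zipnet-aggregator"' "$manifest")
  [ "$agg" -eq 1 ] || { echo "WARN: need exactly one zipnet-aggregator, found $agg"; warn=1; }
  return $warn
}
# usage: check_lattice_manifest lattice.toml
```

The real validation also checks `tdx = true` coverage for the TDX-gated roles, which a grep-level sketch cannot do reliably.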
Day-2 operations
`builder lattice up` is also the update command. Re-run it against a manifest whose identity fields have not changed to roll updated binaries host-by-host; against one whose identity has changed, it refuses and points you at Rotations and upgrades — Lattice retirement.
Companion subcommands in the same tool:
| Command | Purpose |
|---|---|
| `builder lattice status` | Print the health of every host and every organism. |
| `builder lattice down` | Stop every organism in reverse pipeline order. |
| `builder lattice publish` | Re-print the integrator handshake kit. |
| `builder lattice add-host` | Add a host to an existing lattice (non-FP change). |
| `builder lattice rotate-peer` | Rotate a single member’s peer identity (§Rotations). |

All subcommands read the same `lattice.toml` manifest.
When to skip the fast path
- You are running on Kubernetes, Nomad, or another orchestration layer that already owns systemd-equivalent lifecycle. Generate the `LatticeConfig` with the long-form flow below, hand the env files to your orchestrator, and skip the SSH-based host management.
- You are a co-builder joining an existing lattice rather than operating your own. You are an `atelier` host operator only; the lattice’s owning operator runs the tool.
- You want to split the organism runs across several operators (`unseal` by one team, `offer` by another). The tool assumes a single operator; you coordinate per-organism bring-ups out of band.
Everything below is the step-by-step the fast path automates. Read it either because you fall into one of the cases above or because you want to understand what the tool does before you trust it with a production lattice.
Prerequisites
- A Rust toolchain (pinned by each organism crate; expect >= 1.93).
- For TDX-gated organisms (`unseal`, `atelier`): Intel TDX hosts. The reference build is Ubuntu 24.04 under Intel TDX; see mosaik TDX subsystem.
- Outbound UDP from every host; inbound UDP is recommended but not required when iroh relays are available.
- A chain RPC endpoint for the `tally` inclusion watcher and, on L2, for `relay`’s sequencer handoff.
- A configuration management system you already use (systemd, ansible, kubernetes, nomad; any will do).
Step 1: pick your instance identity
The instance name is the operator-chosen string that folds into every organism’s on-wire identity. Pick one that:
- Is namespaced by your organisation (`acme.ethereum.mainnet`, not `ethereum.mainnet`).
- Is stable across minor rotations (rotate secrets without changing the name).
- Changes only when you retire the whole lattice identity (major version bump).

Write it down. You will set `LATTICE_INSTANCE=<name>` on every process in the lattice.
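If you want to lint the name, a minimal sketch; the `org.chain.network` shape is an assumption drawn from the example above, not a rule the tooling enforces:

```shell
# Reject instance names that are not organisation-namespaced, i.e. fewer
# than three dot-separated lowercase labels (acme.ethereum.mainnet passes,
# ethereum.mainnet does not). Purely illustrative.
LATTICE_INSTANCE="acme.ethereum.mainnet"
if printf '%s' "$LATTICE_INSTANCE" | grep -Eq '^[a-z0-9-]+(\.[a-z0-9-]+){2,}$'; then
  echo "ok: $LATTICE_INSTANCE"
else
  echo "reject: $LATTICE_INSTANCE (want org.chain.network style)" >&2
fi
```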
Step 2: generate root secrets
Every organism has its own committee-admission secret. Generate one per organism and store in your secret manager:
```shell
# proposal: replace with per-organism `cargo run -p builder-<org> -- gen-secret`
for org in zipnet unseal offer atelier relay tally; do
  openssl rand -hex 32 > "secrets/$LATTICE_INSTANCE.$org.secret"
done
```
These are the organism-level equivalents of zipnet’s
ZIPNET_COMMITTEE_SECRET. Distribute to the hosts that run
each organism’s committee members; treat them like root
credentials.
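The fast path stores its generated secrets with 0400 permissions; worth mirroring by hand. A sketch over the Step 2 paths, where 64 hex chars is what `openssl rand -hex 32` emits:

```shell
# Lock down each admission secret and sanity-check its shape.
LATTICE_INSTANCE="${LATTICE_INSTANCE:-acme.ethereum.mainnet}"
for f in secrets/"$LATTICE_INSTANCE".*.secret; do
  [ -f "$f" ] || continue            # glob is empty before Step 2 runs
  chmod 0400 "$f"
  len=$(tr -d '\n' < "$f" | wc -c)
  [ "$len" -eq 64 ] || echo "WARN: $f is $len chars, expected 64" >&2
done
```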
Step 3: generate the LatticeConfig fingerprint
Each organism has a `gen-config` subcommand (proposal: `builder-<org> gen-config --instance <name>`) that produces the organism’s share of the `LatticeConfig`. Combine them into one `LatticeConfig`:
```shell
# proposal
cargo run -p builder -- gen-config \
  --instance $LATTICE_INSTANCE \
  --chain-id 1 \
  --zipnet-window interactive \
  --unseal-threshold 5-of-7 \
  --offer-window 800ms \
  --atelier-image ./atelier.tdx.img \
  --relay-policy l1-mev-boost \
  --tally-settlement-addr 0x.... \
  > secrets/$LATTICE_INSTANCE.lattice-config.hex
```
The output is the hex-encoded LatticeConfig you will publish
to integrators (see Step 7).
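The fast path later prints a `lattice_id` derived by hashing the assembled config. The exact hash function is not pinned down in this proposal; as an illustration only, assuming SHA-256 over the published hex text:

```shell
# Illustrative fingerprint of the config artifact from this step.
LATTICE_INSTANCE="${LATTICE_INSTANCE:-acme.ethereum.mainnet}"
cfg="secrets/$LATTICE_INSTANCE.lattice-config.hex"
if [ -f "$cfg" ]; then
  digest=$(tr -d '\n' < "$cfg" | sha256sum | cut -d' ' -f1)
  echo "config digest: $digest"
fi
```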
Step 4: build the TDX images
For every TDX-gated organism:
```shell
# proposal
cargo build --release --features tdx-builder-ubuntu -p builder-unseal
cargo build --release --features tdx-builder-ubuntu -p builder-atelier

# ship the resulting images + MR_TDs out-of-band
cat target/release/tdx-artifacts/unseal/mrtd.hex
cat target/release/tdx-artifacts/atelier/mrtd.hex
```
Both commands are borrowed verbatim from zipnet’s operator runbook; the pattern is identical. Publish the MR_TDs in your release notes; integrators compile them in via the `tee-tdx` feature.
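MR_TD is a 48-byte measurement, so each published value should be exactly 96 hex characters. A quick check over the artifact paths above:

```shell
# Verify MR_TD artifacts are 48 bytes of hex (96 chars) before publishing.
for f in target/release/tdx-artifacts/unseal/mrtd.hex \
         target/release/tdx-artifacts/atelier/mrtd.hex; do
  [ -f "$f" ] || continue            # present only after the Step 4 builds
  len=$(tr -d '[:space:]' < "$f" | wc -c)
  if [ "$len" -eq 96 ]; then
    echo "ok: $f"
  else
    echo "WARN: $f is $len chars, expected 96"
  fi
done
```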
Step 5: smoke-test on one host
Before you touch committee hosts, confirm the six organisms run end-to-end on your laptop. The reference test (proposal: `cargo test -p builder --test e2e lattice_end_to_end`) spins up six in-process committees, submits one envelope through zipnet, and asserts a `Refunds[0]` commit on tally.
A green run in roughly 30 seconds tells you the organism crates are sound in your checkout. If it fails, nothing else on this page is going to work — investigate before touching production hosts.
Step 6: bring up the committee hosts
Provision 3–7 hosts per TDX organism; 3–5 per non-TDX. Suggested layout:
- 3 TDX hosts running both `unseal-member` and `atelier-member` (sharing the TDX host fleet).
- 3 general-purpose hosts running `zipnet-server`, `offer-member`, `relay-member`, and `tally-member` (one process per organism, same host).
- 1 general-purpose host running `zipnet-aggregator`.
Example systemd unit (proposal):
```ini
# /etc/systemd/system/builder@.service
[Unit]
Description=Builder lattice role %i
After=network-online.target
Wants=network-online.target

[Service]
EnvironmentFile=/etc/builder/common.env
EnvironmentFile=/etc/builder/%i.env
ExecStart=/usr/local/bin/%i
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
```
One unit per (lattice, organism, role) triple. Example enable-and-start:
```shell
systemctl enable --now builder@zipnet-server
systemctl enable --now builder@zipnet-aggregator
systemctl enable --now builder@unseal-member
systemctl enable --now builder@offer-member
systemctl enable --now builder@atelier-member
systemctl enable --now builder@relay-member
systemctl enable --now builder@tally-member
```
Per-organism env files set the per-role secrets and the common `LATTICE_INSTANCE` / `LATTICE_CHAIN_ID` / `LATTICE_CONFIG_HEX` values. See Environment variables.
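For concreteness, a minimal `common.env` sketch. Only the three variable names come from this page; the per-role env files layer the Step 2 organism secrets on top under whatever keys the organism crates document:

```ini
# /etc/builder/common.env (shared by every organism unit on the host)
# Illustrative values; LATTICE_CONFIG_HEX is the Step 3 output.
LATTICE_INSTANCE=acme.ethereum.mainnet
LATTICE_CHAIN_ID=1
LATTICE_CONFIG_HEX=<contents of secrets/<name>.lattice-config.hex>
```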
Step 7: publish to integrators
Ship integrators three things:
- The `LatticeConfig`, as a deployment crate (`eth-mainnet-lattice = "..."` on crates.io) or as a hex fingerprint in your release notes.
- The MR_TDs for every TDX-gated organism, hex-encoded.
- The universe `NetworkId` if (and only if) you diverge from `builder::UNIVERSE`. Otherwise omit it; integrators default to `builder::UNIVERSE`.
No bootstrap peers are required; mosaik discovery handles peer lookup. If you want to give integrators a cold-start bootstrap hint anyway, publish your aggregator’s `PeerId`.
Step 8: cut integrators a test submission
Pick one of the integrator use cases (wallet, searcher,
proposer) and walk through
Quickstart — submit, bid, read
against your lattice. Treat this as an end-to-end smoke test:
if an external agent can bind, submit, and see a tally refund,
you are live.
Running many lattices on one fleet
Run one systemd unit per (lattice, organism, role) triple. Each unit reads a different `LATTICE_INSTANCE` / `LATTICE_CONFIG_HEX` from its own env file. Mosaik’s peer discovery handles the rest; the lattices share the universe without colliding on organism ids.
What to do next
- Wiring the organisms together — operator-level view of the subscription graph from composition.md.
- Running a committee server — per-organism runbook, one page per organism.
- Rotations and upgrades — how to rotate committee secrets and upgrade TDX images without losing the lattice identity.
- Monitoring and alerts — what to watch.
- Incident response — when things go wrong.