Install guide · Monad testnet · v1.0

Running a Monad full node — without stepping on the rakes

This is a practical, opinionated walkthrough of bringing up a Monad testnet full node on a dedicated Ubuntu server. It follows the official documentation where the official docs are clear, and fills in the rakes the official docs politely step around — of which there are several.

target · testnet · chain 10143
client · monad 0.14.1
os · ubuntu 24.04 LTS

00 · Intro

Monad's documentation at docs.monad.xyz/node-ops/full-node-installation describes a clean happy path. The happy path is accurate as far as it goes — but there are a handful of places where a first-time operator can do exactly what the docs say and end up with a node that either will not start, will not accept external peers, or will silently run with no firewall at all. This guide is the happy path plus the four or five gotchas that cost me several hours the first time around.

Nothing here is hardware-specific. Any reasonably modern bare-metal server with enough NVMe and 64 GB+ RAM will do. Every command is given verbatim — copy, paste, verify, move on.

What you'll have at the end: A Monad testnet full node, synced from a recent snapshot to the current chain tip, with a clean firewall, isolated TrieDB, working RPC on localhost:8080, and an OpenTelemetry metrics endpoint on 127.0.0.1:8889.

01 · Prerequisites

Before you start, confirm you have all of the following:

Hardware

  • x86_64 CPU, 16+ physical cores recommended. High single-thread performance matters for execution.
  • 64 GB RAM (DDR5 ECC if you can get it). Less than 64 GB is very much not recommended.
  • At least 2 NVMe drives, ideally 3 or 4. You want the TrieDB, the ledger and the OS on separate devices so nothing ever contends for the same queue. Enterprise-class NVMe strongly preferred.
  • Dedicated NVMe for TrieDB — at least 1.5 TB usable capacity. It will grow over time.
  • 1 Gbps+ network, static public IPv4. No NAT, no shared tenancy.

Software

  • Ubuntu 24.04 LTS (other Debian-family distros may work but are not what the package is tested on).
  • Root / sudo access.
  • Somewhere safe, off-server, to keep encrypted key backups.

SMT / Hyper-Threading: The official docs recommend disabling SMT in the BIOS. This reduces some noisy-neighbour effects between logical cores. In practice many operators run with SMT on and accept the trade-off. Make the call consciously — if you later see odd performance regressions, your first suspect is SMT.

02 · System preparation

Fresh-out-of-the-box Ubuntu. Update, reboot if needed, install basic tools.

# apt update
# apt -y full-upgrade
# apt -y install curl nvme-cli aria2 jq gpg smartmontools

If apt upgrade installed a new kernel, reboot before continuing. Make sure timedatectl status reports System clock synchronized: yes — BFT consensus is sensitive to clock drift.
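If you script your provisioning, a small guard makes the clock check impossible to forget — a sketch wrapping timedatectl (the require_ntp_sync name is mine):

```shell
# require_ntp_sync — refuse to continue unless the system clock is
# NTP-synchronized. BFT consensus tolerates very little drift, so fail fast.
require_ntp_sync() {
    if timedatectl show -p NTPSynchronized --value | grep -qx yes; then
        echo "clock OK"
    else
        echo "clock NOT synchronized - fix NTP before starting services" >&2
        return 1
    fi
}
# require_ntp_sync || exit 1
```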

03 · Install the monad package

Monad binaries come from the Category Labs apt repository. The package contains monad-node, monad (execution), monad-rpc, monad-mpt, monad-cli, monad-keystore, monad-sign-name-record and a set of systemd units.

# fetch repo signing key
# curl -fsSL https://pkg.category.xyz/apt/gpg.key | gpg --dearmor -o /etc/apt/keyrings/category-labs.gpg

# add repo
# echo "deb [signed-by=/etc/apt/keyrings/category-labs.gpg] https://pkg.category.xyz/apt noble main" \
      > /etc/apt/sources.list.d/category-labs.list

# apt update
# apt -y install monad=0.14.1
# apt-mark hold monad

The apt-mark hold is important — you don't want an unattended upgrade surprising your running node with a new binary. Upgrade consciously, one step at a time.
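When you do decide to upgrade, the conscious three-step can be captured in a helper — a sketch (the function name and the 0.14.2 version are illustrative, not real release numbers):

```shell
# upgrade_monad VERSION — unhold, install exactly one pinned version, re-hold.
# Keeps unattended-upgrades from ever touching the running binary in between.
upgrade_monad() {
    apt-mark unhold monad &&
    apt -y install "monad=$1" &&
    apt-mark hold monad
}
# upgrade_monad 0.14.2   # hypothetical next version; read release notes first
```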

04 · User and directories

The systemd units expect a monad service user.

# useradd -m -s /bin/bash monad
# sudo -u monad mkdir -p /home/monad/monad-bft/{config,ledger,config/forkpoint,config/validators}
# mkdir -p /opt/monad/{backup,scripts}
# chown -R root:root /opt/monad

Later, for security, consider changing the monad shell to /usr/sbin/nologin. systemd units start the services regardless of login shell.

05 · TrieDB on dedicated NVMe

TrieDB is Monad's Merkle-Patricia Trie store. It lives directly on a raw block device — not a filesystem. Pick one NVMe drive that will be dedicated to it.

First, identify your devices:

# lsblk -o NAME,SIZE,MODEL,TYPE

Pick the NVMe that will host the TrieDB (in this guide we'll call it /dev/nvme2n1 — substitute your own). Create a GPT partition table and one partition spanning the disk:

# parted -s /dev/nvme2n1 mklabel gpt
# parted -s /dev/nvme2n1 mkpart triedb 0% 100%
# partprobe /dev/nvme2n1
# lsblk -no PARTUUID /dev/nvme2n1p1

Copy that PARTUUID. Create a udev rule so the partition always appears at the stable path /dev/triedb, regardless of nvme reordering:

# cat > /etc/udev/rules.d/99-triedb.rules <<EOF
SUBSYSTEM=="block", ENV{ID_PART_ENTRY_UUID}=="<YOUR-PARTUUID>", SYMLINK+="triedb"
EOF
# udevadm control --reload
# udevadm trigger /dev/nvme2n1p1
# ls -la /dev/triedb   # must show a symlink to nvme2n1p1
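Hand-pasting the PARTUUID into the rule is an easy place to fat-finger a character. A small generator avoids that — a sketch (the helper name is mine; the rule text is identical to the one above):

```shell
# triedb_udev_rule PARTUUID — emit the udev rule that pins /dev/triedb
triedb_udev_rule() {
    printf 'SUBSYSTEM=="block", ENV{ID_PART_ENTRY_UUID}=="%s", SYMLINK+="triedb"\n' "$1"
}
# triedb_udev_rule "$(lsblk -no PARTUUID /dev/nvme2n1p1)" \
#     > /etc/udev/rules.d/99-triedb.rules
```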

Initialize the TrieDB via the systemd oneshot unit the package ships:

# systemctl start monad-mpt.service
Note: monad-mpt.service is a Type=oneshot unit. It runs once, initializes the database header on the device, and exits. It's normal for is-active to show inactive (dead) after it has finished.
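For reference, a oneshot unit has roughly this shape — a sketch, not the packaged unit's exact contents (the ExecStart arguments in particular are omitted; the real unit supplies them):

```ini
# Type=oneshot runs ExecStart once and exits. Without RemainAfterExit=yes,
# systemd reports the unit as inactive (dead) after a successful run —
# which is exactly what you observe with monad-mpt.service.
[Unit]
Description=Initialize Monad TrieDB header

[Service]
Type=oneshot
User=monad
ExecStart=/usr/bin/monad-mpt   ; real arguments come from the packaged unit

[Install]
WantedBy=multi-user.target
```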

06 · Firewall — the first rake

This is the first place the official docs will quietly betray you. Two very different firewall management packages exist on Ubuntu — ufw and iptables-persistent. They are mutually exclusive at the apt level. Installing one silently removes the other, taking its rules with it.

The rake: If you configure ufw rules first, and then run apt install iptables-persistent, apt will quietly uninstall ufw and your carefully built ruleset evaporates. Your node ends up exposed on every port, your INPUT policy defaults to ACCEPT, and you won't notice until the next firewall audit.

Pick one tool and commit. In this guide we use pure iptables with iptables-persistent, because it's the lower-level option and composes better with custom rules:

# DEBIAN_FRONTEND=noninteractive apt -y install iptables-persistent

# base rules
# iptables -P INPUT DROP
# iptables -P FORWARD DROP
# iptables -P OUTPUT ACCEPT
# iptables -A INPUT -i lo -j ACCEPT
# iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# iptables -A INPUT -m conntrack --ctstate INVALID -j DROP

# anti-flood on BFT UDP (small packets)
# iptables -A INPUT -p udp --dport 8000 -m length --length 0:1400 -j DROP

# SSH, BFT P2P, ICMP
# iptables -A INPUT -p tcp --dport 22   -j ACCEPT
# iptables -A INPUT -p tcp --dport 8000 -j ACCEPT
# iptables -A INPUT -p udp --dport 8000 -j ACCEPT
# iptables -A INPUT -p udp --dport 8001 -j ACCEPT
# iptables -A INPUT -p icmp -j ACCEPT

# mirror into ip6tables: lo, est/rel, ICMPv6, SSH — drop everything else
# ip6tables -P INPUT DROP
# ip6tables -A INPUT -i lo -j ACCEPT
# ip6tables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# ip6tables -A INPUT -m conntrack --ctstate INVALID -j DROP
# ip6tables -A INPUT -p ipv6-icmp -j ACCEPT
# ip6tables -A INPUT -p tcp --dport 22 -j ACCEPT

# persist
# netfilter-persistent save
Before applying: If you are SSH'd in over port 22, it's a good idea to schedule an at job that reverts to iptables -P INPUT ACCEPT in 5 minutes — so if you mistype a rule and lock yourself out, you'll be able to reconnect shortly after.
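One way to set up that safety net — a sketch using at (requires the at package, installable with apt install at; the helper name is mine):

```shell
# schedule_firewall_revert [MINUTES] — open INPUT again after a delay so a bad
# rule can't lock you out permanently; cancel the job once SSH is confirmed.
schedule_firewall_revert() {
    echo 'iptables -P INPUT ACCEPT' | at "now + ${1:-5} minutes"
}
# schedule_firewall_revert 5
# atq          # note the pending job number, then after confirming access:
# atrm <job>   # cancel the revert so your rules stay in place
```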

07 · OpenTelemetry collector

Monad's services emit metrics and traces via OTLP. Run a local OpenTelemetry collector and have the node talk to it on 127.0.0.1:4317.

# curl -L -o /tmp/otelcol.deb \
    https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v0.139.0/otelcol_0.139.0_linux_amd64.deb
# dpkg -i /tmp/otelcol.deb
# cp /opt/monad/scripts/otel-config.yaml /etc/otelcol/config.yaml
# systemctl enable --now otelcol

The default config binds OTLP receivers to 127.0.0.1 only, and exposes a Prometheus endpoint on 0.0.0.0:8889. Make sure your firewall blocks 8889 from outside (our ruleset above does; if you use different rules, double-check).
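Note that nothing earlier in this guide actually places otel-config.yaml under /opt/monad/scripts. If it's missing on your box, a minimal collector config along these lines matches the behaviour described above — an assumed shape, so verify it against the components shipped in your otelcol build:

```yaml
# Minimal otelcol sketch: OTLP in on localhost only, Prometheus out on 8889.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 127.0.0.1:4317

exporters:
  prometheus:
    endpoint: 0.0.0.0:8889

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheus]
```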

08 · Testnet configs

Fetch the testnet .env template and node.toml template from the Monad infrastructure bucket:

# sudo -u monad curl -fsSL -o /home/monad/.env \
    https://bucket.monadinfra.com/config/testnet/latest/.env.example
# sudo -u monad curl -fsSL -o /home/monad/monad-bft/config/node.toml \
    https://bucket.monadinfra.com/config/testnet/latest/full-node-node.toml
# chmod 600 /home/monad/.env
# chown monad:monad /home/monad/.env /home/monad/monad-bft/config/node.toml

Do not fill in the placeholders yet — we need keystores and a signed name record first. The placeholders are <NODE_NAME>, <IP>:<PORT> and <NAME_RECORD_SIG>.

09Keystores (SECP & BLS)

The node identifies itself on the network by two keypairs — a SECP256k1 key for peer discovery / networking, and a BLS12-381 key for consensus signatures. Generate a random keystore password, then generate both keys:

# generate + persist the keystore password
# KSPW=$(openssl rand -base64 32)
# echo "KEYSTORE_PASSWORD=$KSPW" >> /home/monad/.env
# echo "Keystore password: $KSPW" > /opt/monad/backup/keystore-password-backup
# chmod 600 /opt/monad/backup/keystore-password-backup

# generate keystores as the monad user
# sudo -u monad bash -c '
    source /home/monad/.env
    monad-keystore new secp --password "$KEYSTORE_PASSWORD" \
        --keystore-path /home/monad/monad-bft/config/id-secp
    monad-keystore new bls --password "$KEYSTORE_PASSWORD" \
        --keystore-path /home/monad/monad-bft/config/id-bls
'

# back them up immediately
# cp /home/monad/monad-bft/config/id-secp /opt/monad/backup/secp-backup
# cp /home/monad/monad-bft/config/id-bls  /opt/monad/backup/bls-backup
# chmod 600 /opt/monad/backup/*-backup
Never regenerate these keys: The SECP and BLS keys are the identity of your node. If you regenerate them, you have a new node — peers that know your old identity stop talking to you, and any reputation or delegation you've accumulated is gone. Before touching these files, check ls /home/monad/monad-bft/config/id-secp — if it exists, restore from backup, don't create new.

Now is also the right time to exfiltrate the /opt/monad/backup/* files off the server, to a safe location (password manager, external storage, trusted custodian). Treat them like private keys — because they are.
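One way to do that exfiltration safely is to bundle and symmetrically encrypt the backups before they leave the box — a sketch (the encrypt_dir name and the passphrase-file path are mine, not from the official docs):

```shell
# encrypt_dir DIR PASSFILE OUT — tar a directory and encrypt it with AES-256.
# Decrypt later with: gpg --batch --pinentry-mode loopback \
#                         --passphrase-file PASSFILE -d OUT | tar -x
encrypt_dir() {
    tar -C "$(dirname "$1")" -cf - "$(basename "$1")" \
      | gpg --symmetric --cipher-algo AES256 --batch --pinentry-mode loopback \
            --passphrase-file "$2" -o "$3"
}
# encrypt_dir /opt/monad/backup /root/.backup-pass /root/monad-backup.tar.gpg
```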

10 · node.toml + name record signature

The peer discovery protocol verifies that a node claiming an IP really holds the SECP private key for its advertised identity, by signing a "name record" — IP, port, sequence number — with the SECP key. Create this signature and place it in node.toml.

First fill in the human-editable fields. Pick a short node name:

# IP=$(curl -s4 ifconfig.me)
# sed -i \
    -e "s|node_name = \"<NODE_NAME>\"|node_name = \"my-node\"|" \
    -e "s|self_address = \"<IP>:<PORT>\"|self_address = \"$IP:8000\"|" \
    -e "s|self_record_seq_num = 0|self_record_seq_num = 1|" \
    /home/monad/monad-bft/config/node.toml

Then sign the name record:

# sudo -u monad bash -c '
    source /home/monad/.env
    monad-sign-name-record \
        --address '"$IP"':8000 \
        --authenticated-udp-port 8001 \
        --keystore-path /home/monad/monad-bft/config/id-secp \
        --password "$KEYSTORE_PASSWORD" \
        --self-record-seq-num 1
'

Output includes a self_name_record_sig = "..." line. Copy the hex string and paste it into node.toml, replacing the <NAME_RECORD_SIG> placeholder.
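To avoid the hand-edit, the same sed placeholder-substitution pattern used earlier works here too — a sketch (fill_sig is my name for it; the signature is whatever hex the tool printed):

```shell
# fill_sig TOML_PATH HEX_SIG — substitute the <NAME_RECORD_SIG> placeholder
fill_sig() {
    sed -i "s|<NAME_RECORD_SIG>|$2|" "$1"
}
# fill_sig /home/monad/monad-bft/config/node.toml <hex from monad-sign-name-record>
# grep self_name_record_sig /home/monad/monad-bft/config/node.toml   # verify
```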

11 · Hard reset + snapshot restore — the second rake

The rake: The top-level "full node installation" docs end with "now chown and start the services." If you do exactly that, your node will sit forever in statesync because there's no forkpoint and no validators file, and there's nothing in TrieDB for execution to run against. You must also run the hard reset / snapshot restore sequence, which lives on a separate doc page. Here it is, explicitly.

We'll use the helper scripts the Monad team publishes on their infrastructure bucket. Download them first so you can read them before running:

# curl -fsSL -o /opt/monad/scripts/reset-workspace.sh \
    https://bucket.monadinfra.com/scripts/testnet/reset-workspace.sh
# curl -fsSL -o /opt/monad/scripts/restore-from-snapshot.sh \
    https://bucket.monadinfra.com/scripts/testnet/restore-from-snapshot.sh
# curl -fsSL -o /opt/monad/scripts/download-forkpoint.sh \
    https://bucket.monadinfra.com/scripts/testnet/download-forkpoint.sh
# chmod +x /opt/monad/scripts/*.sh
# less /opt/monad/scripts/restore-from-snapshot.sh   # read it before running

Now execute the sequence in order:

# 1. wipe ledger / forkpoint / validators dirs and truncate TrieDB
# bash /opt/monad/scripts/reset-workspace.sh

# 2. download + verify + import the latest testnet snapshot (~5-10 min)
# bash /opt/monad/scripts/restore-from-snapshot.sh

# 3. pull the current forkpoint file
# bash /opt/monad/scripts/download-forkpoint.sh

# 4. pull the current validators file
# curl -fsSL -o /home/monad/monad-bft/config/validators/validators.toml \
    https://bucket.monadinfra.com/validators/testnet/validators.toml
# chown monad:monad /home/monad/monad-bft/config/validators/validators.toml

After this, your TrieDB has a multi-gigabyte state snapshot from a recent block, your forkpoint/forkpoint.toml points at a slightly newer block than the snapshot, and your validators/validators.toml lists the current validator set. Now the node has everything it needs.
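Before moving on, it's worth asserting that step 11 actually produced its artifacts — a sketch (the helper name is mine; the paths are the ones used throughout this guide):

```shell
# sanity_check_workspace BASE — fail unless the forkpoint and validators files
# exist and are non-empty, which is exactly what the bare install forgets.
sanity_check_workspace() {
    for f in "$1/config/forkpoint/forkpoint.toml" \
             "$1/config/validators/validators.toml"; do
        [ -s "$f" ] || { echo "missing or empty: $f" >&2; return 1; }
    done
    echo "workspace OK"
}
# sanity_check_workspace /home/monad/monad-bft
```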

12 · Start the node & verify

Make absolutely sure ownership is correct, then enable and start the three services:

# chown -R monad:monad /home/monad/
# systemctl enable --now monad-bft monad-execution monad-rpc
# systemctl is-active monad-bft monad-execution monad-rpc

Tail the logs and look for "committed block" lines appearing from monad-bft:

# journalctl -u monad-bft -u monad-execution -u monad-rpc -f

Peer discovery typically populates 100-250 peers within a minute. Blocks should start being committed almost immediately after statesync gates lift. The monad-rpc service will log "Waiting for statesync to complete" and won't open port 8080 until the statesync phase is done — this is normal, not a bug. Give it time.

Once monad-rpc opens port 8080, you can check the sync state:

$ curl -s -X POST http://127.0.0.1:8080 \
    -H 'Content-Type: application/json' \
    -d '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}'

Compare the returned block with a public RPC (https://testnet-rpc.monad.xyz) to see how far behind the tip you are. Once the gap is near zero, eth_syncing will return false. You're done.
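The hex arithmetic is easy to get wrong by eye; a tiny helper (name mine) computes the lag from two eth_blockNumber results — shell arithmetic accepts the 0x-prefixed hex directly:

```shell
# block_lag LOCAL_HEX REMOTE_HEX — how many blocks behind the public tip.
block_lag() {
    echo $(( $2 - $1 ))
}
# example with made-up values:
# block_lag 0x1f4 0x200   # → 12
```

Feed it the .result fields from your local node and the public RPC (e.g. piped through jq -r .result).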

13 · Pitfalls summary

Quick reference of all the places a first-time operator can get caught:

  • ufw vs iptables-persistent — they conflict at the apt level. Installing one wipes the other. Pick iptables-persistent from the start.
  • Missing forkpoint and validators files — the vanilla install-and-start sequence from the top-level docs leaves these empty and your node cannot sync. Run the hard reset / snapshot restore sequence in step 11, always.
  • RPC blocking on statesync — monad-rpc does not open port 8080 until statesync completes. Curl will refuse the connection for the first few minutes. That's expected.
  • Regenerating keystores — don't. Once created, treat SECP and BLS keystores as permanent identity. Regenerating = new node, lost peers, lost reputation.
  • Public metrics / RPC exposure — by default monad-rpc binds 0.0.0.0:8080 and the OTEL Prometheus endpoint is on *:8889. If your firewall is anything less than strict, those are publicly reachable. Close them unless you explicitly want a public RPC.
  • Clock drift — BFT consensus is sensitive to it. Make sure NTP is synchronized before starting services.
Next steps after a healthy node: Monitoring (service health, peer count, chain lag, disk usage, NVMe SMART), automated alerting, off-site key backups, SSH hardening (password auth off, fail2ban, non-default port), and a documented runbook for recovery and upgrades. A validator-grade node is much more than a successful first boot — but a successful first boot is where everything begins.