
Commit 482065c

Reth as recommended client (#575)
1 parent 31694b3 commit 482065c


4 files changed (+46 additions, −75 deletions)


docs/base-chain/node-operators/performance-tuning.mdx

Lines changed: 8 additions & 11 deletions
@@ -28,13 +28,13 @@ If utilizing Amazon Elastic Block Store (EBS), io2 Block Express volumes are rec
 
 The following are the hardware specifications used for Base production nodes:
 
-- **Geth Full Node:**
-  - Instance: AWS `i4i.12xlarge`
+- **Reth Archive Node (recommended):**
+  - Instance: AWS `i7i.12xlarge` or larger
   - Storage: RAID 0 of all local NVMe drives (`/dev/nvme*`)
   - Filesystem: ext4
 
-- **Reth Archive Node:**
-  - Instance: AWS `i4ie.6xlarge`
+- **Geth Full Node:**
+  - Instance: AWS `i7i.12xlarge` or larger
   - Storage: RAID 0 of all local NVMe drives (`/dev/nvme*`)
   - Filesystem: ext4

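The RAID 0 + ext4 layout both specs call for can be sketched as follows. This is a hedged example: the device glob, array name, and `/data` mount point are assumptions, and since `mdadm`/`mkfs.ext4` are destructive and need root, the commands are only printed here. Verify your devices with `lsblk` before running anything for real.

```bash
# Enumerate local NVMe namespaces (device names are an assumption; check lsblk).
DEVICES=$(ls /dev/nvme*n1 2>/dev/null | tr '\n' ' ')
COUNT=$(echo $DEVICES | wc -w)
echo "Striping $COUNT device(s): $DEVICES"

# Printed rather than executed: these require root and destroy existing data.
echo "mdadm --create /dev/md0 --level=0 --raid-devices=$COUNT $DEVICES"
echo "mkfs.ext4 -F /dev/md0"
echo "mount /dev/md0 /data"
```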
@@ -46,16 +46,13 @@ Using a recent [snapshot](/base-chain/node-operators/snapshots) can significantl
 
 The [Base Node](https://github.com/base/node) repository contains the current stable configurations and instructions for running different client implementations.
 
-### Supported Clients
-
 Reth is currently the most performant client for running Base nodes. Future optimizations will primarily focus on Reth. You can read more about the migration to Reth [here](https://blog.base.dev/scaling-base-with-reth).
 
-| Type    | Supported Clients |
-| ------- | ----------------- |
-| Full    | [Reth](https://github.com/base/node/tree/main/reth), [Geth](https://github.com/base/node/tree/main/geth) |
-| Archive | [Reth](https://github.com/base/node/tree/main/reth) |
+### Geth Performance Tuning (deprecated)
 
-### Geth Performance Tuning
+<Warning>
+Geth is no longer supported; Reth is the recommended client and has been shown to be more performant. We recommend migrating Geth nodes to Reth, especially if you are experiencing performance issues.
+</Warning>
 
 #### Geth Cache Settings

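For reference, the deprecated Geth tuning section above revolves around `go-ethereum`'s `--cache` flag (total cache budget in MiB) and its percentage sub-splits. A minimal sketch, with an illustrative size rather than a recommended one (size `--cache` to your instance's RAM; the Base repo normally injects these via environment variables):

```bash
# Illustrative only: a 20 GiB total cache, split across the standard geth
# sub-caches (values are percentages of the total, not MiB).
GETH_CACHE_MB=20480
echo "geth --cache=$GETH_CACHE_MB --cache.database=50 --cache.trie=30 --cache.gc=10 --cache.snapshot=10"
```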
docs/base-chain/node-operators/run-a-base-node.mdx

Lines changed: 2 additions & 29 deletions
@@ -23,7 +23,7 @@ If you're just getting started and need an RPC URL, you can use our free endpoin
 
 **Note:** Our RPCs are rate-limited; they are not suitable for production apps.
 
-If you're looking to harden your app and avoid rate-limiting for your users, please check out one of our [partners](/base-chain/tools/node-providers).
+If you're looking to harden your app and avoid rate-limiting for your users, please consider using an endpoint from one of our [partners](/base-chain/tools/node-providers).
 </Warning>
 

@@ -65,39 +65,13 @@ curl -d '{"id":0,"jsonrpc":"2.0","method":"eth_getBlockByNumber","params":["late
 Syncing your node may take **days** and will consume a vast amount of your requests quota. Be sure to monitor usage and up your plan if needed.
 </Warning>
 
-
 ### Snapshots
 
 <Note>
 Geth Archive Nodes are no longer supported. For Archive functionality, use Reth, which provides significantly better performance in Base’s high-throughput environment.
 </Note>
 
-
-If you're a prospective or current Base Node operator and would like to restore from a snapshot to save time on the initial sync, it's possible to always get the latest available snapshot of the Base chain on mainnet and/or testnet by using the following CLI commands. The snapshots are updated every week.
-
-#### Restoring from snapshot
-
-In the home directory of your Base Node, create a folder named `geth-data` or `reth-data`. If you already have this folder, remove it to clear the existing state and then recreate it. Next, run the following code and wait for the operation to complete.
-
-| Network | Client | Snapshot Type | Command |
-| ------- | ------ | ------------- | ------- |
-| Testnet | Geth | Full | `wget https://sepolia-full-snapshots.base.org/$(curl https://sepolia-full-snapshots.base.org/latest)` |
-| Testnet | Reth | Archive | `wget https://sepolia-reth-archive-snapshots.base.org/$(curl https://sepolia-reth-archive-snapshots.base.org/latest)` |
-| Mainnet | Geth | Full | `wget https://mainnet-full-snapshots.base.org/$(curl https://mainnet-full-snapshots.base.org/latest)` |
-| Mainnet | Reth | Archive | `wget https://mainnet-reth-archive-snapshots.base.org/$(curl https://mainnet-reth-archive-snapshots.base.org/latest)` |
-
-You'll then need to untar the downloaded snapshot and place the `geth` subfolder inside of it in the `geth-data` folder you created (unless you changed the location of your data directory).
-
-Return to the root of your Base node folder and start your node.
-
-```bash Terminal
-cd ..
-docker compose up --build
-```
-
-Your node should begin syncing from the last block in the snapshot.
-
-Check the latest block to make sure you're syncing from the snapshot and that it restored correctly. If so, you can remove the snapshot archive that you downloaded.
+If you're a Base Node operator and would like to save significant time on the initial sync, you may [restore from a snapshot](/base-chain/node-operators/snapshots#restoring-from-snapshot). The snapshots are updated every week.
 
 ### Syncing

@@ -111,4 +85,3 @@ echo Latest synced block behind by: $((($(date +%s)-$( \
 ```
 
 You'll also know that the sync hasn't completed if you get `Error: nonce has already been used` when you try to deploy using your node.
-
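The sync-lag one-liner referenced in the hunk above boils down to simple timestamp arithmetic. A worked example with made-up sample values in place of the live RPC response (normally the current time comes from `$(date +%s)` and the block timestamp from the node):

```bash
BLOCK_TS=1700000000   # timestamp of the latest synced block (sample value)
NOW=1700000120        # "current" time (sample value)
echo "Latest synced block behind by: $(( (NOW - BLOCK_TS) / 60 )) minutes"
# prints: Latest synced block behind by: 2 minutes
```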

docs/base-chain/node-operators/snapshots.mdx

Lines changed: 25 additions & 23 deletions
@@ -17,24 +17,26 @@ These steps assume you are in the cloned `node` directory (the one containing `d
 
 1. **Prepare Data Directory**:
 - **Before running Docker for the first time**, create the data directory on your host machine that will be mapped into the Docker container. This directory must match the `volumes` mapping in the `docker-compose.yml` file for the client you intend to use.
-- For Geth:
+- For Reth (recommended):
 ```bash
-mkdir ./geth-data
+mkdir ./reth-data
 ```
-- For Reth:
+- For Geth:
 ```bash
-mkdir ./reth-data
+mkdir ./geth-data
 ```
-- If you have previously run the node and have an existing data directory, **stop the node** (`docker compose down`), remove the _contents_ of the existing directory (e.g. `rm -rf ./geth-data/*`), and proceed.
+- If you have previously run the node and have an existing data directory, **stop the node** (`docker compose down`), remove the _contents_ of the existing directory (e.g. `rm -rf ./reth-data/*`), and proceed.
 
 2. **Download Snapshot**: Choose the appropriate snapshot for your network and client from the table below. Use `wget` (or similar) to download it into the `node` directory.
 
 | Network | Client | Snapshot Type | Download Command (`wget …`) |
 | ------- | ------ | ------------- | --------------------------- |
-| Testnet | Geth | Full | `wget https://sepolia-full-snapshots.base.org/$(curl https://sepolia-full-snapshots.base.org/latest)` |
-| Testnet | Reth | Archive | `wget https://sepolia-reth-archive-snapshots.base.org/$(curl https://sepolia-reth-archive-snapshots.base.org/latest)` |
-| Mainnet | Geth | Full | `wget https://mainnet-full-snapshots.base.org/$(curl https://mainnet-full-snapshots.base.org/latest)` |
-| Mainnet | Reth | Archive | `wget https://mainnet-reth-archive-snapshots.base.org/$(curl https://mainnet-reth-archive-snapshots.base.org/latest)` |
+| Testnet | Reth | Archive (recommended) | `wget https://sepolia-reth-archive-snapshots.base.org/$(curl https://sepolia-reth-archive-snapshots.base.org/latest)` |
+| Testnet | Reth | Full | Coming Soon |
+| Testnet | Geth | Full | `wget https://sepolia-full-snapshots.base.org/$(curl https://sepolia-full-snapshots.base.org/latest)` |
+| Mainnet | Reth | Archive (recommended) | `wget https://mainnet-reth-archive-snapshots.base.org/$(curl https://mainnet-reth-archive-snapshots.base.org/latest)` |
+| Mainnet | Reth | Full | Coming Soon |
+| Mainnet | Geth | Full | `wget https://mainnet-full-snapshots.base.org/$(curl https://mainnet-full-snapshots.base.org/latest)` |
 
 <Note>
 Ensure you have enough free disk space to download the snapshot archive (`.tar.gz` file) _and_ extract its contents. The extracted data will be significantly larger than the archive.
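A rough pre-flight check for the disk-space warning above. This is a sketch under stated assumptions: the 4 GiB archive size is a placeholder (substitute the real size after download), and the "extracted data needs about 2x the archive" multiplier is a conservative guess, not a measured figure for Base snapshots.

```bash
ARCHIVE_BYTES=$((4 * 1024 * 1024 * 1024))        # placeholder archive size: 4 GiB
NEEDED=$((ARCHIVE_BYTES * 3))                    # archive itself + ~2x for extracted data
FREE=$(df -P . | awk 'NR==2 {print $4 * 1024}')  # available bytes on this volume
if [ "$FREE" -lt "$NEEDED" ]; then
  echo "insufficient space: need $NEEDED bytes, have $FREE"
else
  echo "ok: $FREE bytes free"
fi
```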
@@ -46,9 +48,16 @@ These steps assume you are in the cloned `node` directory (the one containing `d
 tar -xzvf <snapshot-filename.tar.gz>
 ```
 
-4. **Move Data**: The extraction process will likely create a directory (e.g., `geth` or `reth`).
+4. **Move Data**: The extraction process will likely create a directory (e.g., `reth` or `geth`).
 
 * Move the *contents* of that directory into the data directory you created in Step 1.
+* Example (if archive extracted to a reth folder - **verify actual folder name**):
+
+```bash
+# For Reth
+mv ./reth/* ./reth-data/
+rm -rf ./reth # Clean up empty extracted folder
+```
 
 * Example (if archive extracted to a geth folder):
 

@@ -58,22 +67,15 @@ These steps assume you are in the cloned `node` directory (the one containing `d
 rm -rf ./geth # Clean up empty extracted folder
 ```
 
-* Example (if archive extracted to a reth folder - **verify actual folder name**):
-
-```bash
-# For Reth
-mv ./reth/* ./reth-data/
-rm -rf ./reth # Clean up empty extracted folder
-```
-
-* The goal is to have the chain data directories (e.g., `chaindata`, `nodes`, `segments`, etc.) directly inside `./geth-data` or `./reth-data`, not nested within another subfolder.
+* The goal is to have the chain data directories (e.g., `chaindata`, `nodes`, `segments`, etc.) directly inside `./reth-data` or `./geth-data`, not nested within another subfolder.
 
-5. **Start the Node**: Now that the snapshot data is in place, start the node using the appropriate command (see the [Running a Base Node](/base-chain/node-operators/run-a-base-node#setting-up-and-running-the-node) guide):
+5. **Start the Node**: Now that the snapshot data is in place, return to the root of your Base node folder and start the node:
 
 ```bash
-# Example for Mainnet Geth
-docker compose up --build -d
+cd ..
+docker compose up --build
 ```
 
-6. **Verify and Clean Up**: Monitor the node logs (`docker compose logs -f <service_name>`) or use the [sync monitoring](/base-chain/node-operators/run-a-base-node#monitoring-sync-progress) command to ensure the node starts syncing from the snapshot's block height. Once confirmed, you can safely delete the downloaded snapshot archive (`.tar.gz` file) to free up disk space.
+Your node should begin syncing from the last block in the snapshot.
 
+6. **Verify and Clean Up**: Monitor the node logs (`docker compose logs -f <service_name>`) or use the [sync monitoring](/base-chain/node-operators/run-a-base-node#syncing) command to ensure the node starts syncing from the snapshot's block height. Once confirmed, you can safely delete the downloaded snapshot archive (`.tar.gz` file) to free up disk space.
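The "directly inside the data dir, not nested" requirement from Step 4 can be checked mechanically. A sketch using a throwaway directory so it is safe to run anywhere; the `reth` subfolder name mirrors the example above, and the simulated `db` folder is purely illustrative:

```bash
DATA_DIR=./reth-data-demo   # throwaway dir so this check is safe to run
mkdir -p "$DATA_DIR/db"     # simulate correctly placed chain data
if [ -d "$DATA_DIR/reth" ]; then
  echo "nested layout: move the contents of $DATA_DIR/reth up one level"
else
  echo "layout ok"
fi
rm -rf "$DATA_DIR"          # clean up the simulation
```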

docs/base-chain/node-operators/troubleshooting.mdx

Lines changed: 11 additions & 12 deletions
@@ -10,9 +10,8 @@ This guide covers common issues encountered when setting up and running a Base n
 
 Before diving into specific issues, here are some general steps that often help:
 
 1. **Check Container Logs**: This is usually the most informative step. Use `docker compose logs -f <service_name>` to view the real-time logs for a specific container.
-- L2 Client (Geth): `docker compose logs -f op-geth`
-- L2 Client (Reth): `docker compose logs -f op-reth`
-- Rollup Node: `docker compose logs -f op-node`. Look for errors, warnings, or repeated messages.
+- L2 Client (Reth/Geth): `docker compose logs -f execution`
+- Rollup Node: `docker compose logs -f node`. Look for errors, warnings, or repeated messages.
 
 2. **Check Container Status**: Ensure the relevant Docker containers are running: `docker compose ps`. If a container is restarting frequently or exited, check its logs.

@@ -42,15 +41,15 @@ Before diving into specific issues, here are some general steps that often help:
 - **Issue**: Errors related to JWT secret or authentication between `op-node` and L2 client.
 - **Check**: Ensure you haven't manually modified the `OP_NODE_L2_ENGINE_AUTH` variable or the JWT file path (`$OP_NODE_L2_ENGINE_AUTH`) unless you know what you're doing. The `docker-compose` setup usually handles this automatically.
 
-- **Issue**: Permission errors related to data volumes (`./geth-data`, `./reth-data`).
-- **Check**: Ensure the user running `docker compose` has write permissions to the directory where the `node` repository was cloned. Docker needs to be able to write to `./geth-data` or `./reth-data`. Sometimes running Docker commands with `sudo` can cause permission issues later; try running as a non-root user added to the `docker` group.
+- **Issue**: Permission errors related to data volumes (`./reth-data`, `./geth-data`).
+- **Check**: Ensure the user running `docker compose` has write permissions to the directory where the `node` repository was cloned. Docker needs to be able to write to `./reth-data` or `./geth-data`. Sometimes running Docker commands with `sudo` can cause permission issues later; try running as a non-root user added to the `docker` group.
 
 ### Syncing Problems
 
 - **Issue**: Node doesn't start syncing or appears stuck (block height not increasing).
 - **Check**: `op-node` logs. Look for errors connecting to L1 endpoints or the L2 client.
-- **Check**: L2 client (`op-geth`/`op-reth`) logs. Look for errors connecting to `op-node` via the Engine API (port `8551`) or P2P issues.
-- **Check**: L1 node health and sync status. Is the L1 node accessible and fully synced?
+- **Check**: Execution client logs. Look for errors connecting to `op-node` via the Engine API (port `8551`) or P2P issues.
+- **Check**: L1 node health and sync status. Is the L1 node accessible and fully synced?
 - **Check**: System time. Ensure the server’s clock is accurately synchronized (use `ntp` or `chrony`). Significant time drift can cause P2P issues.
 
 - **Issue**: Syncing is extremely slow.
@@ -60,7 +59,7 @@ Before diving into specific issues, here are some general steps that often help:
 - **Check**: `op-node` and L2 client logs for any performance warnings or errors.
 
 - **Issue**: `optimism_syncStatus` (port `7545` on `op-node`) shows a large time difference or errors.
-- **Action**: Check the logs for both `op-node` and the L2 client (`op-geth`/`op-reth`) around the time the status was checked to identify the root cause (e.g., L1 connection issues, L2 client issues).
+- **Action**: Check the logs for both the rollup node and the L2 execution client around the time the status was checked to identify the root cause (e.g., L1 connection issues, L2 client issues).
 
 - **Issue**: `Error: nonce has already been used` when trying to send transactions.
 - **Cause**: The node is not yet fully synced to the head of the chain.
@@ -69,10 +68,10 @@ Before diving into specific issues, here are some general steps that often help:
 ### Performance Issues
 
 - **Issue**: High CPU, RAM, or Disk I/O usage.
+- **Action**: If running Geth, we highly recommend migrating to Reth, as it’s the recommended client and generally more performant for Base.
 - **Check**: Hardware specifications against the recommendations in [Node Performance](/base-chain/node-operators/performance-tuning). Upgrade if necessary. Local NVMe SSDs are critical.
 - **Check**: (Geth) Review Geth cache settings and LevelDB tuning options mentioned in [Node Performance – Geth Performance Tuning](/base-chain/node-operators/performance-tuning#geth-performance-tuning) and [Advanced Configuration](/base-chain/node-operators/run-a-base-node#geth-configuration-via-environment-variables).
 - **Check**: Review client logs for specific errors or bottlenecks.
-- **Action**: Consider using Reth if running Geth, as it’s generally more performant for Base.
 
 ### Snapshot Restoration Problems
 

@@ -90,8 +89,8 @@ Refer to the [Snapshots](/base-chain/node-operators/snapshots) guide for the cor
 
 - **Issue**: Node fails to start after restoring snapshot; logs show database errors or missing files.
 - **Check**: Did you stop the node (`docker compose down`) _before_ modifying the data directory?
-- **Check**: Did you remove the _contents_ of the old data directory (`./geth-data/*` or `./reth-data/*`) before extracting/moving the snapshot data?
-- **Check**: Was the snapshot data moved correctly? The chain data needs to be directly inside `./geth-data` or `./reth-data`, not in a nested subfolder (e.g., `./geth-data/geth/...`). Verify the folder structure.
+- **Check**: Did you remove the _contents_ of the old data directory (`./reth-data/*` or `./geth-data/*`) before extracting/moving the snapshot data?
+- **Check**: Was the snapshot data moved correctly? The chain data needs to be directly inside `./reth-data` or `./geth-data`, not in a nested subfolder (e.g., `./reth-data/reth/...`). Verify the folder structure.
 
 - **Issue**: Ran out of disk space during download or extraction.
 - **Action**: Free up disk space or provision a larger volume. Remember the storage formula:
@@ -102,7 +101,7 @@ Refer to the [Snapshots](/base-chain/node-operators/snapshots) guide for the cor
 ### Networking / Connectivity Issues
 
 - **Issue**: RPC/WS connection refused (e.g., `curl` to `localhost:8545` fails).
-- **Check**: Is the L2 client container (`op-geth`/`op-reth`) running (`docker compose ps`)?
+- **Check**: Is the L2 client container running (`docker compose ps`)?
 - **Check**: Are you using the correct port (`8545` for HTTP, `8546` for WS by default)?
 - **Check**: L2 client logs. Did it fail to start the RPC server?
 - **Check**: Are the `--http.addr` and `--ws.addr` flags set to `0.0.0.0` in the client config/entrypoint to allow external connections (within the Docker network)?
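A quick liveness probe matching the checks above. This is a sketch: the port is the default HTTP RPC port mentioned in the list, and `eth_chainId` is a standard JSON-RPC method; if nothing is listening (or `curl` is unavailable) the probe simply reports no response.

```bash
# Probe a local JSON-RPC endpoint; prints "responding" or "no response".
probe() {
  curl -s --max-time 2 -H "Content-Type: application/json" \
    -d '{"id":1,"jsonrpc":"2.0","method":"eth_chainId","params":[]}' \
    "http://localhost:$1" > /dev/null 2>&1 \
    && echo "port $1: responding" || echo "port $1: no response"
}
probe 8545   # default HTTP RPC port
```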
