docs/base-chain/node-operators/performance-tuning.mdx (+8, -11 lines)
@@ -28,13 +28,13 @@ If utilizing Amazon Elastic Block Store (EBS), io2 Block Express volumes are rec
 
 The following are the hardware specifications used for Base production nodes:
 
-- **Geth Full Node:**
-- Instance: AWS `i4i.12xlarge`
+- **Reth Archive Node (recommended):**
+- Instance: AWS `i7i.12xlarge` or larger
 - Storage: RAID 0 of all local NVMe drives (`/dev/nvme*`)
 - Filesystem: ext4
 
-- **Reth Archive Node:**
-- Instance: AWS `i4ie.6xlarge`
+- **Geth Full Node:**
+- Instance: AWS `i7i.12xlarge` or larger
 - Storage: RAID 0 of all local NVMe drives (`/dev/nvme*`)
 - Filesystem: ext4
 
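For the storage spec above, the RAID 0 array is typically assembled with `mdadm`. A minimal sketch, assuming two local NVMe devices and a `/data` mount point (both assumptions; check `lsblk` for the actual device names):

```bash
# Stripe the local NVMe drives into a single RAID 0 array
sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1

# Format as ext4 and mount where the node's data directory will live
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /data            # hypothetical mount point
sudo mount /dev/md0 /data
```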
@@ -46,16 +46,13 @@ Using a recent [snapshot](/base-chain/node-operators/snapshots) can significantl
 
 The [Base Node](https://github.com/base/node) repository contains the current stable configurations and instructions for running different client implementations.
 
-### Supported Clients
-
 Reth is currently the most performant client for running Base nodes. Future optimizations will primarily focus on Reth. You can read more about the migration to Reth [here](https://blog.base.dev/scaling-base-with-reth).
+Geth is no longer supported; Reth is the recommended client and has been shown to be more performant. We recommend migrating Geth nodes to Reth, especially if you are experiencing performance issues.
docs/base-chain/node-operators/run-a-base-node.mdx (+2, -29 lines)
@@ -23,7 +23,7 @@ If you're just getting started and need an RPC URL, you can use our free endpoin
 
 **Note:** Our RPCs are rate-limited; they are not suitable for production apps.
 
-If you're looking to harden your app and avoid rate-limiting for your users, please check out one of our [partners](/base-chain/tools/node-providers).
+If you're looking to harden your app and avoid rate-limiting for your users, please consider using an endpoint from one of our [partners](/base-chain/tools/node-providers).
 
 Syncing your node may take **days** and will consume a vast amount of your requests quota. Be sure to monitor usage and upgrade your plan if needed.
 </Warning>
 
-
 ### Snapshots
 
 <Note>
 Geth Archive Nodes are no longer supported. For Archive functionality, use Reth, which provides significantly better performance in Base’s high-throughput environment.
 </Note>
 
-
-If you're a prospective or current Base Node operator and would like to restore from a snapshot to save time on the initial sync, it's possible to always get the latest available snapshot of the Base chain on mainnet and/or testnet by using the following CLI commands. The snapshots are updated every week.
-
-#### Restoring from snapshot
-
-In the home directory of your Base Node, create a folder named `geth-data` or `reth-data`. If you already have this folder, remove it to clear the existing state and then recreate it. Next, run the following code and wait for the operation to complete.
-You'll then need to untar the downloaded snapshot and place the `geth` subfolder inside of it in the `geth-data` folder you created (unless you changed the location of your data directory).
-
-Return to the root of your Base node folder and start your node.
-
-```bash Terminal
-cd ..
-docker compose up --build
-```
-
-Your node should begin syncing from the last block in the snapshot.
-
-Check the latest block to make sure you're syncing from the snapshot and that it restored correctly. If so, you can remove the snapshot archive that you downloaded.
+If you're a Base Node operator and would like to save significant time on the initial sync, you may [restore from a snapshot](/base-chain/node-operators/snapshots#restoring-from-snapshot). The snapshots are updated every week.
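After restoring, one quick way to confirm the node came back at the snapshot height is a JSON-RPC call against the execution client. A sketch, assuming the default local RPC port `8545` used elsewhere in these docs:

```bash
# Returns the latest block number as a hex string
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_blockNumber","params":[],"id":1}' \
  http://localhost:8545
```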
docs/base-chain/node-operators/snapshots.mdx (+25, -23 lines)
@@ -17,24 +17,26 @@ These steps assume you are in the cloned `node` directory (the one containing `d
 
 1. **Prepare Data Directory**:
 - **Before running Docker for the first time**, create the data directory on your host machine that will be mapped into the Docker container. This directory must match the `volumes` mapping in the `docker-compose.yml` file for the client you intend to use.
-- For Geth:
+- For Reth (recommended):
 ```bash
-mkdir ./geth-data
+mkdir ./reth-data
 ```
-- For Reth:
+- For Geth:
 ```bash
-mkdir ./reth-data
+mkdir ./geth-data
 ```
-- If you have previously run the node and have an existing data directory, **stop the node** (`docker compose down`), remove the _contents_ of the existing directory (e.g. `rm -rf ./geth-data/*`), and proceed.
+- If you have previously run the node and have an existing data directory, **stop the node** (`docker compose down`), remove the _contents_ of the existing directory (e.g. `rm -rf ./reth-data/*`), and proceed.
 
 2. **Download Snapshot**: Choose the appropriate snapshot for your network and client from the table below. Use `wget` (or similar) to download it into the `node` directory.
 | Mainnet | Geth | Full | `wget https://mainnet-full-snapshots.base.org/$(curl https://mainnet-full-snapshots.base.org/latest)` |
 
 <Note>
 Ensure you have enough free disk space to download the snapshot archive (`.tar.gz` file) _and_ extract its contents. The extracted data will be significantly larger than the archive.
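Before downloading, it is worth confirming the headroom that note describes. A small sketch, assuming you are in the `node` directory:

```bash
df -h .                        # free space on the volume holding the data directory
ls -lh ./*.tar.gz 2>/dev/null  # size of any snapshot archive already downloaded
```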
@@ -46,9 +48,16 @@ These steps assume you are in the cloned `node` directory (the one containing `d
 tar -xzvf <snapshot-filename.tar.gz>
 ```
 
-4. **Move Data**: The extraction process will likely create a directory (e.g., `geth` or `reth`).
+4. **Move Data**: The extraction process will likely create a directory (e.g., `reth` or `geth`).
 
 * Move the *contents* of that directory into the data directory you created in Step 1.
+* Example (if archive extracted to a reth folder - **verify actual folder name**):
+
+```bash
+# For Reth
+mv ./reth/* ./reth-data/
+rm -rf ./reth # Clean up empty extracted folder
+```
 
 * Example (if archive extracted to a geth folder):
@@ -58,22 +67,15 @@ These steps assume you are in the cloned `node` directory (the one containing `d
 rm -rf ./geth # Clean up empty extracted folder
 ```
 
-* Example (if archive extracted to a reth folder - **verify actual folder name**):
-
-```bash
-# For Reth
-mv ./reth/* ./reth-data/
-rm -rf ./reth # Clean up empty extracted folder
-```
-
-* The goal is to have the chain data directories (e.g., `chaindata`, `nodes`, `segments`, etc.) directly inside `./geth-data` or `./reth-data`, not nested within another subfolder.
+* The goal is to have the chain data directories (e.g., `chaindata`, `nodes`, `segments`, etc.) directly inside `./reth-data` or `./geth-data`, not nested within another subfolder.
 
-5. **Start the Node**: Now that the snapshot data is in place, start the node using the appropriate command (see the [Running a Base Node](/base-chain/node-operators/run-a-base-node#setting-up-and-running-the-node) guide):
+5. **Start the Node**: Now that the snapshot data is in place, return to the root of your Base node folder and start the node:
 
 ```bash
-# Example for Mainnet Geth
-docker compose up --build -d
+cd ..
+docker compose up --build
 ```
 
-6. **Verify and Clean Up**: Monitor the node logs (`docker compose logs -f <service_name>`) or use the [sync monitoring](/base-chain/node-operators/run-a-base-node#monitoring-sync-progress) command to ensure the node starts syncing from the snapshot's block height. Once confirmed, you can safely delete the downloaded snapshot archive (`.tar.gz` file) to free up disk space.
+Your node should begin syncing from the last block in the snapshot.
+
+6. **Verify and Clean Up**: Monitor the node logs (`docker compose logs -f <service_name>`) or use the [sync monitoring](/base-chain/node-operators/run-a-base-node#syncing) command to ensure the node starts syncing from the snapshot's block height. Once confirmed, you can safely delete the downloaded snapshot archive (`.tar.gz` file) to free up disk space.
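As a concrete version of that verification step (a sketch: it assumes the rollup-node service is named `node`, as in the logs examples in these docs, and that `op-node` exposes its RPC on the default port `7545`):

```bash
# Watch sync progress in the rollup-node logs
docker compose logs -f node

# Or query op-node directly for its view of sync status
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"optimism_syncStatus","params":[],"id":1}' \
  http://localhost:7545
```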
docs/base-chain/node-operators/troubleshooting.mdx (+11, -12 lines)
@@ -10,9 +10,8 @@ This guide covers common issues encountered when setting up and running a Base n
 
 Before diving into specific issues, here are some general steps that often help:
 
 1. **Check Container Logs**: This is usually the most informative step. Use `docker compose logs -f <service_name>` to view the real-time logs for a specific container.
 - Rollup Node: `docker compose logs -f node`. Look for errors, warnings, or repeated messages.
 
 2. **Check Container Status**: Ensure the relevant Docker containers are running: `docker compose ps`. If a container is restarting frequently or exited, check its logs.
@@ -42,15 +41,15 @@ Before diving into specific issues, here are some general steps that often help:
 - **Issue**: Errors related to JWT secret or authentication between `op-node` and L2 client.
 - **Check**: Ensure you haven't manually modified the `OP_NODE_L2_ENGINE_AUTH` variable or the JWT file path (`$OP_NODE_L2_ENGINE_AUTH`) unless you know what you're doing. The `docker-compose` setup usually handles this automatically.
 
-- **Issue**: Permission errors related to data volumes (`./geth-data`, `./reth-data`).
-- **Check**: Ensure the user running `docker compose` has write permissions to the directory where the `node` repository was cloned. Docker needs to be able to write to `./geth-data` or `./reth-data`. Sometimes running Docker commands with `sudo` can cause permission issues later; try running as a non-root user added to the `docker` group.
+- **Issue**: Permission errors related to data volumes (`./reth-data`, `./geth-data`).
+- **Check**: Ensure the user running `docker compose` has write permissions to the directory where the `node` repository was cloned. Docker needs to be able to write to `./reth-data` or `./geth-data`. Sometimes running Docker commands with `sudo` can cause permission issues later; try running as a non-root user added to the `docker` group.
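That last suggestion as a sketch (the `docker` group name is the Docker default; the `chown` is a hypothetical fix for data directories previously created by root):

```bash
# Run compose as a non-root user in the docker group
sudo usermod -aG docker "$USER"
newgrp docker                  # apply the new group membership in this shell

# Hypothetical ownership fix if the data dirs were created by root
sudo chown -R "$USER":"$USER" ./reth-data ./geth-data
```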
 
 ### Syncing Problems
 
 - **Issue**: Node doesn't start syncing or appears stuck (block height not increasing).
 - **Check**: `op-node` logs. Look for errors connecting to L1 endpoints or the L2 client.
-- **Check**: L2 client (`op-geth`/`op-reth`) logs. Look for errors connecting to `op-node` via the Engine API (port `8551`) or P2P issues.
-- **Check**: L1 node health and sync status. Is the L1 node accessible and fully synced?
+- **Check**: Execution client logs. Look for errors connecting to `op-node` via the Engine API (port `8551`) or P2P issues.
+- **Check**: L1 node health and sync status. Is the L1 node accessible and fully synced?
 - **Check**: System time. Ensure the server’s clock is accurately synchronized (use `ntp` or `chrony`). Significant time drift can cause P2P issues.
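A quick way to check clock health on a systemd host (a sketch; `chronyc` is only present where chrony is installed):

```bash
timedatectl status             # look for "System clock synchronized: yes"
chronyc tracking 2>/dev/null   # offset and skew details when chrony is in use
```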
 
 - **Issue**: Syncing is extremely slow.
@@ -60,7 +59,7 @@ Before diving into specific issues, here are some general steps that often help:
 - **Check**: `op-node` and L2 client logs for any performance warnings or errors.
 
 - **Issue**: `optimism_syncStatus` (port `7545` on `op-node`) shows a large time difference or errors.
-- **Action**: Check the logs for both `op-node` and the L2 client (`op-geth`/`op-reth`) around the time the status was checked to identify the root cause (e.g., L1 connection issues, L2 client issues).
+- **Action**: Check the logs for both the rollup node and the L2 execution client around the time the status was checked to identify the root cause (e.g., L1 connection issues, L2 client issues).
 
 - **Issue**: `Error: nonce has already been used` when trying to send transactions.
 - **Cause**: The node is not yet fully synced to the head of the chain.
@@ -69,10 +68,10 @@ Before diving into specific issues, here are some general steps that often help:
 ### Performance Issues
 
 - **Issue**: High CPU, RAM, or Disk I/O usage.
+- **Action**: If running Geth, we highly recommend migrating to Reth, as it’s the recommended client and generally more performant for Base.
 - **Check**: Hardware specifications against recommendations in the [Node Performance](/base-chain/node-operators/performance-tuning) guide. Upgrade if necessary. Local NVMe SSDs are critical.
 - **Check**: (Geth) Review Geth cache settings and LevelDB tuning options mentioned in [Node Performance – Geth Performance Tuning](/base-chain/node-operators/performance-tuning#geth-performance-tuning) and [Advanced Configuration](/base-chain/node-operators/run-a-base-node#geth-configuration-via-environment-variables).
 - **Check**: Review client logs for specific errors or bottlenecks.
-- **Action**: Consider using Reth if running Geth, as it’s generally more performant for Base.
 
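For those resource checks, standard host-level tools are enough. A sketch (`iostat` ships in the `sysstat` package):

```bash
docker stats --no-stream   # per-container CPU and memory usage
iostat -xm 5               # disk utilization and latency at 5-second intervals
free -h                    # overall memory headroom
```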
### Snapshot Restoration Problems
@@ -90,8 +89,8 @@ Refer to the [Snapshots](/base-chain/node-operators/snapshots) guide for the cor
 - **Issue**: Node fails to start after restoring snapshot; logs show database errors or missing files.
 - **Check**: Did you stop the node (`docker compose down`) _before_ modifying the data directory?
-- **Check**: Did you remove the _contents_ of the old data directory (`./geth-data/*` or `./reth-data/*`) before extracting/moving the snapshot data?
-- **Check**: Was the snapshot data moved correctly? The chain data needs to be directly inside `./geth-data` or `./reth-data`, not in a nested subfolder (e.g., `./geth-data/geth/...`). Verify the folder structure.
+- **Check**: Did you remove the _contents_ of the old data directory (`./reth-data/*` or `./geth-data/*`) before extracting/moving the snapshot data?
+- **Check**: Was the snapshot data moved correctly? The chain data needs to be directly inside `./reth-data` or `./geth-data`, not in a nested subfolder (e.g., `./reth-data/reth/...`). Verify the folder structure.
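A quick structure check for that last point (a sketch; exact directory names vary by client and version):

```bash
# Chain data should sit directly under the data dir, not one level deeper
ls ./reth-data   # expect chain-data entries (e.g. segments), not a lone reth/ folder
ls ./geth-data   # expect chaindata/, nodes/, ... not a lone geth/ folder
```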
 
 - **Issue**: Ran out of disk space during download or extraction.
 - **Action**: Free up disk space or provision a larger volume. Remember the storage formula:
@@ -102,7 +101,7 @@ Refer to the [Snapshots](/base-chain/node-operators/snapshots) guide for the cor
 ### Networking / Connectivity Issues
 
 - **Issue**: RPC/WS connection refused (e.g., `curl` to `localhost:8545` fails).
-- **Check**: Is the L2 client container (`op-geth`/`op-reth`) running (`docker compose ps`)?
+- **Check**: Is the L2 client container running (`docker compose ps`)?
 - **Check**: Are you using the correct port (`8545` for HTTP, `8546` for WS by default)?
 - **Check**: L2 client logs. Did it fail to start the RPC server?
 - **Check**: Are the `--http.addr` and `--ws.addr` flags set to `0.0.0.0` in the client config/entrypoint to allow external connections (within the Docker network)?
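A minimal probe of the default HTTP RPC port above (a sketch; adjust host and port if you changed the compose mapping):

```bash
# Should return the chain id as hex (0x2105 for Base mainnet)
curl -s -X POST -H "Content-Type: application/json" \
  --data '{"jsonrpc":"2.0","method":"eth_chainId","params":[],"id":1}' \
  http://localhost:8545
```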