Commit f2a4177: Self hosting page
1 parent 3d2eb1d commit f2a4177

1 file changed: +108 -0 lines

infrastructure/self-hosting.mdx
We are currently officially supporting self-hosting on Google Cloud Platform (GCP).

## Google Cloud Platform

### Prerequisites

**Tools**
- [Packer](https://developer.hashicorp.com/packer/tutorials/docker-get-started/get-started-install-cli#installing-packer)
- [Golang](https://go.dev/doc/install)
- [Docker](https://docs.docker.com/engine/install/)
- [Terraform](https://developer.hashicorp.com/terraform/tutorials/aws-get-started/install-cli) (v1.5.x)
  - [This version is the last one still using the Mozilla Public License](https://github.com/hashicorp/terraform/commit/b145fbcaadf0fa7d0e7040eac641d9aef2a26433)
  - The last version of Terraform that uses the Mozilla Public License is **v1.5.7**
  - You can install it with [tfenv](https://github.com/tfutils/tfenv) for easier version management
- [Google Cloud CLI](https://cloud.google.com/sdk/docs/install)
  - Used for managing GCP resources deployed by Terraform
  - Authenticate with `gcloud auth login && gcloud auth application-default login`
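As a quick sanity check before continuing, you can verify the tools above are on your `PATH`. This is a minimal sketch, assuming the binaries are named `packer`, `go`, `docker`, `terraform`, and `gcloud`:

```sh
# Count how many of the required CLIs are missing from PATH.
missing=0
for tool in packer go docker terraform gcloud; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
    missing=$((missing + 1))
  fi
done
echo "missing tools: $missing"
```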
**Accounts**
- Cloudflare account with a domain
- Google Cloud Platform account and project
- Supabase account with PostgreSQL database
- **(Optional)** Grafana account for monitoring and logging
- **(Optional)** Posthog account for analytics
### Steps

1. Go to `console.cloud.google.com` and create a new GCP project
> Make sure your quota allows you to have at least 2500 GB for `Persistent Disk SSD (GB)` and at least 24 for `CPUs`.
2. Create `.env.prod`, `.env.staging`, or `.env.dev` from [`.env.template`](https://github.com/e2b-dev/infra/blob/main/.env.template). You can pick any of them. Make sure to fill in the values; all are required unless specified otherwise.
> Get the Postgres database connection string from your database, e.g. [from Supabase](https://supabase.com/docs/guides/database/connecting-to-postgres#direct-connection): create a new project in Supabase, then go to your project -> Settings -> Database -> Connection Strings -> Postgres -> Direct.

> Your Postgres database needs to have IPv4 access enabled. You can do that in the Connect screen.
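Before moving on, it can help to sanity-check the connection string you pasted into the env file. A minimal sketch, using a hypothetical placeholder value in place of your real Supabase credentials:

```sh
# Hypothetical placeholder; use the direct connection string from Supabase instead.
POSTGRES_CONNECTION_STRING="postgresql://postgres:your-password@db.your-project-ref.supabase.co:5432/postgres"

# A direct Postgres connection string should use the postgres:// or postgresql:// scheme.
case "$POSTGRES_CONNECTION_STRING" in
  postgres://*|postgresql://*) scheme_ok=yes ;;
  *) scheme_ok=no ;;
esac
echo "scheme ok: $scheme_ok"
```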
3. Run `make switch-env ENV={prod,staging,dev}` to start using your env
4. Run `make login-gcloud` to log in to the `gcloud` CLI so Terraform and Packer can communicate with the GCP API.
5. Run `make init`
> If this errors, run it a second time. It's caused by a race condition in Terraform enabling API access for the various GCP services; this can take several seconds.

> A full list of services that will be enabled for API access: [Secret Manager API](https://console.cloud.google.com/apis/library/secretmanager.googleapis.com), [Certificate Manager API](https://console.cloud.google.com/apis/library/certificatemanager.googleapis.com), [Compute Engine API](https://console.cloud.google.com/apis/library/compute.googleapis.com), [Artifact Registry API](https://console.cloud.google.com/apis/library/artifactregistry.googleapis.com), [OS Config API](https://console.cloud.google.com/apis/library/osconfig.googleapis.com), [Stackdriver Monitoring API](https://console.cloud.google.com/apis/library/monitoring.googleapis.com), [Stackdriver Logging API](https://console.cloud.google.com/apis/library/logging.googleapis.com)
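If you want to confirm a service was actually enabled, one hedged option is to query the enabled-services list with `gcloud`; this sketch assumes `gcloud` is installed and authenticated against the project from step 1, and skips the check otherwise:

```sh
if command -v gcloud >/dev/null 2>&1; then
  # Shows whether the Compute Engine API is enabled; repeat for the other services.
  gcloud services list --enabled --filter="name:compute.googleapis.com" || \
    echo "listing failed; check your gcloud authentication"
else
  echo "gcloud not found; skipping the check"
fi
checked=done
```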
6. Run `make build-and-upload`
7. Run `make copy-public-builds`
8. Run `make migrate`
9. Secrets are created and stored in GCP Secrets Manager. Once created, that is the source of truth; you will need to update values there to make changes. Create a secret value for the following secrets:
10. Update `e2b-cloudflare-api-token` in GCP Secrets Manager with a value taken from Cloudflare.
> Get a Cloudflare API token: go to the [Cloudflare dashboard](https://dash.cloudflare.com/) -> Manage Account -> Account API Tokens -> Create Token -> Edit Zone DNS -> in "Zone Resources" select your domain and generate the token.
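The token can also be stored non-interactively. A hedged sketch using the `gcloud` CLI, assuming the `e2b-cloudflare-api-token` secret already exists (created in the earlier steps) and `gcloud` is authenticated; the token value below is a placeholder:

```sh
# Placeholder; paste the token generated in the Cloudflare dashboard instead.
CLOUDFLARE_API_TOKEN="example-cloudflare-token"

if command -v gcloud >/dev/null 2>&1; then
  # printf avoids storing a trailing newline along with the token.
  printf '%s' "$CLOUDFLARE_API_TOKEN" |
    gcloud secrets versions add e2b-cloudflare-api-token --data-file=-
else
  echo "gcloud not found; update the secret in the GCP Console instead"
fi
```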
11. Run `make plan-without-jobs` and then `make apply`
12. Fill out the following secrets in the GCP Secrets Manager:
- `e2b-supabase-jwt-secrets` (optional; required to self-host the [E2B dashboard](https://github.com/e2b-dev/dashboard))
> Get the Supabase JWT secret: go to the [Supabase dashboard](https://supabase.com/dashboard) -> select your project -> Project Settings -> Data API -> JWT Settings
- `e2b-postgres-connection-string`
> This is the same value as the `POSTGRES_CONNECTION_STRING` env variable.
13. Run `make plan` and then `make apply`
> Note: This will work once the TLS certificates are issued. It can take some time; you can check the status in the Google Cloud Console.
14. Set up data in the cluster by following one of the two options:
- Run `make prep-cluster` in `packages/shared` to create an initial user, etc. (you need to be logged in via the [`e2b` CLI](https://www.npmjs.com/package/@e2b/cli)). It will create a user with the same information (access token, API key, etc.) as you have in E2B.
- You can also create a user in the database; it will automatically also create a team, an API key, and an access token. You will then need to build template(s) for your cluster: use the [`e2b` CLI](https://www.npmjs.com/package/@e2b/cli?activetab=versions) and run `E2B_DOMAIN=<your-domain> e2b template build`.
### Interacting with the cluster

#### SDK
When using the SDK, pass the domain when creating a new `Sandbox` in the JS/TS SDK:
```javascript
import { Sandbox } from "@e2b/sdk";

const sandbox = new Sandbox({ domain: "<your-domain>" });
```
or in the Python SDK:

```python
from e2b import Sandbox

sandbox = Sandbox(domain="<your-domain>")
```
#### CLI
When using the CLI, you can pass the domain as well:
```sh
E2B_DOMAIN=<your-domain> e2b <command>
```
### Monitoring and logging jobs

To access the Nomad web UI, go to `https://nomad.<your-domain.com>`. When prompted to sign in with an API token, you can find it in GCP Secrets Manager.
From here, you can see Nomad jobs and tasks for both client and server, including logging.

To update jobs running in the cluster, look inside `packages/nomad` for the config files. This can be useful for setting up your logging and monitoring agents.
### Deployment Troubleshooting

If any problems arise, open a [GitHub issue on the repo](https://github.com/e2b-dev/infra/issues) and we'll look into it.
### Google Cloud Troubleshooting

**Quotas not available**

If you can't find the quota in `All Quotas` in GCP's Console, create and delete a dummy VM before proceeding to step 2 of the self-deploy guide. This will create additional quotas and policies in GCP:
```sh
gcloud compute instances create dummy-init --project=YOUR-PROJECT-ID --zone=YOUR-ZONE --machine-type=e2-medium --boot-disk-type=pd-ssd --no-address
```
Wait a minute and destroy the VM:
```sh
gcloud compute instances delete dummy-init --zone=YOUR-ZONE --quiet
```
Now you should see the right quota options in `All Quotas` and be able to request the correct size.
## Linux Machine
All E2B services are AMD64 compatible and ready to be deployed on Ubuntu 22.04 machines.
Tooling for on-premise clustering and load-balancing is **not yet officially supported**.
