diff --git a/src/content/docs/r2/api/s3/presigned-urls.mdx b/src/content/docs/r2/api/s3/presigned-urls.mdx index de198f7095879e..cdea7896643491 100644 --- a/src/content/docs/r2/api/s3/presigned-urls.mdx +++ b/src/content/docs/r2/api/s3/presigned-urls.mdx @@ -3,227 +3,219 @@ title: Presigned URLs pcx_content_type: concept --- -import {Tabs, TabItem } from "~/components"; +import {Tabs, TabItem, LinkCard } from "~/components"; -Presigned URLs are an [S3 concept](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html) for sharing direct access to your bucket without revealing your token secret. A presigned URL authorizes anyone with the URL to perform an action to the S3 compatibility endpoint for an R2 bucket. By default, the S3 endpoint requires an `AUTHORIZATION` header signed by your token. Every presigned URL has S3 parameters and search parameters containing the signature information that would be present in an `AUTHORIZATION` header. The performable action is restricted to a specific resource, an [operation](/r2/api/s3/api/), and has an associated timeout. +Presigned URLs are an [S3 concept](https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html) for granting temporary access to objects without exposing your API credentials. A presigned URL includes signature parameters in the URL itself, authorizing anyone with the URL to perform a specific operation (like `GetObject` or `PutObject`) on a specific object until the URL expires. -There are three kinds of resources in R2: +They are ideal for granting temporary access to specific objects, such as allowing users to upload files directly to R2 or providing time-limited download links. -1. **Account**: For account-level operations (such as `CreateBucket`, `ListBuckets`, `DeleteBucket`) the identifier is the account ID. -2. **Bucket**: For bucket-level operations (such as `ListObjects`, `PutBucketCors`) the identifier is the account ID, and bucket name. -3. 
**Object**: For object-level operations (such as `GetObject`, `PutObject`, `CreateMultipartUpload`) the identifier is the account ID, bucket name, and object path. +To generate a presigned URL, you specify: -All parts of the identifier are part of the presigned URL. +1. **Resource identifier**: Account ID, bucket name, and object path +2. **Operation**: The S3 API operation permitted (GET, PUT, HEAD, or DELETE) +3. **Expiry**: Timeout from 1 second to 7 days (604,800 seconds) -You cannot change the resource being accessed after the request is signed. For example, trying to change the bucket name to access the same object in a different bucket will return a `403` with an error code of `SignatureDoesNotMatch`. +Presigned URLs are generated client-side with no communication with R2, requiring only your R2 API credentials and an implementation of the AWS Signature Version 4 signing algorithm. -Presigned URLs must have a defined expiry. You can set a timeout from one second to 7 days (604,800 seconds) into the future. The URL will contain the time when the URL was generated (`X-Amz-Date`) and the timeout (`X-Amz-Expires`) as search parameters. These search parameters are signed and tampering with them will result in `403` with an error code of `SignatureDoesNotMatch`. +## Generate a presigned URL -Presigned URLs are generated with no communication with R2 and must be generated by an application with access to your R2 bucket's credentials. +### Prerequisites -## Presigned URL use cases +- [Account ID](/fundamentals/account/find-account-and-zone-ids/) (for constructing the S3 endpoint URL) +- [R2 API token](/r2/api/tokens/) (Access Key ID and Secret Access Key) +- AWS SDK or compatible S3 client library -There are three ways to grant an application access to R2: +### SDK examples -1. The application has its own copy of an [R2 API token](/r2/api/tokens/). -2. 
The application requests a copy of an R2 API token from a vault application and promises to not permanently store that token locally. -3. The application requests a central application to give it a presigned URL it can use to perform an action. - -In scenarios 1 and 2, if the application or vault application is compromised, the holder of the token can perform arbitrary actions. - -Scenario 3 keeps the credential secret. If the application making a presigned URL request to the central application leaks that URL, but the central application does not have its key storage system compromised, the impact is limited to one operation on the specific resource that was signed. - -Additionally, the central application can perform monitoring, auditing, logging tasks so you can review when a request was made to perform an operation on a specific resource. In the event of a security incident, you can use a central application's logging functionality to review details of the incident. - -The central application can also perform policy enforcement. For example, if you have an application responsible for uploading resources, you can restrict the upload to a specific bucket or folder within a bucket. The requesting application can obtain a JSON Web Token (JWT) from your authorization service to sign a request to the central application. The central application then uses the information contained in the JWT to validate the inbound request parameters. - -The central application can be, for example, a Cloudflare Worker. Worker secrets are cryptographically impossible to obtain outside of your script running on the Workers runtime. If you do not store a copy of the secret elsewhere and do not have your code log the secret somewhere, your Worker secret will remain secure. However, as previously mentioned, presigned URLs are generated outside of R2 and all that's required is the secret + an implementation of the signing algorithm, so you can generate them anywhere. 
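To make the last point concrete — generating a presigned URL needs nothing from R2 beyond your credentials and an implementation of the AWS Signature Version 4 algorithm — here is a rough, standard-library-only Python sketch of query-string presigning for a `GetObject`. It is illustrative only (placeholder identifiers, no error handling, no special-character encoding in the key); use an SDK in practice:

```python
import datetime
import hashlib
import hmac
import urllib.parse

def presign_get(account_id, bucket, key, access_key_id, secret_access_key,
                expires=3600, now=None):
    """Sketch of SigV4 query-string presigning for GetObject (region "auto")."""
    host = f"{bucket}.{account_id}.r2.cloudflarestorage.com"
    now = now or datetime.datetime.now(datetime.timezone.utc)
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    scope = f"{now:%Y%m%d}/auto/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key_id}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),
        "X-Amz-SignedHeaders": "host",
    }
    # Canonical query string: keys sorted, values percent-encoded (including "/")
    query = urllib.parse.urlencode(sorted(params.items()),
                                   quote_via=urllib.parse.quote)
    canonical_request = "\n".join([
        "GET", f"/{key}", query,
        f"host:{host}\n",     # canonical headers block ends with a newline
        "host",               # signed headers
        "UNSIGNED-PAYLOAD",   # payload hash placeholder used for presigned URLs
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical_request.encode()).hexdigest(),
    ])
    def _hmac(key_bytes, msg):
        return hmac.new(key_bytes, msg.encode(), hashlib.sha256).digest()
    # Derive the signing key: secret -> date -> region -> service -> aws4_request
    k = _hmac(("AWS4" + secret_access_key).encode(), f"{now:%Y%m%d}")
    for part in ("auto", "s3", "aws4_request"):
        k = _hmac(k, part)
    signature = hmac.new(k, string_to_sign.encode(), hashlib.sha256).hexdigest()
    return f"https://{host}/{key}?{query}&X-Amz-Signature={signature}"
```

An SDK performs exactly this derivation internally; the sketch mainly illustrates why the URL can be produced offline, with no round trip to R2.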
-
-Another potential use case for presigned URLs is debugging. For example, if you are debugging your application and want to grant temporary access to a specific test object in a production environment, you can do this without needing to share the underlying token and remembering to revoke it.
-
-## Supported HTTP methods
-
-R2 currently supports the following methods when generating a presigned URL:
-
-- `GET`: allows a user to fetch an object from a bucket
-- `HEAD`: allows a user to fetch an object's metadata from a bucket
-- `PUT`: allows a user to upload an object to a bucket
-- `DELETE`: allows a user to delete an object from a bucket
-
-`POST`, which performs uploads via native HTML forms, is not currently supported.
-
-## Presigned URL alternative with Workers
+
+
-A valid alternative design to presigned URLs is to use a Worker with a [binding](/workers/runtime-apis/bindings/) that implements your security policy.
+```ts
+import { S3Client, GetObjectCommand, PutObjectCommand } from "@aws-sdk/client-s3";
+import { getSignedUrl } from "@aws-sdk/s3-request-presigner";
+
+const S3 = new S3Client({
+  region: "auto", // Required by SDK but not used by R2
+  // Provide your Cloudflare account ID
+  endpoint: `https://<ACCOUNT_ID>.r2.cloudflarestorage.com`,
+  // Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens)
+  credentials: {
+    accessKeyId: '<ACCESS_KEY_ID>',
+    secretAccessKey: '<SECRET_ACCESS_KEY>',
+  },
+});
-:::note[Bindings]
+// Generate presigned URL for reading (GET)
+const getUrl = await getSignedUrl(
+  S3,
+  new GetObjectCommand({ Bucket: "my-bucket", Key: "image.png" }),
+  { expiresIn: 3600 }, // Valid for 1 hour
+);
+// https://my-bucket.<ACCOUNT_ID>.r2.cloudflarestorage.com/image.png?X-Amz-Algorithm=...
+
+// Generate presigned URL for writing (PUT)
+// Specify ContentType to restrict uploads to a specific file type
+const putUrl = await getSignedUrl(
+  S3,
+  new PutObjectCommand({
+    Bucket: "my-bucket",
+    Key: "image.png",
+    ContentType: "image/png",
+  }),
+  { expiresIn: 3600 },
+);
+```

-A binding is how your Worker interacts with external resources such as [KV Namespaces](/kv/concepts/kv-namespaces/), [Durable Objects](/durable-objects/), or [R2 Buckets](/r2/buckets/). A binding is a runtime variable that the Workers runtime provides to your code. You can declare a variable name in your Wrangler file that will be bound to these resources at runtime, and interact with them through this variable. Every binding's variable name and behavior is determined by you when deploying the Worker. Refer to [Environment Variables](/workers/configuration/environment-variables/) for more information.

+
+
+
+```python
+import boto3
+
+s3 = boto3.client(
+    service_name="s3",
+    # Provide your Cloudflare account ID
+    endpoint_url='https://<ACCOUNT_ID>.r2.cloudflarestorage.com',
+    # Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens)
+    aws_access_key_id='<ACCESS_KEY_ID>',
+    aws_secret_access_key='<SECRET_ACCESS_KEY>',
+    region_name="auto",  # Required by SDK but not used by R2
+)
+
+# Generate presigned URL for reading (GET)
+get_url = s3.generate_presigned_url(
+    'get_object',
+    Params={'Bucket': 'my-bucket', 'Key': 'image.png'},
+    ExpiresIn=3600  # Valid for 1 hour
+)
+# https://my-bucket.<ACCOUNT_ID>.r2.cloudflarestorage.com/image.png?X-Amz-Algorithm=...
+
+# Generate presigned URL for writing (PUT)
+# Specify ContentType to restrict uploads to a specific file type
+put_url = s3.generate_presigned_url(
+    'put_object',
+    Params={
+        'Bucket': 'my-bucket',
+        'Key': 'image.png',
+        'ContentType': 'image/png'
+    },
+    ExpiresIn=3600
+)
+```

-A binding is defined in the Wrangler file of your Worker project's directory.
+ + -::: +```sh +# Generate presigned URL for reading (GET) +# The AWS CLI presign command defaults to GET operations +aws s3 presign --endpoint-url https://.r2.cloudflarestorage.com \ + s3://my-bucket/image.png \ + --expires-in 3600 -Refer to [Use R2 from Workers](/r2/api/workers/workers-api-usage/) to learn how to bind a bucket to a Worker and use the binding to interact with your bucket. +# Output: +# https://.r2.cloudflarestorage.com/my-bucket/image.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=... -## Generate presigned URLs +# Note: The AWS CLI presign command only supports GET operations. +# For PUT operations, use one of the SDK examples above. +``` -Generate a presigned URL by referring to the following examples: + + +For complete examples and additional operations, refer to the SDK-specific documentation: +- [AWS SDK for JavaScript](/r2/examples/aws/aws-sdk-js-v3/#generate-presigned-urls) +- [AWS SDK for Python (Boto3)](/r2/examples/aws/boto3/#generate-presigned-urls) +- [AWS CLI](/r2/examples/aws/aws-cli/#generate-presigned-urls) - [AWS SDK for Go](/r2/examples/aws/aws-sdk-go/#generate-presigned-urls) -- [AWS SDK for JS v3](/r2/examples/aws/aws-sdk-js-v3/#generate-presigned-urls) -- [AWS SDK for JS](/r2/examples/aws/aws-sdk-js/#generate-presigned-urls) - [AWS SDK for PHP](/r2/examples/aws/aws-sdk-php/#generate-presigned-urls) -- [AWS CLI](/r2/examples/aws/aws-cli/#generate-presigned-urls) -### Example of generating presigned URLs +### Best practices -A possible use case may be restricting an application to only be able to upload to a specific URL. With presigned URLs, your central signing application might look like the following JavaScript code running on Cloudflare Workers, `workerd`, or another platform (you might have to update the code based on the platform you are using). 
+When generating presigned URLs, you can limit abuse and misuse by: -If the application received a request for `https://example.com/uploads/dog.png`, it would respond with a presigned URL allowing a user to upload to your R2 bucket at the `/uploads/dog.png` path. +- **Restricting Content-Type**: Specify the allowed `Content-Type` in your SDK's parameters. The signature will include this header, so uploads will fail with a `403/SignatureDoesNotMatch` error if the client sends a different `Content-Type` for an upload request. +- **Configuring CORS**: If your presigned URLs will be used from a browser, set up [CORS rules](/r2/buckets/cors/#use-cors-with-a-presigned-url) on your bucket to control which origins can make requests. -To create a presigned URL, you will need to either use a package that implements the signing algorithm, or implement the signing algorithm yourself. In this example, the `aws4fetch` package is used. You also need to have an access key ID and a secret access key. Refer to [R2 API tokens](/r2/api/tokens/) for more information. +## Using a presigned URL -```ts -import { AwsClient } from "aws4fetch"; - -// Create a new client -// Replace with your own access key ID and secret access key -// Make sure to store these securely and not expose them -const client = new AwsClient({ - accessKeyId: "", - secretAccessKey: "", -}); +Once generated, use a presigned URL like any HTTP endpoint. The signature is embedded in the URL, so no additional authentication headers are required. -export default { - async fetch(req): Promise { - // This is just an example to demonstrating using aws4fetch to generate a presigned URL. - // This Worker should not be used as-is as it does not authenticate the request, meaning - // that anyone can upload to your bucket. - // - // Consider implementing authorization, such as a preshared secret in a request header. 
- const requestPath = new URL(req.url).pathname; - - // Cannot upload to the root of a bucket - if (requestPath === "/") { - return new Response("Missing a filepath", { status: 400 }); - } - - // Replace with your bucket name and account ID - const bucketName = ""; - const accountId = ""; - - const url = new URL( - `https://${bucketName}.${accountId}.r2.cloudflarestorage.com`, - ); - - // preserve the original path - url.pathname = requestPath; - - // Specify a custom expiry for the presigned URL, in seconds - url.searchParams.set("X-Amz-Expires", "3600"); - - const signed = await client.sign( - new Request(url, { - method: "PUT", - }), - { - aws: { signQuery: true }, - }, - ); - - // Caller can now use this URL to upload to that object. - return new Response(signed.url, { status: 200 }); - }, +```sh +# Download using a GET presigned URL +curl "https://my-bucket..r2.cloudflarestorage.com/image.png?X-Amz-Algorithm=..." - // ... handle other kinds of requests -} satisfies ExportedHandler; +# Upload using a PUT presigned URL +curl -X PUT "https://my-bucket..r2.cloudflarestorage.com/image.png?X-Amz-Algorithm=..." \ + --data-binary @image.png ``` -## Differences between presigned URLs and R2 binding - -- When using an R2 binding, you will not need any token secrets in your Worker code. Instead, in your [Wrangler configuration file](/workers/wrangler/configuration/), you will create a [binding](/r2/api/workers/workers-api-usage/#3-bind-your-bucket-to-a-worker) to your R2 bucket. Additionally, authorization is handled in-line, which can reduce latency. -- When using presigned URLs, you will need to create and use the token secrets in your Worker code. - -In some cases, R2 bindings let you implement certain functionality more easily. For example, if you wanted to offer a write-once guarantee so that users can only upload to a path once: - -- With R2 binding: You only need to pass the header once. 
-- With presigned URLs: You need to first sign specific headers, then request the user to send the same headers.
+You can also use presigned URLs directly in web browsers, mobile apps, or any HTTP client. The same presigned URL can be reused multiple times until it expires.

-
-

+## Presigned URL example

-If you are using R2 bindings, you would change your upload to:
+The following is an example of a presigned URL that was created using R2 API credentials and following the AWS Signature Version 4 signing process:

-```ts
-const existingObject = await env.R2_BUCKET.put(key, request.body, {
-  onlyIf: {
-    // No objects will have been uploaded before September 28th, 2021 which
-    // is the initial R2 announcement.
-    uploadedBefore: new Date(1632844800000),
-  },
-});
-if (existingObject?.etag !== request.headers.get("etag")) {
-  return new Response("attempt to overwrite object", { status: 400 });
-}
+```
+https://my-bucket.123456789abcdef0123456789abcdef.r2.cloudflarestorage.com/photos/cat.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=CFEXAMPLEKEY12345%2F20251201%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20251201T180512Z&X-Amz-Expires=3600&X-Amz-Signature=8c3ac40fa6c83d64b4516e0c9e5fa94c998bb79131be9ddadf90cefc5ec31033&X-Amz-SignedHeaders=host&x-amz-checksum-mode=ENABLED&x-id=GetObject
```

-When using R2 bindings, you may need to consider the following limitations:
-
-- You cannot upload more than 100 MiB (200 MiB for Business customers) when using R2 bindings.
-- Enterprise customers can upload 500 MiB by default and can ask their account team to raise this limit.
-- Detecting [precondition failures](/r2/api/s3/extensions/#conditional-operations-in-putobject) is currently easier with presigned URLs as compared with R2 bindings.

+In this example, the presigned URL performs a `GetObject` on the object `photos/cat.png` in bucket `my-bucket`, in the account with ID `123456789abcdef0123456789abcdef`.
The key signature parameters that compose this presigned URL are: -Note that these limitations depend on R2's extension for conditional uploads. Amazon's S3 service does not offer such functionality at this time. - - -You can modify the previous example to sign additional headers: +- `X-Amz-Algorithm`: Identifies the algorithm used to sign the URL. +- `X-Amz-Credential`: Contains information about the credentials used to calculate the signature. +- `X-Amz-Date`: The date and time (in ISO 8601 format) when the signature was created. +- `X-Amz-Expires`: The duration in seconds that the presigned URL remains valid, starting from `X-Amz-Date`. +- `X-Amz-Signature`: The signature proving the URL was signed using the secret key. +- `X-Amz-SignedHeaders`: Lists the HTTP headers that were included in the signature calculation. -```ts -const signed = await client.sign( - new Request(url, { - method: "PUT", - }), - { - aws: { signQuery: true }, - headers: { - "If-Unmodified-Since": "Tue, 28 Sep 2021 16:00:00 GMT", - }, - }, -); -``` - -```ts -// Use the presigned URL to upload the file -const response = await fetch(signed.url, { - method: "PUT", - body: file, - headers: { - "If-Unmodified-Since": "Tue, 28 Sep 2021 16:00:00 GMT", - }, -}); -``` +:::note +The signature parameters (e.g. `X-Amz-Algorithm`, `X-Amz-Credential`, `X-Amz-Date`, `X-Amz-Expires`, `X-Amz-Signature`) cannot be tampered with. Attempting to modify the resource, operation, or expiry will result in a `403/SignatureDoesNotMatch` error. +::: -Note that the caller has to add the same `If-Unmodified-Since` header to use the URL. The caller cannot omit the header or use a different header, since the signature covers the headers. If the caller uses a different header, the presigned URL signature would not match, and they would receive a `403/SignatureDoesNotMatch`. 
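Because `X-Amz-Date` and `X-Amz-Expires` are ordinary query parameters, any client can compute when a presigned URL stops working without contacting R2. A small Python sketch (the URL below is a truncated, hypothetical example):

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import parse_qs, urlparse

def presigned_url_expiry(url):
    """Return the instant a presigned URL expires, from its query parameters."""
    qs = parse_qs(urlparse(url).query)
    signed_at = datetime.strptime(qs["X-Amz-Date"][0], "%Y%m%dT%H%M%SZ")
    signed_at = signed_at.replace(tzinfo=timezone.utc)
    return signed_at + timedelta(seconds=int(qs["X-Amz-Expires"][0]))

# Truncated example URL; only the two timing parameters matter here
url = ("https://my-bucket.123456789abcdef.r2.cloudflarestorage.com/photos/cat.png"
       "?X-Amz-Date=20251201T180512Z&X-Amz-Expires=3600&X-Amz-SignedHeaders=host")
print(presigned_url_expiry(url))  # 2025-12-01 19:05:12+00:00
```

Note this only reads the URL; R2 still enforces the expiry server-side when the URL is used.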
+## Supported operations - - +R2 supports presigned URLs for the following HTTP methods: -## Differences between presigned URLs and public buckets +- `GET`: Fetch an object from a bucket +- `HEAD`: Fetch an object's metadata from a bucket +- `PUT`: Upload an object to a bucket +- `DELETE`: Delete an object from a bucket -Presigned URLs share some superficial similarity with public buckets. If you give out presigned URLs only for `GET`/`HEAD` operations on specific objects in a bucket, then your presigned URL functionality is mostly similar to public buckets. The notable exception is that any custom metadata associated with the object is rendered in headers with the `x-amz-meta-` prefix. Any error responses are returned as XML documents, as they would with normal non-presigned S3 access. +`POST` (multipart form uploads via HTML forms) is not currently supported. -Presigned URLs can be generated for any S3 operation. After a presigned URL is generated it can be reused as many times as the holder of the URL wants until the signed expiry date. +## Security considerations -[Public buckets](/r2/buckets/public-buckets/) are available on a regular HTTP endpoint. By default, there is no authorization or access controls associated with a public bucket. Anyone with a public bucket URL can access an object in that public bucket. If you are using a custom domain to expose the R2 bucket, you can manage authorization and access controls as you would for a Cloudflare zone. Public buckets only provide `GET`/`HEAD` on a known object path. Public bucket errors are rendered as HTML pages. +Treat presigned URLs as bearer tokens. Anyone with the URL can perform the specified operation until it expires. Share presigned URLs only with intended recipients and consider using short expiration times for sensitive operations. -Choosing between presigned URLs and public buckets is dependent on your specific use case. 
You can also use both if your architecture should use public buckets in one situation and presigned URLs in another. It is useful to note that presigned URLs will expose your account ID and bucket name to whoever gets a copy of the URL. Public bucket URLs do not contain the account ID or bucket name. Typically, you will not share presigned URLs directly with end users or browsers, as presigned URLs are used more for internal applications. +## Custom domains -## Limitations +Presigned URLs work with the S3 API domain (`.r2.cloudflarestorage.com`) and cannot be used with custom domains. -Presigned URLs can only be used with the `.r2.cloudflarestorage.com` S3 API domain and cannot be used with custom domains. Instead, you can use the [general purpose HMAC validation feature of the WAF](/ruleset-engine/rules-language/functions/#hmac-validation), which requires a Pro plan or above. +If you need authentication with R2 buckets accessed via custom domains (public buckets), use the [WAF HMAC validation feature](/ruleset-engine/rules-language/functions/#hmac-validation) (requires Pro plan or above). ## Related resources -- [Create a public bucket](/r2/buckets/public-buckets/) -- [Storing user generated content](/reference-architecture/diagrams/storage/storing-user-generated-content/) + + + + + + + diff --git a/src/content/docs/r2/buckets/cors.mdx b/src/content/docs/r2/buckets/cors.mdx index ac6c6dc0e6195a..71c1f299213061 100644 --- a/src/content/docs/r2/buckets/cors.mdx +++ b/src/content/docs/r2/buckets/cors.mdx @@ -33,44 +33,34 @@ Next, [add a CORS policy](#add-cors-policies-from-the-dashboard) to your bucket ## Use CORS with a presigned URL -Presigned URLs are an S3 concept that contain a special signature that encodes details of an S3 action, such as `GetObject` or `PutObject`. Presigned URLs are only used for authentication, which means they are generally safe to distribute publicly without revealing any secrets. 
- -### Create a presigned URL - -You will need a pair of S3-compatible credentials to use when you generate the presigned URL. - -The example below shows how to generate a presigned `PutObject` URL using the [`@aws-sdk/client-s3`](https://www.npmjs.com/package/@aws-sdk/client-s3) package for JavaScript. - -```js -import { PutObjectCommand, S3Client } from "@aws-sdk/client-s3"; -import { getSignedUrl } from "@aws-sdk/s3-request-presigner"; -const S3 = new S3Client({ - endpoint: "https://.r2.cloudflarestorage.com", - credentials: { - accessKeyId: "", - secretAccessKey: "", - }, - region: "auto", -}); -const url = await getSignedUrl( - S3, - new PutObjectCommand({ - Bucket: bucket, - Key: object, - }), - { - expiresIn: 60 * 60 * 24 * 7, // 7d - }, -); -console.log(url); -``` +[Presigned URLs](/r2/api/s3/presigned-urls/) allow temporary access to perform specific actions on your bucket without exposing your credentials. While presigned URLs handle authentication, you still need to configure CORS when making requests from a browser. -### Test the presigned URL +When a browser makes a request to a presigned URL on a different origin, the browser enforces CORS. Without a CORS policy, browser-based uploads and downloads using presigned URLs will fail, even though the presigned URL itself is valid. -Test the presigned URL by uploading an object using cURL. The example below would upload the `123` text to R2 with a `Content-Type` of `text/plain`. +To enable browser-based access with presigned URLs: -```sh -curl --request PUT --header "Content-Type: text/plain" --data "123" +1. [Add a CORS policy](#add-cors-policies-from-the-dashboard) to your bucket that allows requests from your application's origin. + +2. Set `AllowedMethods` to match the operations your presigned URLs perform, use `GET`, `PUT`, `HEAD`, and/or `DELETE`. + +3. 
Set `AllowedHeaders` to include any headers the client will send when using the presigned URL, such as headers for content type, checksums, caching, or custom metadata. + +4. (Optional) Set `ExposeHeaders` to allow your JavaScript to read response headers like `ETag`, which contains the object's hash and is useful for verifying uploads. + +5. (Optional) Set `MaxAgeSeconds` to cache the preflight response and reduce the number of preflight requests the browser makes. + +The following example allows browser-based uploads from `https://example.com` with a `Content-Type` header: + +```json +[ + { + "AllowedOrigins": ["https://example.com"], + "AllowedMethods": ["PUT"], + "AllowedHeaders": ["Content-Type"], + "ExposeHeaders": ["ETag"], + "MaxAgeSeconds": 3600 + } +] ``` ## Add CORS policies from the dashboard @@ -86,6 +76,37 @@ curl --request PUT --header "Content-Type: text/plain" --data "123" Your policy displays on the **Settings** page for your bucket. +## Add CORS policies via Wrangler CLI + +You can configure CORS rules using the [Wrangler CLI](/r2/reference/wrangler-commands/). + +1. Create a JSON file with your CORS configuration: + +```json title="cors.json" +{ + "rules": [ + { + "allowed": { + "origins": ["https://example.com"], + "methods": ["GET"] + } + } + ] +} +``` + +2. Apply the CORS policy to your bucket: + +```sh +npx wrangler r2 bucket cors set --file cors.json +``` + +3. Verify the CORS policy was applied: + +```sh +npx wrangler r2 bucket cors list +``` + ## Response headers The following fields in an R2 CORS policy map to HTTP response headers. These response headers are only returned when the incoming HTTP request is a valid CORS request. 
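As a rough mental model of that mapping — not R2's actual implementation — the following Python sketch shows how a single CORS rule could answer a browser preflight, returning the corresponding `Access-Control-*` response headers or rejecting the request. The helper name and rule shape are illustrative:

```python
def preflight_response(rule, origin, method, request_headers):
    """Simplified model of how one CORS rule answers an OPTIONS preflight.
    Returns the CORS response headers, or None if the request is not allowed."""
    if origin not in rule["AllowedOrigins"] and "*" not in rule["AllowedOrigins"]:
        return None
    if method not in rule["AllowedMethods"]:
        return None
    allowed = {h.lower() for h in rule.get("AllowedHeaders", [])}
    if any(h.lower() not in allowed for h in request_headers):
        return None
    headers = {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": ", ".join(rule["AllowedMethods"]),
        "Access-Control-Allow-Headers": ", ".join(rule.get("AllowedHeaders", [])),
    }
    if rule.get("ExposeHeaders"):
        headers["Access-Control-Expose-Headers"] = ", ".join(rule["ExposeHeaders"])
    if "MaxAgeSeconds" in rule:
        headers["Access-Control-Max-Age"] = str(rule["MaxAgeSeconds"])
    return headers

rule = {
    "AllowedOrigins": ["https://example.com"],
    "AllowedMethods": ["PUT"],
    "AllowedHeaders": ["Content-Type"],
    "ExposeHeaders": ["ETag"],
    "MaxAgeSeconds": 3600,
}
resp = preflight_response(rule, "https://example.com", "PUT", ["Content-Type"])
```

In practice the browser issues the preflight and R2 evaluates the policy; the sketch only shows which policy field feeds which response header.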
diff --git a/src/content/docs/r2/examples/aws/aws-cli.mdx b/src/content/docs/r2/examples/aws/aws-cli.mdx index 28b2eb81b164ba..1484e448a3c53b 100644 --- a/src/content/docs/r2/examples/aws/aws-cli.mdx +++ b/src/content/docs/r2/examples/aws/aws-cli.mdx @@ -15,20 +15,23 @@ aws configure ``` ```sh output -AWS Access Key ID [None]: -AWS Secret Access Key [None]: +AWS Access Key ID [None]: +AWS Secret Access Key [None]: Default region name [None]: auto Default output format [None]: json ``` +The `region` value can be set to `auto` since it is required by the SDK but not used by R2. + You may then use the `aws` CLI for any of your normal workflows. ```sh -aws s3api list-buckets --endpoint-url https://.r2.cloudflarestorage.com +# Provide your Cloudflare account ID +aws s3api list-buckets --endpoint-url https://.r2.cloudflarestorage.com # { # "Buckets": [ # { -# "Name": "sdk-example", +# "Name": "my-bucket", # "CreationDate": "2022-05-18T17:19:59.645000+00:00" # } # ], @@ -38,7 +41,7 @@ aws s3api list-buckets --endpoint-url https://.r2.cloudflarestorage.c # } # } -aws s3api list-objects-v2 --endpoint-url https://.r2.cloudflarestorage.com --bucket sdk-example +aws s3api list-objects-v2 --endpoint-url https://.r2.cloudflarestorage.com --bucket my-bucket # { # "Contents": [ # { @@ -58,8 +61,6 @@ You can also generate presigned links which allow you to share public access to ```sh # You can pass the --expires-in flag to determine how long the presigned link is valid. 
-$ aws s3 presign --endpoint-url https://.r2.cloudflarestorage.com s3://sdk-example/ferriswasm.png --expires-in 3600 -# https://.r2.cloudflarestorage.com/sdk-example/ferriswasm.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature= -aws s3 presign --endpoint-url https://.r2.cloudflarestorage.com s3://sdk-example/ferriswasm.png --expires-in 3600 -# https://.r2.cloudflarestorage.com/sdk-example/ferriswasm.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature= +aws s3 presign --endpoint-url https://.r2.cloudflarestorage.com s3://my-bucket/ferriswasm.png --expires-in 3600 +# https://.r2.cloudflarestorage.com/my-bucket/ferriswasm.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature= ``` diff --git a/src/content/docs/r2/examples/aws/aws-sdk-go.mdx b/src/content/docs/r2/examples/aws/aws-sdk-go.mdx index 44adecb5f48a6b..21bae673efe6ca 100644 --- a/src/content/docs/r2/examples/aws/aws-sdk-go.mdx +++ b/src/content/docs/r2/examples/aws/aws-sdk-go.mdx @@ -26,13 +26,16 @@ import ( func main() { var bucketName = "sdk-example" - var accountId = "" - var accessKeyId = "" - var accessKeySecret = "" + // Provide your Cloudflare account ID + var accountId = "" + // Retrieve your S3 API credentials for your R2 bucket via API tokens + // (see: https://developers.cloudflare.com/r2/api/tokens) + var accessKeyId = "" + var accessKeySecret = "" cfg, err := config.LoadDefaultConfig(context.TODO(), config.WithCredentialsProvider(credentials.NewStaticCredentialsProvider(accessKeyId, accessKeySecret, "")), - config.WithRegion("auto"), + config.WithRegion("auto"), // Required by SDK but not used by R2 ) if err != nil { log.Fatal(err) diff --git a/src/content/docs/r2/examples/aws/aws-sdk-java.mdx b/src/content/docs/r2/examples/aws/aws-sdk-java.mdx index 
f15b830cd3b27c..3de9924c5055c0 100644 --- a/src/content/docs/r2/examples/aws/aws-sdk-java.mdx +++ b/src/content/docs/r2/examples/aws/aws-sdk-java.mdx @@ -35,6 +35,9 @@ public class CloudflareR2Client { /** * Configuration class for R2 credentials and endpoint + * - accountId: Your Cloudflare account ID + * - accessKey: Your R2 Access Key ID (see: https://developers.cloudflare.com/r2/api/tokens) + * - secretKey: Your R2 Secret Access Key (see: https://developers.cloudflare.com/r2/api/tokens) */ public static class S3Config { private final String accountId; @@ -70,7 +73,7 @@ public class CloudflareR2Client { return S3Client.builder() .endpointOverride(URI.create(config.getEndpoint())) .credentialsProvider(StaticCredentialsProvider.create(credentials)) - .region(Region.of("auto")) + .region(Region.of("auto")) // Required by SDK but not used by R2 .serviceConfiguration(serviceConfiguration) .build(); } @@ -103,9 +106,9 @@ public class CloudflareR2Client { public static void main(String[] args) { S3Config config = new S3Config( - "your_account_id", - "your_access_key", - "your_secret_key" + "", + "", + "" ); CloudflareR2Client r2Client = new CloudflareR2Client(config); @@ -165,7 +168,7 @@ public class CloudflareR2Client { return S3Presigner.builder() .endpointOverride(URI.create(config.getEndpoint())) .credentialsProvider(StaticCredentialsProvider.create(credentials)) - .region(Region.of("auto")) + .region(Region.of("auto")) // Required by SDK but not used by R2 .serviceConfiguration(S3Configuration.builder() .pathStyleAccessEnabled(true) .build()) diff --git a/src/content/docs/r2/examples/aws/aws-sdk-js-v3.mdx b/src/content/docs/r2/examples/aws/aws-sdk-js-v3.mdx index 965dd18d06d049..33f14b11ee953c 100644 --- a/src/content/docs/r2/examples/aws/aws-sdk-js-v3.mdx +++ b/src/content/docs/r2/examples/aws/aws-sdk-js-v3.mdx @@ -24,8 +24,10 @@ import { } from "@aws-sdk/client-s3"; const S3 = new S3Client({ - region: "auto", + region: "auto", // Required by SDK but not used by 
R2 + // Provide your Cloudflare account ID endpoint: `https://${ACCOUNT_ID}.r2.cloudflarestorage.com`, + // Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens) credentials: { accessKeyId: ACCESS_KEY_ID, secretAccessKey: SECRET_ACCESS_KEY, @@ -44,7 +46,7 @@ console.log(await S3.send(new ListBucketsCommand({}))); // }, // Buckets: [ // { Name: 'user-uploads', CreationDate: 2022-04-13T21:23:47.102Z }, -// { Name: 'my-bucket-name', CreationDate: 2022-05-07T02:46:49.218Z } +// { Name: 'my-bucket', CreationDate: 2022-05-07T02:46:49.218Z } // ], // Owner: { // DisplayName: '...', @@ -53,7 +55,7 @@ console.log(await S3.send(new ListBucketsCommand({}))); // } console.log( - await S3.send(new ListObjectsV2Command({ Bucket: "my-bucket-name" })), + await S3.send(new ListObjectsV2Command({ Bucket: "my-bucket" })), ); // { // '$metadata': { @@ -91,7 +93,7 @@ console.log( // IsTruncated: false, // KeyCount: 8, // MaxKeys: 1000, -// Name: 'my-bucket-name', +// Name: 'my-bucket', // NextContinuationToken: undefined, // Prefix: undefined, // StartAfter: undefined @@ -109,24 +111,72 @@ import { getSignedUrl } from "@aws-sdk/s3-request-presigner"; console.log( await getSignedUrl( S3, - new GetObjectCommand({ Bucket: "my-bucket-name", Key: "dog.png" }), + new GetObjectCommand({ Bucket: "my-bucket", Key: "dog.png" }), { expiresIn: 3600 }, ), ); -// https://my-bucket-name..r2.cloudflarestorage.com/dog.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-Signature=&X-Amz-SignedHeaders=host&x-id=GetObject - -// You can also create links for operations such as putObject to allow temporary write access to a specific key. +// You can also create links for operations such as PutObject to allow temporary write access to a specific key. +// Specify ContentType to restrict uploads to a specific file type. 
console.log( await getSignedUrl( S3, - new PutObjectCommand({ Bucket: "my-bucket-name", Key: "dog.png" }), + new PutObjectCommand({ + Bucket: "my-bucket", + Key: "dog.png", + ContentType: "image/png", + }), { expiresIn: 3600 }, ), ); ``` -You can use the link generated by the `putObject` example to upload to the specified bucket and key, until the presigned link expires. +```sh output +https://my-bucket..r2.cloudflarestorage.com/dog.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=&x-id=GetObject +https://my-bucket..r2.cloudflarestorage.com/dog.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-SignedHeaders=content-type%3Bhost&X-Amz-Signature=&x-id=PutObject +``` + +You can use the link generated by the `PutObject` example to upload to the specified bucket and key, until the presigned link expires. When using a presigned URL with `ContentType`, the client must include a matching `Content-Type` header in the request. ```sh -curl -X PUT https://my-bucket-name..r2.cloudflarestorage.com/dog.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-Signature=&X-Amz-SignedHeaders=host&x-id=PutObject -F "data=@dog.png" +curl -X PUT "https://my-bucket..r2.cloudflarestorage.com/dog.png?X-Amz-Algorithm=..." \ + -H "Content-Type: image/png" \ + --data-binary @dog.png ``` + +## Restrict uploads with CORS and Content-Type + +When generating presigned URLs for uploads, you can limit abuse and misuse by: + +1. **Restricting Content-Type**: Specify the allowed content type in the `PutObjectCommand`. The upload will fail if the client sends a different `Content-Type` header. + +2. 
**Configuring CORS**: Set up [CORS rules](/r2/buckets/cors/#add-cors-policies-from-the-dashboard) on your bucket to control which origins can upload files. Configure CORS via the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/r2/overview) by adding a JSON policy to your bucket settings: + +```json +[ + { + "AllowedOrigins": ["https://example.com"], + "AllowedMethods": ["PUT"], + "AllowedHeaders": ["Content-Type"], + "ExposeHeaders": ["ETag"], + "MaxAgeSeconds": 3600 + } +] +``` + +Then generate a presigned URL with a Content-Type restriction: + +```ts +const putUrl = await getSignedUrl( + S3, + new PutObjectCommand({ + Bucket: "my-bucket", + Key: "dog.png", + ContentType: "image/png", + }), + { expiresIn: 3600 }, +); +``` + +When a client uses this presigned URL, they must: +- Make the request from an allowed origin (enforced by CORS) +- Include the `Content-Type: image/png` header (enforced by the signature) diff --git a/src/content/docs/r2/examples/aws/aws-sdk-js.mdx b/src/content/docs/r2/examples/aws/aws-sdk-js.mdx index d60ad4c610df5f..b4b811204f00d5 100644 --- a/src/content/docs/r2/examples/aws/aws-sdk-js.mdx +++ b/src/content/docs/r2/examples/aws/aws-sdk-js.mdx @@ -16,9 +16,11 @@ JavaScript or TypeScript users may continue to use the [`aws-sdk`](https://www.n import S3 from "aws-sdk/clients/s3.js"; const s3 = new S3({ - endpoint: `https://${accountid}.r2.cloudflarestorage.com`, - accessKeyId: `${access_key_id}`, - secretAccessKey: `${access_key_secret}`, + // Provide your Cloudflare account ID + endpoint: `https://${ACCOUNT_ID}.r2.cloudflarestorage.com`, + // Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens) + accessKeyId: `${ACCESS_KEY_ID}`, + secretAccessKey: `${SECRET_ACCESS_KEY}`, signatureVersion: "v4", }); @@ -26,7 +28,7 @@ console.log(await s3.listBuckets().promise()); //=> { //=> Buckets: [ //=> { Name: 'user-uploads', CreationDate: 2022-04-13T21:23:47.102Z }, 
-//=> { Name: 'my-bucket-name', CreationDate: 2022-05-07T02:46:49.218Z } +//=> { Name: 'my-bucket', CreationDate: 2022-05-07T02:46:49.218Z } //=> ], //=> Owner: { //=> DisplayName: '...', @@ -34,10 +36,10 @@ console.log(await s3.listBuckets().promise()); //=> } //=> } -console.log(await s3.listObjects({ Bucket: "my-bucket-name" }).promise()); +console.log(await s3.listObjects({ Bucket: "my-bucket" }).promise()); //=> { //=> IsTruncated: false, -//=> Name: 'my-bucket-name', +//=> Name: 'my-bucket', //=> CommonPrefixes: [], //=> MaxKeys: 1000, //=> Contents: [ @@ -68,26 +70,68 @@ You can also generate presigned links that can be used to share public read or write access to a bucket temporarily. ```ts // Use the expires property to determine how long the presigned link is valid. console.log( - await s3.getSignedUrlPromise("getObject", { - Bucket: "my-bucket-name", - Key: "dog.png", - Expires: 3600, - }), + await s3.getSignedUrlPromise("getObject", { + Bucket: "my-bucket", + Key: "dog.png", + Expires: 3600, + }), ); -// https://my-bucket-name..r2.cloudflarestorage.com/dog.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-Signature=&X-Amz-SignedHeaders=host - // You can also create links for operations such as putObject to allow temporary write access to a specific key. +// Specify ContentType to restrict uploads to a specific file type. console.log( await s3.getSignedUrlPromise("putObject", { - Bucket: "my-bucket-name", + Bucket: "my-bucket", Key: "dog.png", Expires: 3600, + ContentType: "image/png", }), ); ``` -You can use the link generated by the `putObject` example to upload to the specified bucket and key, until the presigned link expires. 
+```sh output +https://my-bucket..r2.cloudflarestorage.com/dog.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature= +https://my-bucket..r2.cloudflarestorage.com/dog.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-SignedHeaders=content-type%3Bhost&X-Amz-Signature= +``` + +You can use the link generated by the `putObject` example to upload to the specified bucket and key, until the presigned link expires. When using a presigned URL with `ContentType`, the client must include a matching `Content-Type` header in the request. ```sh -curl -X PUT https://my-bucket-name..r2.cloudflarestorage.com/dog.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-Signature=&X-Amz-SignedHeaders=host --data-binary @dog.png +curl -X PUT "https://my-bucket..r2.cloudflarestorage.com/dog.png?X-Amz-Algorithm=..." \ + -H "Content-Type: image/png" \ + --data-binary @dog.png +``` + +## Restrict uploads with CORS and Content-Type + +When generating presigned URLs for uploads, you can limit abuse and misuse by: + +1. **Restricting Content-Type**: Specify the allowed content type in the presigned URL parameters. The upload will fail if the client sends a different `Content-Type` header. + +2. **Configuring CORS**: Set up [CORS rules](/r2/buckets/cors/#add-cors-policies-from-the-dashboard) on your bucket to control which origins can upload files. 
Configure CORS via the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/r2/overview) by adding a JSON policy to your bucket settings: + +```json +[ + { + "AllowedOrigins": ["https://example.com"], + "AllowedMethods": ["PUT"], + "AllowedHeaders": ["Content-Type"], + "ExposeHeaders": ["ETag"], + "MaxAgeSeconds": 3600 + } +] ``` + +Then generate a presigned URL with a Content-Type restriction: + +```ts +const putUrl = await s3.getSignedUrlPromise("putObject", { + Bucket: "my-bucket", + Key: "user-upload.png", + Expires: 3600, + ContentType: "image/png", +}); +``` + +When a client uses this presigned URL, they must: +- Make the request from an allowed origin (enforced by CORS) +- Include the `Content-Type: image/png` header (enforced by the signature) diff --git a/src/content/docs/r2/examples/aws/aws-sdk-net.mdx b/src/content/docs/r2/examples/aws/aws-sdk-net.mdx index 208ba4248c1573..f201fd177653ed 100644 --- a/src/content/docs/r2/examples/aws/aws-sdk-net.mdx +++ b/src/content/docs/r2/examples/aws/aws-sdk-net.mdx @@ -19,11 +19,13 @@ private static IAmazonS3 s3Client; public static void Main(string[] args) { - var accessKey = ""; - var secretKey = ""; + // Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens) + var accessKey = ""; + var secretKey = ""; var credentials = new BasicAWSCredentials(accessKey, secretKey); s3Client = new AmazonS3Client(credentials, new AmazonS3Config { + // Provide your Cloudflare account ID ServiceURL = "https://.r2.cloudflarestorage.com", }); } @@ -43,8 +45,11 @@ static async Task ListBuckets() Console.WriteLine("{0}", s3Bucket.BucketName); } } -// sdk-example -// my-bucket-name +``` + +```sh output +sdk-example +my-bucket ``` ```csharp @@ -52,7 +57,7 @@ static async Task ListObjectsV2() { var request = new ListObjectsV2Request { - BucketName = "sdk-example" + BucketName = "my-bucket" }; var response = await s3Client.ListObjectsV2Async(request); @@ -62,8 
+67,11 @@ static async Task ListObjectsV2() Console.WriteLine("{0}", s3Object.Key); } } -// dog.png -// cat.png +``` + +```sh output +dog.png +cat.png ``` ## Upload and retrieve objects @@ -80,29 +88,35 @@ static async Task PutObject() var request = new PutObjectRequest { FilePath = @"/path/file.txt", - BucketName = "sdk-example", + BucketName = "my-bucket", DisablePayloadSigning = true, - DisableDefaultChecksumValidation = true + DisableDefaultChecksumValidation = true }; var response = await s3Client.PutObjectAsync(request); Console.WriteLine("ETag: {0}", response.ETag); } -// ETag: "186a71ee365d9686c3b98b6976e1f196" +``` + +```sh output +ETag: "186a71ee365d9686c3b98b6976e1f196" ``` ```csharp static async Task GetObject() { - var bucket = "sdk-example"; - var key = "file.txt" + var bucket = "my-bucket"; + var key = "file.txt"; var response = await s3Client.GetObjectAsync(bucket, key); Console.WriteLine("ETag: {0}", response.ETag); } -// ETag: "186a71ee365d9686c3b98b6976e1f196" +``` + +```sh output +ETag: "186a71ee365d9686c3b98b6976e1f196" ``` ## Generate presigned URLs @@ -115,7 +129,7 @@ static string? GeneratePresignedUrl() AWSConfigsS3.UseSignatureVersion4 = true; var presign = new GetPreSignedUrlRequest { - BucketName = "sdk-example", + BucketName = "my-bucket", Key = "file.txt", Verb = HttpVerb.GET, Expires = DateTime.Now.AddDays(7), @@ -127,5 +141,8 @@ static string? 
GeneratePresignedUrl() return presignedUrl; } -// URL: https://.r2.cloudflarestorage.com/sdk-example/file.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature= +``` + +```sh output +https://.r2.cloudflarestorage.com/my-bucket/file.txt?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature= ``` diff --git a/src/content/docs/r2/examples/aws/aws-sdk-php.mdx b/src/content/docs/r2/examples/aws/aws-sdk-php.mdx index 4f0a1ae1299dd7..23a5acc63d9156 100644 --- a/src/content/docs/r2/examples/aws/aws-sdk-php.mdx +++ b/src/content/docs/r2/examples/aws/aws-sdk-php.mdx @@ -16,15 +16,17 @@ This example uses version 3 of the [aws-sdk-php](https://packagist.org/packages/ "; -$access_key_id = ""; -$access_key_secret = ""; +$bucket_name = "my-bucket"; +// Provide your Cloudflare account ID +$account_id = ""; +// Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens) +$access_key_id = ""; +$access_key_secret = ""; $credentials = new Aws\Credentials\Credentials($access_key_id, $access_key_secret); $options = [ - 'region' => 'auto', + 'region' => 'auto', // Required by SDK but not used by R2 'endpoint' => "https://$account_id.r2.cloudflarestorage.com", 'version' => 'latest', 'credentials' => $credentials @@ -69,7 +71,7 @@ var_dump($buckets['Buckets']); // [0]=> // array(2) { // ["Name"]=> -// string(11) "sdk-example" +// string(9) "my-bucket" // ["CreationDate"]=> // object(Aws\Api\DateTimeResult)#212 (3) { // ["date"]=> @@ -99,7 +101,7 @@ $cmd = $s3_client->getCommand('GetObject', [ $request = $s3_client->createPresignedRequest($cmd, '+1 hour'); print_r((string)$request->getUri()) -// 
https://sdk-example..r2.cloudflarestorage.com/ferriswasm.png?X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Signature= +// https://my-bucket..r2.cloudflarestorage.com/ferriswasm.png?X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Signature= // You can also create links for operations such as putObject to allow temporary write access to a specific key. $cmd = $s3_client->getCommand('PutObject', [ @@ -115,5 +117,5 @@ print_r((string)$request->getUri()) You can use the link generated by the `putObject` example to upload to the specified bucket and key, until the presigned link expires. ```sh -curl -X PUT https://sdk-example..r2.cloudflarestorage.com/ferriswasm.png?X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Signature= --data-binary @ferriswasm.png +curl -X PUT https://my-bucket..r2.cloudflarestorage.com/ferriswasm.png?X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-Date=&X-Amz-SignedHeaders=host&X-Amz-Expires=3600&X-Amz-Signature= --data-binary @ferriswasm.png ``` diff --git a/src/content/docs/r2/examples/aws/aws-sdk-ruby.mdx b/src/content/docs/r2/examples/aws/aws-sdk-ruby.mdx index 4b4292aa18eca3..67f8435b80ab2a 100644 --- a/src/content/docs/r2/examples/aws/aws-sdk-ruby.mdx +++ b/src/content/docs/r2/examples/aws/aws-sdk-ruby.mdx @@ -22,10 +22,12 @@ Then you can use Ruby to operate on R2 buckets: require "aws-sdk-s3" @r2 = Aws::S3::Client.new( - access_key_id: "#{access_key_id}", - secret_access_key: "#{secret_access_key}", - endpoint: "https://#{cloudflare_account_id}.r2.cloudflarestorage.com", - region: "auto", + # Retrieve your S3 API credentials for your R2 bucket via API tokens (see: 
https://developers.cloudflare.com/r2/api/tokens) + access_key_id: "#{ACCESS_KEY_ID}", + secret_access_key: "#{SECRET_ACCESS_KEY}", + # Provide your Cloudflare account ID + endpoint: "https://#{ACCOUNT_ID}.r2.cloudflarestorage.com", + region: "auto", # Required by SDK but not used by R2 ) # List all buckets on your account diff --git a/src/content/docs/r2/examples/aws/aws-sdk-rust.mdx b/src/content/docs/r2/examples/aws/aws-sdk-rust.mdx index d4488ac75ecad1..909e8e39fa9682 100644 --- a/src/content/docs/r2/examples/aws/aws-sdk-rust.mdx +++ b/src/content/docs/r2/examples/aws/aws-sdk-rust.mdx @@ -19,9 +19,12 @@ use aws_smithy_types::date_time::Format::DateTime; #[tokio::main] async fn main() -> Result<(), s3::Error> { let bucket_name = "sdk-example"; - let account_id = ""; - let access_key_id = ""; - let access_key_secret = ""; + // Provide your Cloudflare account ID + let account_id = ""; + // Retrieve your S3 API credentials for your R2 bucket via API tokens + // (see: https://developers.cloudflare.com/r2/api/tokens) + let access_key_id = ""; + let access_key_secret = ""; // Configure the client let config = aws_config::from_env() @@ -33,7 +36,7 @@ async fn main() -> Result<(), s3::Error> { None, "R2", )) - .region("auto") + .region("auto") // Required by SDK but not used by R2 .load() .await; diff --git a/src/content/docs/r2/examples/aws/aws4fetch.mdx b/src/content/docs/r2/examples/aws/aws4fetch.mdx index 56494606db5f08..42f7f6660f3564 100644 --- a/src/content/docs/r2/examples/aws/aws4fetch.mdx +++ b/src/content/docs/r2/examples/aws/aws4fetch.mdx @@ -15,9 +15,11 @@ You must pass in the R2 configuration credentials when instantiating your `S3` s ```ts import { AwsClient } from "aws4fetch"; +// Provide your Cloudflare account ID const R2_URL = `https://${ACCOUNT_ID}.r2.cloudflarestorage.com`; const client = new AwsClient({ + // Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens) accessKeyId: 
ACCESS_KEY_ID, secretAccessKey: SECRET_ACCESS_KEY, }); @@ -32,7 +34,7 @@ console.log(await ListBucketsResult.text()); // // // 2022-05-07T02:46:49.218Z -// my-bucket-name +// my-bucket // // // @@ -42,11 +44,11 @@ console.log(await ListBucketsResult.text()); // const ListObjectsV2Result = await client.fetch( - `${R2_URL}/my-bucket-name?list-type=2`, + `${R2_URL}/my-bucket?list-type=2`, ); console.log(await ListObjectsV2Result.text()); // -// my-bucket-name +// my-bucket // // cat.png // 751832 @@ -75,33 +77,37 @@ You can also generate presigned links that can be used to share public read or w import { AwsClient } from "aws4fetch"; const client = new AwsClient({ - service: "s3", - region: "auto", + service: "s3", // Required by SDK but not used by R2 + region: "auto", // Required by SDK but not used by R2 + // Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens) accessKeyId: ACCESS_KEY_ID, secretAccessKey: SECRET_ACCESS_KEY, }); +// Provide your Cloudflare account ID const R2_URL = `https://${ACCOUNT_ID}.r2.cloudflarestorage.com`; // Use the `X-Amz-Expires` query param to determine how long the presigned link is valid. console.log( ( await client.sign( - new Request(`${R2_URL}/my-bucket-name/dog.png?X-Amz-Expires=${3600}`), + new Request(`${R2_URL}/my-bucket/dog.png?X-Amz-Expires=${3600}`), { aws: { signQuery: true }, }, ) ).url.toString(), ); -// https://.r2.cloudflarestorage.com/my-bucket-name/dog.png?X-Amz-Expires=3600&X-Amz-Date=&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-SignedHeaders=host&X-Amz-Signature= - // You can also create links for operations such as PutObject to allow temporary write access to a specific key. +// Specify Content-Type header to restrict uploads to a specific file type. 
console.log( ( await client.sign( - new Request(`${R2_URL}/my-bucket-name/dog.png?X-Amz-Expires=${3600}`, { + new Request(`${R2_URL}/my-bucket/dog.png?X-Amz-Expires=${3600}`, { method: "PUT", + headers: { + "Content-Type": "image/png", + }, }), { aws: { signQuery: true }, @@ -111,8 +117,56 @@ console.log( ); ``` -You can use the link generated by the `PutObject` example to upload to the specified bucket and key, until the presigned link expires. +```sh output +https://.r2.cloudflarestorage.com/my-bucket/dog.png?X-Amz-Expires=3600&X-Amz-Date=&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-SignedHeaders=host&X-Amz-Signature= +https://.r2.cloudflarestorage.com/my-bucket/dog.png?X-Amz-Expires=3600&X-Amz-Date=&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-SignedHeaders=content-type%3Bhost&X-Amz-Signature= +``` + +You can use the link generated by the `PutObject` example to upload to the specified bucket and key, until the presigned link expires. When using a presigned URL with `Content-Type`, the client must include a matching `Content-Type` header in the request. ```sh -curl -X PUT "https://.r2.cloudflarestorage.com/my-bucket-name/dog.png?X-Amz-Expires=3600&X-Amz-Date=&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-SignedHeaders=host&X-Amz-Signature=" -F "data=@dog.png" +curl -X PUT "https://.r2.cloudflarestorage.com/my-bucket/dog.png?X-Amz-Expires=3600&X-Amz-Date=&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=&X-Amz-SignedHeaders=content-type%3Bhost&X-Amz-Signature=" \ + -H "Content-Type: image/png" \ + --data-binary @dog.png ``` + +## Restrict uploads with CORS and Content-Type + +When generating presigned URLs for uploads, you can limit abuse and misuse by: + +1. **Restricting Content-Type**: Specify the `Content-Type` header in the request when signing. The upload will fail if the client sends a different `Content-Type` header. + +2. 
**Configuring CORS**: Set up [CORS rules](/r2/buckets/cors/#add-cors-policies-from-the-dashboard) on your bucket to control which origins can upload files. Configure CORS via the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/r2/overview) by adding a JSON policy to your bucket settings: + +```json +[ + { + "AllowedOrigins": ["https://example.com"], + "AllowedMethods": ["PUT"], + "AllowedHeaders": ["Content-Type"], + "ExposeHeaders": ["ETag"], + "MaxAgeSeconds": 3600 + } +] +``` + +Then generate a presigned URL with a Content-Type restriction: + +```ts +const signedRequest = await client.sign( + new Request(`${R2_URL}/my-bucket/user-upload.png?X-Amz-Expires=${3600}`, { + method: "PUT", + headers: { + "Content-Type": "image/png", + }, + }), + { + aws: { signQuery: true }, + }, +); +const putUrl = signedRequest.url.toString(); +``` + +When a client uses this presigned URL, they must: +- Make the request from an allowed origin (enforced by CORS) +- Include the `Content-Type: image/png` header (enforced by the signature) diff --git a/src/content/docs/r2/examples/aws/boto3.mdx b/src/content/docs/r2/examples/aws/boto3.mdx index 54ca396c040037..6b9e74613773ba 100644 --- a/src/content/docs/r2/examples/aws/boto3.mdx +++ b/src/content/docs/r2/examples/aws/boto3.mdx @@ -14,9 +14,11 @@ You must configure [`boto3`](https://boto3.amazonaws.com/v1/documentation/api/la import boto3 s3 = boto3.resource('s3', - endpoint_url = 'https://.r2.cloudflarestorage.com', - aws_access_key_id = '', - aws_secret_access_key = '' + # Provide your Cloudflare account ID + endpoint_url = 'https://.r2.cloudflarestorage.com', + # Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens) + aws_access_key_id = '', + aws_secret_access_key = '' ) ``` @@ -28,32 +30,116 @@ An example script may look like the following: import boto3 s3 = boto3.client( - service_name ="s3", - endpoint_url = 'https://.r2.cloudflarestorage.com', - 
aws_access_key_id = '', - aws_secret_access_key = '', - region_name="", # Must be one of: wnam, enam, weur, eeur, apac, auto + service_name="s3", + # Provide your Cloudflare account ID + endpoint_url='https://.r2.cloudflarestorage.com', + # Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens) + aws_access_key_id='', + aws_secret_access_key='', + region_name="auto", # Required by SDK but not used by R2 ) # Get object information -object_information = s3.head_object(Bucket=, Key=) +object_information = s3.head_object(Bucket='my-bucket', Key='dog.png') # Upload/Update single file -s3.upload_fileobj(io.BytesIO(file_content), , ) +s3.upload_fileobj(io.BytesIO(file_content), 'my-bucket', 'dog.png') # Delete object -s3.delete_object(Bucket=, Key=) +s3.delete_object(Bucket='my-bucket', Key='dog.png') ``` -```sh -python main.py +## Generate presigned URLs + +You can also generate presigned links that can be used to share public read or write access to a bucket temporarily. 
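Because the signing parameters (`X-Amz-Date` and `X-Amz-Expires`) travel in the URL itself, a client can compute when a presigned link lapses without any call to R2. A minimal standalone sketch — the example URL is hypothetical and abbreviated; real `X-Amz-Credential` and `X-Amz-Signature` values are much longer:

```python
from datetime import datetime, timedelta, timezone
from urllib.parse import urlparse, parse_qs

def presigned_expiry(url: str) -> datetime:
    """Compute when a presigned URL expires from its signed query parameters."""
    params = parse_qs(urlparse(url).query)
    # X-Amz-Date is the signing time in ISO 8601 basic format (UTC)
    signed_at = datetime.strptime(
        params["X-Amz-Date"][0], "%Y%m%dT%H%M%SZ"
    ).replace(tzinfo=timezone.utc)
    # X-Amz-Expires is the validity window in seconds (1 second to 7 days)
    return signed_at + timedelta(seconds=int(params["X-Amz-Expires"][0]))

# Hypothetical URL for illustration only
url = (
    "https://ACCOUNT_ID.r2.cloudflarestorage.com/my-bucket/dog.png"
    "?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Date=20240101T000000Z"
    "&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature=abc"
)
print(presigned_expiry(url))  # 2024-01-01 01:00:00+00:00
```

Note that tampering with either parameter invalidates the signature, so this check is informational only — R2 still enforces the signed values server-side.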
+ +```python +import boto3 + +s3 = boto3.client( + service_name="s3", + # Provide your Cloudflare account ID + endpoint_url='https://.r2.cloudflarestorage.com', + # Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens) + aws_access_key_id='', + aws_secret_access_key='', + region_name="auto", # Required by SDK but not used by R2 +) + +# Generate presigned URL for reading (GET) +# The ExpiresIn parameter determines how long the presigned link is valid (in seconds) +get_url = s3.generate_presigned_url( + 'get_object', + Params={'Bucket': 'my-bucket', 'Key': 'dog.png'}, + ExpiresIn=3600 # Valid for 1 hour +) + +print(get_url) + +# Generate presigned URL for writing (PUT) +# Specify ContentType to restrict uploads to a specific file type +put_url = s3.generate_presigned_url( + 'put_object', + Params={ + 'Bucket': 'my-bucket', + 'Key': 'dog.png', + 'ContentType': 'image/png' + }, + ExpiresIn=3600 +) + +print(put_url) ``` ```sh output -Buckets: - - user-uploads - - my-bucket-name -Objects: - - cat.png - - todos.txt +https://.r2.cloudflarestorage.com/my-bucket/dog.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=...&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-SignedHeaders=host&X-Amz-Signature= +https://.r2.cloudflarestorage.com/my-bucket/dog.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=...&X-Amz-Date=&X-Amz-Expires=3600&X-Amz-SignedHeaders=content-type%3Bhost&X-Amz-Signature= +``` + +You can use the link generated by the `put_object` example to upload to the specified bucket and key, until the presigned link expires. When using a presigned URL with `ContentType`, the client must include a matching `Content-Type` header in the request. + +```sh +curl -X PUT "https://.r2.cloudflarestorage.com/my-bucket/dog.png?X-Amz-Algorithm=..." 
\ + -H "Content-Type: image/png" \ + --data-binary @dog.png ``` + +## Restrict uploads with CORS and Content-Type + +When generating presigned URLs for uploads, you can limit abuse and misuse by: + +1. **Restricting Content-Type**: Specify the allowed content type in the presigned URL parameters. The upload will fail if the client sends a different `Content-Type` header. + +2. **Configuring CORS**: Set up [CORS rules](/r2/buckets/cors/#add-cors-policies-from-the-dashboard) on your bucket to control which origins can upload files. Configure CORS via the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/r2/overview) by adding a JSON policy to your bucket settings: + +```json +[ + { + "AllowedOrigins": ["https://example.com"], + "AllowedMethods": ["PUT"], + "AllowedHeaders": ["Content-Type"], + "ExposeHeaders": ["ETag"], + "MaxAgeSeconds": 3600 + } +] +``` + +Then generate a presigned URL with a Content-Type restriction: + +```python +# Generate a presigned URL with Content-Type restriction +# The upload will only succeed if the client sends Content-Type: image/png +put_url = s3.generate_presigned_url( + 'put_object', + Params={ + 'Bucket': 'my-bucket', + 'Key': 'dog.png', + 'ContentType': 'image/png' + }, + ExpiresIn=3600 +) +``` + +When a client uses this presigned URL, they must: +- Make the request from an allowed origin (enforced by CORS) +- Include the `Content-Type: image/png` header (enforced by the signature) diff --git a/src/content/docs/r2/examples/aws/custom-header.mdx b/src/content/docs/r2/examples/aws/custom-header.mdx index cc3dacf02d3e0d..67c9711451a0b5 100644 --- a/src/content/docs/r2/examples/aws/custom-header.mdx +++ b/src/content/docs/r2/examples/aws/custom-header.mdx @@ -18,9 +18,11 @@ When using certain functionality, like the `cf-create-bucket-if-missing` header, import boto3 client = boto3.resource('s3', - endpoint_url = 'https://.r2.cloudflarestorage.com', - aws_access_key_id = '', - aws_secret_access_key = '' + # Provide your 
Cloudflare account ID + endpoint_url = 'https://.r2.cloudflarestorage.com', + # Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens) + aws_access_key_id = '', + aws_secret_access_key = '' ) event_system = client.meta.events @@ -46,8 +48,9 @@ import { } from "@aws-sdk/client-s3"; const client = new S3Client({ - region: "auto", + region: "auto", // Required by SDK but not used by R2 endpoint: `https://${ACCOUNT_ID}.r2.cloudflarestorage.com`, + // Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens) credentials: { accessKeyId: ACCESS_KEY_ID, secretAccessKey: SECRET_ACCESS_KEY, @@ -87,9 +90,11 @@ To enable us to pass custom headers as an extra argument into the call to `clien import boto3 client = boto3.resource('s3', - endpoint_url = 'https://.r2.cloudflarestorage.com', - aws_access_key_id = '', - aws_secret_access_key = '' + # Provide your Cloudflare account ID + endpoint_url = 'https://.r2.cloudflarestorage.com', + # Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens) + aws_access_key_id = '', + aws_secret_access_key = '' ) event_system = client.meta.events @@ -125,8 +130,10 @@ import { } from "@aws-sdk/client-s3"; const client = new S3Client({ - region: "auto", + region: "auto", // Required by SDK but not used by R2 + // Provide your Cloudflare account ID endpoint: `https://${ACCOUNT_ID}.r2.cloudflarestorage.com`, + // Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens) credentials: { accessKeyId: ACCESS_KEY_ID, secretAccessKey: SECRET_ACCESS_KEY, diff --git a/src/content/docs/r2/objects/delete-objects.mdx b/src/content/docs/r2/objects/delete-objects.mdx index 145537dc70c2bb..05dfb021ea1e18 100644 --- a/src/content/docs/r2/objects/delete-objects.mdx +++ 
b/src/content/docs/r2/objects/delete-objects.mdx @@ -2,14 +2,14 @@ title: Delete objects pcx_content_type: how-to sidebar: - order: 3 + order: 5 --- -import { Render, DashButton } from "~/components"; +import { Render, Tabs, TabItem, DashButton } from "~/components"; -You can delete objects from your bucket from the Cloudflare dashboard or using the Wrangler. +You can delete objects from R2 using the dashboard, Workers API, S3 API, or command-line tools. -## Delete objects via the Cloudflare dashboard +## Delete via dashboard 1. In the Cloudflare dashboard, go to the **R2 object storage** page. @@ -19,26 +19,84 @@ You can delete objects from your bucket from the Cloudflare dashboard or using t 4. Select your objects and select **Delete**. 5. Confirm your choice by selecting **Delete**. -## Delete objects via Wrangler +## Delete via Workers API -:::caution +Use R2 [bindings](/workers/runtime-apis/bindings/) in Workers to delete objects: -Deleting objects from a bucket is irreversible. +```ts ins={3} +export default { + async fetch(request: Request, env: Env, ctx: ExecutionContext) { + await env.MY_BUCKET.delete("image.png"); + return new Response("Deleted"); + }, +} satisfies ExportedHandler; +``` -::: +For complete documentation, refer to [Workers API](/r2/api/workers/workers-api-usage/). -You can delete an object directly by calling `delete` against a `{bucket}/{path/to/object}`. +## Delete via S3 API -For example, to delete the object `foo.png` from bucket `test-bucket`: +Use S3-compatible SDKs to delete objects. You'll need your [account ID](/fundamentals/account/find-account-and-zone-ids/) and [R2 API token](/r2/api/tokens/). 
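The single-object `DeleteObject` pattern shown here also extends to bulk deletion via the S3 `DeleteObjects` batch call, which accepts at most 1,000 keys per request. A sketch of the batching helper — pure Python, no client required; the assumption is that each yielded payload would be passed as the `Delete=` argument to boto3's `delete_objects`, using the same client setup as the snippets in this section:

```python
def delete_batches(keys, batch_size=1000):
    """Split a key list into DeleteObjects payloads (S3 caps each request at 1,000 keys)."""
    for i in range(0, len(keys), batch_size):
        yield {
            "Objects": [{"Key": k} for k in keys[i:i + batch_size]],
            "Quiet": True,  # suppress per-key results in the response
        }

# Hypothetical key names for illustration
payloads = list(delete_batches([f"logs/{n}.txt" for n in range(2500)]))
print([len(p["Objects"]) for p in payloads])  # [1000, 1000, 500]
```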
-```sh -wrangler r2 object delete test-bucket/foo.png + + + +```ts +import { S3Client, DeleteObjectCommand } from "@aws-sdk/client-s3"; + +const S3 = new S3Client({ + region: "auto", // Required by SDK but not used by R2 + // Provide your Cloudflare account ID + endpoint: `https://.r2.cloudflarestorage.com`, + // Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens) + credentials: { + accessKeyId: '', + secretAccessKey: '', + }, +}); + +await S3.send( + new DeleteObjectCommand({ + Bucket: "my-bucket", + Key: "image.png", + }), +); ``` -```sh output + + + +```python +import boto3 -Deleting object "foo.png" from bucket "test-bucket". -Delete complete. +s3 = boto3.client( + service_name="s3", + # Provide your Cloudflare account ID + endpoint_url=f"https://{ACCOUNT_ID}.r2.cloudflarestorage.com", + # Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens) + aws_access_key_id=ACCESS_KEY_ID, + aws_secret_access_key=SECRET_ACCESS_KEY, + region_name="auto", # Required by SDK but not used by R2 +) + +s3.delete_object(Bucket="my-bucket", Key="image.png") ``` - \ No newline at end of file + + + +For complete S3 API documentation, refer to [S3 API](/r2/api/s3/api/). + +## Delete via Wrangler + +:::caution + +Deleting objects from a bucket is irreversible. + +::: + +Use [Wrangler](/workers/wrangler/install-and-update/) to delete objects. 
Run the [`r2 object delete` command](/workers/wrangler/commands/#r2-object-delete): + +```sh +wrangler r2 object delete test-bucket/image.png +``` \ No newline at end of file diff --git a/src/content/docs/r2/objects/download-objects.mdx b/src/content/docs/r2/objects/download-objects.mdx index 8de8c695088f6f..83441b0a592289 100644 --- a/src/content/docs/r2/objects/download-objects.mdx +++ b/src/content/docs/r2/objects/download-objects.mdx @@ -2,37 +2,107 @@ title: Download objects pcx_content_type: how-to sidebar: - order: 2 + order: 4 --- -import { Render, DashButton } from "~/components"; +import { Render, Tabs, TabItem, DashButton } from "~/components"; -You can download objects from your bucket from the Cloudflare dashboard or using the Wrangler. +You can download objects from R2 using the dashboard, Workers API, S3 API, or command-line tools. -## Download objects via the Cloudflare dashboard +## Download via dashboard 1. In the Cloudflare dashboard, go to the **R2 object storage** page. -2. Locate and select your bucket. +2. Select your bucket. 3. Locate the object you want to download. -4. At the end of the object's row, select the menu button and click **Download**. +4. Select **...** for the object and click **Download**. -## Download objects via Wrangler +## Download via Workers API -You can download objects from a bucket, including private buckets in your account, directly. +Use R2 [bindings](/workers/runtime-apis/bindings/) in Workers to download objects: -For example, to download `file.bin` from `test-bucket`: +```ts ins={3} +export default { + async fetch(request: Request, env: Env, ctx: ExecutionContext) { + const object = await env.MY_BUCKET.get("image.png"); + return new Response(object.body); + }, +} satisfies ExportedHandler; +``` -```sh -wrangler r2 object get test-bucket/file.bin +For complete documentation, refer to [Workers API](/r2/api/workers/workers-api-usage/). + +## Download via S3 API + +Use S3-compatible SDKs to download objects. 
+You'll need your [account ID](/fundamentals/account/find-account-and-zone-ids/) and [R2 API token](/r2/api/tokens/).
+
+<Tabs>
+<TabItem label="JavaScript">
+
+```ts
+import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
+
+const S3 = new S3Client({
+  region: "auto", // Required by SDK but not used by R2
+  // Provide your Cloudflare account ID
+  endpoint: `https://<ACCOUNT_ID>.r2.cloudflarestorage.com`,
+  // Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens)
+  credentials: {
+    accessKeyId: "<ACCESS_KEY_ID>",
+    secretAccessKey: "<SECRET_ACCESS_KEY>",
+  },
+});
+
+const response = await S3.send(
+  new GetObjectCommand({
+    Bucket: "my-bucket",
+    Key: "image.png",
+  }),
+);
+```
+
+</TabItem>
+<TabItem label="Python">
+
+```python
+import boto3
+
+s3 = boto3.client(
+    service_name="s3",
+    # Provide your Cloudflare account ID
+    endpoint_url=f"https://{ACCOUNT_ID}.r2.cloudflarestorage.com",
+    # Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens)
+    aws_access_key_id=ACCESS_KEY_ID,
+    aws_secret_access_key=SECRET_ACCESS_KEY,
+    region_name="auto",  # Required by SDK but not used by R2
+)
+
+response = s3.get_object(Bucket="my-bucket", Key="image.png")
+image_data = response["Body"].read()
+```
+
+</TabItem>
+</Tabs>
+
+Refer to R2's [S3 API documentation](/r2/api/s3/api/) for all S3 API methods.
+
+### Presigned URLs
+
+For client-side downloads where users download directly from R2, use presigned URLs. Your server generates a temporary download URL that clients can use without exposing your API credentials.
+
+1. Your application generates a presigned GET URL using an S3 SDK
+2. Send the URL to your client
+3. 
Client downloads directly from R2 using the presigned URL
+
+For details on generating and using presigned URLs, refer to [Presigned URLs](/r2/api/s3/presigned-urls/).
+
+## Download via Wrangler
+
+Use [Wrangler](/workers/wrangler/install-and-update/) to download objects. Run the [`r2 object get` command](/workers/wrangler/commands/#r2-object-get):
+
+```sh
+wrangler r2 object get test-bucket/image.png
+```
-```
-
-```sh output
-Downloading "file.bin" from "test-bucket".
-Download complete.
-```
-
-The file will be downloaded into the current working directory. You can also use the `--file` flag to set a new name for the object as it is downloaded, and the `--pipe` flag to pipe the download to standard output (stdout).
\ No newline at end of file
+
+The file will be downloaded into the current working directory. You can also use the `--file` flag to set a new name for the object as it is downloaded, and the `--pipe` flag to pipe the download to standard output (stdout).
\ No newline at end of file
diff --git a/src/content/docs/r2/objects/multipart-objects.mdx b/src/content/docs/r2/objects/multipart-objects.mdx
index 295f9cf9e9bab0..f0c0f4043f3164 100644
--- a/src/content/docs/r2/objects/multipart-objects.mdx
+++ b/src/content/docs/r2/objects/multipart-objects.mdx
@@ -2,7 +2,7 @@
 title: Multipart upload
 pcx_content_type: concept
 sidebar:
-  order: 1
+  order: 3
 ---
 
diff --git a/src/content/docs/r2/objects/upload-objects.mdx b/src/content/docs/r2/objects/upload-objects.mdx
index d077fb43fa98e5..f48013512a6a7a 100644
--- a/src/content/docs/r2/objects/upload-objects.mdx
+++ b/src/content/docs/r2/objects/upload-objects.mdx
@@ -2,77 +2,149 @@
 title: Upload objects
 pcx_content_type: how-to
 sidebar:
-  order: 1
+  order: 2
 ---
 
 import { Steps, Tabs, TabItem, Render, DashButton } from "~/components"
 
-You can upload objects to your bucket from using API (both [Workers Binding API](/r2/api/workers/workers-api-reference/) or [compatible S3 API](/r2/api/s3/api/)), rclone, Cloudflare dashboard, or Wrangler.
+There are several ways to upload objects to R2:
+
+1. Using the [S3 API](/r2/api/s3/api/), which is supported by a wide range of tools and libraries (recommended)
+2. Directly from within a Worker using R2's [Workers API](/r2/api/workers/)
+3. 
Using the [Cloudflare dashboard](https://dash.cloudflare.com/?to=/:account/r2/overview)
+4. Using the [Wrangler](/r2/reference/wrangler-commands/) command-line (`wrangler r2`)
 
-## Upload objects via Rclone
+## Upload via dashboard
 
-Rclone is a command-line tool which manages files on cloud storage. You can use rclone to upload objects to R2. Rclone is useful if you wish to upload multiple objects concurrently.
+To upload objects to your bucket from the Cloudflare dashboard:
 
-To use rclone, install it onto your machine using their official documentation - [Install rclone](https://rclone.org/install/).
+<Steps>
+1. In the Cloudflare dashboard, go to the **R2 object storage** page.
+
+   <DashButton url="/?to=/:account/r2/overview" />
+
+2. Select your bucket.
+3. Select **Upload**.
+4. Drag and drop your file into the upload area or **select from computer**.
+</Steps>
 
-Upload your files to R2 using the `rclone copy` command.
+You will receive a confirmation message after a successful upload.
 
-```sh
-# Upload a single file
-rclone copy /path/to/local/file.txt r2:bucket_name
+## Upload via Workers API
 
-# Upload everything in a directory
-rclone copy /path/to/local/folder r2:bucket_name
+Use R2 [bindings](/workers/runtime-apis/bindings/) in Workers to upload objects server-side:
+
+```ts ins={3}
+export default {
+  async fetch(request: Request, env: Env, ctx: ExecutionContext) {
+    await env.MY_BUCKET.put("image.png", request.body);
+    return new Response("Uploaded");
+  },
+} satisfies ExportedHandler<Env>;
 ```
 
-Verify that your files have been uploaded by listing the objects stored in the destination R2 bucket using `rclone ls` command.
+## Upload via S3 API
+
+Use S3-compatible SDKs to upload objects. You'll need your [account ID](/fundamentals/account/find-account-and-zone-ids/) and [R2 API token](/r2/api/tokens/).
+
+<Tabs>
+<TabItem label="JavaScript">
+
+```ts
+import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
+
+const S3 = new S3Client({
+  region: "auto", // Required by SDK but not used by R2
+  // Provide your Cloudflare account ID
+  endpoint: `https://<ACCOUNT_ID>.r2.cloudflarestorage.com`,
+  // Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens)
+  credentials: {
+    accessKeyId: "<ACCESS_KEY_ID>",
+    secretAccessKey: "<SECRET_ACCESS_KEY>",
+  },
+});
+
+await S3.send(
+  new PutObjectCommand({
+    Bucket: "my-bucket",
+    Key: "image.png",
+    Body: fileContent, // Contents to upload, for example a Buffer or stream
+  }),
+);
+```
+
+</TabItem>
-```sh
-rclone ls r2:bucket_name
-```
-
-For more information, refer to our [rclone example](/r2/examples/rclone/).
-
-## Upload objects via the Cloudflare dashboard
-
-To upload objects to your bucket from the Cloudflare dashboard:
-
-1. In the Cloudflare dashboard, go to the **R2 object storage** page.
-
-2. Select your bucket.
-3. Select **Upload**.
-4. Choose to either drag and drop your file into the upload area or **select from computer**.
-
-You will receive a confirmation message after a successful upload.
+<TabItem label="Python">
+
+```python
+import boto3
+
+s3 = boto3.client(
+    service_name="s3",
+    # Provide your Cloudflare account ID
+    endpoint_url=f"https://{ACCOUNT_ID}.r2.cloudflarestorage.com",
+    # Retrieve your S3 API credentials for your R2 bucket via API tokens (see: https://developers.cloudflare.com/r2/api/tokens)
+    aws_access_key_id=ACCESS_KEY_ID,
+    aws_secret_access_key=SECRET_ACCESS_KEY,
+    region_name="auto",  # Required by SDK but not used by R2
+)
+
+s3.put_object(Bucket="my-bucket", Key="image.png", Body=file_content)
+```
+
+</TabItem>
+</Tabs>
+
+Refer to R2's [S3 API documentation](/r2/api/s3/api/) for all S3 API methods.
+
-## Upload objects via Wrangler
+### Presigned URLs
 
-:::note
+For client-side uploads where users upload directly to R2, use presigned URLs. Your server generates a temporary upload URL that clients can use without exposing your API credentials.
 
-Wrangler only supports uploading files up to 315MB in size.
-To upload large files, we recommend [rclone](/r2/examples/rclone/) or an [S3-compatible](/r2/api/s3/) tool of your choice.
+1. Your application generates a presigned PUT URL using an S3 SDK
+2. Send the URL to your client
+3. Client uploads directly to R2 using the presigned URL
 
-:::
+For details on generating and using presigned URLs, refer to [Presigned URLs](/r2/api/s3/presigned-urls/).
+
+## Upload via CLI
 
-To upload a file to R2, call `put` and provide a name (key) for the object, as well as the path to the file via `--file`:
+### Rclone
+
+[Rclone](https://rclone.org/) is a command-line tool for managing files on cloud storage. Rclone works well for uploading multiple files from your local machine or copying data from other cloud storage providers.
+
+To use rclone, install it onto your machine using their official documentation - [Install rclone](https://rclone.org/install/).
+
+Upload files with the `rclone copy` command:
 
 ```sh
-wrangler r2 object put test-bucket/dataset.csv --file=dataset.csv
+# Upload a single file
+rclone copy /path/to/local/image.png r2:bucket_name
+
+# Upload everything in a directory
+rclone copy /path/to/local/folder r2:bucket_name
 ```
 
-```sh output
-Creating object "dataset.csv" in bucket "test-bucket".
-Upload complete.
+Verify the upload with `rclone ls`:
+
+```sh
+rclone ls r2:bucket_name
 ```
 
-You can set the `Content-Type` (MIME type), `Content-Disposition`, `Cache-Control` and other HTTP header metadata through optional flags.
+For more information, refer to our [rclone example](/r2/examples/rclone/).
+
+### Wrangler
 
 :::note
 
-Wrangler's `object put` command only allows you to upload one object at a time.
-Use rclone if you wish to upload multiple objects to R2.
+Wrangler supports uploading files up to 315MB and only allows one object at a time. For large files or bulk uploads, use [rclone](/r2/examples/rclone/) or another [S3-compatible](/r2/api/s3/) tool.
+
 :::
-
\ No newline at end of file
+
+Use [Wrangler](/workers/wrangler/install-and-update/) to upload objects. Run the [`r2 object put` command](/workers/wrangler/commands/#r2-object-put):
+
+```sh
+wrangler r2 object put test-bucket/image.png --file=image.png
+```
+
+You can set the `Content-Type` (MIME type), `Content-Disposition`, `Cache-Control`, and other HTTP header metadata through optional flags.
\ No newline at end of file
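+
+For example, assuming a local `image.png`, the following sets the MIME type and a cache policy at upload time using the `--content-type` and `--cache-control` flags:
+
+```sh
+wrangler r2 object put test-bucket/image.png --file=image.png --content-type=image/png --cache-control="public, max-age=86400"
+```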