
Cloud Storage for PMTiles

Concepts

PMTiles is designed to work on any S3-compatible cloud storage platform that supports HTTP Range Requests. Proper support for Cross-Origin Resource Sharing (CORS) is required if your map frontend is hosted on a different domain than your storage platform.
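
You can sanity-check that a storage endpoint honors Range requests with curl (the URL here is a placeholder):

sh
# Fetch only the first 16 KiB of the archive and print the response headers.
curl -sD - -o /dev/null -r 0-16383 https://storage.example.com/my-file.pmtiles
# A correctly configured host answers "206 Partial Content" with a Content-Range header.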

Uploading

  • Most cloud storage platforms support moderate-size uploads through a web interface.

  • The pmtiles command line tool has a pmtiles upload command for moving files to cloud storage. This requires credentials for your specific platform; see the examples after this list.

  • Rclone is another recommended tool for managing large files on S3-compatible storage.

sh
rclone copyto my-filename my-configuration:my-bucket/my-folder/my-filename.pmtiles --progress --s3-chunk-size=256M

Rclone is also available via the rclone/rclone Docker image.

  1. Run rclone config and follow the on-screen questions. In Docker, the config is located at /etc/rclone.
  2. Run rclone copyto <FILE> <rclone configuration name>:<BUCKET_NAME>/<FILE> --progress --s3-no-check-bucket --s3-chunk-size=256M to upload to the root of the bucket.
  • The aws command-line tool can be used for uploads, as well as setting CORS configuration on any S3-compatible platform.
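
For illustration, hedged sketches of both command-line routes (exact flags vary by tool version, so consult each tool's help output; the bucket and file names are placeholders):

sh
# pmtiles CLI: credentials are read from your platform's usual environment
# variables, e.g. AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY.
pmtiles upload my-filename.pmtiles my-folder/my-filename.pmtiles --bucket=s3://my-bucket

# aws CLI, which works against any S3-compatible endpoint:
aws s3 cp my-filename.pmtiles s3://my-bucket/my-folder/my-filename.pmtiles --endpoint-url https://S3_COMPATIBLE_ENDPOINT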

INFO

Storage services usually bill by the number of GET requests and the total number of bytes stored. It's important to understand these costs when hosting PMTiles, as each tile fetched via a Range request counts as a GET.
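
As a rough illustration (the rate is hypothetical; check your provider's pricing page): at $0.40 per million GET requests, a map generating 10 million tile requests per month would incur about $4 in request fees, before storage and any bandwidth charges.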

Cloudflare R2

R2 is the recommended storage platform for PMTiles because it does not have bandwidth fees, only per-request fees: see R2 Pricing.

  • R2 supports HTTP/2.

  • R2 CORS can be configured through a command-line utility such as the aws tool, or from your R2 bucket's "Settings" tab, under the "CORS Policy" section:

json
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://example.com"],
      "AllowedMethods": ["GET","HEAD"],
      "AllowedHeaders": ["range","if-match"],
      "ExposeHeaders": ["etag"],
      "MaxAgeSeconds": 3000
    }
  ]
}

Example of using the aws command line tool to configure R2 CORS:

aws s3api put-bucket-cors --bucket MY_BUCKET --cors-configuration file:///home/user/cors_rules.json --endpoint-url https://S3_COMPATIBLE_ENDPOINT
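
For R2, the S3-compatible endpoint has the form https://<ACCOUNT_ID>.r2.cloudflarestorage.com, and the aws tool must be configured with the access key and secret of an R2 API token.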

Amazon S3

  • Only HTTP/1.1 is supported.

  • From your S3 Bucket's "Permissions" tab, scroll to the Cross-origin resource sharing (CORS) editor.

S3 Policy for public reads:

json
{
    "Version": "2012-10-17",
    "Id": "",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*"
        }
    ]
}
{
    "Version": "2012-10-17",
    "Id": "",
    "Statement": [
        {
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*"
        }
    ]
}
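
The same policy can be applied from the command line instead of the AWS console (a sketch; the bucket name and file path are placeholders):

sh
# Attach the public-read policy saved as policy.json to the bucket.
aws s3api put-bucket-policy --bucket example-bucket --policy file:///home/user/policy.json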

S3 CORS Configuration:

  • Using the AWS interface:
json
[
    {
      "AllowedOrigins": ["https://example.com"],
      "AllowedMethods": ["GET","HEAD"],
      "AllowedHeaders": ["range","if-match"],
      "ExposeHeaders": ["etag"],
      "MaxAgeSeconds": 3000
    }
]
  • CORS can also be set with aws s3api put-bucket-cors --bucket MY_BUCKET --cors-configuration file:///home/user/cors_rules.json, using the JSON structure shown above for Cloudflare R2.

Google Cloud

CORS: Google Cloud Shell

echo '[{"maxAgeSeconds": 300, "method": ["GET", "HEAD"], "origin": ["https://example.com"], "responseHeader": ["range","etag","if-match"]}]' > cors.json
gsutil cors set cors.json gs://my-bucket-name

CORS: gsutil tool

Install the gsutil tool, then save the following rules as gcors.json:

json
[
    {
      "origin": ["https://example.com"],
      "method": ["GET","HEAD"],
      "responseHeader": ["range","etag","if-match"],
      "maxAgeSeconds": 300
    }
]
bash
gsutil cors set gcors.json gs://my-bucket-name
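
To confirm the rules took effect, read them back:

bash
gsutil cors get gs://my-bucket-name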

Microsoft Azure

  • Only HTTP/1.1 is supported.

  • Configuration through Web Portal

  • CORS configuration: in the left sidebar, open Resource Sharing (CORS). A command-line sketch follows this list.

    • Set Allowed Headers to range,if-match
    • Set Exposed Headers to range,accept-ranges,etag
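
The same settings can be applied with the az command-line tool, a sketch assuming a recent azure-cli (the account name is a placeholder):

sh
# --services b targets the Blob service.
az storage cors add --services b \
  --methods GET HEAD \
  --origins https://example.com \
  --allowed-headers range if-match \
  --exposed-headers range accept-ranges etag \
  --max-age 300 \
  --account-name MY_STORAGE_ACCOUNT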

DigitalOcean Spaces

  • Only HTTP/1.1 is supported (even with the CDN enabled).

  • CORS is configured via the web UI.

  • Use s3cmd to expose the etag header; a sketch follows this list.
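
A sketch of the s3cmd route (cors.xml is a placeholder file in the standard S3 CORS XML schema, allowing GET and HEAD from your origin and exposing etag):

sh
# Apply the CORS file to the Space; assumes s3cmd is already configured
# with your Spaces key and the region's endpoint.
s3cmd setcors cors.xml s3://my-space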

Backblaze B2

Sample CORS Configuration:

json
[
    {
      "corsRuleName": "allowHeaders",
      "allowedOrigins": ["https://example.com"],
      "allowedOperations":["b2_download_file_by_name"],
      "allowedHeaders": ["range","if-match"],
      "maxAgeSeconds": 300
    }
]
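
One way to apply these rules is the b2 command-line tool (a sketch; the bucket name and type are placeholders, and flag spellings may vary by CLI version):

sh
# --corsRules takes the rules as inline JSON; here they are read from a file.
b2 update-bucket --corsRules "$(cat cors.json)" my-bucket allPublic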

Supabase Storage

CORS

Currently, limiting access to certain domains is only possible by proxying requests to Private Buckets through Supabase Edge Functions, which are billed separately.

This proxy Edge Function validates the request origin and attaches a header with your project's service role key. This allows you to serve files from private buckets while still benefiting from the built-in smart CDN.

ts
const ALLOWED_ORIGINS = ["http://localhost:3000"];
const corsHeaders = {
  "Access-Control-Allow-Origin": ALLOWED_ORIGINS.join(","),
  "Access-Control-Allow-Headers":
    "authorization, x-client-info, apikey, content-type, range, if-match",
  "Access-Control-Expose-Headers": "range, accept-ranges, etag",
  "Access-Control-Max-Age": "300",
};

Deno.serve(async (req) => {
  if (req.method === "OPTIONS") {
    return new Response("ok", { headers: corsHeaders });
  }

  // Validate request origin.
  const origin = req.headers.get("Origin");
  console.log(origin);
  if (!origin || !ALLOWED_ORIGINS.includes(origin)) {
    return new Response("Not Allowed", { status: 405 });
  }

  // Construct private bucket storage URL.
  const reqUrl = new URL(req.url);
  const url = `${
    Deno.env.get("SUPABASE_URL")
  }/storage/v1/object/authenticated${reqUrl.pathname}`;
  console.log(url);

  const { method, headers } = req;
  // Add auth header to access file in private bucket.
  const modHeaders = new Headers(headers);
  modHeaders.append(
    "authorization",
    `Bearer ${Deno.env.get("SUPABASE_SERVICE_ROLE_KEY")!}`,
  );
  return fetch(url, { method, headers: modHeaders });
});
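
Once the function is deployed, a client can point the pmtiles JavaScript library at its URL, a sketch in which the project ref, function name, and bucket path are placeholders:

ts
import { PMTiles } from "pmtiles";

// Requests go to the Edge Function, which proxies the private bucket.
const tiles = new PMTiles(
  "https://YOUR_PROJECT_REF.supabase.co/functions/v1/pmtiles-proxy/my-bucket/map.pmtiles",
);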

Other Platforms

GitHub Pages

GitHub Pages supports repositories up to 1 GB, and files pushed over regular Git are limited to 100 MB. If your PMTiles file fits, it's an easy way to host.

Scaleway

Scaleway Object Storage only supports HTTP/1.1.

HTTP Servers

  • Caddy is highly recommended for serving PMTiles because of its built-in HTTPS support. Use the file_server directive to serve .pmtiles files from a static directory.

CORS configuration:

  Access-Control-Allow-Methods GET,HEAD
  Access-Control-Expose-Headers ETag
  Access-Control-Allow-Headers Range,If-Match
  Access-Control-Allow-Origin http://example.com
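
Expressed as a Caddyfile, a sketch in which the site address and root directory are placeholders:

  example.com {
      root * /var/www/tiles
      file_server
      header {
          Access-Control-Allow-Origin http://example.com
          Access-Control-Allow-Methods GET,HEAD
          Access-Control-Allow-Headers Range,If-Match
          Access-Control-Expose-Headers ETag
      }
  }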

As an alternative, consider using the pmtiles_proxy plugin for Caddy.

  • Nginx supports HTTP Range Requests out of the box; CORS headers must be set in your configuration files (see the sketch below).
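
A minimal sketch of a matching Nginx location block (paths and origin are placeholders):

  location ~* \.pmtiles$ {
      root /var/www/tiles;
      add_header Access-Control-Allow-Origin https://example.com;
      add_header Access-Control-Allow-Methods "GET, HEAD";
      add_header Access-Control-Allow-Headers "Range, If-Match";
      add_header Access-Control-Expose-Headers "ETag";
  }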
