Run an S3-Compatible Object Store Locally

Every modern application eventually deals with object storage—images, documents, logs or arbitrary blobs. In production you might use Amazon S3, Google Cloud Storage or MinIO. But during development there's often no easy way to simulate these services without provisioning cloud resources or relying on heavyweight emulators. Hard-coding bucket names or skipping storage integration altogether leads to surprises later.

This is why we built Local Object Storage, a fully local, configurable S3-like storage service. It lets you run an object store on your laptop, define buckets in a YAML file and interact with it using standard S3-style operations. The project is open source and designed specifically for development and testing.

What Does It Offer?

Local Object Storage supports the core operations you need when integrating with S3:

Bucket-based organization - Buckets are declared in the config file; the service automatically creates directories for objects and metadata.

S3-like object operations - Listing, uploading, downloading, deleting and inspecting objects via HTTP endpoints.

Metadata & content type handling - Each object stores metadata (e.g., custom attributes) and content type.

Dockerized & portable - Images are available for linux/amd64 and linux/arm64 architectures. You can run the service in any environment without installing additional dependencies.

All state is stored on disk under /data/objects and /data/meta; this makes the storage deterministic and persistent across restarts. Because the service runs locally and never calls out to external APIs, it's ideal for testing, CI/CD pipelines or air-gapped environments.

Minimal Configuration

At the heart of Local Object Storage is a simple YAML file. You define the port and the buckets you need:

port: 8080
buckets:
  - name: photos
  - name: userdata

That's it. Each bucket entry tells the service to create directories for that bucket's objects and metadata. Defining buckets this way forces you to think about every storage dependency your application has: when you compare your local configuration to your Terraform or other infrastructure code, you immediately see which buckets must exist in the cloud. This encourages good architecture, because storage dependencies are explicit and there are no hidden or ad-hoc buckets.
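That comparison against infrastructure code can be as simple as a set difference over bucket names. A minimal sketch in Python (the local bucket names come from the example config above; the provisioned list is illustrative):

```python
# Buckets declared in the local YAML config (the example above)
local_buckets = {"photos", "userdata"}

# Buckets your Terraform (or other IaC) provisions; names here are illustrative
provisioned_buckets = {"photos"}

# Anything declared locally but not provisioned is a gap to fix before deploying
missing = local_buckets - provisioned_buckets
print(sorted(missing))  # ['userdata']
```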

You can mount this configuration file into the container at /config.yaml, or point the service at a different location with the CONFIG_PATH environment variable.

Starting the Server

To get going, pull the pre-built image and mount your configuration:

docker pull siocode/local-object-storage
docker run -p 8080:8080 \
  -v $PWD/local-object-storage.config.yaml:/config.yaml \
  -v storagedata:/data \
  siocode/local-object-storage

A Docker Compose setup looks like this:

services:
  storage:
    image: siocode/local-object-storage:latest
    ports:
      - "8080:8080"
    volumes:
      - ./local-object-storage.config.yaml:/config.yaml
      - storagedata:/data
volumes:
  storagedata: {}

Run docker compose up and the service will listen on port 8080, creating the configured buckets if they don't exist.
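If other services in your Compose file should wait until storage is ready, you can add a healthcheck that probes the service's GET /healthz liveness endpoint. This is a sketch only: it assumes curl is available inside the image, which is not guaranteed here:

```yaml
services:
  storage:
    # ... image, ports and volumes as above ...
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/healthz"]
      interval: 5s
      timeout: 2s
      retries: 5
```

Dependent services can then declare depends_on with condition: service_healthy.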

API Overview

Local Object Storage exposes a small, predictable API that mirrors common S3 operations:

| Operation | Method & Path | Description |
| --- | --- | --- |
| List objects | GET /buckets/{bucket}/objects | Returns a list of objects in a bucket; supports optional prefix, start_after and limit query parameters |
| Retrieve object | GET /buckets/{bucket}/objects/{key} | Returns the raw content of an object along with content type, hash and metadata headers |
| Upload object (stream) | PUT /buckets/{bucket}/objects/{key} | Uploads a file directly; accepts Content-Type and X-Object-Metadata headers |
| Upload object (base64) | POST /buckets/{bucket}/objects/{key} | Accepts base64-encoded data and optional metadata in JSON |
| Get metadata | HEAD /buckets/{bucket}/objects/{key} | Returns object metadata (size, hash, last modified, custom metadata) without the content |
| Delete object | DELETE /buckets/{bucket}/objects/{key} | Removes an object, returning 204 No Content on success or 404 Not Found if it doesn't exist |

In addition, a GET /healthz endpoint returns { "status": "OK" } for liveness checks, and there are no authentication requirements. Because the API is intentionally small, it's easy to mock or integrate with existing SDKs.
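To make the endpoint shapes concrete, here is a minimal Python client sketch using only the standard library. The field names in the base64 upload body and the JSON encoding of the X-Object-Metadata header value are assumptions, not documented specifics:

```python
import base64
import json
import urllib.request

class LocalObjectStorageClient:
    """Minimal client sketch for the API above (local use, no auth)."""

    def __init__(self, base_url="http://localhost:8080"):
        self.base_url = base_url.rstrip("/")

    def object_url(self, bucket, key):
        # All object operations share the /buckets/{bucket}/objects/{key} path
        return f"{self.base_url}/buckets/{bucket}/objects/{key}"

    def put_request(self, bucket, key, data,
                    content_type="application/octet-stream", metadata=None):
        # Streaming upload: raw bytes in the body, content type and
        # (assumed JSON-encoded) custom metadata in the headers
        headers = {"Content-Type": content_type}
        if metadata:
            headers["X-Object-Metadata"] = json.dumps(metadata)
        return urllib.request.Request(
            self.object_url(bucket, key), data=data,
            headers=headers, method="PUT",
        )

    def base64_body(self, data, metadata=None):
        # Body for the POST (base64) upload variant; field names are assumptions
        return json.dumps({
            "data": base64.b64encode(data).decode("ascii"),
            "metadata": metadata or {},
        }).encode("utf-8")
```

Against a running instance, passing the built request to urllib.request.urlopen performs the actual upload.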

Why It's Useful

Developing with local object storage brings several advantages:

Clear visibility into bucket dependencies - Declaring buckets in YAML makes every storage dependency explicit. When the time comes to write infrastructure scripts, you already know which buckets must be provisioned.

Minimal setup, maximum fidelity - You can spin up the service next to your database and API in a few lines of Docker configuration. The API mimics S3 semantics, so your application code stays the same.

Deterministic local testing - With on-disk storage, objects persist across container restarts. You can easily clear state by removing the data volume.

CI/CD and offline support - Because the service is self-contained and MIT-licensed, it can run in your CI pipeline or on developers' machines without network access.

Why Local Object Storage Matters

Object storage is a foundational piece of every modern application, but testing it shouldn't require cloud dependencies or complicated mock setups. Local Object Storage brings production-like storage flows to your local machine with:

  • Zero external dependencies - No internet connection required during development
  • Instant reset - Restart the container or clear the volume to return to a known state
  • Complete control - Define buckets and their contents in a single YAML file
  • Production parity - Uses the same S3-style API patterns as your production code

The image is MIT-licensed and available at github.com/SIOCODE-Open/local-object-storage. Whether you're building a new app with file uploads, migrating from one storage provider to another, or just want to test storage flows without waiting on cloud services, Local Object Storage gives you a realistic testing environment in seconds.

Local Object Storage follows the same philosophy as our Local Identity Provider: provide a realistic drop-in service for development without the complexity of a full production system. By specifying buckets in a single config file and exposing a handful of S3-style endpoints, it helps teams build storage integrations early and catch configuration mistakes before they reach production.

What storage flows could you test more thoroughly if you had a local object store that behaved like S3?


Have questions about Local Object Storage or implementing file storage in your applications? Contact us at info@siocode.hu.