Using S3-style object storage on my "undoxxable" home server
In a previous blogpost, I outlined how I host web services from my (laptop-based) home server with reasonable anonymity, using Cloudflare Tunnel and a trusted VPN.

I also mentioned that though this architecture works well for most single-server websites, a rather common use case that it doesn't fully cover is services that rely on third-party object storage, like Amazon S3 and Backblaze B2. For services with lots of data (e.g. a media server), using object storage of some sort is hard to get around, at least as an off-site backup.
Why is this a problem? Nothing actually prevents your web app from depending on an S3 bucket on the backend or frontend – for instance, a video-hosting website whose webpages are hosted on your laptop could easily have the actual video files hosted on Amazon S3.
The problem is anonymity: no matter how "undoxxable" your home server is, if the S3 bucket can be linked to your identity – and it surely can, at least by the S3 provider itself – then that identity will be connected to whatever you're hosting on your home server. This might not be a problem (probably 90% of "don't dox my home" needs are fine with Amazon AWS as a trusted party), but if it is, the typical way S3 is used is not going to work.
rclone to the rescue
I briefly entertained the idea of trying to find an S3 provider that accepts anonymous customers paying in crypto, but there doesn't seem to be any reputable, reliable option of that kind, and reliability is pretty much the whole point of S3-like object storage.
Instead, I settled on client-side encryption: encrypting the files placed in the S3 bucket so that the S3 provider cannot link the files stored in the bucket with the files hosted through the home server, which makes it totally okay to use an S3 bucket linked to your identity.
This does mean that on the home server you have to store the encryption key to the files and host a service that decrypts and serves the files – direct links to files in the bucket would serve encrypted garbage – but that's generally an acceptable tradeoff, as long as the home server has enough bandwidth to handle serving files to all the users.
The nice thing is that there's already a ready-made tool for this sort of encryption: rclone, a popular cloud-storage backup tool. You can set up any S3-compatible bucket (or even more exotic forms of storage, like a Google Drive account) as an rclone backend and then wrap it in a layer of encryption to produce a "virtual backend" that transparently presents the same interface as any other rclone backend.
For instance, I use Cloudflare R2 as my provider, so I set up my rclone config file with something like this:
[r2]
type = s3
provider = Cloudflare
access_key_id = a2f3........................467c
secret_access_key = 9750............................................c2ea
endpoint = https://b5e397a549f0e6543dba8e1d83ca9924.r2.cloudflarestorage.com

[demo-encrypted]
type = crypt
remote = r2:nullchinchilla/encrypted
password = ................................................
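With this config in place, it's worth sanity-checking the encrypted remote from the command line before exposing it anywhere (the file and folder names here are just placeholders):
# Copy a test file in through the encrypted remote, then list it back.
rclone copy ./test.mp4 demo-encrypted:videos/
rclone ls demo-encrypted:videos/
# Listing the underlying bucket directly shows only encrypted names and contents.
rclone ls r2:nullchinchilla/encrypted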
The really cool thing is that this encrypted virtual backend can then be served over HTTP using rclone serve:
rclone serve http demo-encrypted: --addr 127.0.0.1:9091
and given a publicly reachable URL through Cloudflare Tunnel.

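As a rough sketch of the Tunnel side (using a placeholder tunnel name my-tunnel and hostname rclone.example.com), the cloudflared configuration looks something like this:
# Point a DNS hostname at an existing named tunnel.
cloudflared tunnel route dns my-tunnel rclone.example.com
# In the tunnel's config.yml, route that hostname to the local rclone port:
#   ingress:
#     - hostname: rclone.example.com
#       service: http://127.0.0.1:9091
#     - service: http_status:404
# Then run the tunnel.
cloudflared tunnel run my-tunnel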
We've now constructed an effectively undoxxable large-file server that uses a doxxed S3 bucket as a storage backend, accessible from any web browser. Uploading files to this server can be done from any machine with an rclone client, or even from machines without rclone by using some of the fancier rclone serve options to expose the encrypted bucket as an authenticated WebDAV or SFTP server.
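For example, a minimal sketch of serving the same encrypted remote over authenticated WebDAV (the username, password, and port are placeholders):
rclone serve webdav demo-encrypted: \
--addr 127.0.0.1:9092 \
--user uploader --pass some-long-password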
Tweaking some knobs for caching and streaming
By default, rclone serve
has reasonable defaults for low-traffic browsing on a single machine, but I had to tweak a few knobs to make it work well as a Cloudflare-cached, production-scale server:
Fixing rclone's inappropriate Last-Modified header
rclone serve serves all paths that don't have a backend-provided last-modified time – for S3-compatible backends, that's all folder listings – with a fixed Last-Modified HTTP header of January 1, 2000:
curl -I https://rclone.nullchinchilla.me/hahala/
HTTP/2 200
date: Mon, 25 Aug 2025 23:51:27 GMT
content-type: text/html; charset=utf-8
accept-ranges: bytes
last-modified: Sat, 01 Jan 2000 00:00:00 GMT
...
This majorly messes with how Cloudflare and the browser cache folder-listing pages: since neither ever sees a newer Last-Modified header when revalidating a cached listing page, the listing stays cached indefinitely, no matter how stale it actually becomes. Sadly, there does not seem to be a way of turning this behavior off.
Fortunately, the fix here is easy: use Cloudflare Rules to remove the Last-Modified HTTP header and disable caching for all paths ending in /.

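As a sketch, the matching expression for such a rule in Cloudflare's Rules language looks roughly like this (the actions themselves, bypassing the cache and removing the Last-Modified response header, are configured in the dashboard):
ends_with(http.request.uri.path, "/")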
Disabling Cloudflare caching for Range requests
One issue I noticed: when opening a long video file (e.g. a movie) in the browser, the video starts playing and buffering immediately, but if I fast-forward to a later part of the video, playback stalls for a very long time.
In other words, HTTP Range requests for later parts of files seem to take much longer than Range requests for earlier parts of files.
It turns out this is due to the way Cloudflare caching interacts with Range requests. Cloudflare caches files at whole-file granularity, so if the requested video isn't already in the Cloudflare cache and the user asks for, say, bytes 100000-110000 of the file, Cloudflare will actually load the entire file up to the 100000th byte before streaming those 10000 bytes to the user. And given that neither the link between Cloudflare and the home server nor the link between the home server and the bucket is that fast, this can take quite a long time.
I ended up simply having all Range requests bypass the cache by modifying my cache-bypass rule.

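A sketch of the extended expression (the exact syntax for detecting a Range header may differ slightly; check Cloudflare's Rules language reference):
ends_with(http.request.uri.path, "/") or any(http.request.headers.names[*] == "range")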
This does increase load on the server if your users seek a lot in videos, but unfortunately, there doesn't seem to be a way of fixing this in a way that fully preserves caching. Hopefully your FTTH connection has enough upload bandwidth to deal with uncached video streaming!
Optimizing rclone serve performance
rclone serve might not be able to fill your network connection, especially for single-threaded downloads, if the latency to the S3 provider is high. And since the entire home server is behind a VPN, the latency is often pretty high (my setup has ~200ms latency to Backblaze B2).
An easy fix is to enable "VFS chunked reading" in rclone:
rclone serve http demo-encrypted: \
--addr 127.0.0.1:9091 \
--read-only -vv --use-server-modtime \
--dir-cache-time 5m \
--vfs-read-chunk-size 4M \
--vfs-read-chunk-size-limit 32M \
--vfs-read-chunk-streams 8
This makes rclone read each file over multiple concurrent streams to the backend (8 here), fetching chunks that start at 4 megabytes and grow up to 32 megabytes, even if only one file is being downloaded. It instantly makes rclone serve fill the pipe, even when latency is very high.
Using chunked reading does increase the number of requests to the backend, and thus the cost incurred. This can be mitigated greatly by using rclone's built-in caching to avoid repeatedly downloading files from the backend if they haven't changed, which also saves a lot of download bandwidth:
rclone serve http demo-encrypted: \
--addr 127.0.0.1:9091 \
--read-only -vv --use-server-modtime \
--dir-cache-time 5m \
--vfs-read-chunk-size 4M \
--vfs-read-chunk-size-limit 32M \
--vfs-read-chunk-streams 8 \
--cache-dir /var/cache/rclone \
--vfs-cache-max-age 720h \
--vfs-cache-mode full
Note that the options here are a little confusing: setting vfs-cache-max-age to an insanely long time does not lead to files becoming stale; it only sets the maximum time after which files are forcibly evicted from the cache. Since I want files to stick around as long as they're fresh, and the cache stays under 50 GB anyway, this parameter is set to a very long time. The option that actually controls how frequently files are checked for freshness is dir-cache-time, which defaults to a sane 5 minutes.
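If you'd rather bound the cache by disk usage explicitly instead of relying only on age-based eviction, you can also add a size cap to the same command (50G here just mirrors the rough figure above):
--vfs-cache-max-size 50G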
The final result
You can check out the final result, including a large-ish video file for streaming and a variety of random files, at https://rclone.nullchinchilla.me/