Can My $12 Box Survive the Reddit Hug of Death?

I build indie applications of very modest size.
On a good day, one of my posts might do alright on Reddit and send me ~300 visitors.

But what if I actually made it to Reddit’s front page?
Could my humble box - 1 CPU core and 2 GB of RAM - handle the dreaded hug of death?

Let’s find out:


My Reality Check ($12 Box)

  • Specs: 1 CPU core, 2 GB RAM, 50 GB storage
  • Stack: Python (FastAPI), HTMX, SQLite
  • Cost: $12/month

This isn’t a Kubernetes cluster. It’s not a microservices playground.
It’s one small VPS where all my apps live.


Constraints

To keep this experiment honest, I set some rules:

  • No upgrades - no extra CPU/RAM.
  • No language switch - no C++/Rust.
  • No database swap - SQLite stays.

The challenge is to see what my David can do against Goliath (Reddit).


What Really Matters

Total visitors per day doesn’t tell you much.
What matters is how many requests hit your server at the same time — the bursts.

Not American Sniper picking off targets one by one - more like John Wick avenging his dog: everything at once.


From RPS to Users

Assume a typical web user makes one request every 10–30 seconds, i.e. roughly 0.05–0.1 RPS per user.

If my server can handle 100 requests per second (RPS), that translates to about 1,000–2,000 simultaneous users:

Simultaneous users ≈ 100 / (0.05 to 0.1) = 1,000–2,000

That’s the ballpark.
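Sanity-checking that arithmetic (these are the estimates above, not measurements):

```python
server_rps = 100            # sustained requests/second the box can serve
rps_per_user = (0.05, 0.1)  # ~1 request every 10-30 seconds per user

# Heavy users (0.1 RPS each) give the low end; light users the high end.
users_low = server_rps / rps_per_user[1]
users_high = server_rps / rps_per_user[0]
print(f"{users_low:.0f}-{users_high:.0f} simultaneous users")  # 1000-2000 simultaneous users
```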


The Test Setup

To keep it simple, I used a tiny CRUD app as the guinea pig. It covers all the basic operations:

  • Read: load homepage entries
  • Search: run queries
  • Write: add a new entry

For load testing, I used bombardier from my laptop, pointing at my live server.

20 connections, CPU barely moving
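For reference, the runs looked roughly like this (the URL is a placeholder for my live server; `-c` is concurrent connections, `-d` is duration, `-r` caps the request rate, `-l` prints the latency distribution):

```shell
# Read path: 200 concurrent connections for 30 seconds
bombardier -c 200 -d 30s -l https://example.com/

# Search path: hold the rate at ~100 requests/second
bombardier -c 50 -r 100 -d 30s -l "https://example.com/search?q=fastapi"
```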


First Test: 100 RPS

The big question: can my $12 box hold steady under 100 requests per second, the rough equivalent of 1,000–2,000 simultaneous users?

Observation

  • Avg latency: ~140–160 ms
  • CPU: ~60%
  • No errors

200 connections, still holding steady

Interpretation
At 100 RPS, the box didn’t break a sweat. This suggests that for reads and simple pages, the $12 server could realistically handle front-page Reddit traffic in bursts.


Search: Where Things Break

When I hit /search?q=fastapi with 100 RPS, the picture changed:

Observation

  • Avg latency: ~174 ms
  • CPU: pegged at 100%
  • Some slowdown under load

200 connections, /search?q=fastapi, CPU pinned at 100%

Interpretation
Search is the bottleneck. Heavy queries push the CPU to its limit, meaning this is where the $12 box would choke under front-page load.


Test               RPS Target   Avg Latency   CPU Usage   Outcome
Homepage (Idle)    1 RPS        ~158 ms       11%         Stable
Homepage (Burst)   100 RPS      ~140 ms       60%         Still holding
Search (Burst)     100 RPS      ~174 ms       100%        Bottleneck found

I'm a CRUD monkey, and this app reflects the majority of my apps: a simple GET, a tiny write (INSERT + COMMIT), and a search (usually LIKE '%term%' on a text column).

It makes sense that search is the problem. SQLite's B-tree index can't help with a leading-wildcard LIKE '%term%' query; it has to scan the rows one by one.

So I reached for SQLite’s secret weapon: FTS5. It gives you a full-text index, turning those full scans into fast lookups.

Make search fast: switch to FTS5

FTS5 gives you an inverted index; queries become index lookups instead of table scans.
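The migration itself is only a few statements. A minimal sketch, assuming the table and column names match the app's (an external-content FTS5 table plus triggers that keep the index in sync with writes):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # stand-in for the app's database file
con.executescript("""
CREATE TABLE entries(id INTEGER PRIMARY KEY, content TEXT, created_at TEXT);

-- External-content FTS5 table: the inverted index lives in entries_fts,
-- the text itself stays in entries.
CREATE VIRTUAL TABLE entries_fts USING fts5(
    content,
    content='entries',
    content_rowid='id'
);

-- Triggers keep the index in sync with writes to entries.
CREATE TRIGGER entries_ai AFTER INSERT ON entries BEGIN
    INSERT INTO entries_fts(rowid, content) VALUES (new.id, new.content);
END;
CREATE TRIGGER entries_ad AFTER DELETE ON entries BEGIN
    INSERT INTO entries_fts(entries_fts, rowid, content)
    VALUES ('delete', old.id, old.content);
END;
CREATE TRIGGER entries_au AFTER UPDATE ON entries BEGIN
    INSERT INTO entries_fts(entries_fts, rowid, content)
    VALUES ('delete', old.id, old.content);
    INSERT INTO entries_fts(rowid, content) VALUES (new.id, new.content);
END;
""")

con.execute(
    "INSERT INTO entries(content, created_at) VALUES (?, datetime('now'))",
    ("FastAPI and SQLite on a tiny VPS",),
)
con.commit()

# Prefix query against the index instead of a table scan.
rows = con.execute(
    "SELECT e.content FROM entries_fts f "
    "JOIN entries e ON e.id = f.rowid "
    "WHERE f.entries_fts MATCH ?",
    ("fast*",),
).fetchall()
```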

What I had:

@app.get("/search", response_class=HTMLResponse)
def search(request: Request, q: str = ""):
    con = get_db()
    results = []
    if q:
        results = con.execute(
            "SELECT * FROM entries WHERE content LIKE ? ORDER BY created_at DESC LIMIT 20",
            (f"%{q}%",),
        ).fetchall()
    con.close()
    return templates.TemplateResponse(
        "search.html", {"request": request, "results": results, "q": q}
    )

After migrating the DB to FTS5 and updating the search endpoint:

@app.get("/search", response_class=HTMLResponse)
def search(request: Request, q: str = Query(..., min_length=2, max_length=64)):
    tokens = [t for t in q.split() if t]
    results = []
    if tokens:
        match = " ".join(f"{t}*" for t in tokens)  # prefix search
        con = get_db()
        results = con.execute("""
            SELECT e.*
            FROM entries_fts f
            JOIN entries e ON e.id = f.rowid
            WHERE f.entries_fts MATCH ?
            ORDER BY e.created_at DESC
            LIMIT 20
        """, (match,)).fetchall()
        con.close()
    return templates.TemplateResponse(
        "search.html", {"request": request, "results": results, "q": q}
    )

Before: At 100 RPS, search pegged CPU and fell apart with 5xx errors.

After FTS5 + WAL: At 100 RPS, search ran clean with ~200 ms latency, 0 errors, and steady throughput.

Lessons for Indie Hackers

  • Reads are cheap, search is expensive.
  • SQLite is fine, just don’t abuse LIKE %term%.
  • WAL mode prevents "DB locked" pain.
  • A $12 box can take a Reddit punch — you don’t need Kubernetes for your MVP.

Outro

Ok, David can take a punch - about 2,000 simultaneous users.

Will it survive thousands more - say 5,000–25,000 simultaneous users, raw (no caching)? Absolutely not.

But with Nginx micro-caching or a CDN in front - and decent SQL under the hood - even a $12 box can shrug off traffic that looks terrifying on paper.
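Nginx micro-caching is itself only a handful of lines: cache successful responses for one second, so a burst of identical requests hits the app roughly once per second. A sketch, with placeholder paths and the app assumed to listen on port 8000:

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=micro:10m max_size=100m;

server {
    listen 80;

    location / {
        proxy_cache micro;
        proxy_cache_valid 200 1s;          # cache good responses for 1 second
        proxy_cache_use_stale updating;    # serve stale while one request refreshes
        proxy_cache_lock on;               # collapse concurrent misses into one upstream hit
        proxy_pass http://127.0.0.1:8000;  # the FastAPI app
    }
}
```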

For indie hackers, the lesson is simple: don’t overcomplicate — test your box, fix the bottlenecks, and ship.

The Repo, tests, and logs