Build a 'micro' dining app in a weekend using free cloud tiers

#serverless #microapps #rapid-prototyping

frees
2026-01-21
11 min read

Recreate Rebecca Yu’s dining micro app in 48 hours using free serverless tiers, Cloudflare Workers, D1, and LLM copilots (ChatGPT/Claude).

Cut decision fatigue: build a micro dining app in 48 hours on free serverless tiers

You're an engineer or savvy IT admin who needs a fast, low-cost prototype: a tiny web app that recommends where a group should eat, without paying cloud bills or negotiating vendor contracts. In 48 hours, using only free-tier serverless hosting, Cloudflare Pages tooling, and LLM copilots (ChatGPT/Claude), you can reproduce Rebecca Yu’s week-long dining app as a modular, production-ready micro app you can extend later.

Why this matters in 2026 (short)

Micro apps, single-purpose utilities built for a creator or a small user group, are mainstream in 2026. Advances in LLM copilots, better free serverless tiers, and acquisitions like Cloudflare’s purchase of Human Native signal that platforms are optimizing for rapid, low-cost developer workflows and data-aware tooling. That means you can prototype a useful, secure app quickly and keep cloud costs at zero for as long as you need to iterate.

What you'll ship in 48 hours (TL;DR)

  • A static single-page frontend deployed on Cloudflare Pages (React/Svelte/Vanilla).
  • A lightweight serverless API on Cloudflare Workers for search, recommendations, and state.
  • Serverless storage using D1 (SQLite) for restaurant metadata and KV for sessions/cache.
  • A recommendation algorithm that mixes simple heuristics with an optional LLM-enhanced ranking step — LLMs are used as build-time copilots (ChatGPT/Claude) and optionally at runtime using free LLM endpoints if you need natural-language explanations.
  • CI/CD via GitHub + Cloudflare Pages or Wrangler for one-click deploys and previews.

Before you start: accounts, assumptions, and limits

Time saver: create these free accounts before you begin.

  • Cloudflare account (Pages, Workers, D1, KV, R2 can be enabled). In 2026, Cloudflare's free dev stack is purpose-built for micro apps (verify your account has D1 and KV enabled).
  • GitHub or GitLab for repo + CI integration (free for public/private).
  • ChatGPT and Anthropic Claude accounts for copiloting — use the chat interfaces / code generation features (you can use free-tier chat sessions to generate code and prompts).

Practical note on limits: Free-tier quotas change; check Cloudflare’s dashboard for current request limits, D1 row limits, and KV size. This tutorial keeps traffic and storage modest so the free tier supports personal use and MVP testing.

48-hour roadmap (practical breakdown)

Work in focused blocks; each block has a deliverable you can demo. If you want to compress further, combine blocks.

Hour 0–2: Plan the MVP

  • Define core flows: Add restaurants, pick group members, generate a ranked suggestion list, vote to lock in choice.
  • Decide data model: restaurants (id, name, tags, lat, lng, rating, source), users (id, name, preferences), sessions (session id, participants, votes).
  • Pick a UI framework: go light — a single HTML page with vanilla JS or a minimal React/Vite template. Cloudflare Pages will serve static assets.
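The data model above can be sketched as plain JS objects before you commit to SQL; the field names below mirror the list and are suggestions, not a fixed schema:

```javascript
// Illustrative shapes for the three core entities (names are assumptions).
const restaurant = {
  id: 'r1',
  name: 'Taqueria Azul',
  tags: ['tacos', 'mexican', 'cheap'], // stored as a JSON string in D1
  lat: 37.7749,
  lng: -122.4194,
  rating: 4.1,
  source: 'seed',
};

const user = { id: 'u1', name: 'Alice', preferences: ['mexican', 'cheap'] };

const session = {
  sessionId: 'abc123',
  participants: ['u1'],
  votes: { r1: 1 }, // restaurantId -> vote count
};
```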

Hour 2–6: Scaffold repo and initial UI

  1. Create repo: git init / GitHub repo.
  2. Scaffold Pages project. Example minimal structure:
    /
      index.html
      /src
        app.js
        styles.css
      /api (local mocks)
    
  3. Use ChatGPT/Claude to generate UI components quickly: paste a short prompt describing the UI and ask for a minimal HTML/CSS/JS bundle. Iterate until the layout matches your desired flow (list + session + recommend button).

Hour 6–12: Cloudflare Workers API + Wrangler

Install Wrangler (Cloudflare’s CLI) and bootstrap a Worker project. Wrangler will also let you create and wire D1 and KV from CLI or the Cloudflare dashboard.

# install wrangler
npm install -g wrangler

# scaffold a Worker project (recent wrangler versions delegate scaffolding to create-cloudflare)
npm create cloudflare@latest -- where2eat-worker
cd where2eat-worker
wrangler dev

Write minimal Worker routes (module syntax; bindings such as D1 and KV arrive via the env argument):

export default {
  async fetch(request, env) {
    const url = new URL(request.url)
    if (url.pathname === '/api/restaurants') return getRestaurants(request, env)
    if (url.pathname === '/api/recommend') return recommend(request, env)
    return new Response('Not found', { status: 404 })
  }
}

Use D1 for restaurant storage and KV for ephemeral session data. Use Wrangler to create them:

wrangler d1 create where2eat-db
wrangler kv namespace create WHERE2EAT_CACHE

Hour 12–18: Seed data and simple recommendation logic

Use a small, curated seed list — either hand-curated or pulled from OpenStreetMap / Overpass API (free public endpoint). For a weekend hack, seed 50 restaurants in your city.

-- example SQL for D1
CREATE TABLE restaurants (
  id TEXT PRIMARY KEY,
  name TEXT,
  tags TEXT, -- json
  lat REAL,
  lng REAL,
  rating REAL
);

INSERT INTO restaurants (id,name,tags,lat,lng,rating) VALUES
('r1','Taqueria Azul','["tacos","mexican", "cheap"]',37.7749,-122.4194,4.1);

Recommendation algorithm (simple, explainable):

  1. Filter by user-selected tags and distance.
  2. Score = weighted sum: tagMatch * 0.6 + normalizedRating * 0.3 + recency/popularity * 0.1
  3. Return top N and allow tie-breaking with participant votes.
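The scoring step above can be sketched in plain JS. Two assumptions in this sketch: the popularity term is stubbed to 0 for the weekend MVP, and tag match is normalized by the number of selected tags:

```javascript
// Weighted-sum scorer implementing the three steps above.
function scoreRestaurant(restaurant, selectedTags) {
  const rtags = JSON.parse(restaurant.tags); // tags column holds a JSON array
  const tagMatch = selectedTags.length
    ? selectedTags.filter(t => rtags.includes(t)).length / selectedTags.length
    : 0;
  const normalizedRating = restaurant.rating / 5; // ratings run 0-5
  const popularity = 0; // stub; wire in recency/popularity data later
  return tagMatch * 0.6 + normalizedRating * 0.3 + popularity * 0.1;
}
```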

Hour 18–26: Add LLM-powered enhancements (build-time copiloting)

Use ChatGPT and Claude as copilots to speed development — not necessarily as the runtime engine. Useful tasks:

  • Generate SQL migration and seed scripts from a description.
  • Create unit tests for API endpoints.
  • Write client-side code for session management and localStorage caching.

Prompt examples:

Prompt ChatGPT: "Generate a Cloudflare Worker endpoint that queries a D1 SQLite table for restaurants matching a JSON array of tags and returns top 10 sorted by rating."

Using copilots at build-time reduces runtime LLM costs and avoids paid API usage.

Hour 26–34: Sessions and voting

Implement a session-creation endpoint that stores session info in KV and returns a short shareable path (like /s/abc123).

POST /api/session -> { sessionId: 'abc123' }
POST /api/session/abc123/join -> add participant
POST /api/session/abc123/vote -> record vote
GET /api/session/abc123/results -> computed results

Use KV for quick key-value operations (session → JSON), and D1 for persistent restaurant data. Keep the KV TTL short and persist votes to D1 periodically if you need durability.

Hour 34–42: Polish UI and deploy to Cloudflare Pages

  1. Wire frontend to /api endpoints with fetch(); display recommendations and voting UI.
  2. Add optimistic UI updates for voting and a basic share flow (copy session link).
  3. Set up Cloudflare Pages: connect GitHub repo, configure build command (if any), and publish.

Pages provides preview deployments for each PR — great for sharing with testers.
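The fetch() wiring from step 1 can look like this on the client. The results element id and the label format are assumptions:

```javascript
// Format one recommendation for display.
function formatResult(r) {
  return `${r.name} (${r.score.toFixed(2)})`;
}

// POST the selected tags to the Worker and render the ranked list.
// Assumes index.html contains <ul id="results">.
async function loadRecommendations(tags) {
  const res = await fetch('/api/recommend', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ tags }),
  });
  if (!res.ok) throw new Error(`recommend failed: ${res.status}`);
  const { results } = await res.json();
  const list = document.getElementById('results');
  list.innerHTML = '';
  for (const r of results) {
    const li = document.createElement('li');
    li.textContent = formatResult(r);
    list.appendChild(li);
  }
  return results;
}
```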

Hour 42–48: Test, harden, and ship

  • Test edge cases: empty results, concurrent votes, session expiry.
  • Rate-limit abusive endpoints with Cloudflare Workers rate limiting or basic token checks.
  • Enable Cloudflare Access if you want to restrict usage to invited emails for the personal app.
  • Document upgrade paths: enabling runtime LLM API calls (optional), adding R2 for images, or switching to a paid DB if the app grows.
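For the "basic token checks" option, a minimal bearer-token gate is enough for a private micro app (the header convention is standard; Cloudflare's rate-limiting rules or Access are the heavier-duty options):

```javascript
// Gate private endpoints behind a shared secret carried in the
// Authorization header. Store the expected token as a Worker secret
// (e.g. via `wrangler secret put`), never in source.
function isAuthorized(request, expectedToken) {
  const auth = request.headers.get('Authorization') || '';
  return auth === `Bearer ${expectedToken}`;
}
```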

Example Worker handler: recommend (concise)

async function recommend(request, env) {
  const body = await request.json()
  const tags = body.tags || []
  const sql = `SELECT id,name,tags,lat,lng,rating FROM restaurants` // add a parameterized WHERE clause when tags are present
  const res = await env.DB.prepare(sql).all() // DB is the D1 binding from wrangler.toml
  // simple tag scoring (popularity term folded into the rating weight for the MVP)
  const scored = res.results.map(r => {
    const rtags = JSON.parse(r.tags)
    const tagMatch = tags.reduce((acc, t) => acc + (rtags.includes(t) ? 1 : 0), 0)
    const score = tagMatch * 0.6 + (r.rating / 5) * 0.4
    return { ...r, score }
  }).sort((a, b) => b.score - a.score)

  return new Response(JSON.stringify({ results: scored.slice(0, 10) }), { headers: { 'Content-Type': 'application/json' } })
}

How and when to use LLMs at runtime (cost-aware patterns)

Two safe usage patterns keep runtime costs at zero or near-zero:

  1. Build-time copiloting — Use ChatGPT/Claude to write code, tests, SQL, and UI. This is free via the chat interfaces and massively accelerates development.
  2. Optional low-volume runtime LLMs — If you want human-friendly explanations like "Why this place?", call a free-tier LLM inference endpoint (Hugging Face, Anthropic free quota, or a community LLM). Keep responses short, cache them in KV, and only call LLMs when a user explicitly requests an explanation.

Avoid using paid LLM inference (OpenAI / Anthropic paid APIs) as a requirement for your app unless you have budget. Instead, design the core recommendation engine deterministically and add LLM enhancements as optional UX sugar.
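Pattern 2 sketched as code: check KV first and call the model only on a miss. Here `callFreeLLM` is a hypothetical helper for whatever free endpoint you pick, and `EXPLAIN_CACHE` is an assumed KV binding:

```javascript
// Cache "Why this place?" blurbs so each restaurant hits the LLM at most
// once per TTL window. callFreeLLM(prompt) -> string is supplied by you.
async function explainChoice(restaurantId, prompt, env, callFreeLLM) {
  const cacheKey = `explain:${restaurantId}`;
  const cached = await env.EXPLAIN_CACHE.get(cacheKey);
  if (cached !== null) return cached; // cache hit: no LLM call
  const blurb = await callFreeLLM(prompt); // cache miss: one LLM call
  await env.EXPLAIN_CACHE.put(cacheKey, blurb, { expirationTtl: 60 * 60 * 24 * 7 }); // 7 days
  return blurb;
}
```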

Data sources with no paid infra

If you need external restaurant data without paid APIs:

  • OpenStreetMap (Overpass API): public, queryable, and free for low-volume prototypes. Respect rate limits and cache queries in D1/KV.
  • Small scrape or CSV of local favorites: export a curated CSV and import to D1 — lightweight and reliable.
  • Yelp / Foursquare dev tiers: these often have free developer SDKs with rate limits; use them carefully and cache results.
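A sketch of the OpenStreetMap pull via the public Overpass endpoint; the radius, timeout, and field mapping are illustrative, and you should cache the result in D1/KV rather than re-querying:

```javascript
// Fetch restaurants within ~2 km of a point from the public Overpass API.
// Keep volume low and respect the Overpass usage policy.
async function fetchNearbyRestaurants(lat, lng, radiusMeters = 2000) {
  const query = `
    [out:json][timeout:25];
    node["amenity"="restaurant"](around:${radiusMeters},${lat},${lng});
    out body;
  `;
  const res = await fetch('https://overpass-api.de/api/interpreter', {
    method: 'POST',
    body: 'data=' + encodeURIComponent(query),
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  });
  if (!res.ok) throw new Error(`Overpass error: ${res.status}`);
  const data = await res.json();
  return data.elements.map(n => ({
    id: `osm-${n.id}`,
    name: n.tags?.name || 'Unnamed',
    lat: n.lat,
    lng: n.lon,
  }));
}
```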

Operational tips to stay on free tiers

  • Use caching aggressively. Store query results in KV with TTLs; serve stale results when needed.
  • Limit heavy computation on the Worker. Move anything long-running to scheduled tasks or break into smaller requests.
  • Constrain images: use small thumbnails hosted on free R2 (if available) or link to external images with cache-control headers.
  • Monitor usage via the Cloudflare dashboard; set alerts before you hit free-tier limits.

Security and privacy (practical minimal steps)

  • Authenticate optional sessions with a single-use token stored in KV (no full OAuth required for a private micro app).
  • Sanitize inputs in Workers; avoid SQL injection by using parameterized queries with D1's prepare and bind.
  • Use HTTPS and HSTS (Cloudflare handles TLS by default).
  • If collecting user location, keep it optional and store it client-side unless explicitly needed server-side.
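The parameterized-query point in practice, using D1's prepare/bind chain (the binding name `DB` is an assumption from your wrangler.toml):

```javascript
// Never interpolate user input into SQL; let D1 bind the parameter.
async function getRestaurant(env, id) {
  return env.DB
    .prepare('SELECT id, name, rating FROM restaurants WHERE id = ?')
    .bind(id)
    .first(); // null if no row matches
}
```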

Scaling path (if the micro app grows)

How to upgrade without redesigning the app:

  • Move from D1 -> managed Postgres or DynamoDB if you need multi-region writes or huge datasets.
  • Switch KV caching to a paid plan or introduce a CDN-backed cache layer for static assets.
  • Introduce paid LLM usage for richer natural-language features, but only after measuring ROI.

Real-world example: rapid iteration using LLMs (case study)

When reproducing Rebecca Yu’s week-long Where2Eat in a weekend, the split looked like this:

  1. Hours 1–6: Use ChatGPT to produce a minimal React UI and Sass styles.
  2. Hours 6–12: Wrangler + D1 scaffolding generated by Claude; seed data imported via SQL scripts created by the copilot.
  3. Hours 12–24: Implement recommendation logic; use copilot to write edge-case unit tests.
  4. Hours 24–48: Polish, run usability tests with friends, deploy to Pages, and share preview links.

Result: a working app that recommends places and handles multi-person voting — deployable, secure, and free for personal use.

Developer checklist (quick)

  • [ ] Cloudflare account + Wrangler installed
  • [ ] GitHub repo with Pages integration
  • [ ] D1 database with restaurants table seeded
  • [ ] KV namespace for sessions/caching
  • [ ] Minimal frontend connected to /api endpoints
  • [ ] ChatGPT/Claude prompts and scratchpad used for tests and SQL generation
  • [ ] Monitoring + rate-limiting configured

Looking ahead: trends worth watching

  • Data-aware developer tooling: Cloudflare's 2026 investments (including Human Native) mean more integrated pipelines for using creator data and training small models, useful if you later want private personalization without cloud vendor lock-in.
  • Edge LLMs: Emerging lightweight LLM runtimes on Workers or Wasm runtimes can power short explanations at the edge with minimal latency. Test local models via Hugging Face or community runtimes before paying for commercial APIs.
  • Composable copilots: Combine ChatGPT/Claude for UI and SQL generation with CI checks; use PR previews to validate changes instantly. Component marketplaces and helper tools speed frontend assembly.

Common pitfalls and how to avoid them

  • Avoid relying on a single paid API for core functionality — architect for graceful degradation.
  • Don’t run heavy LLM inference on each page load — cache generated text in KV and serve it until stale.
  • Watch cold-start patterns: big third-party SDKs in Workers increase bundle size and slow responses. Keep Workers lean.

Wrap-up: What you gain

In two days you convert a week-long build into an efficient micro app that solves decision fatigue for a group. You get a replicable pattern: deterministic core logic on free serverless infra plus optional LLM enhancements used as copilots. This preserves a zero-dollar runway for experimentation while keeping upgrade paths open.

Actionable takeaways

  • Use ChatGPT / Claude as build-time copilots — they accelerate development without increasing runtime costs.
  • Design deterministic recommendation logic first; add LLMs for UX only if necessary.
  • Leverage Cloudflare Pages + Workers + D1 + KV to keep the entire stack on free tiers.
  • Cache aggressively and test limits early to avoid surprises in production.

Next steps (call-to-action)

If you want a ready-to-clone starter repo, a tested Wrangler template, and a set of ChatGPT/Claude prompts tailored to this build, grab the free starter kit on frees.cloud and deploy the demo to Cloudflare Pages in under 10 minutes. Share your fork and join our community thread to compare optimizations and limit-sparing techniques.


frees

Contributor

Senior editor and content strategist. Writing about technology, design, and the future of digital media. Follow along for deep dives into the industry's moving parts.
