Case study: How Rebecca Yu shipped a dining app in seven days — architecture breakdown


frees
2026-01-24
11 min read

Reconstructing Rebecca Yu’s 7-day dining micro-app: architecture, free services, and a step-by-step guide to replicate and productionize it.

Ship fast, spend less: the pain point every engineer feels

Decision fatigue, tight timelines, and rising cloud bills stop good ideas from becoming shipped projects. Rebecca Yu’s week-long dining app — built with an LLM copilot and published as a tiny web micro-app — is an existence proof for rapid prototyping. This case study reconstructs the likely architecture and toolchain behind Where2Eat, shows how you can replicate it on primarily free services in 2026, and gives pragmatic steps to harden the same stack for production.

Why this matters in 2026: micro apps, vibe-coding and the new tooling landscape

Late 2025 and early 2026 accelerated three trends important to engineers and IT admins:

  • Micro apps: small, single-purpose web apps built for a narrow audience (such as a friend group) are now common because creators want speed and low overhead.
  • LLM copilots: from pair programming to generating UX logic and server code, LLMs (open-source and hosted) are mainstream prototyping tools.
  • Edge-first serverless: Edge Functions and serverless databases have reduced the friction of building global, low-latency micro-apps while keeping idle costs low.

Rebecca’s story is a concrete example: she used LLMs (Claude, ChatGPT) to “vibe-code” and published quickly. Below we reconstruct the likely architecture, suggest free-tier alternatives, and provide a step-by-step replication and productionization checklist.

Reconstructed architecture — the minimal, realistic stack

To build a dining recommendation app in seven days, you need a low-friction stack with components you can wire quickly. Here’s the likely architecture Rebecca used — and how to replace each component with free or generous-tier services in 2026:

1. Front end: Static single-page app (React + Tailwind)

Choice: React (Vite) or Solid + Tailwind CSS for fast UI iteration. Deploy as static assets to an edge CDN for free.

  • What it does: UI, preferences input, results display, local caching.
  • Free hosting options: Cloudflare Pages, Vercel (Hobby), or Netlify. Cloudflare Pages in 2026 is attractive for the global edge footprint and built-in Workers integration.

2. API/Edge logic: Serverless functions or Edge Functions

Choice: small endpoints to call LLMs, aggregate restaurant data, and manage sessions.

  • What it does: proxy LLM calls so API keys never reach the client, apply business rules (for example, skip places that are currently closed), and rate-limit requests; a minimal proxy sketch follows this list.
  • Free hosting options: Cloudflare Workers (free tier), Supabase Edge Functions (free tier for developer usage), or Vercel Edge Functions.
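
To make the proxy idea concrete, here is a minimal sketch in Cloudflare Workers module syntax. The upstream URL, request payload, and the LLM_API_KEY binding name are assumptions for illustration, not any specific provider's API.

// edge LLM proxy sketch (Workers module syntax; upstream URL and payload shape are illustrative)
  export default {
    async fetch(request: Request, env: { LLM_API_KEY: string }): Promise<Response> {
      if (request.method !== 'POST') {
        return new Response('Method not allowed', { status: 405 });
      }
      const { prompt } = (await request.json()) as { prompt: string };
      // The key lives only in the Worker environment, never in the browser bundle
      const upstream = await fetch('https://llm.example.com/v1/complete', {
        method: 'POST',
        headers: {
          Authorization: `Bearer ${env.LLM_API_KEY}`,
          'Content-Type': 'application/json',
        },
        body: JSON.stringify({ prompt, max_tokens: 300 }),
      });
      // Pass the upstream body straight through; business rules and rate limiting slot in here
      return new Response(upstream.body, { status: upstream.status });
    },
  };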

3. Data: Preferences, short histories, and lightweight listings

Choice: Serverless Postgres-like store for structured data and object storage for images.

  • What it does: Store user profiles (name, taste tags), session history, crowdsourced place metadata.
  • Free hosting options: Supabase (free tier Postgres + Auth + Storage), Neon (serverless Postgres free tier in 2026), or PlanetScale (serverless MySQL free tier).

4. Places data and discovery

Choice: For a week-long build you avoid paying for Google Places by using a mix of public data and lightweight APIs.

  • Options: OpenStreetMap (Nominatim) for geocoding, Foursquare or Yelp developer endpoints for richer data (note quota), or a simple crowdsourced list entered by users.

5. LLM copilot: Recommendation generation and conversational UX

Choice: a lightweight, prompt-driven LLM that suggests restaurants based on tags and constraints.

  • Options: Hosted LLMs (OpenAI/Anthropic) — cost-managed with usage caps; or open-source models served via local inference or hosted runtimes (Ollama, Hugging Face Inference, Replicate). In 2026 there are many community-hosted models that reduce cost and latency for small apps.
  • Important pattern: keep the LLM stateless where possible and feed user preferences as context — short prompts keep costs low.
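
Here is a sketch of that pattern: a hypothetical buildPrompt helper that folds taste tags and a short candidate list into one stateless prompt and asks for JSON-only output so post-processing stays trivial. The interface and field names are illustrative, not Rebecca's actual schema.

// prompt-building sketch: stateless, short, structured output (field names are illustrative)
  interface Place { id: string; name: string; tags: string[]; }

  export function buildPrompt(tags: string[], places: Place[], constraints: { partySize: number }): string {
    // Keep the candidate list short (fewer tokens) and demand JSON-only output
    const candidates = places
      .slice(0, 10)
      .map((p) => `${p.id}: ${p.name} [${p.tags.join(', ')}]`)
      .join('\n');
    return [
      'You are picking a restaurant for a small group.',
      `Group taste tags: ${tags.join(', ')}. Party size: ${constraints.partySize}.`,
      'Candidates:',
      candidates,
      'Reply with JSON only: {"picks": [{"id": "...", "reason": "one short sentence"}]}',
    ].join('\n');
  }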

6. Auth: lightweight identity for small groups

Choice: passwordless or social login to avoid heavy user management.

  • Options: Supabase Auth, Clerk (developer tier), or a simple magic link flow you implement with an email provider (e.g., SendGrid free tier).
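
With Supabase, the magic-link flow is only a few lines of supabase-js. A minimal sketch, assuming a Vite front end and a placeholder redirect URL:

// magic-link sign-in sketch using supabase-js v2 (redirect URL is a placeholder)
  import { createClient } from '@supabase/supabase-js';

  const supabase = createClient(
    import.meta.env.VITE_SUPABASE_URL,
    import.meta.env.VITE_SUPABASE_ANON_KEY,
  );

  export async function sendMagicLink(email: string): Promise<void> {
    // Sends a one-time login link; nothing password-shaped to store or reset
    const { error } = await supabase.auth.signInWithOtp({
      email,
      options: { emailRedirectTo: 'https://where2eat.example.com/welcome' },
    });
    if (error) throw error;
  }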

7. CI/CD and observability

Choice: Git-based deploy with simple monitoring.

  • CI/CD: GitHub Actions (free for public and small private projects) or use Vercel/Cloudflare Git integration.
  • Monitoring: lightweight error tracking with Sentry (free developer tier) and basic log capture via built-in function logs or free layers like Logflare.

Why this stack is plausible for a 7-day build

Rebecca likely prioritized low setup time and high productivity:

  • Static front end + serverless backend: no ops, quick iteration.
  • Supabase/Neon: instant DB + Auth without infra overhead.
  • LLM copilots: write UX text and glue code quickly.
  • Edge deploys: zero-config TLS, CI integration, and instant rollbacks.

Actionable replication guide — build Where2Eat in a weekend (step-by-step)

Below are pragmatic steps to reproduce Rebecca’s result. I assume basic familiarity with Git, Node.js, and SQL.

Step 0 — Boilerplate choices

  • Front end: Vite + React + Tailwind
  • Backend: Cloudflare Workers or Supabase Edge Functions (JavaScript/TypeScript)
  • Database/Auth: Supabase (free tier) or Neon + Clerk
  • LLM: Start with a hosted trial key (OpenAI/Anthropic) or a free hosted open model via Hugging Face Inference

Step 1 — Scaffold the repo

  1. Initialize the front end: npm create vite@latest where2eat -- --template react
  2. Add Tailwind: follow Tailwind setup for Vite
  3. Create a /api folder for edge functions
  4. Initialize Git and push to GitHub

Step 2 — Provision the database and auth

  1. Create a Supabase project (free tier). Record SUPABASE_URL and SUPABASE_ANON_KEY.
  2. Create a table for users, preferences, and place overrides. Example SQL:
-- preferences schema
  create table preferences (
    id uuid primary key default gen_random_uuid(),
    user_id uuid references auth.users(id),
    tag text[],
    created_at timestamptz default now()
  );

  create table places (
    id text primary key,
    name text,
    tags text[],
    lat numeric,
    lon numeric,
    source text,
    updated_at timestamptz default now()
  );
  

Step 3 — Add Auth and quick UI

  1. Use Supabase Auth or Clerk to enable email magic links for a zero-password flow.
  2. Wire the front end to read and update preferences in the DB.
  3. Use localStorage to cache recent choices so the app still feels snappy offline.
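
A sketch of the preference read/write path with the localStorage fallback, assuming the Step 2 schema and a shared supabase client module (the module path and function names are mine, not from the original app):

// preferences read/write sketch with a localStorage cache (assumes the Step 2 schema)
  import { supabase } from './supabaseClient'; // the createClient(...) instance from the auth setup

  export async function loadPreferences(userId: string): Promise<string[]> {
    const { data, error } = await supabase
      .from('preferences')
      .select('tag')
      .eq('user_id', userId);
    if (error || !data) {
      // Fall back to the last cached copy so the UI still renders offline
      return JSON.parse(localStorage.getItem('prefs') ?? '[]');
    }
    const tags = data.flatMap((row) => row.tag ?? []);
    localStorage.setItem('prefs', JSON.stringify(tags));
    return tags;
  }

  export async function savePreferences(userId: string, tags: string[]): Promise<void> {
    localStorage.setItem('prefs', JSON.stringify(tags)); // optimistic local update
    await supabase.from('preferences').insert({ user_id: userId, tag: tags });
  }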

Step 4 — Build the recommendation endpoint

Keep logic minimal: combine user tags + nearby places + a short LLM prompt to generate a human-friendly explanation.

// Edge Function handler (sketch; the buildPrompt, parse, and db module paths are illustrative)
  import { fetchLLM } from './llm';              // wraps the hosted LLM call behind the key-hiding proxy
  import { getNearbyPlaces } from './places';    // reads cached or crowdsourced place data
  import { buildPrompt, parse } from './prompt'; // prompt template plus JSON parsing helpers
  import { db } from './db';                     // thin Postgres client (e.g., a Supabase/Neon connection)

  export async function handler(req) {
    const { userId, constraints } = await req.json();
    // Combine stored taste tags with nearby candidates, then ask the LLM for ranked picks
    const prefs = await db.query('select tag from preferences where user_id = $1', [userId]);
    const places = await getNearbyPlaces(constraints.location);
    const prompt = buildPrompt(prefs, places, constraints);
    const llmResp = await fetchLLM(prompt);
    return new Response(JSON.stringify({ picks: parse(llmResp) }), { status: 200 });
  }

Step 5 — Choose and integrate places data

For an MVP:

  • Start with a short curated list or crowdsourced entries.
  • Integrate OSM Nominatim for geocoding as a free option.
  • If you need richer metadata, add Foursquare or Yelp selectively and cache results in your DB to stay under free quotas.
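
A minimal geocoding sketch against Nominatim with an in-memory cache; in a real deployment you would persist hits (for example into the places table), and the User-Agent string is a placeholder you should replace with your own contact details to respect Nominatim's usage policy:

// Nominatim geocoding sketch; cache aggressively to stay within the public usage policy
  const geoCache = new Map<string, { lat: number; lon: number }>(); // swap for your DB in practice

  export async function geocode(query: string): Promise<{ lat: number; lon: number } | null> {
    const cached = geoCache.get(query);
    if (cached) return cached;
    const url = `https://nominatim.openstreetmap.org/search?format=json&limit=1&q=${encodeURIComponent(query)}`;
    const res = await fetch(url, {
      // Nominatim asks callers to identify themselves
      headers: { 'User-Agent': 'where2eat-demo/0.1 (you@example.com)' },
    });
    if (!res.ok) return null;
    const results = (await res.json()) as Array<{ lat: string; lon: string }>;
    if (results.length === 0) return null;
    const hit = { lat: Number(results[0].lat), lon: Number(results[0].lon) };
    geoCache.set(query, hit);
    return hit;
  }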

Step 6 — Deploy to the edge

  1. Deploy static front end to Cloudflare Pages or Vercel (link GitHub repo to auto-deploy).
  2. Deploy the API as Cloudflare Workers (fast) or Supabase Edge Function.
  3. Set environment variables (LLM_API_KEY, SUPABASE_URL, etc.) in the deployment settings — never commit keys.

Step 7 — Iterate and ship

  • Add simple analytics: record conversions (accepted suggestions) in the DB to tune prompts; see the sketch after this list.
  • Share with friends via a private link or TestFlight if you build a wrapper mobile shell.
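
The conversion-tracking call can be a one-liner against a small events table; that table and its columns are an assumption layered on top of the Step 2 schema:

// conversion tracking sketch (the 'events' table is an assumed addition to the Step 2 schema)
  import { supabase } from './supabaseClient';

  export async function recordAccepted(userId: string, placeId: string): Promise<void> {
    // One row per accepted suggestion; aggregate these later to tune prompts and ranking
    await supabase.from('events').insert({
      user_id: userId,
      place_id: placeId,
      kind: 'suggestion_accepted',
    });
  }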

How Rebecca likely used LLMs and how you should too (cost-conscious)

Rebecca’s “vibe coding” approach used LLMs both as a coding assistant and as the recommendation engine. For production-ready micro-apps, apply these best practices:

  • Prompt budget: Keep prompts small and use structured output formats (JSON) so post-processing is minimal. See also guidance on managing generative AI outputs.
  • Cache LLM outputs: Cache recommendations per session and only call LLM when preferences change.
  • Use local or community-hosted models for predictable costs when latency and privacy allow — in 2026 there are robust GGUF and API-compatible runtimes (e.g., Ollama, Hugging Face Inference) that are cheaper for low-volume apps.
  • Guardrails: Validate LLM suggestions against your places DB and business rules to avoid recommending closed or unsafe venues.
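
A combined sketch of the caching and guardrail points above: picks are reused until preferences or constraints change, and any id the database does not recognize is dropped before it reaches the user. The in-memory Map is a stand-in for KV or a DB table.

// cache-and-validate sketch: reuse picks until inputs change; filter out unknown place ids
  const picksCache = new Map<string, { id: string; reason: string }[]>(); // stand-in for KV or a DB table

  export async function getPicks(
    tags: string[],
    constraints: { location: string; partySize: number },
    callLLM: () => Promise<{ id: string; reason: string }[]>,
    knownPlaceIds: Set<string>,
  ): Promise<{ id: string; reason: string }[]> {
    // The cache key changes only when preferences or constraints change
    const key = JSON.stringify({ tags: [...tags].sort(), constraints });
    const cached = picksCache.get(key);
    if (cached) return cached;

    const picks = await callLLM();
    // Guardrail: never surface a place the database does not know about
    const valid = picks.filter((p) => knownPlaceIds.has(p.id));
    picksCache.set(key, valid);
    return valid;
  }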

Productionize: hardening, security, and scale checklist

Turning a weekend micro-app into a safe, maintainable service requires focused hardening. Prioritize the items below:

Security

  • Secrets management: Use environment variables in your deploy platform, rotate keys periodically, and use least-privilege keys (read-only where possible). See trends in developer experience and secret rotation.
  • API key restrictions: Restrict LLM/API keys by domain/IP where provider supports it.
  • CORS & CSP: Configure strict CORS policies and a Content Security Policy for the front end.
  • Rate limiting: Protect LLM endpoints and any third-party API with per-user rate limiting (Cloudflare Workers + KV + tokens is enough for most micro-apps).
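
A soft per-user limiter along those lines, sketched with Workers KV; the binding name, the ten-requests-per-minute budget, and the TTL are assumptions:

// per-user rate limit sketch using Workers KV (binding name and budget are assumptions)
  export async function allowRequest(env: { RATE_KV: KVNamespace }, userId: string): Promise<boolean> {
    const key = `rate:${userId}:${Math.floor(Date.now() / 60_000)}`; // one counter per user per minute
    const current = Number((await env.RATE_KV.get(key)) ?? '0');
    if (current >= 10) return false; // over budget: the caller should respond with HTTP 429
    // KV is eventually consistent, so this is a soft limit, which is fine for a friends-only micro-app
    await env.RATE_KV.put(key, String(current + 1), { expirationTtl: 120 });
    return true;
  }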

Reliability

  • Health checks: Add simple /health endpoints and uptime monitoring (UptimeRobot free plan). For observability best practices, see modern observability.
  • Backups: Regular DB backups or automated backups from Supabase/Neon.
  • Observability: Integrate Sentry for exceptions and basic metrics to know when recommendations fail.

Performance and cost

  • Edge caching: Cache static and semi-static responses at the CDN to avoid repeat LLM calls; a minimal sketch follows this list. See edge and latency patterns in the latency playbook.
  • Batch LLM requests: If you need multiple candidate prompts, batch them to reduce per-call overhead.
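
A sketch of the edge-caching idea using the Workers Cache API. The synthetic-key scheme and five-minute TTL are assumptions; they matter because the Cache API keys on GET requests, so a POST recommendation body has to be folded into a logical key first.

// edge-cache sketch using the Workers Cache API (key scheme and TTL are assumptions)
  export async function cachedRecommendations(cacheKey: string, compute: () => Promise<Response>): Promise<Response> {
    const cache = caches.default; // Workers' built-in edge cache
    // The Cache API keys on GET requests, so wrap the logical key in a synthetic URL
    const key = new Request(`https://cache.internal/${encodeURIComponent(cacheKey)}`);
    const hit = await cache.match(key);
    if (hit) return hit; // served from the edge, no LLM call

    const response = await compute();
    const cacheable = new Response(response.body, response);
    cacheable.headers.set('Cache-Control', 'public, max-age=300'); // cache for five minutes
    await cache.put(key, cacheable.clone());
    return cacheable;
  }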

Data and compliance

  • Minimize PII storage. For friend groups, consider ephemeral sessions instead of persistent personal data — practice privacy-first techniques described in privacy-first personalization.
  • Encrypt sensitive fields at rest (Supabase lets you enable managed encryption where needed).

Advanced strategies — beyond the weekend MVP

After the MVP, engineers typically add features that improve UX and resilience while keeping costs controlled:

  • Hybrid LLM strategy: Use a small local model for standard phrasing and fall back to a stronger hosted LLM for complex reasoning (sketched after this list).
  • Client-side personalization: Move preference fusion into the client where appropriate to reduce server compute.
  • Feature flags: Use a hosted flag service (LaunchDarkly or an alternative) or simple database flags to experiment with recommendation heuristics safely.
  • Observability-driven prompt tuning: Track which suggestions are accepted and use that data to refine prompts and ranking rules.
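
A sketch of the hybrid fallback from the first item above: try the cheap local or community-hosted model first and only pay for the stronger hosted model when the local answer is unusable. Both model callers are placeholders you would wire to your own runtimes.

// hybrid LLM sketch: cheap model first, hosted model as the fallback (both callers are placeholders)
  export async function hybridComplete(
    prompt: string,
    local: (p: string) => Promise<string>,
    hosted: (p: string) => Promise<string>,
  ): Promise<string> {
    try {
      const draft = await local(prompt);
      // Accept the local answer only if it parses as the JSON shape we asked for
      JSON.parse(draft);
      return draft;
    } catch {
      // Local model unavailable or returned malformed output: fall back to the hosted model
      return hosted(prompt);
    }
  }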

Real-world constraints & tradeoffs (what Rebecca likely balanced)

Rebecca’s goal was utility for a narrow group, so tradeoffs were deliberate:

  • Privacy vs convenience: Using LLMs implies data leaves your environment unless you host a model locally.
  • Cost vs accuracy: Hosted LLM calls are convenient but can add variable costs; caching and smaller prompts control this.
  • Time vs robustness: For a week build, tests and heavy auditing are trimmed — production hardening must come after validation.

“Once vibe-coding apps emerged, I started hearing about people with no tech backgrounds successfully building their own apps.” — Rebecca Yu (Substack), as reported by TechCrunch.

Quick reference: free services (2026 snapshot) to replicate this stack

  • Static hosting / edge: Cloudflare Pages, Vercel (Hobby), Netlify
  • Edge functions: Cloudflare Workers, Supabase Edge Functions
  • Database/Auth/Storage: Supabase, Neon, PlanetScale
  • LLM inference: hosted trials (OpenAI/Anthropic) or open-model inference (Hugging Face/Replicate/Ollama community runtimes)
  • Geocoding / Places: OpenStreetMap (Nominatim), Foursquare (developer), cached results
  • CI/CD: GitHub Actions, integration with Vercel/Cloudflare
  • Monitoring: Sentry free plan, UptimeRobot for uptime checks

Final checklist before handover or public release

  1. Rotate keys and restrict their scope.
  2. Implement rate limits on APIs and LLM endpoints.
  3. Set up backups and a rollback plan.
  4. Run a privacy review and minimize PII retention.
  5. Document the architecture and provide a small runbook for incidents; favor diagrams and exports that remain readable offline.

Takeaways — build fast, then harden

Rebecca’s seven-day dining app shows what’s possible in 2026: with the right components you can go from idea to usable micro-app in days. The pattern is consistent:

  • Prototype with edge-hosted static front ends and serverless functions to remove ops friction.
  • Use LLMs judiciously — keep prompts short, cache outputs, and validate results against your database.
  • Leverage modern free tiers for DB/Auth and CDN to keep costs near-zero during prototyping.
  • Plan for production — add security, rate-limiting, backups, and observability before opening to a wider audience.

If you want a hardened starter repo or a checklist customized to your team’s cloud providers, I’ve distilled the above into a reproducible template and a one-page runbook. Ready to replicate Where2Eat and ship your own micro-app in a weekend — but with production-safe defaults?

Call-to-action

Get the starter repo, deployment scripts, and a 20-minute checklist to productionize your micro-app: sign up for the free template package at frees.cloud/projects/where2eat or ping me with your preferred cloud stack and I’ll give you a tailored runbook.
