PPC Landing Page Builder

A chat-based platform that generates production-ready landing pages from natural language prompts. Built at Grownomic AI as sole engineer.

LangGraph · Agentic AI · Next.js · Supabase · Fly.io · Aider

Overview

I was CTO and sole engineer at Grownomic AI, a two-person startup. We built a chat-based platform that lets small businesses generate production-ready landing pages from a natural language prompt, instead of hiring an agency or wrestling with page builders.

The core loop: user describes their page, optionally provides a brand URL for asset extraction, and the system generates a full Next.js landing page deployed to its own VM. Then they refine it through a streaming AI chat editor, similar to Replit or Lovable but specialized for PPC landing pages. Under the hood, LangGraph Cloud orchestrates the AI planning and Aider (open-source coding agent) runs on each VM to execute the actual code edits, with a webhook server managing the deployment lifecycle.

Result

Two paying law firm clients. Cut page creation from roughly four weeks (agency model) to a single ~50-minute chat session.

Agency workflow: ~4 weeks
  1. Brief & scoping (3 days)
  2. Design mockups (1 week)
  3. Revision rounds (5 days)
  4. Development (~2 weeks)
  5. QA & launch (3 days)

Grownomic: ~50 min
  1. Describe your page (1 min)
  2. AI generates page (3 min)
  3. Chat refinement (~45 min)
  4. Live (instant)

Architecture

Five components: Next.js frontend (Vercel), Supabase (data + auth + queues), LangGraph Cloud (AI orchestration), Python workers (Fly.io), and per-page VMs (Fly.io).

| Layer | Technologies |
| --- | --- |
| Frontend | Next.js 15, React 19, TypeScript, Tailwind CSS v4, shadcn/ui |
| Backend | Supabase (Postgres 17, Auth, Storage, RLS, PGMQ) |
| AI Orchestration | LangGraph Cloud, LangChain, Claude Sonnet 4.5 |
| Code Editing | Aider (open-source coding agent) on each VM |
| Workers | Python 3.11, Docker, Fly.io |
| Page Hosting | Fly.io VMs (warm pool + lease management) |

The AI Pipeline

User chat hits a Next.js route handler, which proxies to LangGraph Cloud as a streaming run. The LangGraph ReAct agent (Claude Sonnet 4.5) receives the message plus a dynamic system prompt built with the repo's file tree and current page state. The agent calls query_code_base to read files, then edit_code_base to delegate edits to Aider on the VM. The dev server hot-reloads and the preview updates live.
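
The tool layer above can be sketched as a small dispatcher. This is a minimal Python sketch: only the tool names `query_code_base` and `edit_code_base` come from the pipeline description; `ToolCall`, the repo mapping, and the injected `post` callback are hypothetical stand-ins for the LangGraph tool bindings and the VM webhook client.

```python
# Sketch of the agent's tool-dispatch layer. The tool names are real;
# the payload shapes and `post` callback are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class ToolCall:
    name: str
    args: dict

def query_code_base(path: str, repo: dict) -> str:
    """Read a file so the agent can reason about current page state."""
    return repo.get(path, f"<missing: {path}>")

def edit_code_base(instruction: str, post: Callable) -> dict:
    """Delegate the actual multi-file edit to Aider on the page's VM."""
    # In production this would POST to the VM's token-authenticated
    # webhook server; here `post` is injected so the sketch stays pure.
    return post({"instruction": instruction})

def dispatch(call: ToolCall, repo: dict, post: Callable):
    if call.name == "query_code_base":
        return query_code_base(call.args["path"], repo)
    if call.name == "edit_code_base":
        return edit_code_base(call.args["instruction"], post)
    raise ValueError(f"unknown tool: {call.name}")
```

Keeping the VM client injected means the reasoning loop never touches IO directly, which mirrors the "what to change" vs. "how to edit" split described below.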

Three stream modes run simultaneously: messages for LLM tokens (real-time typing), custom for tool progress updates, and updates for step completion events.
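
A minimal demultiplexer over those three modes might look like the following; the `(mode, payload)` event tuples are a simplification of the actual SDK stream chunks.

```python
# Route stream events to the right UI handler based on stream mode.
# Event shapes are illustrative, not the exact LangGraph SDK payloads.
def demux(events, on_token, on_progress, on_step):
    for mode, payload in events:
        if mode == "messages":      # LLM tokens -> live typing effect
            on_token(payload)
        elif mode == "custom":      # tool progress updates
            on_progress(payload)
        elif mode == "updates":     # node/step completion events
            on_step(payload)
```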

Page Lifecycle State Machine

Page creation runs through an explicit state machine: provision (lease resources from pool) → extract brand assets or collect generic assets → user approval → generate code via Aider → build → completed.

Each state transition is a discrete job on a PGMQ queue, making the system resumable after failures and giving the frontend clear states to poll against.
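
The lifecycle reads naturally as an explicit transition table. This sketch paraphrases the state names from the description (the real enum may differ, and the `failed` state is an assumption based on the resumability requirement):

```python
# Legal transitions in the page lifecycle, as a simple lookup table.
# State names are paraphrased from the writeup, not the actual enum.
TRANSITIONS = {
    "provision": {"extract_brand_assets", "collect_generic_assets"},
    "extract_brand_assets": {"awaiting_approval"},
    "collect_generic_assets": {"awaiting_approval"},
    "awaiting_approval": {"generating"},
    "generating": {"building"},
    "building": {"completed", "failed"},   # `failed` is assumed
}

def advance(current: str, nxt: str) -> str:
    """Validate a transition before enqueuing the next PGMQ job."""
    if nxt not in TRANSITIONS.get(current, set()):
        raise ValueError(f"illegal transition: {current} -> {nxt}")
    return nxt
```

Because every hop is validated against the table, a crashed job can be retried from its recorded state without risking an out-of-order transition.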

Key Decisions

Per-Page VMs Instead of Multi-Tenant Hosting

Every generated landing page runs on its own dedicated Fly.io VM, pulled from a warm pool. Isolation was non-negotiable: each page is a full Next.js project with its own dependencies. Multi-tenant hosting would have meant building a sandboxed runtime and taking on blast radius risk. Dedicated VMs gave me clean isolation, hot-reload previews during editing, and zero crosstalk between users.

The tradeoff is cost. At scale, I would move to containerized builds with a shared hosting layer or pre-built static exports served from a CDN.

PGMQ in Postgres Instead of a Dedicated Queue

I used PGMQ (a Postgres-native message queue) inside Supabase for all async job processing. This kept the operational surface small: one database for state, auth, storage, and queues. I could read and write queue state in the same transactions as business data, which simplified consistency.
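
The queue semantics the workers rely on (send, read with a visibility timeout, delete on success) can be modeled in a few lines. The real operations are SQL functions such as `pgmq.send()`, `pgmq.read()`, and `pgmq.delete()` executed inside Postgres; this in-memory `MiniQueue` only illustrates the behavior.

```python
# In-memory model of the PGMQ operations the workers use. A read hides
# the message for `vt` seconds, so a crashed worker's job reappears.
import itertools
import time

class MiniQueue:
    def __init__(self):
        self._seq = itertools.count(1)
        self._msgs = {}  # msg_id -> (payload, visible_at)

    def send(self, payload) -> int:
        msg_id = next(self._seq)
        self._msgs[msg_id] = (payload, 0.0)
        return msg_id

    def read(self, vt: float = 30.0):
        """Return one visible message, hiding it for `vt` seconds."""
        now = time.monotonic()
        for msg_id, (payload, visible_at) in self._msgs.items():
            if visible_at <= now:
                self._msgs[msg_id] = (payload, now + vt)
                return msg_id, payload
        return None

    def delete(self, msg_id: int) -> None:
        """Ack: remove a message after its job completes successfully."""
        self._msgs.pop(msg_id, None)
```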

The tradeoff: queue throughput is bounded by Postgres. If I needed high fan-out or thousands of concurrent jobs, I would switch to a dedicated system.

Resource Pool and Leasing

Infrastructure resources (GitHub repos, GTM containers, dev servers, VMs) are pre-provisioned and tracked as rows in Postgres with explicit lease states. A dedicated resource-provisioning-service keeps pools warm by polling every 60 seconds.

Without pooling, page creation would block on resource setup. Pre-provisioning pushes that latency to the background. Page creation just leases an existing resource from the pool.
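
Claiming a resource is a single atomic state flip. In Postgres this would be one `UPDATE ... WHERE state = 'available' ... RETURNING` (or `SELECT ... FOR UPDATE SKIP LOCKED`) so concurrent workers never claim the same row; the sketch below uses illustrative field names.

```python
# Lease one available resource of the requested kind from the pool.
# Field names are illustrative; in production this is an atomic
# UPDATE ... RETURNING against the resource table.
def lease(pool: list, kind: str, page_id: str):
    for row in pool:
        if row["kind"] == kind and row["state"] == "available":
            row["state"] = "leased"
            row["leased_by"] = page_id
            return row
    return None  # pool exhausted; fall back to on-demand provisioning
```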

Aider as the Coding Agent

Aider handles repo-aware multi-file edits and produces clean diffs. LangGraph orchestrates the high-level decisions (what to change), and Aider executes the actual code edits. This separated "what to do" from "how to edit code." Building a diff/apply engine from scratch would have been a project in itself.

Each VM runs a lightweight HTTP webhook server that exposes endpoints for the full deployment lifecycle: prepare, install, run, stop, and switch. All endpoints are token-authenticated and idempotent with file-based locking.
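
The idempotency guarantee can be sketched as a create-exclusive lock file plus a recorded result, so a retried webhook call replays the first outcome instead of re-running the side effect. File names and payloads here are illustrative, not the actual webhook server code; a production version would also handle the race where the marker exists but its result has not been written yet.

```python
# Run `action` at most once per operation name, using an exclusively
# created marker file as both lock and result cache. Illustrative only.
import json
import os

def run_once(lock_dir: str, op: str, action):
    os.makedirs(lock_dir, exist_ok=True)
    marker = os.path.join(lock_dir, f"{op}.json")
    try:
        # O_CREAT | O_EXCL fails if the marker already exists.
        fd = os.open(marker, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
    except FileExistsError:
        with open(marker) as f:  # already ran: replay the first result
            return json.load(f)
    result = action()
    with os.fdopen(fd, "w") as f:
        json.dump(result, f)
    return result
```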

By using Aider, I could focus on the product layer — the wizard, brand extraction, state machine, resource pooling — instead of reinventing code editing.

Worker Architecture

Two Python services on Fly.io:

page-service polls a PGMQ queue and routes messages to one of eight handlers based on job type. Handlers cover the full lifecycle: provision, brand extraction, generic assets, build, publish, delete, integrations, and brand updates. The code follows clean architecture: handler (extracts payload, instantiates repos) → workflow (orchestrates logic, no IO) → infra (all side effects). Workflows are testable with fakes.
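
The layering can be illustrated with a pure workflow that takes its dependencies as arguments, which is what makes it testable with fakes. All names here (`build_workflow`, `FakeRepo`, `FakeDeployer`) are hypothetical, not the actual service code:

```python
# Workflow layer: orchestrates logic, performs no IO itself. All side
# effects live behind the injected `repo` and `deployer` ports.
def build_workflow(page_id: str, repo, deployer) -> str:
    page = repo.get_page(page_id)
    if page["state"] != "generating":
        return "skipped"  # guard against stale or replayed jobs
    deployer.build(page["vm_id"])
    repo.set_state(page_id, "building")
    return "queued"

# Infra fakes: tests swap these in for the real Postgres/Fly adapters.
class FakeRepo:
    def __init__(self, pages): self.pages = pages
    def get_page(self, pid): return self.pages[pid]
    def set_state(self, pid, state): self.pages[pid]["state"] = state

class FakeDeployer:
    def __init__(self): self.built = []
    def build(self, vm_id): self.built.append(vm_id)
```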

resource-provisioning-service polls every 60 seconds and tops up resource pools when they fall below a threshold. This is what makes page creation fast.
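
The top-up check itself is simple arithmetic over pool counts; the thresholds and resource kinds below are illustrative.

```python
# Given current available counts and per-kind targets, return how many
# of each resource to provision this polling cycle.
def top_up(pool_counts: dict, targets: dict) -> dict:
    return {
        kind: max(0, target - pool_counts.get(kind, 0))
        for kind, target in targets.items()
    }
```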

Frontend Architecture

Feature-first structure with ESLint-enforced import boundaries. RSC for reads, Server Actions for mutations. Two patterns: flat for simple features, ports and adapters for complex domains (e.g., integrations). RLS on all user-owned tables enforced at the database level.

What I Learned

The technology worked. Every product we built at Grownomic shipped and performed. What killed the company was not engineering but go-to-market. Building on third-party platforms (OpenAI's ChatGPT Apps, Google Ads API) meant our distribution was always one policy change away from disappearing.

The architectural decisions I am most proud of are the ones that kept complexity manageable as a solo engineer: PGMQ instead of a separate queue, resource pooling as database rows, and delegating code editing to Aider instead of building it myself. Each one reduced the operational surface at the cost of some theoretical scalability I never needed.