Request for Comments · 2026-04-21

Fimo as a DX Platform
POC → Production

Status · Draft
Author · Alexandre Bodin
Summary

The POC proved that developers can scaffold, manage, and deploy a Fimo project entirely from the CLI using Claude Code. This RFC covers what needs to happen to take that from a working prototype to a shipping product — including the near-term items that are already designed but not yet implemented, and the medium/long-term platform bets.

What the POC proved
  • A fimo CLI binary can scaffold a project, link it to the Fimo backend, and manage the full content/asset/form/translation lifecycle from the terminal.
  • Claude Code can operate the CLI autonomously using a small skill bundle (rules + reference docs) installed alongside the project.
  • The Fimo git remote + sandbox pull model works: pushing to the Fimo remote triggers a sandbox sync in seconds.
  • The new CLI template (fimo/ui + fimo/vite) is cleaner than the old .fimo/-in-tree approach and should replace it everywhere.
  • Not solved: branch-based workflows · preview environments · translations dual-store divergence · two diverging templates · DX gap between fimo create and an AI-first flow · no npm publish.
Collaboration model

Fimo is designed for two personas working on the same project in parallel — a developer in code, and a content editor in a dashboard. Both converge on the same git-versioned source of truth without blocking each other. Preview environments extend the model to branch-scoped sandboxes with their own DB branch.

Developer
Terminal + Claude Code

Schemas, components, routes. Branches for experiments.

  • Scaffold with fimo create, or link an existing repo via fimo init
  • Code in TypeScript — schemas, forms, routes, components, t() keys
  • fimo sync reconciles local definitions with the DB — one-shot, run on demand or in CI
  • Branch, fimo preview, then fimo deploy --publish to ship
Content editor
Fimo studio / dashboard

Entries, media, copy. No local setup.

  • Edit blog posts, landing pages, marketing copy and translations
  • Upload and organise media, approve drafts, schedule publishes
  • Ask the in-dashboard AI: "write a new hero for the pricing page"
  • Work on main or switch to a preview branch to review WIP
Fimo platform
Git remote · Management API · Sandbox pull

Code lives in git. Entries and media live in Neon. Both are versioned per-environment. The sandbox watches its git ref and reloads on every push — so dashboard edits never conflict with local dev.

Branch → environment hierarchy

  feature/hero ───▶ Preview · isolated
      hero-abc123.preview.example.com · Neon branch (forked from develop) · draft content

  feature/i18n ───▶ Preview · isolated
      i18n-def456.preview.example.com · Neon branch · translation WIP

      │ fimo promote · merge branch into develop · replace develop's DB with the branch's DB
      ▼

  develop ───▶ Base preview · dashboard default
      develop.example.com · Neon develop branch · what editors open every morning

      │ fimo publish · promote develop to main · goes live
      ▼

  main ───▶ Production · live
      site.example.com · Neon main · published content — protected, only reached via publish
Why it works

The dashboard defaults to develop, not production. Editors work safely — their changes land on the base preview and only reach live via a deliberate fimo publish step. Production is protected: nothing edits it directly.

Developers work in feature branches on top of develop. Each branch gets its own preview env with its own Neon DB fork, so a schema change or content experiment stays isolated until fimo promote merges it back into develop — at which point develop's DB is replaced with the branch's DB (content + schemas cascade together).

Merging data from multiple preview branches into develop at once (when two editors work on different features in parallel) is a more advanced use case, tracked as a later M2 follow-up.

A day in the life · example journey

Mon 09:00
  • Alex (developer) · git checkout -b feature/hero-redesign, starts coding the new hero. [local · no env yet]
  • Marie (content) · Opens the dashboard, tweaks the homepage intro copy. [develop base preview · dashboard default]

Mon 11:30
  • Alex · Adds a new schema field via defineSchema and runs fimo sync to push it to the develop DB. [local]

Mon 14:00
  • Alex · Runs fimo preview. Server spins up hero-abc123.preview, forks Neon from develop (includes Marie's 09:00 edit), pushes the branch. Shares the URL with Marie. [hero-abc123 preview · isolated DB fork]

Mon 15:30
  • Marie · Switches to Alex's preview env via the branch picker, sees the new hero live, fills in the new field, tweaks the copy to match the visual weight. [hero-abc123 · her edits stay scoped to this preview]

Mon 17:00
  • Alex · Reviews Marie's content change in the preview, clicks "merge PR" on GitHub → CI runs fimo promote. [feature/hero → develop]

Mon 17:02
  • Promote completes. develop now contains Alex's new hero component + new schema field + Marie's copy tweaks. develop's DB is replaced with the preview's DB. The preview env hero-abc123 is torn down. [develop · dashboard auto-refreshes to new state]

Tue 09:00
  • Marie · Re-opens the dashboard, now showing the promoted state on develop. Does a final proof-read of the hero copy, approves for launch. [develop]

Tue 10:30
  • Team lead · Runs fimo publish. develop → main, production DB replaced, deployed. Live. [develop → main]
Note on concurrent work: this journey is the happy path — Marie's morning edit on develop was captured when Alex forked the preview at 14:00. If Marie had edited develop again between 14:00 and the 17:00 promote, her later edit would be dropped (develop's DB is replaced, not merged). Cross-branch content merging is the M2 follow-up that removes this gotcha.

Near-term

Tier 1 / 3

Finish the POC surface.

N1

Unified template

CLI and sandbox-created projects use the same template source.

Problem

Two templates exist and will drift apart: one sandbox-managed (still carries a committed .fimo/ runtime folder from the pre-CLI era) and one CLI-managed (uses fimo/ui + fimo/vite, nothing to commit). Every improvement to one has to be manually ported to the other. This is the POC shortcut we need to unwind before building anything else on top — every later item touches the template surface one way or another.

Proposal

The CLI template becomes the canonical source. The API copies it into new sandboxes the same way fimo create does locally. There is no .fimo/ folder in the template — everything that used to live there has already moved into package exports (fimo/ui, fimo/vite, fimo/client) and CLI commands. The project just imports from fimo, with nothing extra on disk.

Options considered
A. CLI template is canonical · Preferred ✓

API reads from the CLI template directory for new sandboxes.

Cost  ·  Medium — API sandboxing path updated.
B. Shared packages/template

Extract into a workspace package that both CLI and API reference.

Cost  ·  Low once extracted; cleanest long-term.

Option A is the right first step — it stops the drift immediately. It can evolve into B later without user-visible changes.

No more .fimo/ folder

The .fimo/ folder is gone — not relocated, not gitignored, not regenerated. Everything it used to host has already migrated into fimo/ui, fimo/vite, the incoming fimo/client (M4), and CLI commands. The project imports from fimo like any other npm dependency. Nothing to scaffold, nothing to keep in sync, no runtime artefact on disk. Upgrading the runtime is pnpm update fimo.

N2

Dashboard reflects CLI state

Git, sync, and environment context surfaced in the web UI so both personas know what's live.

Problem

The dashboard was built for sandbox-only projects where the sandbox is the source of truth. With the CLI, code lives in the user's local repo plus the Fimo git remote, and the sandbox just mirrors that state — but the dashboard shows none of it. An editor opening a project can't tell whether the code is up-to-date with the developer's latest push, whether there's unpushed work in progress, or which branch the environment tracks. The CLI workflow is silent from the dashboard side, which erodes trust in what's being edited.

Proposal

Teach the dashboard to reflect git and sync state through a small set of always-visible affordances. The goal isn't to turn the dashboard into a git UI — it's to give editors enough context to trust what they're seeing and to coordinate with developers without leaving the browser.

Surfaces
  • Environment badge in the project chrome — always visible. Production · main or (once M2 lands) Preview · feature/hero. One-glance answer to "what am I editing?".
  • Sync status panel — last git push (time, branch, commit subject), last sandbox reload, any divergence between the two. Backed by a new GET /projects/:id/sync-status endpoint.
  • Pending-push indicator — when the developer has committed locally but not pushed, or has uncommitted changes while dev is running, show a subtle banner: "Dev is mid-work, the environment may change soon." Requires a lightweight heartbeat from fimo/vite (or the CLI) to the API.
  • Activity feed — git commits and dashboard edits merged into one timeline per environment. "Alex pushed 3 commits at 14:23" next to "Marie updated the hero copy at 14:45". Lets both personas see what the other just did.
  • Branch switcher (depends on M2) — when previews are live, editors pick which environment they're editing from the same chrome element that shows the badge.

These ship incrementally. Badge and sync panel are independent and light. Pending-push needs the heartbeat hook. Activity feed needs unified indexing of git log + content change events. The branch switcher waits on M2. Ship in that order.
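The sync panel's data contract can stay small. A hypothetical shape for the GET /projects/:id/sync-status response — field names are illustrative, not a committed API contract:

```typescript
// Hypothetical response shape for GET /projects/:id/sync-status.
// Field names are illustrative; the RFC only commits to the endpoint existing.
interface SyncStatus {
  lastPush: { at: string; branch: string; commitSubject: string } | null
  lastSandboxReload: { at: string; commit: string } | null
  diverged: boolean           // sandbox is behind the git ref it tracks
  pendingLocalWork: boolean   // heartbeat reported unpushed or uncommitted changes
}

// Example payload the badge + sync panel could render from:
const example: SyncStatus = {
  lastPush: { at: '2026-04-20T14:23:00Z', branch: 'develop', commitSubject: 'Add hero schema field' },
  lastSandboxReload: { at: '2026-04-20T14:23:09Z', commit: 'abc1234' },
  diverged: false,
  pendingLocalWork: true,
}
```

Everything the badge, sync panel, and pending-push banner need fits in one response; the activity feed is the only surface that needs its own paginated endpoint.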

N3

Translations — DB-only with fimo/vite autoload

DB is the single source of truth; plugin fetches + watches; HMR on dashboard edits.

Problem

Translations currently live in two stores: translations/<locale>.json in git and a translations table in the tenant DB. They diverge under the CLI workflow. Making the file the source of truth creates a different problem: dashboard edits are never reflected locally unless the developer manually syncs.

Proposal

Make the DB the single source of truth. Remove translations/<locale>.json from git entirely. A new fimo/vite plugin fetches translations from the API at dev-server start and at build time, injecting them as a virtual module. New keys found in source code are pushed to the DB via fimo sync — a one-shot command, run manually or by an AI agent when it adds new t() calls. No manual extraction step.

Key extraction via fimo sync

fimo sync scans source files for t() calls and fully reconciles them against the DB: new keys are created (using the fallback string as the initial value), keys no longer present in source are deleted, and existing DB values are never overwritten. Runs on deploy and preview automatically. During dev, an AI agent or a human can run it at any time — the running dev server won't pick up the change until restart in v1 (see below).
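The scan itself can be simple. A minimal sketch of the extraction step over a single source string — the real implementation would likely parse an AST to avoid false positives in comments and strings, and the helper name is an assumption:

```typescript
// Sketch of t() key extraction — a regex pass over source text.
// Produces the reconciliation input: key → fallback string (used as the
// initial DB value for new keys; existing DB values are never overwritten).
const T_CALL = /\bt\(\s*['"]([^'"]+)['"]\s*(?:,\s*['"]([^'"]*)['"])?/g

export function extractKeys(source: string): Map<string, string> {
  const keys = new Map<string, string>()
  for (const match of source.matchAll(T_CALL)) {
    const [, key, fallback] = match
    // Without an explicit fallback, the key itself is the source-language value.
    if (!keys.has(key)) keys.set(key, fallback ?? key)
  }
  return keys
}
```

Diffing this map against the DB gives the three reconciliation buckets directly: keys only in the map are created, keys only in the DB are deleted, keys in both are left alone.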

Plugin flow
dev start      plugin fetches all locales from API once
                injects virtual:fimo/translations into module graph
                dev session uses this snapshot until restart

fimo sync      updates DB (new/removed keys)
                plugin logs: "translations changed — restart dev to see updates"

vite build     plugin fetches translations synchronously
                bakes into bundle (no runtime fetch in production)
v1 → v2 · live reload

v1 is restart-required. No watch channel, no HMR for translation edits. If an AI agent or the user runs fimo sync during dev, the plugin logs a prompt to restart the dev server. Keeps the implementation trivial and avoids a persistent SSE/poll connection.

v2 adds auto-HMR via a simple marker file. On every successful fimo sync the CLI writes a timestamp file inside node_modules/.cache/fimo/ (no project-visible artefact). The vite plugin watches that path using Vite's existing file watcher — no new server connection — invalidates the virtual module, and HMRs. Cheap, robust, no network plumbing.

Fallback when the API is unreachable

Not a special concern — the build needs the API for entries, media, and schemas anyway, so an unreachable API means the build can't complete for multiple reasons. No dedicated cache layer is added just for translations.

If translations specifically fail to load (while other API calls succeed, which is an unusual failure mode), the plugin emits an empty virtual:fimo/translations module. At runtime, t('hero.title', 'Welcome') falls back to its second argument — so the app stays functional in the source language rather than breaking outright. Same behaviour during dev if the initial fetch fails.
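The runtime contract behind that fallback is tiny. A sketch, assuming translations arrive as a flat key → string record from the virtual module (makeT is a hypothetical factory, not the shipped fimo API):

```typescript
// Sketch of t() fallback semantics — not the shipped fimo runtime.
// `messages` stands in for the record imported from virtual:fimo/translations.
export function makeT(messages: Record<string, string>) {
  return function t(key: string, fallback?: string): string {
    // DB value wins; otherwise the inline fallback; otherwise the key itself.
    return messages[key] ?? fallback ?? key
  }
}
```

With an empty record (the failed-fetch case) every call resolves to its inline fallback, so the app renders in the source language instead of breaking.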

N4

fimo studio

Lightweight admin bundled in the fimo package — local content management without the web app.

Problem

Developers working locally have no way to manage content, media, or translations without opening the Fimo web app and finding their project. The terminal workflow is seamless for code; content management requires a context switch.

Proposal

Add a lightweight fimo/studio entrypoint to the fimo npm package. fimo studio opens a minimal admin UI using the stored CLI credentials — no second login. Covers at minimum: CMS collections + entries, media library, translations, form submissions.

To be refined with product

The exact feature scope and delivery model (locally-served SPA bundled in the fimo package vs hosted fimo.ai/studio/:projectId opened via OTT handoff) is an open product decision. The key constraint: it lives in the fimo package, ships with the CLI, and works without the Fimo web app being in the loop.

Scheduled last in the near-term tier because it's an additive feature, not a fix for a broken POC surface. The dashboard improvements in N2 already give developers a clear window into their project; fimo studio only matters for developers who prefer never leaving the terminal.

Medium-term

Tier 2 / 3

Git-native CLI + platform primitives.

M1

fimo push / pull / init

Make the Fimo remote feel like a first-class git remote.

fimo push [branch]

Push to the Fimo remote + trigger sandbox sync. Defaults to current branch. Lower-level primitive; fimo deploy wraps this.

fimo pull [branch]

Pull from the Fimo remote with rebase. Convenience for resolving a non-fast-forward rejection after a dashboard edit lands on the remote.

fimo init

Link an existing local project to Fimo. Creates the project on the API, writes settings, adds the git remote, pushes the current branch.

fimo deploy stays as the high-level human shortcut. fimo push is the low-level primitive for scripts and CI.

CI auth · API keys

Scripts and CI authenticate with FIMO_API_TOKEN. Tokens are issued from two places with identical capabilities:

  • Dashboard · Settings → API keys, at the org or project level. List, create, revoke, copy token on creation (shown once, hashed server-side thereafter).
  • CLI · fimo token create --name ci-deploy [--project <id>] [--expires 90d] prints a fresh token to stdout. Companion commands: fimo token list, fimo token revoke <id>. The CLI path is what makes this AI-friendly: an agent can create a token and pipe it into gh secret set FIMO_API_TOKEN without a browser round-trip.
Key properties
  • Scope — org-level (all projects in the org) or project-level (one project). User picks at creation.
  • Name — human-readable label (e.g. ci-deploy, preview-bot) for audit clarity.
  • Expiration — optional; none by default, configurable in days or a fixed date.
  • Audit trail — each key records creation time, last-used timestamp, and last IP. Visible in the dashboard for compliance reviews.
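The listed properties map onto a small persisted record. A hypothetical shape (field names illustrative, not a committed schema):

```typescript
// Hypothetical API-key record backing the dashboard list + audit trail.
interface ApiKey {
  id: string
  name: string                       // human-readable label, e.g. 'ci-deploy'
  scope: { level: 'org' } | { level: 'project'; projectId: string }
  tokenHash: string                  // token shown once at creation, hashed thereafter
  expiresAt: string | null           // null = no expiration (the default)
  createdAt: string
  lastUsedAt: string | null          // audit trail
  lastUsedIp: string | null
}

const ciKey: ApiKey = {
  id: 'key_01',
  name: 'ci-deploy',
  scope: { level: 'project', projectId: 'proj_42' },
  tokenHash: 'sha256:<hash>',
  expiresAt: null,
  createdAt: '2026-04-21T09:00:00Z',
  lastUsedAt: null,
  lastUsedIp: null,
}
```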
M2

Environment hierarchy · preview, promote, publish

A base develop env for day-to-day work, per-branch previews on top, and a protected main that only a deliberate publish touches.

Problem

The POC has a single sandbox per project that serves as both "preview" and "production". Content edits happen directly against the live site. Developers can't test a schema change without risking live data, and editors have no notion of "draft vs published" — what they type is live. There's no place to try changes safely, and no clear publish step.

Proposal

Introduce a two-level environment hierarchy that mirrors the branch model. develop becomes the default environment the dashboard opens on — always available, always tracking the develop branch, safe to edit. feature/* branches can spawn their own isolated previews on top of develop for testing code + content changes together. main is protected — only reached by a deliberate fimo publish from develop.

Environment roles
develop Base preview

The dashboard's default view. Always on, tracks the develop branch. Editors open a project here and their work is safe — they're not touching production until someone publishes. Replaces "the sandbox" from the POC mental model.

feature/* Per-branch preview

Ephemeral previews spun up per feature branch. Each gets a unique subdomain and its own Neon DB branch forked from develop. Used for testing schema + code + content changes together, or for review during PRs. Torn down when the branch is deleted or the PR closes.

main Production

The live site. Protected — no direct dashboard editing, no direct pushes from feature branches. Only way in is fimo publish, which promotes develop to main. Emergency hotfixes can still push directly via fimo push main for break-glass cases.

CLI surface
fimo preview

Push current branch and return its preview URL. If the branch has no preview env yet, one is spun up (with a fresh Neon DB fork from develop).

fimo promote [branch]

Merge the branch (current by default) into develop, and replace develop's DB with the branch's DB. Code + content cascade into develop together. This is how work exits a preview and enters the base environment.

fimo publish

Promote develop to main: merge develop → main, replace production DB with develop's DB, deploy. Live. Replaces fimo deploy --publish from the POC.

fimo preview list | delete <branch>

List active preview environments; tear a specific one down.

Backend design
  • Multi-environment sandbox model: each project has at minimum a develop and a main environment; additional previews come and go per feature branch.
  • Branch-scoped URLs: develop.example.com, <branch>-<hash>.preview.example.com, site.example.com for main.
  • Neon branches forked along the hierarchy: main ← develop ← feature/*. Promote = merge git + fast-forward Neon branch with replacement. Publish = same, one level up.
  • Dashboard always opens on develop. Environment switcher (N2) lets editors pick a preview to review.
Out of scope for M2 — future follow-up

Environment merging. Combining content from multiple preview branches into develop (or into each other) — useful when two editors work on separate features in parallel and you want to preview the combined state before promoting either. Needs its own design spike: how do two DB forks merge when both have edits to the same entry? Three-way merge, conflict UI, …? Deferred past the initial M2 surface.

Key design question: sandbox lifecycle — who garbage-collects preview envs that are never explicitly deleted? The GitHub App (L1) is the natural cleanup trigger via PR close events.

M3
Needs research · broad direction

Plugin installer for AI harnesses

One command that wires up any AI harness — idea, not yet a committed design.

The idea is a fimo plugin install that auto-detects the AI agents on the user's machine (Claude, Cursor, Codex, …) and drops the Fimo skill bundle into the right place for each. Updates come via fimo plugin update, so skills can evolve without a CLI release. Nice in principle; several things to figure out before it becomes a concrete plan.

Things to consider
  • Cross-harness detection — every agent has its own install layout (Claude's ~/.claude/skills/, Cursor's .cursor/rules/, Codex configs). How much of this do we want to track, vs. defer to the ecosystem?
  • Bundle format — roll our own, or adopt an emerging standard (agentskills.io, skills.sh, plain SKILL.md)? Likely the latter once something stabilises.
  • Distribution — bundled with the CLI, fetched from a CDN on install, or published as separate skill packages? Each has a different upgrade story.
  • Overlap with MCP — L2 sketches fimo_install_skill as an MCP tool. For harnesses that speak MCP, the MCP path is simpler and works without a local CLI. A CLI-based installer may only be worth building for harnesses that don't support MCP.
  • Security — "install a skill" means different things in different harnesses: files on disk, runtime plugins, server-side config. What's the trust model?

Worth revisiting once (a) the agent-skills ecosystem settles on a format, and (b) L2's MCP server is running — that might render a separate installer unnecessary.

M4

fimo/client + defineSchema / defineForm

Replace generated JSON + .ts files with a typed TypeScript SDK.

Problem

Schemas and forms are defined as JSON files, then a code-gen step produces .ts files that must be committed. The generated files are awkward to review, types are indirect, and there's no clean way to bundle multiple schemas into a single typed client.

Proposal

Replace JSON schema and form files with defineSchema() and defineForm() TypeScript functions exported from the fimo package. Types are inferred directly — no code-gen step, no generated files to commit. A createFimoClient() factory bundles them into a single fully-typed client with hooks and submit helpers.

fimo sync

A new fimo sync command is the single sync primitive for all local definitions. It reads defineSchema / defineForm files and pushes them to the API, and it also reconciles t() translation keys with the DB (the translation-specific behaviour is described in N3). It runs automatically as part of fimo deploy and fimo preview; otherwise it's a one-shot, invoked manually or by an AI agent when local definitions change.

Creating new definitions

Schemas and forms are scaffolded by the CLI — no JSON files, ever:

fimo schema create <Name>

Writes src/schemas/<Name>.ts with a defineSchema stub. Dev fills in fields, runs fimo sync.

fimo form create <name>

Same idea for forms: writes src/forms/<name>.ts with a defineForm stub.

AI agents (via the MCP server or in-project skill) call the equivalent tools and produce the same .ts files. One code path for humans and agents; one file format everywhere.

Scope — every project, one path

fimo/client is the runtime for every Fimo project when M4 lands — CLI-created and sandbox-created alike. The generated hook files (hooks-template-v3.eta) go away at the same time as the unified template (N1) drops the two-template split. One template, one runtime, no parallel code paths.

Existing projects with legacy JSON schemas re-create them as .ts via fimo schema create (same CLI path new projects use). No dedicated fimo migrate command to maintain — the create command is the migration path. Mechanical and safe: once every schema has a .ts twin, the JSON files are deleted.

Code shape

Each resource is a single .ts file. Types flow from the definition — no code-gen, no generated files. The client factory bundles them into one typed entry point.

defineSchema src/schemas/BlogPost.ts
import { defineSchema } from 'fimo'

export default defineSchema({
  uid: 'BlogPost',
  fields: {
    title:       { type: 'string',   required: true },
    slug:        { type: 'string',   required: true, unique: true },
    body:        { type: 'richtext', required: true },
    coverImage:  { type: 'media',    mediaType: 'image' },
    publishedAt: { type: 'date' },
    author:      { type: 'relation', target: 'Author' },
    tags:        { type: 'relation', target: 'Tag', multiple: true },
  },
})
defineForm src/forms/contact-us.ts
import { defineForm } from 'fimo'

export default defineForm({
  name: 'contact-us',
  fields: [
    { name: 'email',   type: 'email',    required: true, label: 'Your email' },
    { name: 'subject', type: 'string',   required: true },
    { name: 'message', type: 'textarea', required: true, maxLength: 2000 },
  ],
  submitLabel: 'Send message',
})
createFimoClient src/fimo.ts
import { createFimoClient } from 'fimo/client'
import BlogPost   from './schemas/BlogPost'
import Author     from './schemas/Author'
import ContactUs  from './forms/contact-us'

export const fimo = createFimoClient({
  schemas: { BlogPost, Author },
  forms:   { ContactUs },
})

// Fully typed — imperative & React Query hook APIs derived from the schemas above.
Framework integration

Every resource exposes both an imperative API (fimo.X.list(), fimo.X.getBySlug()) for loaders, RSCs and static generation, and React Query hooks (fimo.X.useList(), fimo.X.useGetBySlug()) for client transitions. Server-fetched data hydrates the hook cache automatically — no double fetch.

React Router v7 loader → hook · SSR + SSG
// app/routes/blog._index.tsx
import type { Route } from './+types/blog._index'
import { fimo } from '~/fimo'

// Runs at request time (SSR) — or at build time if the route is pre-rendered (SSG)
export async function loader(_: Route.LoaderArgs) {
  const posts = await fimo.BlogPost.list({ sort: '-publishedAt', limit: 20 })
  return { posts }
}

export default function BlogIndex({ loaderData }: Route.ComponentProps) {
  // Hydrates from loaderData; stays fresh on the client via React Query.
  const { data: posts } = fimo.BlogPost.useList(
    { sort: '-publishedAt', limit: 20 },
    { initialData: loaderData.posts },
  )
  return <PostList posts={posts} />
}
Next.js (App Router) RSC · SSG · ISR
// app/blog/[slug]/page.tsx — Server Component
import { fimo } from '@/lib/fimo'

// Static generation at build — enumerate every post
export async function generateStaticParams() {
  const posts = await fimo.BlogPost.list({ fields: ['slug'] })
  return posts.map(({ slug }) => ({ slug }))
}

// ISR — rebuild hourly in the background
export const revalidate = 3600

export default async function Post({ params }: { params: { slug: string } }) {
  const post = await fimo.BlogPost.getBySlug(params.slug)
  return <Article post={post} />
}
TanStack Start route loader · RQ hydration
// app/routes/blog.tsx
import { createFileRoute } from '@tanstack/react-router'
import { fimo } from '~/fimo'

export const Route = createFileRoute('/blog')({
  loader: () => fimo.BlogPost.list({ sort: '-publishedAt' }),
  component: BlogIndex,
})

function BlogIndex() {
  const { data: posts } = fimo.BlogPost.useList(
    { sort: '-publishedAt' },
    { initialData: Route.useLoaderData() },
  )
  return <PostList posts={posts} />
}
M5

Unified DX — AI-first onboarding

Same experience whether you start from fimo create or from an AI agent.

Problem

There is a gap between starting from fimo create (well-defined CLI path) and an AI agent saying "create a website with Fimo" (no canonical entry point, no up-to-date docs the agent can read, no way to discover what Fimo can do without a bundled skill file).

llms.txt

A machine-readable index of all Fimo documentation following the llms.txt convention. Agents fetch this at session start to understand the API surface, CLI commands, schema format, and deploy pipeline — no skill bundle needed on disk.
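The llms.txt convention is a markdown file at the site root: an H1 title, a short blockquote summary, then sectioned link lists with one-line descriptions. A hypothetical Fimo index — URLs and wording are illustrative only:

```markdown
# Fimo

> DX platform for git-versioned projects: CLI-driven scaffolding and deploys,
> typed TypeScript SDK, content managed from a dashboard.

## Docs

- [CLI reference](https://fimo.ai/docs/cli.md): every fimo command, flags, auth
- [Schema format](https://fimo.ai/docs/schemas.md): defineSchema fields and types
- [Deploy pipeline](https://fimo.ai/docs/deploy.md): preview, promote, publish

## Optional

- [API reference](https://fimo.ai/docs/api.md): the REST surface the CLI wraps
```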

Markdown-first docs

All documentation served as clean markdown at predictable URLs. Pairs with llms.txt as the discovery layer. Agents can fetch specific pages when they need depth on a topic.

MCP onboarding in fimo create / fimo init

After scaffolding, the CLI offers to wire up the AI agent automatically — running fimo plugin install for the detected agent. No separate step required.

AI ↔ CLI bridge

When an agent is inside a Fimo project, the skill bundle's rules direct it to use the CLI rather than raw API calls. When there is no project (global agent context via MCP), the agent uses MCP tools directly. Both paths converge on the same underlying API — no divergence.

Long-term

Tier 3 / 3

Platform bets — broad directions, not committed plans. Each one needs its own design spike before it leaves this tier.

L1
Needs research · broad direction

GitHub integration

PR previews + protected branch model + Neon branch isolation.

GitHub App events
  • Push to any feature branch → preview URL posted as PR commit status / GitHub Environment.
  • PR merge to develop → fimo promote runs in CI (branch → develop, DB replaced).
  • PR merge to main (separate release PR, gated) → fimo publish runs in CI (develop → main, live).
  • PR close → preview environment garbage-collected automatically.
Protected fimo/preview branch model

On top of M2's branch-scoped environments, the server maintains a fimo/preview/<env> tracking branch per environment. When a feature branch is pushed, the server merges it into the tracking branch rather than letting it touch sandbox main directly — so production code stays clean no matter what a preview PR does. DB isolation is inherited from M2 (Neon branch per environment).

L2
Needs research · broad direction

MCP server

Remote endpoint — any agent manages Fimo with no local setup.

Distribution

A single one-line install pointing to a hosted SSE endpoint at mcp.fimo.ai. Auth via OAuth or API token. No local server to run.

Shape — few tools, broad coverage

Rather than one tool per endpoint (brittle as the API grows, exhausts agent context with dozens of tool schemas), expose the whole API through a small set of general-purpose tools — inspired by the pattern Cloudflare uses for its MCP servers.

fimo_api Tool 1 · core

One tool that proxies any Fimo API call. Args: { method, path, body?, project_id? }. Returns the HTTP response. Covers the full surface — entries, media, schemas, forms, translations, projects, previews, publishing — through a single schema the agent can reason about.
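In MCP terms that is one tool whose input schema is a thin HTTP envelope. A sketch of the descriptor as a plain JSON-Schema object, independent of any SDK — the tool name and args come from this RFC, the schema details are assumptions:

```typescript
// Sketch of the fimo_api tool descriptor (MCP-style tool definition).
const fimoApiTool = {
  name: 'fimo_api',
  description: 'Proxy any Fimo API call. Call fimo_docs first to discover paths.',
  inputSchema: {
    type: 'object',
    properties: {
      method: { type: 'string', enum: ['GET', 'POST', 'PUT', 'PATCH', 'DELETE'] },
      path: { type: 'string', description: 'API path, e.g. an entries endpoint' },
      // Destructive operations must include confirm: true in the body —
      // the server enforces this, per the safety note below.
      body: { type: 'object' },
      project_id: { type: 'string' },
    },
    required: ['method', 'path'],
  },
}
```

One schema for the whole surface keeps the agent's context cheap: it reasons about HTTP, not about dozens of per-endpoint tool signatures.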

fimo_docs Tool 2 · discovery

Search and fetch Fimo documentation — API reference, how-tos, schema conventions, OpenAPI excerpts. Agents call this first to learn what's available, then use fimo_api to act. Reference docs also exposed as MCP Resources for harnesses that support them.

fimo_install_skill Tool 3 · maybe

Install a Fimo skill bundle into the calling harness (when the harness supports it). Overlaps with M3's broader plugin installer direction — this is the MCP-native flavour of that same idea, and may end up being the simpler path forward.

Safety: destructive operations (publish, delete, token create) go through fimo_api like anything else, but carry an explicit confirmation requirement the server enforces — the agent must pass confirm: true in the body. The harness typically also prompts the user before any tool call, so there's a belt-and-suspenders setup.

CLI vs MCP: CLI for local dev and CI (things that need a filesystem, terminal, and git working tree — fimo create, fimo init, fimo push, fimo sync). MCP for agent-native workflows across machines, exposing the same API surface the CLI wraps. Both converge on the same backend.

L3
Needs research · broad direction

Full-stack support

SPA, SSG, SSR, server endpoints — whichever rendering model the framework ships. React Router v7, Next, TanStack Start, all via fimo/vite.

Extend fimo/vite to detect the framework in use and wrap it with the right rendering adapters — SPA for plain Vite, SSG for pre-rendered routes, SSR + server endpoints for full-stack frameworks. No new commands; the deploy pipeline adapts to what the framework ships.

Framework detection
  • React Router v7: detected via react-router.config.ts → Cloudflare adapter applied automatically.
  • TanStack Start: detected via app.config.ts → Cloudflare adapter applied automatically.
  • Fallback: plain Vite SPA (today's default).
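Detection can be a pure lookup over the project's root files. A minimal sketch — the config filenames come from the list above; the function shape is an assumption (the real fimo/vite plugin would read the project directory rather than take a list):

```typescript
// Minimal framework detection over a project's root file listing.
// Pure function so it is testable without a filesystem.
type Framework = 'react-router' | 'tanstack-start' | 'spa'

export function detectFramework(rootFiles: string[]): Framework {
  if (rootFiles.includes('react-router.config.ts')) return 'react-router'
  if (rootFiles.includes('app.config.ts')) return 'tanstack-start'
  return 'spa' // plain Vite SPA — today's default
}
```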
Open questions
  • Secrets / env vars for Workers (currently only public VITE_* vars exist).
  • Wrangler config: ship in template or generate at build time?
  • Edge caching and CDN invalidation on publish — different model from static assets.
Milestones
  • M0 — Close the POC: unified template (N1) + dashboard state (N2) + translations DB plugin (N3) + fimo studio (N4) · 3–4 wks
  • M1 — Git-native CLI: fimo push/pull/init · 3–5 days
  • M2 — Environments: develop as base + per-branch previews + fimo preview/promote/publish · 2–3 wks
  • M3 — Plugin installer: fimo plugin install · 3–5 days
  • M4 — fimo/client: defineSchema/Form + createFimoClient + fimo sync · 2–3 wks
  • M5 — AI-first DX: llms.txt, markdown docs, MCP onboarding in CLI · 1–2 wks
  • M6 — npm publish: publish fimo to npm; npx fimo create works · 1 wk
  • M7 — GitHub App: PR previews + fimo/preview branch model + Neon branches · 3–4 wks
  • M8 — MCP: remote endpoint + OAuth + tools + Resources · 2–3 wks
  • M9 — Full-stack: fimo/vite framework detection + SPA/SSG/SSR rendering + Cloudflare Workers deploy · 2–3 wks
Open questions
  1. fimo studio scope. Local SPA served by the CLI, or hosted fimo.ai/studio opened via OTT? Feature scope for v1? Product team input needed.