May 2026 · 8 min read · react · vite · developer-tools · frontend

Building Forge: why I made eleven developer tools with zero backend

The product thinking behind Forge, the browser-only architecture, and the frontend work behind features like the Before / After Comparator and Background Remover.

Forge came out of a pattern I kept hitting while working: I needed a tiny tool for one job, opened a site that could do it, and immediately ran into friction. Sometimes it wanted an account. Sometimes it was slow. Sometimes it was obviously sending the exact code or payload I pasted into someone else's server.

That felt wrong for a whole category of tooling that should be instant and private by default. So I built the version I wanted to use myself: a set of developer tools that run entirely in the browser.

The constraint came before the tool list

The key product decision was not “what tools should Forge include?” It was this:

Everything has to work with zero backend.

That constraint filtered the idea set fast. If a tool required a server to be useful, it did not belong in Forge. If it could run locally in the browser with acceptable performance, it was a candidate.

That is why the initial set looks the way it does:

  • JSON diff
  • regex explainer
  • cURL conversion
  • CSS specificity comparison
  • color palette extraction
  • SVG optimization
  • font pairing
  • breakpoint testing
  • API mocking through a Service Worker
  • before / after screenshot comparison for visual QA
  • background removal with browser-side ML and manual mask editing

They are different on the surface, but they all obey the same rule: your input stays with you.

Why I used a single SPA instead of separate microsites

Forge is built as a React + TypeScript SPA with Vite and React Router. I could have made each tool its own page in its own project, but that would have duplicated the boring parts:

  • shell layout
  • keyboard navigation
  • persistence patterns
  • deployment setup
  • visual system

Using one app let me build a shared frame once, then drop tools into it as route-level modules. Each tool is still largely self-contained, but the overall product feels coherent instead of like a folder of unrelated experiments.

That structure also keeps shipping cheap. One deploy, one static host, one routing setup.
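Roughly, the shape looks something like this. The route paths, file layout, and component names here are illustrative, not the exact Forge source:

```tsx
// Sketch only: each tool is a lazily loaded route-level module inside one shared shell.
import { lazy, Suspense } from "react";
import { Outlet, createBrowserRouter, RouterProvider } from "react-router-dom";

const JsonDiff = lazy(() => import("./tools/json-diff/JsonDiff"));
const ApiMocker = lazy(() => import("./tools/api-mocker/ApiMocker"));

function AppShell() {
  // Shared frame: layout, keyboard navigation, command palette, etc.
  return <Outlet />;
}

const router = createBrowserRouter([
  {
    path: "/",
    element: <AppShell />,
    children: [
      { path: "tools/json-diff", element: <JsonDiff /> },
      { path: "tools/api-mocker", element: <ApiMocker /> },
      // ...one entry per tool
    ],
  },
]);

export function App() {
  return (
    <Suspense fallback={<p>Loading tool…</p>}>
      <RouterProvider router={router} />
    </Suspense>
  );
}
```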

Shared patterns across the tools

I tried to keep the architecture flat and repeatable.

Each tool gets:

  • its own route
  • its own UI state
  • optional local persistence via namespaced localStorage
  • a shared shell component for title, description, and content framing

This is not a platform with plugins, registries, or runtime module loading. It is intentionally simpler than that. A new tool gets added when it justifies its existence, not because I built infrastructure for endless expansion.
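The persistence pattern is small enough to show in full. This is a simplified illustration rather than the exact implementation; the hook name and key format are assumptions:

```ts
// Illustrative persistence hook: per-tool namespaced localStorage.
import { useEffect, useState } from "react";

export function usePersistentState<T>(toolId: string, key: string, initial: T) {
  const storageKey = `forge:${toolId}:${key}`;

  const [value, setValue] = useState<T>(() => {
    try {
      const raw = localStorage.getItem(storageKey);
      return raw !== null ? (JSON.parse(raw) as T) : initial;
    } catch {
      return initial; // corrupted or unavailable storage falls back silently
    }
  });

  useEffect(() => {
    try {
      localStorage.setItem(storageKey, JSON.stringify(value));
    } catch {
      // storage full or blocked: the tool still works, it just forgets
    }
  }, [storageKey, value]);

  return [value, setValue] as const;
}
```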

The interesting technical decisions

Some tools are straightforward text transforms. Others forced more specific technical decisions.

cURL Converter

The cURL converter uses curlconverter directly in the browser. That brought two implementation details with it:

  • Vite's build target had to be pushed to esnext because the dependency relies on top-level await
  • Tree-sitter WASM assets had to live in public/ so the parser could load them correctly at runtime

This is exactly the kind of tradeoff I like in frontend tools: a slightly unusual build constraint in exchange for keeping the transformation local.
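The Vite side of that tradeoff is tiny. A sketch of the relevant config, assuming a standard React plugin setup:

```ts
// vite.config.ts (sketch): the bits that matter for running curlconverter in-browser.
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  build: {
    // curlconverter's dependency chain uses top-level await,
    // so the output target has to allow it.
    target: "esnext",
  },
  // The tree-sitter .wasm files sit in public/ so they are served unmodified
  // and the parser can fetch them by URL at runtime.
});
```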

Color Palette

The color palette tool loads an image into a canvas, samples pixels, and runs k-means clustering client-side. That kept the tool private and fast enough without any image upload step.

I also liked the product implication of that decision. If a designer drops a work-in-progress image into the tool, it never leaves the browser. For a utility like this, privacy is part of the UX.

API Mocker

The API mocker is probably the most interesting piece because it uses a Service Worker rather than a proxy server.

The UI stores mock endpoints locally, then posts the definitions to the Service Worker. The worker intercepts same-origin fetches and returns canned JSON responses with a marker header. That means the tool can simulate local APIs without asking you to run extra processes or edit config files.

That feature is a good example of the broader Forge rule: if the browser can already do something powerful, use it directly instead of rebuilding a server version out of habit.

Before / After Comparator

The newest Forge tool is a Before / After Comparator for design QA and screenshot review. It is a browser-only image comparison workflow aimed at the moment when you need to check whether an implementation actually matches a reference.

The core interaction is a custom overlay comparison UI: load a before image and an after image, stack them in the same preview, and drag a seam across the frame to reveal one against the other. I also added a range slider for more precise control when you want to inspect small spacing or alignment differences.

Because this belongs in Forge, it had to work entirely in the browser. The tool accepts uploaded or drag-and-dropped local images, persists both image selections and slider position in localStorage, and warns when the two images do not share the same dimensions. That mismatch check matters because pixel-perfect review breaks down fast if the inputs are not aligned.

I also treated it like a product feature instead of a demo widget. The comparator is wired into the app's route system, tool registry, icon set, homepage grid, command palette, and metadata copy, with swap and reset actions to make repeated comparisons faster.

Background Remover

The Background Remover pushed Forge into a different class of browser work. It uses MediaPipe Tasks Vision with WASM, loaded only on that route, so the whole segmentation flow runs locally in the page and the exported transparent PNG never depends on a server.

The first version was built around interactive segmentation prompts: click the subject you want to keep, or use a positive scribble to guide the model. That worked as an initial interaction, but it exposed a product mistake quickly. Prompting a segmentation model and manually editing a mask are related, but they are not the same job.

The model prompts are only guidance. They tell the segmenter what the user probably means, but they do not give the user direct control over the final edge. That became obvious on more complex images where the scribble interaction could feel inconsistent. MediaPipe also rejects invalid prompt combinations, so the interaction layer had real constraints about what could be sent together.

That changed the architecture. Instead of trying to make the model behave like a pixel editor, I split the workflow into two layers, model-guided prompts and deterministic mask edits:

  • point prompt for initial segmentation
  • brush prompt for model-guided segmentation
  • Paint keep for deterministic add-back edits
  • Erase for deterministic mask removal

That distinction matters. Once the user moves from "identify the subject" to "fix this region," the right tool is not another model guess. It is a direct edit to the alpha mask that will actually be exported.

The implementation follows that product boundary. Manual edits modify the mask itself rather than round-tripping back through ML. I added a live mask overlay on the source image, adjustable brush size, and a rendering path that keeps thresholding separate from manual overrides so the preview does not thrash while painting.
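A simplified sketch of that compositing rule, with the override encoding and names invented for illustration:

```ts
// Sketch: combine the thresholded model mask with deterministic user overrides.
// Override encoding: 0 = no override, 1 = force keep, 2 = force erase.
function composeAlpha(
  modelMask: Uint8Array,   // per-pixel output from the segmenter
  overrides: Uint8Array,   // per-pixel manual edits, same dimensions
  threshold: number        // e.g. 128: model values at or above this count as "keep"
): Uint8ClampedArray {
  const alpha = new Uint8ClampedArray(modelMask.length);
  for (let i = 0; i < modelMask.length; i++) {
    if (overrides[i] === 1) alpha[i] = 255;                 // Paint keep wins over the model
    else if (overrides[i] === 2) alpha[i] = 0;              // Erase wins over the model
    else alpha[i] = modelMask[i] >= threshold ? 255 : 0;    // otherwise, thresholded model
  }
  return alpha;
}
```

Brush strokes only write into the overrides buffer, so repainting the preview never has to re-run the model or re-threshold anything the user has already decided.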

The result is more reliable because it uses ML where ML is useful and removes it from the parts where determinism matters more. That is the broader Forge pattern in miniature: use the browser's advanced capabilities, but do not pretend probabilistic systems should handle work that really wants explicit user control.

Why I did not add auth, sync, or analytics

Those features would make the product worse.

Forge is supposed to feel disposable in the best sense: open it, do the job, leave. Accounts would add friction. Sync would mean maintaining a backend. Analytics would conflict with the value proposition.

The right persistence layer for this product is localStorage, namespaced by tool. Enough memory to remember your last inputs, no ceremony beyond that.

The broader reason I built it

Forge is partly about convenience, but it is also a reaction to how much web software now assumes every interaction deserves a backend. A lot of utility products do not need that weight. They need sharp constraints, good defaults, and respect for the fact that your input is often sensitive.

That is why I like the final shape of Forge. It is not trying to be a platform or a startup-shaped wrapper around basic transforms. It is just useful.

And that was the whole point.