# Forge
A browser-based developer toolkit with privacy-first, client-side utilities for visual QA, local API mocking, text transforms, and in-browser background removal with deterministic mask editing.
## What it is
Forge is a browser-based developer toolkit built around utilities that run entirely client-side. It includes tools for JSON diffing, regex explanation, cURL conversion, CSS specificity comparison, SVG optimization, color palette extraction, breakpoint testing, font pairing, local API mocking, a Before / After Comparator for visual QA, and a Background Remover that exports transparent PNGs without sending images to a server.
The constraint is the product: no backend, no account, no telemetry, and no waiting for a server round trip just to transform text, inspect screenshots, or compare implementation details you already have on your machine.
## Why I built it
I kept reaching for tiny utilities during development and hitting the same friction every time: sign-up walls, rate limits, slow UIs, or the feeling that I was pasting code and payloads into somebody else's backend for no good reason.
Forge is the version of that workflow I actually wanted: fast, private, local-first, and opinionated toward everyday frontend, API, and design QA work.
## How it works
The app is a Vite + React + TypeScript SPA with a shared shell and a set of self-contained tools behind their own routes. Tool state is persisted with namespaced `localStorage` keys, so each utility can remember inputs without needing a shared backend or auth layer.
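The persistence idea can be pictured as a thin wrapper over namespaced keys. A minimal sketch with hypothetical names (`createToolStore`, the `forge:` prefix) that are not Forge's actual API; the storage object is injected so the logic works outside a browser:

```typescript
// Minimal shape we need from localStorage (or any in-memory stand-in).
interface StorageLike {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Each tool gets its own namespace so saved state never collides across tools.
function createToolStore<T>(ns: string, storage: StorageLike) {
  const key = (field: string) => `forge:${ns}:${field}`;
  return {
    save(field: string, value: T): void {
      storage.setItem(key(field), JSON.stringify(value));
    },
    load(field: string, fallback: T): T {
      const raw = storage.getItem(key(field));
      if (raw === null) return fallback;
      try {
        return JSON.parse(raw) as T;
      } catch {
        return fallback; // corrupted entry: fall back rather than crash the tool
      }
    },
  };
}
```

In the real app the injected object would simply be `window.localStorage`.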
Some tools are pure transforms over text. Others push the browser harder:
- The cURL converter uses `curlconverter` in-browser, with the Tree-sitter WASM files served from `public/`
- The color palette tool extracts image pixels in a canvas and runs k-means clustering client-side
- The API mocker registers a Service Worker and intercepts same-origin requests with canned JSON responses
- The Before / After Comparator loads two local images, overlays them in the browser, and exposes both seam dragging and slider controls for pixel-perfect screenshot review
- The Background Remover uses MediaPipe Tasks Vision with WASM, loaded via dynamic import only on its route, then layers deterministic mask editing on top so users can refine the result locally before export
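The API-mocking approach boils down to matching each intercepted request against a table of canned routes. A hedged sketch of just that matching step, with illustrative names (`MockRoute`, `findMock`) that are not Forge's actual API:

```typescript
// A canned response definition (illustrative shape, not Forge's real schema).
interface MockRoute {
  method: string;
  path: string;    // pattern such as "/api/users/:id"
  status: number;
  body: unknown;
}

// Match a concrete URL path against a ":param" pattern; returns captured
// params on a hit, or null when the pattern does not apply.
function matchPath(pattern: string, path: string): Record<string, string> | null {
  const p = pattern.split("/").filter(Boolean);
  const u = path.split("/").filter(Boolean);
  if (p.length !== u.length) return null;
  const params: Record<string, string> = {};
  for (let i = 0; i < p.length; i++) {
    if (p[i].startsWith(":")) params[p[i].slice(1)] = u[i];
    else if (p[i] !== u[i]) return null;
  }
  return params;
}

function findMock(routes: MockRoute[], method: string, path: string): MockRoute | undefined {
  return routes.find((r) => r.method === method && matchPath(r.path, path) !== null);
}
```

Inside a real Service Worker, the `fetch` handler would run something like `findMock` against the intercepted request and answer a hit with `event.respondWith(new Response(JSON.stringify(route.body), { status: route.status }))`, letting misses fall through to the network.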
## Background Remover case study
The first version of the remover used interactive segmentation prompts only: a point prompt and a positive scribble prompt. That was enough to prove that the browser-side ML path worked, but it quickly exposed the real product constraint: prompt-based segmentation and manual editing are not the same problem.
Model prompts are guidance. They help the segmenter guess which subject the user means, but they are still probabilistic. On simpler images that felt fine. On harder edges, overlapping subjects, or noisy backgrounds, the scribble interaction could feel inconsistent because the user was trying to do precise correction with a tool that was only designed to influence the model. MediaPipe also rejects invalid prompt combinations, so interactions like mixing a keypoint and a scribble in the same request had to be tightly controlled.
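That constraint can be enforced before a request ever reaches the segmenter. A hypothetical guard, not MediaPipe's API, that mirrors the rule described above:

```typescript
// Illustrative prompt shape: one normalized keypoint OR a scribble
// (a list of normalized points), never both in the same request.
interface SegmentationPrompt {
  keypoint?: { x: number; y: number };          // normalized [0, 1] coordinates
  scribble?: { x: number; y: number }[];
}

// Returns an error message for an invalid prompt, or null when it is valid.
function validatePrompt(p: SegmentationPrompt): string | null {
  if (p.keypoint && p.scribble) return "keypoint and scribble cannot be mixed";
  if (!p.keypoint && (!p.scribble || p.scribble.length === 0))
    return "prompt must contain a keypoint or a non-empty scribble";
  return null; // safe to hand off to the segmenter
}
```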
That changed the implementation direction. Instead of asking the model to keep re-interpreting every correction, I split the workflow into two layers:
- Point prompt for initial segmentation
- Brush prompt for model-guided segmentation
- Paint keep mode to add regions back into the mask manually
- Erase mode to remove unwanted masked areas manually
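Under assumed conventions (a flat, row-major override layer where `-1` means untouched, `0` means erase, and `255` means keep; none of these names come from Forge), stamping one circular brush stroke into the manual layer might look like:

```typescript
// Write a circular brush stamp into the manual-override layer.
// Only pixels inside the brush radius are touched; everything else
// keeps deferring to the model's mask.
function stampBrush(
  overrides: Int16Array,   // row-major, -1 = untouched, 0 = erase, 255 = keep
  width: number,
  height: number,
  cx: number,              // brush center x, in pixels
  cy: number,              // brush center y, in pixels
  radius: number,
  value: 0 | 255,          // erase or keep
): void {
  const r2 = radius * radius;
  const x0 = Math.max(0, Math.floor(cx - radius));
  const x1 = Math.min(width - 1, Math.ceil(cx + radius));
  const y0 = Math.max(0, Math.floor(cy - radius));
  const y1 = Math.min(height - 1, Math.ceil(cy + radius));
  for (let y = y0; y <= y1; y++) {
    for (let x = x0; x <= x1; x++) {
      const dx = x - cx;
      const dy = y - cy;
      if (dx * dx + dy * dy <= r2) overrides[y * width + x] = value;
    }
  }
}
```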
The important shift is where edits land. Manual adjustments no longer go back through the model. They modify the exported alpha mask directly. That makes the final corrections deterministic, which is what users actually need once they move from "find the subject" to "fix this edge."
I also added a live mask overlay on the source image, adjustable brush size, and a rendering flow that keeps ML thresholding separate from manual overrides so the preview does not thrash while painting. The model still does the heavy lifting, but the last mile belongs to the user.
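The separation described above can be expressed as one deterministic compositing pass. A sketch under assumed data layouts (model confidence and manual overrides as flat per-pixel arrays; names are illustrative, not Forge's):

```typescript
// Compose the exported alpha channel: manual edits win outright,
// otherwise the model's confidence is thresholded.
function composeAlpha(
  modelConf: Uint8Array,   // per-pixel confidence from the segmenter, 0-255
  overrides: Int16Array,   // -1 = no manual edit, else 0 (erase) or 255 (keep)
  threshold: number,       // ML threshold, e.g. 128
): Uint8Array {
  const alpha = new Uint8Array(modelConf.length);
  for (let i = 0; i < modelConf.length; i++) {
    const o = overrides[i];
    alpha[i] = o >= 0 ? o : modelConf[i] >= threshold ? 255 : 0;
  }
  return alpha;
}
```

Because the override layer is applied after thresholding, repainting a region never re-triggers the model, which is what keeps the preview stable while the user brushes.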
The result is a better product, not just a more complex implementation. The ML layer is useful for fast initial selection. The manual layer is what makes the tool reliable. Everything still runs entirely in the browser, and user images never leave the page.
## Key features
- Eleven browser-based tools with no account required
- Background Remover with prompt-based segmentation, manual mask editing, and transparent PNG export
- Before / After Comparator for screenshot review, visual regression checks, and pixel-perfect design QA
- Drag-and-drop image loading with browser-only persistence for uploaded files and comparison state
- Custom interactive comparison UI with direct seam dragging, precise range control, swap, and reset actions
- Client-side persistence via `localStorage`
- Command palette style tool navigation
- Service Worker-powered local API mocking
- Route-level code splitting for heavier browser runtimes so the core app stays lean
- No telemetry, no backend, no data leaving the browser
## Status
Live and fully usable. The newest iteration of Background Remover extends Forge's local-first rule to browser ML, but with a stricter product boundary: let the model propose the mask, then let the user edit it directly when precision matters.