
Amplifier

2025

An AI-powered design reference system that connects intelligent image tagging with automated client briefings.

[Screenshot: Amplifier dashboard showing overview stats: total images, tag vocabulary size, AI accuracy, and storage used]

The Problem

Design studios face two recurring pain points:

  1. Reference libraries grow into untagged, unsearchable image collections, so past inspiration is hard to find again.
  2. Client briefings arrive as vague prose, without the structure or visual direction designers need to start work.

The Solution

Amplifier is two connected systems:

  1. AI Image Tagger — Upload reference images in bulk. Claude Sonnet analyzes each image and suggests tags across dynamic categories (industry, style, mood, color, etc.). The designer reviews, corrects, and saves. Over time, the AI learns from corrections — tracking frequently missed and over-suggested tags to improve future accuracy.
  2. Visual Briefing Tool — Clients fill out a guided brand questionnaire. Claude Haiku extracts keywords from their responses. Those keywords automatically search the tagged reference library and surface relevant images. Clients favorite the ones that resonate. The complete brief — answers, keywords, selected references — is packaged into an HTML email and sent to the studio.

The tagger builds the library. The briefing tool makes it useful.
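The weighted search that connects extracted keywords to tagged references could be sketched roughly like this. All names and weights here (`ReferenceImage`, `CATEGORY_WEIGHTS`, the specific values) are illustrative assumptions, not the actual schema:

```typescript
// Sketch of a weighted reference search in the spirit of
// /api/search-references. Category weights are assumed: a style
// match is presumed to matter more than a color match.

interface ReferenceImage {
  id: string;
  tags: Record<string, string[]>; // category -> tags, e.g. { style: ["minimal"] }
}

const CATEGORY_WEIGHTS: Record<string, number> = {
  industry: 3,
  style: 2,
  mood: 2,
  color: 1,
};

function scoreImage(image: ReferenceImage, keywords: string[]): number {
  let score = 0;
  for (const [category, tags] of Object.entries(image.tags)) {
    const weight = CATEGORY_WEIGHTS[category] ?? 1;
    for (const tag of tags) {
      if (keywords.includes(tag.toLowerCase())) score += weight;
    }
  }
  return score;
}

function searchReferences(
  images: ReferenceImage[],
  keywords: string[],
  limit = 20
): ReferenceImage[] {
  const lowered = keywords.map((k) => k.toLowerCase());
  return images
    .map((img) => ({ img, score: scoreImage(img, lowered) }))
    .filter((r) => r.score > 0)        // drop non-matches entirely
    .sort((a, b) => b.score - a.score) // best matches first
    .slice(0, limit)
    .map((r) => r.img);
}
```

In production this ranking would live in a SQL query rather than application code; the sketch just shows the scoring idea.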

[Screenshot: Image upload screen with duplicate detection checking]

[Screenshot: Uploaded images grid with 22 images pending tagging]

Key Features

AI-Powered Image Analysis

Claude Sonnet's vision model analyzes each uploaded image and proposes tags across every vocabulary category; the designer stays in the loop to accept, reject, or edit suggestions before saving.

[Screenshot: AI tagging interface with image preview, tag categories, and AI-suggested tags]

Smart Duplicate Detection

Every upload is checked against the existing library with a SHA-256 content hash for exact matches and a perceptual hash for near-duplicates, so the same reference never gets tagged twice.

Dynamic Vocabulary System

Tag categories (industry, style, mood, color, etc.) and their vocabularies are stored in the database rather than hard-coded, so the taxonomy can grow with the studio's needs.

Guided Briefing Workflow

Clients move through a step-by-step brand questionnaire; extracted keywords surface matching references from the library, and the finished brief is emailed to the studio as a single package.

Analytics Dashboard

Tracks library totals, tag vocabulary size, AI suggestion accuracy, and storage use, alongside per-category accuracy and the prompt's learning status.

[Screenshot: AI Learning Analytics dashboard showing prompt learning status and per-category accuracy]

Architecture

Client (React)
  ├── Briefing Flow (public)
  │     ├── Questionnaire → /api/extract-keywords (Claude Haiku)
  │     ├── Keywords → /api/search-references (weighted DB search)
  │     └── Submit → /api/send-briefing (HTML email)
  │
  └── Tagger System (auth-protected)
        ├── Upload → Duplicate Detection (SHA-256 + pHash)
        ├── Tag → /api/suggest-tags (Claude Sonnet Vision + prompt cache)
        ├── Save → Supabase (images + tags + corrections)
        └── Analytics → /api/retrain-prompt (correction analysis)

Supabase
  ├── PostgreSQL (reference_images, tag_vocabulary, tag_corrections)
  ├── Auth (email/password, RLS policies)
  └── Storage (originals + thumbnails)

Engineering Highlights

Prompt Caching Strategy

The vocabulary (which can be large) is placed in the system message with Anthropic's ephemeral cache. The first request in a session pays the full cost, but subsequent image analyses reuse the cached vocabulary — reducing latency by 60–70% and cutting token costs.
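Concretely, the request shape could look something like this. The block structure follows Anthropic's Messages API; the model string, prompt text, and vocabulary content are illustrative assumptions:

```typescript
// Sketch of placing the tag vocabulary in a cacheable system block.
// The first call pays to write the ephemeral cache; subsequent calls
// in the session read the cached vocabulary at reduced cost and latency.

interface SystemBlock {
  type: "text";
  text: string;
  cache_control?: { type: "ephemeral" };
}

function buildSuggestTagsRequest(vocabularyJson: string, imageBase64: string) {
  const vocabularyBlock: SystemBlock = {
    type: "text",
    text: `Tag the image using only this vocabulary:\n${vocabularyJson}`,
    cache_control: { type: "ephemeral" }, // cache the large, stable part
  };
  return {
    model: "claude-sonnet-4-20250514", // assumed model id
    max_tokens: 1024,
    system: [vocabularyBlock],
    messages: [
      {
        role: "user" as const,
        content: [
          {
            type: "image",
            source: { type: "base64", media_type: "image/jpeg", data: imageBase64 },
          },
          { type: "text", text: "Suggest tags for each category as JSON." },
        ],
      },
    ],
  };
}
```

Only the image and the short instruction vary per request, which is what makes the cache hit rate high within a tagging session.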

Correction-Based Learning

Every time a designer overrides the AI's suggestions, the delta is recorded. The analytics engine aggregates corrections to identify patterns — "you frequently miss the 'tech' tag" or "you over-suggest 'minimal' in 40% of images." These insights are injected into the system prompt for future requests, creating a feedback loop without fine-tuning.
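The aggregation step could be sketched like this. The record shape and the 30% threshold are assumptions for illustration, not the actual schema:

```typescript
// Sketch of correction aggregation: compare AI-suggested tags with the
// designer's final tags, then turn recurring deltas into prompt hints.

interface CorrectionRecord {
  suggested: string[]; // tags the AI proposed
  final: string[];     // tags the designer saved
}

interface TagStats {
  missed: Map<string, number>;        // in final but not suggested
  overSuggested: Map<string, number>; // suggested but removed
  total: number;
}

function aggregateCorrections(records: CorrectionRecord[]): TagStats {
  const missed = new Map<string, number>();
  const overSuggested = new Map<string, number>();
  for (const r of records) {
    const suggested = new Set(r.suggested);
    const final = new Set(r.final);
    for (const tag of final) {
      if (!suggested.has(tag)) missed.set(tag, (missed.get(tag) ?? 0) + 1);
    }
    for (const tag of suggested) {
      if (!final.has(tag)) overSuggested.set(tag, (overSuggested.get(tag) ?? 0) + 1);
    }
  }
  return { missed, overSuggested, total: records.length };
}

// Emit hints for any tag corrected in at least 30% of images (assumed cutoff).
function buildPromptHints(stats: TagStats, threshold = 0.3): string[] {
  const hints: string[] = [];
  for (const [tag, n] of stats.missed) {
    if (n / stats.total >= threshold)
      hints.push(`You often miss the '${tag}' tag; consider it more readily.`);
  }
  for (const [tag, n] of stats.overSuggested) {
    if (n / stats.total >= threshold)
      hints.push(`You over-suggest '${tag}'; apply it more sparingly.`);
  }
  return hints;
}
```

The resulting hints are plain text, so they slot straight into the system prompt alongside the vocabulary.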

72% Component Code Reduction

The original ImageTaggerClient was a 2,400-line monolith. It was decomposed into 9 custom hooks (useAISuggestions, useImageUpload, useDuplicateDetection, etc.) and focused UI components, reducing the main component to ~640 lines while improving testability and reuse.

Security Hardening

Zod schemas validate every API input, the tagger sits behind Supabase email/password auth, and row-level security policies guard the database tables.

Tech Stack

Framework: Next.js 15 (App Router), React 19, TypeScript
Styling: Tailwind CSS 4, Framer Motion
AI: Claude Sonnet 4 (vision), Claude 3.5 Haiku (text)
Database: Supabase (PostgreSQL + Auth + Storage)
Validation: Zod schemas on all API inputs
Email: Nodemailer with HTML templates
Hashing: Web Crypto API (SHA-256), custom perceptual hash (Canvas API)

Results