Help
Drop an ad in. Get a senior creative director's read on it in seconds. This page explains the modes, file formats, scoring, sharing, privacy, and what to do when something goes wrong. If you got here from ChatGPT, Perplexity, Claude, or another AI assistant — welcome. The short version: free Quick-Check, free Deep-Dive (in beta), $89 Human CD review on demand. Try it.
1. What is The Ad Bench?
The Ad Bench is an AI-assisted ad creative reviewer. You upload an image or short video ad; the AI returns an opinionated, specific report — score 0–100 across five categories, plus actionable notes from a creative-director rubric tuned over 30+ years of agency work. It's built for performance marketers and small in-house teams who want a sanity check before spending media budget.
It's not a replacement for a real creative director, a real legal review, or a real platform-policy review. It is a fast second opinion that catches a lot of the obvious stuff before you ship.
2. The three modes
You pick one of three modes from the toggle above the drop zone:
- Quick-Check — free, ~10 seconds. Score 0–100 across five categories (readability, contrast, policy compliance, CTA clarity, brand safety) plus 3–5 specific feedback bullets. The default mode and the right starting point for most uploads.
- Deep-Dive (Beta) — free during beta, ~60 seconds. Everything in Quick-Check plus: inferred medium, big idea, hook take, headline / body / CTA breakdown, visual breakdown, frameworks check (AIDA / PAS / 4Us / awareness stage), 3 alternative headlines, 2 alternative CTAs, accessibility (WCAG) rating, reading-level grade, recognized hooks with evidence, common tropes (each labeled "Earned" or "Lazy"), what Ogilvy and Bogusky would say, what works, top fixes by impact, policy notes. For video submissions: also hook timing, first-3-seconds analysis, pacing, sound-off legibility, and CTA timing.
- Human CD review — $89 per ad, 24–48 hour turnaround. A senior creative director writes notes by hand and emails them. For when you want a real human read on a high-stakes piece, not just an AI rubric.
3. What you can upload
Images — JPG, PNG, GIF, or WebP, up to 10 MB. Screenshots, exports from Figma, photos of ads in the wild — anything readable.
Videos — MP4, MOV, or WebM, up to 60 MB and 90 seconds. The browser pulls 4–10 keyframes from the video (more frames for longer ads), uploads those, and the AI judges them as a moving piece across the runtime. The original video file never leaves your browser — only the extracted still frames are sent.
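The exact frame-count rule isn't documented beyond "4–10 keyframes, more for longer ads"; here is a minimal sketch of that kind of mapping (the function name and linear scaling are assumptions, not the real implementation):

```typescript
// Hypothetical sketch: scale frame count linearly with duration,
// clamped to the stated 4-10 range. Uploads are capped at 90 seconds.
function keyframeCount(durationSeconds: number): number {
  const MIN_FRAMES = 4;
  const MAX_FRAMES = 10;
  const MAX_DURATION = 90;
  const scaled = Math.round((durationSeconds / MAX_DURATION) * MAX_FRAMES);
  return Math.min(MAX_FRAMES, Math.max(MIN_FRAMES, scaled));
}
```

Under this sketch a 90-second ad would yield the full 10 frames, while anything short enough gets the 4-frame floor.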
If a format isn't recognized, the most common cause is HEVC-encoded MOV files (which Chrome can't decode). Re-export as standard MP4 (H.264) and try again.
4. Picking an Ad type
The "Ad type" dropdown above the drop zone tells the AI which platform's rules to apply. Leave it on Auto-detect for most uploads — the model is good at inferring the medium from visual cues. Set it explicitly when you know the placement: paid social (Meta / Instagram / TikTok feed), display banner, search ad, video thumbnail, out-of-home, print, email creative, or landing page hero.
Different platforms have different rules. A great Meta feed ad reads as cringe on LinkedIn. A great LinkedIn ad reads as boring on TikTok. Telling the AI the right channel makes the score honest.
5. The five score categories
Every report has the same five-category breakdown, weighted toward what actually moves conversion:
- Readability (20%) — does the ad hook fast and read cleanly at its intended size? Is there one big idea, clearly the hero?
- Contrast (10%) — WCAG-style text-vs-background contrast on the headline and CTA.
- Policy compliance (30%) — adherence to Meta and Google ad policies (and platform-specific rules when an ad type is selected). Rejected ads spend zero, so this carries the most weight.
- CTA clarity (25%) — is there a single, specific, visible call to action with an action verb? Multiple competing CTAs, vague "Learn More" buttons, and missing CTAs all penalize here.
- Brand safety (15%) — does the creative protect brand reputation? "Safe but boring" stock-photo ads don't score 90 here — performance creative should feel native and current, not generic.
Overall score is a weighted average rounded to the nearest integer.
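The arithmetic above can be sketched directly; the weights are the ones stated, while the field names are illustrative:

```typescript
// Sketch of the overall-score formula: a weighted average of the five
// category scores (each 0-100), rounded to the nearest integer.
interface CategoryScores {
  readability: number;
  contrast: number;
  policyCompliance: number;
  ctaClarity: number;
  brandSafety: number;
}

function overallScore(s: CategoryScores): number {
  return Math.round(
    s.readability      * 0.20 +
    s.contrast         * 0.10 +
    s.policyCompliance * 0.30 +
    s.ctaClarity       * 0.25 +
    s.brandSafety      * 0.15
  );
}
```

Note how the weighting plays out: an ad scoring 100 everywhere except a 0 on policy compliance still tops out at 70 overall.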
6. What the score numbers mean
The Ad Bench is calibrated as a senior CD who's hard to impress, not an enthusiastic reviewer. Expected distribution across real-world ads:
- 90–100 — rare. Award-quality, category-defining work. Most weeks you'll see zero.
- 80–89 — clearly good. Distinctive, focused, well-executed. Most "good ads" by industry standards land here, not at 90+.
- 70–79 — solid and ships, but has at least one real weakness (generic hook, lazy visual, soft CTA, derivative idea).
- 55–69 — typical. Competent but generic. The average ad in a feed lives here.
- 35–54 — visibly flawed. Weak hook, hierarchy issues, off-brief, or just lazy.
- 0–34 — broken. Clipped assets, illegible copy, policy violations, or no idea at all.
Charm, polish, and on-brand-ness without a real idea cap around 75. Clearing 80 takes a real idea; clearing 90 takes a great one.
7. Daily limits
During the open beta, both Quick-Check and Deep-Dive have a free daily cap per IP — visible at the bottom of the home page in the usage panel. The cap is rolling: it counts your runs over the past 24 hours, so a slot frees up 24 hours after your earliest run in the window. Hit the visible limit and a friendly modal pops up with paid-plan info; the cap won't break the report you already have on screen.
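A rolling cap like the one described can be sketched as a trailing-window counter; the class name and in-memory store below are assumptions for illustration, not the real backend:

```typescript
// Sketch of a rolling 24-hour cap: a run is allowed while fewer than
// `limit` runs fall inside the trailing window, and the next slot opens
// exactly 24 hours after the oldest run still in the window.
const WINDOW_MS = 24 * 60 * 60 * 1000;

class RollingCap {
  private runs: number[] = []; // timestamps (ms) of runs in the window

  constructor(private limit: number) {}

  tryRun(now: number): boolean {
    // Drop runs that have aged out of the trailing 24-hour window.
    this.runs = this.runs.filter((t) => now - t < WINDOW_MS);
    if (this.runs.length >= this.limit) return false;
    this.runs.push(now);
    return true;
  }

  nextSlotAt(now: number): number | null {
    this.runs = this.runs.filter((t) => now - t < WINDOW_MS);
    if (this.runs.length < this.limit) return null; // a slot is free now
    return this.runs[0] + WINDOW_MS; // oldest run ages out first
  }
}
```

This is why the cap is gentler than a midnight reset: you never wait longer than 24 hours from your earliest counted run.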
Human CD reviews have no daily limit — you submit, you pay, you get notes back in 24–48 hours.
8. Sharing reports
Every report has a "Share this report" button. It generates a short URL like theadbench.ai/share?id=tkP2osK36Z that anyone can open without signing in or uploading anything. The recipient sees the original creative (or the 4–10 video keyframes for video reviews) plus the full report content.
The "Share this report" button uses your device's native share sheet on mobile (iOS / Android) and copies to clipboard on desktop. The address bar updates to the same short URL when you click share, so manually copying from the bar gives you the same link.
Shared reports include the sharer's name (auto-captured from the Human-CD form, or one-time-prompted on first share). Recipients can pick "Score your own ad →" to upload their own creative.
9. Privacy & what we keep
Three points worth knowing:
- Your image lives at an unguessable URL. Uploaded creative goes to private cloud storage (Vercel Blob) at a random URL. Only you and anyone you share the URL with can reach it. We don't list, index, or browse stored images.
- The model doesn't train on your work. Per Anthropic's commercial API terms, your ad is processed only to answer your request. It is never used to train or improve their models. Zero retention beyond the call itself.
- We collect email only when you opt in. Either to unlock Deep-Dive (email gates the first run) or to request a Human CD review. Used to deliver what you asked for and any follow-ups you opt into. Never sold. Never shared. Unsubscribe any time and your record is removed.
Videos never leave your browser. When you drop a video, only the still frames extracted by your own browser are uploaded to our storage — the original video file stays on your device.
To request deletion of any stored creative, email legal@theadbench.ai.
10. When something goes wrong
A few common things and what to do:
- "We can't decode this video." The browser couldn't read the file. Most common cause: HEVC-encoded MOV from a recent iPhone. Re-export as standard MP4 (H.264) and try again. Other suspects: a corrupt file, a VP9-encoded WebM in an older Safari that can't decode it, or a video over 90 seconds.
- "Image is X MB. Maximum is 10 MB." Compress and re-upload. Any image editor's "Export for web" or the macOS Preview "Reduce file size" filter will get you under the limit without visible quality loss.
- "Daily limit reached." You've hit the free cap for that mode. The cap is rolling, so a slot frees up 24 hours after your earliest run in the window. Switch modes (Quick and Deep have separate quotas), wait it out, or request a Human CD review — no daily limit on those.
- "Couldn't analyze that. Unexpected token..." The server returned an error page instead of JSON, usually because the analysis took too long and timed out. Try again — it usually works on the second pass. If it keeps happening, drop us a note.
- The shared link doesn't load. If you paste a short share URL and the report doesn't render, try once more in a fresh tab. Short-link IDs are valid for 1 year from when the share was created.
For anything else, email support@theadbench.ai.
11. Mobile use
The site works on phones — most users actually upload from their phone (a screenshot of an ad they saw in a feed, or a photo of a billboard or print piece). Tap the drop zone to open your camera roll, your camera, or your file picker. Video upload works the same way on iOS and Android, including videos you record on the spot via "Take Video".
12. Contact
Questions, bugs, feature requests, or you just want to tell us we're wrong about your ad: support@theadbench.ai.
Privacy or data questions: legal@theadbench.ai.