I used to manage audience data for B2B media companies. I still do, but I used to, too. We sold ad campaigns, sponsored content, and email deployments, and advertisers always wanted some version of "We want to target only directors and above."

The problem was that the data wasn't always…clean. How could it be? Some people have made-up titles like "Growth Ninja" or "Chief Evangelist," and others shorten theirs to "Sr. Mgr. Digital Experience." Every year, someone (usually me) has to normalize job titles into seniority buckets for advertisers.

Cleaning the data always felt rushed and fragile, and every time I found a goofy title, I would have to make a judgment call about where to bucket it, write a note about why, and move on.

Freeform Titles Are a Segmentation Tax

Every time you need to update audience segmentation, or sales is breathing down your neck asking, "How many C-Suite executives are on our email list?", you pay the segmentation tax: someone (me) has to clean the data before anyone can use it. How much time does that take, and is it the best use of someone's time?

For B2B media companies, knowing what your readers do for a living is central to how you advertise to them. Your segmentation can be filled with hot garbage because someone typed "Chief Executive Assistant" and your lookup table dropped them into the C-Suite bucket. Which means you're either lying to advertisers or leaving real audience value unaccounted for.

If your contact list feeds anything like the daily email performance reports or the newsletter audience reporting that go to your sales team or advertisers, seniority accuracy is load-bearing data.

What Are the Standard B2B Job Title Seniority Levels?

  • C-Suite — CEO, Founder, CMO, President, Owner

  • VP — VP, SVP, EVP, AVP, and variants

  • Director — Director, Head of (team-managing), Senior Director

  • Manager — Manager, Team Lead, Program Manager, Supervisor

  • IC — Analyst, Engineer, Coordinator, Specialist, Account Executive

Why Use Claude Over a Lookup Table?

I've built lookup tables and regexes to fix this in the past. They work for simple cases like "VP" and "Manager," but they fail immediately on:

  • "Dir/VP, Revenue Operations"

  • "Head of Partnerships" (managing a team? solo? no idea)

  • "Partner, Client Strategy" (equity partner? non-equity title at a big firm?)

  • "Chief Evangelist"

  • "Sr" vs "Senior"

  • Anything in another language. I'm looking at you, Geschäftsführer.

You can extend these lookup tables indefinitely and still not cover all use cases, but does anyone want to? Is that how you really want to spend your time?
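
To see why, here's roughly what the lookup-table approach looks like and exactly where it breaks. This is a hypothetical sketch; the table, regexes, and function name are mine, not from any real tool:

```javascript
// A naive regex lookup table for seniority classification (hypothetical).
const RULES = [
  [/\bchief\b|\bceo\b|\bfounder\b/i, "C-Suite"],
  [/\bvp\b|vice president/i, "VP"],
  [/\bdirector\b/i, "Director"],
  [/\bmanager\b|\bmgr\b/i, "Manager"],
];

function classify(title) {
  for (const [pattern, bucket] of RULES) {
    if (pattern.test(title)) return bucket;
  }
  return "Unknown"; // everything weird lands here
}

classify("VP of Marketing");          // "VP" — fine
classify("Chief Executive Assistant"); // "C-Suite" — wrong, but the regex can't know that
classify("Head of Partnerships");      // "Unknown"
classify("Geschäftsführer");           // "Unknown"
```

Every fix you add (block "Assistant," special-case "Chief of Staff") just moves the failure somewhere else, which is the tax the rest of this post is about.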

The advantage of Claude is that you can offload some of that knowledge. You provide instructions in a Markdown file that explains your classification logic, and a dedicated edge-case section lets the rules evolve. You're building a standard for how you classify data.

Where the Domain Expertise Lives

The ruleset is a plain Markdown document that becomes Claude's system prompt. It defines the five seniority buckets with examples, sets normalization rules (fix capitalization, expand abbreviations, clean punctuation), and explicitly handles edge cases.

In this version of the tool, the ruleset lives in a dedicated "Title Normalizer Rules" tab in your Google Sheet — not in a file on your machine. That matters: anyone with access to the sheet can read and edit it without touching code. When your definition of "Director" changes, you update the tab, not a config file. Here’s an excerpt from the ruleset.

## Seniority Buckets

### C-Suite
The most senior executive leaders with company-wide authority. Typically report to a board or directly own the business.
Examples: CEO, CTO, CFO, COO, CMO, Founder, Co-Founder, President, Managing Director, Owner, Partner (at own firm)

### VP
Senior leaders who manage directors or large teams. Carry budget authority and are typically two levels below the CEO.
Examples: VP, Vice President, SVP, Senior Vice President, EVP, AVP, VP of Sales, VP of Marketing, VP of Engineering

## Edge Cases
- **"Head of [Department]"**: Use `Director` if they likely manage a team. Use `IC` if the context suggests they're a solo contributor with a fancy title.
- **"Founder"**: Always `C-Suite`, regardless of company size.
- **"Partner"**: Use `C-Suite` if at their own firm. Use `IC` or `Director` if at a larger organization where Partner is a non-equity title.
- **"Principal"**: Use `C-Suite` if it implies ownership. Use `IC` if it's a technical seniority level (e.g. "Principal Engineer").
- **Non-English titles**: Normalize to the English equivalent if confident (e.g. "Geschäftsführer" → "CEO", bucket: `C-Suite`). Use `Unknown` if unsure.

The edge case section is the most important part. Every time the tool flags something wrong, the right fix is to add a new edge-case rule and rerun — not to fight the model. The ruleset is living documentation of how your organization thinks about seniority.

How the Google Apps Script Works

The tool is a Google Apps Script attached directly to your Google Sheet — no local environment, no service account, no CLI. It reads job titles from the sheet, sends them to Claude in batches with a classification ruleset you define, and writes "Normalized Title" and "Seniority Bucket" columns back without touching your original data. Setup means pasting two files into the Apps Script editor and saving your Anthropic API key once.

When you open the sheet, a CRM Tools menu appears with three options: Normalize Titles, Dry Run Preview, and Update Rules from Corrections.

The part that took me the longest to design was the feedback loop. Every tool like this has the same problem: Claude flags ambiguous titles as "Unknown," you fix them manually, and two months later you're fixing the same titles again because nothing was learned from your corrections. This tool closes that loop. Unknown titles are highlighted in yellow on the sheet. You correct them. One menu click sends those corrections back to Claude, which updates the ruleset automatically. The next run handles those titles correctly — and anything like them.

Here's what happens when you run Normalize Titles:

  1. If the "Title Normalizer Rules" tab doesn't exist, it's created and seeded with the default ruleset

  2. The ruleset tab is read and used as Claude's system prompt

  3. The active sheet is scanned for the "Job Title" column

  4. Rows with an existing "Seniority Bucket" value are skipped (idempotency)

  5. Unprocessed titles are batched in groups of 50 and sent to Claude via UrlFetchApp

  6. Claude returns a JSON array with original, normalized, and bucket for each title

  7. Results are written to "Normalized Title" and "Seniority Bucket" columns

  8. Any row where bucket = "Unknown" gets highlighted yellow — a visual flag to review
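
Step 5's chunking can be sketched as a plain function. The batch size matches the article; the function name is my own:

```javascript
const BATCH_SIZE = 50; // per API call, as described in step 5

// Split the pending titles into groups of 50, one group per Claude request.
function toBatches(items, size = BATCH_SIZE) {
  const batches = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}
```

For a 120-row sheet, that yields three batches (50, 50, 20), i.e. three API calls per run.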

The script handles its own auth. Because it runs inside Apps Script with SpreadsheetApp, Google handles the Sheets permission natively. The only credential you manage is your Anthropic API key, which is saved to Script Properties on first run, so you're asked only once.

Frequently Asked Questions

Does this work on large contact lists?

The script processes titles in batches of 50 per API call. For lists with thousands of rows, you may need to run it in chunks due to Google Apps Script's 6-minute execution limit — but idempotency means already-processed rows are always skipped.

What happens when Claude can't classify a title?

The script writes "Unknown" in the Seniority Bucket column and highlights the cell yellow. You correct it manually, then run "Update Rules from Corrections" to feed that correction back into the ruleset.

Do I need to know how to code to use this?

No. The script is paste-and-run inside the Apps Script editor. The only credential you manage is an Anthropic API key, which you save once and never touch again.

How much will this cost me?

The script uses Claude Haiku, which is priced for exactly this kind of batch classification work. A list of 10,000 titles costs well under a dollar to process — check current per-token pricing at https://console.anthropic.com. The real prerequisite is the Anthropic API key itself, which requires a paid account.

Which model does this use?

The script uses claude-haiku-4-5 by default — Claude's fastest and cheapest model. Haiku is the right call for structured classification tasks like this: the input is simple, the output format is rigid, and speed matters more than reasoning depth. You're not asking it to write copy or analyze strategy; you're asking it to apply a ruleset you've already written. Haiku does that well at a fraction of the cost of larger models.

Want the complete code and walk-through?

Sign up for the Automation Almanac newsletter to get the complete working code. Everything you need to run this on your own sheet today.

Google Apps Script Code Walkthrough

If you don't care about the codebase and why each function does what it does, and you just want to get on with cleaning your job titles, watch this video.

If you're more technical and care about the inner workings, read on.

Reading Job Titles from the Google Sheet

SpreadsheetApp handles everything — no API client, no auth setup. The script reads the entire sheet into memory with getDataRange().getValues(), finds the title column by header name, and collects rows that don't yet have a bucket value:

const alreadyDone = bucketIdx !== -1 &&
  String(data[i][bucketIdx] || "").trim() !== "";

If the bucket column doesn't exist, no rows are skipped — everything gets processed.
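
Expanded slightly, the row-collection logic looks roughly like this. The "Job Title" and "Seniority Bucket" headers follow the article; the helper name and error message are mine:

```javascript
// data: the 2D array from getDataRange().getValues(); row 0 is the header row.
function collectPending(data) {
  const header = data[0].map((h) => String(h).trim());
  const titleIdx = header.indexOf("Job Title");
  const bucketIdx = header.indexOf("Seniority Bucket");
  if (titleIdx === -1) throw new Error('No "Job Title" column found');

  const pending = [];
  for (let i = 1; i < data.length; i++) {
    // A non-empty bucket means this row was processed on a previous run.
    const alreadyDone =
      bucketIdx !== -1 && String(data[i][bucketIdx] || "").trim() !== "";
    if (!alreadyDone && String(data[i][titleIdx]).trim() !== "") {
      pending.push({ row: i + 1, title: data[i][titleIdx] }); // 1-based sheet row
    }
  }
  return pending;
}
```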

Batching to Claude

Google Apps Script doesn't have a native HTTP client in the Node sense, but UrlFetchApp.fetch() handles external API calls cleanly, including to the Claude Messages API. The payload is standard Messages API format: a system prompt (the ruleset) plus a user message (the titles):

UrlFetchApp.fetch("https://api.anthropic.com/v1/messages", {
  method: "post",
  headers: {
    "x-api-key": apiKey,
    "anthropic-version": "2023-06-01",
    "content-type": "application/json"
  },
  payload: JSON.stringify({ model: MODEL, max_tokens: 4096, system, messages }),
  muteHttpExceptions: true
})

muteHttpExceptions: true is important — without it, Apps Script throws on any non-200 status code, and you lose the response body, which you need to log the actual error.
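
With exceptions muted, you parse the body yourself in both the success and error cases. A sketch of that parsing — the helper name is mine, but the response shapes are the standard Messages API formats (a `content` array of blocks on success, an `error` object on failure):

```javascript
// body: the raw response text from the Messages API.
// Returns the assistant's text content, or throws with the API's own error.
function extractText(body) {
  const parsed = JSON.parse(body);
  if (parsed.error) {
    throw new Error(`API error: ${parsed.error.type} - ${parsed.error.message}`);
  }
  // Successful responses carry an array of content blocks; keep the text ones.
  return parsed.content
    .filter((block) => block.type === "text")
    .map((block) => block.text)
    .join("");
}
```

In Apps Script the `body` here would come from `response.getContentText()`.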

The prompt structure

The user message is explicit about the exact JSON schema Claude needs to return. This matters more than you'd think — without specifying field names in the user message, Claude invents its own schema and the whole thing falls apart silently:

`Return a JSON array — one object per title — using EXACTLY these field names:
- "original": the exact input string, unchanged
- "normalized": cleaned, properly-cased title
- "bucket": one of: C-Suite, VP, Director, Manager, IC, Unknown

No other fields. No markdown. No code fences. Raw JSON only.`

The code strips markdown fences defensively anyway, but the explicit instruction in the prompt is what keeps this from being a recurring issue.
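
That defensive stripping reduces to a small helper, roughly like this (the name is mine):

```javascript
// Remove an optional markdown fence (``` or ```json) wrapping the JSON payload.
function stripFences(text) {
  return text
    .trim()
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/```\s*$/, "")
    .trim();
}
```

JSON without fences passes through untouched, so it's safe to call unconditionally before JSON.parse.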

Retry logic

Apps Script uses Utilities.sleep() instead of setTimeout(). The retry wrapper handles 429 (rate limit), 500, 502, 503, 504, and 529 (Anthropic overload) with exponential backoff starting at 1 second.
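
The retry policy can be expressed as a pure function. The status codes and the 1-second starting delay come from the description above; the maximum retry count and function name are my assumptions:

```javascript
const RETRYABLE = new Set([429, 500, 502, 503, 504, 529]);

// Returns the delay in ms before retry attempt `attempt` (0-indexed),
// or -1 if the status isn't retryable or attempts are exhausted.
function backoffMs(status, attempt, maxRetries = 4) {
  if (!RETRYABLE.has(status) || attempt >= maxRetries) return -1;
  return 1000 * Math.pow(2, attempt); // 1s, 2s, 4s, 8s
}
```

In the real script, a -1 means surface the error; otherwise you'd call Utilities.sleep(delay) and retry the fetch.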

The Feedback Loop: Updating Rules from Corrections

This is the part that surprised me when I thought through the design. When Claude can't classify a title, it writes "Unknown" and highlights the cell yellow. That yellow highlight is a flag — not just a visual reminder, but a functional marker the script reads.

When you run Update Rules from Corrections, the script scans for cells that are still yellow but whose value is no longer "Unknown" — meaning you've manually corrected them. It collects those [title, correctedBucket] pairs, sends them to Claude along with the current ruleset, and asks Claude to update the rules to handle similar titles going forward. The updated ruleset overwrites the Rules tab. Yellow highlights are cleared.
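
That scan reduces to a pure function over the bucket column's values and backgrounds. The helper name and the exact yellow hex are my assumptions, and I've flattened Apps Script's 2D getBackgrounds() output into parallel 1D arrays for clarity:

```javascript
// titles / buckets / backgrounds: parallel arrays, one entry per data row.
// A yellow cell whose value is no longer "Unknown" is a user correction.
function collectCorrections(titles, buckets, backgrounds, yellow = "#ffff00") {
  const pairs = [];
  for (let i = 0; i < buckets.length; i++) {
    const corrected =
      backgrounds[i].toLowerCase() === yellow &&
      String(buckets[i]).trim() !== "" &&
      buckets[i] !== "Unknown";
    if (corrected) pairs.push([titles[i], buckets[i]]);
  }
  return pairs;
}
```

The resulting [title, correctedBucket] pairs are what get sent to Claude alongside the current ruleset.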

The result: every correction you make feeds back into the system. The ruleset gets better with use, without you having to manually write new edge case rules every time.

Common Issues and Edge Cases

JSON schema discipline. The user message must specify the exact field names Claude should return. Without "original", "normalized", and "bucket" spelled out explicitly, Claude will return plausible-looking JSON with completely different keys — "title", "level", "department" — and everything silently comes back as Unknown because the field access returns undefined.

Apps Script time limit. Google enforces a 6-minute execution limit per run. For large sheets (thousands of rows), you may need to run in batches — add data in chunks and rerun after each batch. The idempotency check means previously processed rows are always skipped.
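
One common pattern for living inside that limit is a simple time budget check in the batch loop; stopping early is safe because the idempotency check resumes where you left off. The budget value and names here are my own choice, not from the tool:

```javascript
// Leave roughly a minute of headroom before Apps Script's ~6-minute kill.
const BUDGET_MS = 5 * 60 * 1000;

function overBudget(startMs, nowMs, budgetMs = BUDGET_MS) {
  return nowMs - startMs > budgetMs;
}

// In the batch loop: if (overBudget(start, Date.now())) break;
// Unprocessed rows are picked up on the next run.
```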

muteHttpExceptions: true. If you leave this out, a 429 from the Claude API throws a generic Apps Script exception, and you lose the response body. You want to handle rate limits yourself, not let Apps Script handle them.

Rules tab as a single source of truth. The entire ruleset resides in cell A1 on the Rules tab as a single text block. If you accidentally delete that tab or clear A1, the default rules are re-seeded on the next run. But any customizations you've made — including corrections applied by the feedback loop — are gone. Worth treating that tab with care.

What Claude gets wrong. Startup titles with invented seniority ("Head of Vibes"), dual-role entries ("VP of Marketing & Sales Enablement"), and made-up C-suite variants ("Chief Happiness Officer") are where the model is most likely to be inconsistent. The fix is the same every time: let the yellow highlight prompt you to correct it, then run Update Rules from Corrections. Don't fight the model — train the ruleset.

I built this a few years too late for the job where I needed it most. But the next time someone asks me how many directors are on the list, I can answer in two minutes instead of two hours — and I can back it up.

Keep Reading