Why a 12-Line Lead Scoring Rubric Beats Vibes (2026)
An r/n8n post shipped a rubric-based lead scorer in 12 lines. Auditable, portable, ~$5/week in API spend. The rubric IS the product.
The post documented a 220-person logistics SaaS with 120 inbound leads/week. Two AEs were burning 15 hours/week pasting form fills into Apollo, guessing at fit, and dropping the good ones in Slack. Median response time on hot leads: 9 hours. The fix shipped in a weekend. The interesting bit isn't the GPT-4 call; it's that the scoring rubric is 12 lines.
Why most "AI lead scoring" ships and fails
The default reflex when product wants AI lead scoring is to train a classifier on closed/lost history. With fewer than ~5K labeled leads, that classifier either overfits the noise or underfits the signal. Worse, when sales asks "why did this lead get a 38?", the answer is a black box: RevOps can't audit it, AEs don't trust it, and the model rots within a quarter as the ICP drifts.
The OP's pattern flips that. The scoring rubric IS the model. It lives in the system prompt as plain English, weighted, explicit:
```
Score 0-100 using ONLY this rubric.
Title fit: 30 (VP/Dir/C-level=full, IC=0)
Industry match: 25 (logistics/transport/3PL=full)
Company size: 20 (200-2000 employees=full)
Intent signal in form: 15 (asked demo or pricing=full)
Fit notes: 10 (anything indicating budget/timeline)
Lead JSON: <lead>
Enrichment: <Scavio top 3>
Return ONLY {"score":<int>,"reason":"<one sentence>"}
```
Where Scavio fits
The form leaves things out. Industry match is hard if the lead typed "Acme Inc" with no industry field. Company size is hard if the form skipped it. One Scavio call with `site:linkedin.com/company {company}` returns the firmographics the form missed. Per-lead enrichment cost: ~$0.0043. The rubric reads the enrichment output and scores accordingly.
```python
import requests, os

H = {'x-api-key': os.environ['SCAVIO_API_KEY']}

def enrich(company):
    # One Scavio search per lead: find the LinkedIn company page the form didn't give us.
    r = requests.post(
        'https://api.scavio.dev/api/v1/search',
        headers=H,
        json={'query': f'site:linkedin.com/company {company}'},
        timeout=10,
    )
    r.raise_for_status()
    return r.json().get('organic_results', [])[:3]  # top 3 hits are all the rubric needs
```
The unit economics nobody talks about
120 leads/week × ~$0.04 per scored lead (one Scavio call + one LLM call + one CRM write) = under $5/week in API spend. Against 15 hours/week of AE time at any reasonable loaded rate, it isn't close.
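That single LLM call is the whole scoring engine. A minimal sketch of it, assuming the openai Python SDK; the model name, function signature, and clamping are illustrative choices, not from the original post:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

RUBRIC = """Score 0-100 using ONLY this rubric.
Title fit: 30 (VP/Dir/C-level=full, IC=0)
Industry match: 25 (logistics/transport/3PL=full)
Company size: 20 (200-2000 employees=full)
Intent signal in form: 15 (asked demo or pricing=full)
Fit notes: 10 (anything indicating budget/timeline)
Return ONLY {"score":<int>,"reason":"<one sentence>"}"""

def score(lead: dict, enrichment: list) -> dict:
    resp = client.chat.completions.create(
        model='gpt-4o-mini',  # illustrative; the post used GPT-4
        messages=[
            {'role': 'system', 'content': RUBRIC},
            {'role': 'user', 'content': f'Lead JSON: {json.dumps(lead)}\n'
                                        f'Enrichment: {json.dumps(enrichment)}'},
        ],
        temperature=0,  # scoring should be deterministic, not creative
    )
    out = json.loads(resp.choices[0].message.content)
    out['score'] = max(0, min(100, int(out['score'])))  # clamp; models drift
    return out
```

`temperature=0` keeps the scorer boring on purpose: the same lead should get the same score on a re-run, or the audit trail means nothing.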
Why the rubric IS the product
Here's what most AI lead-scoring vendors get wrong: they treat the score as the product. The score is a number. The product is the agreement between sales, marketing, and RevOps about WHY that number exists. A 12-line rubric in version control IS that agreement.
- RevOps can audit it.
- Sales leadership can argue with it.
- You can A/B test rubric versions in a week (see the sketch after this list).
- You can port it to a different LLM next quarter.
- It doesn't depend on labeled history.
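That A/B test is cheaper than it sounds. A minimal sketch of a deterministic split, assuming two rubric variants checked into the repo; the file paths and variant names are hypothetical:

```python
import hashlib
from pathlib import Path

RUBRICS = {
    'a': Path('rubrics/v7_baseline.txt').read_text(),        # current agreement
    'b': Path('rubrics/v8_heavier_intent.txt').read_text(),  # proposed change
}

def pick_rubric(lead_email: str) -> tuple[str, str]:
    # Deterministic split: the same lead always hits the same variant,
    # so a re-submitted form doesn't flip scores mid-test.
    h = int(hashlib.sha256(lead_email.encode()).hexdigest(), 16)
    variant = 'a' if h % 2 == 0 else 'b'
    return variant, RUBRICS[variant]
```

Log the variant alongside score + reason in the CRM, and after a week compare which variant's hot band actually converted.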
Honest failure modes
This pattern doesn't rescue genuinely thin form data. If your form captures 5 fields and none are intent signals, no rubric helps. Fix the form before fixing the model. It also doesn't replace true ML at high-history scale; if you have 50K closed/lost leads with rich attribution, a tuned classifier may beat a rubric. Below that scale, the rubric wins on auditability and speed-to-ship.
Stack at a glance
n8n (cloud or self-hosted) catches the inbound webhook, calls Scavio for enrichment, calls the LLM with the rubric, parses the JSON output, routes by score band (hot to Slack, warm to a drip sequence, cold to nurture), and writes score + reason back to the CRM. Deployable in a day. Auditable. The 12 lines that ARE the product.
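For readers outside n8n, the same flow fits in one handler. A sketch under assumptions: `enrich` and `score` are the functions above; the Slack webhook URL, the band thresholds, and the `add_to_drip` / `add_to_nurture` / `crm_update` helpers are stand-ins for whatever your stack uses:

```python
import requests

SLACK_WEBHOOK = 'https://hooks.slack.com/services/...'  # stand-in URL
HOT, WARM = 70, 40  # illustrative band thresholds

def handle_inbound(lead: dict):
    result = score(lead, enrich(lead['company']))
    if result['score'] >= HOT:
        # Hot band: ping the AE channel immediately, with the WHY attached.
        requests.post(SLACK_WEBHOOK, json={
            'text': f"Hot lead {lead['email']} scored {result['score']}: {result['reason']}"
        }, timeout=10)
    elif result['score'] >= WARM:
        add_to_drip(lead)      # stand-in: your marketing automation
    else:
        add_to_nurture(lead)   # stand-in: long-term nurture list
    crm_update(lead, result['score'], result['reason'])  # stand-in CRM write
```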