Lead Scoring: Your MQL Score Is a Random Number Generator

Ibby Syed, Founder, Cotera
7 min read · February 9, 2026
At some point in the last decade, every B2B company decided they needed lead scoring. Marketing would assign points to activities — downloaded a whitepaper, visited the pricing page, attended a webinar, opened three emails — and when a lead accumulated enough points, they'd become an MQL and get passed to sales.

In theory, this makes sense. In practice, it's one of the most reliably broken systems in B2B. I have yet to work with a company where the sales team trusts MQL scores. Not one. The conversation always goes the same way. Marketing says "we sent you 200 MQLs this month." Sales says "yeah, and 180 of them were garbage." Marketing points to the scoring model. Sales points to the pipeline that didn't move.

Here's the root problem: traditional lead scoring measures engagement and calls it intent. A person who downloaded your whitepaper, visited three pages, and opened two emails gets a high score. But that person might be a student doing research. They might be a competitor analyzing your content. They might be a tire-kicker who will never have budget. The engagement is real. The buying intent is assumed. And that assumption is wrong often enough that sales teams have learned to ignore MQL scores entirely.

The irony is brutal. Companies spend months building scoring models, tuning the point values, debating whether a webinar attendance is worth 10 points or 15, and the end result is a system that neither marketing nor sales actually believes in.

Why Activity-Based Scoring Fails

Traditional lead scoring assigns points to observable activities. The logic seems reasonable: someone visiting your pricing page is more interested than someone who only read a blog post. Someone who attended a live demo is more engaged than someone who opened a marketing email.

The problem is that these activities measure curiosity, not readiness to buy. And curiosity without authority, budget, and timing is just browsing.

I'll give you a specific example. At a previous company, our highest-scoring lead was a marketing coordinator at a 15-person agency. She'd downloaded every piece of content we published, attended three webinars, visited the pricing page six times, and opened every email we sent. Score: 97 out of 100. Perfect MQL. Sales called her immediately.

Turns out she was building a competitive analysis for her boss. She had zero purchasing authority, no budget, and was evaluating us as a case study for a client presentation. Our scoring system ranked her above a VP of Revenue Operations at a 500-person company who had visited our pricing page exactly once and never opened a single email. That VP signed a $120K annual contract three weeks after a cold call reached him. His MQL score? 12.

[Chart: Lead Score vs Reality]

This is the systemic failure of activity-based scoring. It optimizes for the wrong signal. People with time browse extensively. People with budget and authority are often too busy to read your blog. The leads your scoring model ranks highest are frequently the least likely to buy, and the leads it ignores are frequently the ones with actual purchasing power.

The Enrichment-First Approach

What if, instead of scoring leads on what they do on your website, you scored them on who they are and what's happening at their company?

This is the fundamental shift that AI-powered lead scoring enables. Instead of watching page views and email opens, you enrich every lead with data that actually predicts buying behavior: their title and seniority level, their company's size and growth rate, their industry, their company's funding stage, what technology they currently use, what they're hiring for, and whether any buying signals are present.

A lead who is a VP of Sales at a 200-person SaaS company that just raised a Series B, is hiring three SDRs, and currently uses a competitor — that lead has a fundamentally higher probability of buying than a marketing coordinator who downloaded six whitepapers. The enrichment data tells you this before they've ever visited your website. The activity data? It would rank the coordinator higher.

This is why enrichment-first lead scoring outperforms activity-based scoring by such a wide margin. You're scoring on fit and timing, not engagement. Fit tells you whether they match your ideal customer profile. Timing tells you whether something is happening at their company that creates urgency. Engagement can still be a tiebreaker — between two equally well-fit leads, the one visiting your pricing page is probably further along. But it should never be the primary signal.

The Signal-Based Scoring Model

Here's a practical lead scoring model built on enrichment and signals rather than page views. I've seen variations of this outperform traditional scoring by 3-5x on SQL conversion rates.

Tier 1: Fit Score (0-40 points)

This scores how well the lead matches your ideal customer profile based on enriched data.

  • Title/seniority: C-level or VP = 15 points. Director = 10. Manager = 5. Individual contributor = 0.
  • Company size: In your ICP range = 10 points. Adjacent = 5. Outside = 0.
  • Industry match: Target industry = 10 points. Adjacent = 5.
  • Tech stack: Uses a competitor or complementary product = 5 points.

Tier 2: Timing Score (0-40 points)

This scores whether buying signals are present right now.

  • Hiring in relevant department: +15 points. A company hiring for roles your product supports is actively investing in that function.
  • Recent funding: +10 points. Capital means budget.
  • Leadership change in relevant function: +10 points. New leaders buy new tools.
  • Company growth rate above 20%: +5 points. Growth creates the problems you solve.

Tier 3: Engagement Score (0-20 points)

Activity still counts, but it's capped and weighted as a secondary signal.

  • Pricing page visit: +10 points.
  • Demo request: +10 points.
  • Content engagement: +5 points max (capped, regardless of how many whitepapers they download).

With this model, a VP of Sales at a growing company that just hired three SDRs scores 40+ before they ever touch your website. A marketing coordinator who binges your content library maxes out at 25, no matter how engaged they are. The scoring finally reflects reality.
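To make the arithmetic concrete, here's a minimal sketch of the three-tier model in Python. The `Lead` record and its field names are illustrative assumptions, not a real CRM schema; the point values come straight from the tiers above, with the engagement tier capped at 20 as a secondary signal.

```python
from dataclasses import dataclass

# Hypothetical enriched-lead record; field names are illustrative, not a real schema.
@dataclass
class Lead:
    seniority: str            # "c_level", "vp", "director", "manager", "ic"
    company_size_fit: str     # "icp", "adjacent", "outside"
    industry_fit: str         # "target", "adjacent", "outside"
    uses_relevant_tech: bool  # competitor or complementary product
    hiring_relevant_dept: bool
    recent_funding: bool
    leadership_change: bool
    growth_above_20pct: bool
    visited_pricing: bool
    requested_demo: bool
    content_downloads: int

SENIORITY = {"c_level": 15, "vp": 15, "director": 10, "manager": 5, "ic": 0}
SIZE = {"icp": 10, "adjacent": 5, "outside": 0}
INDUSTRY = {"target": 10, "adjacent": 5, "outside": 0}

def fit_score(l: Lead) -> int:
    """Tier 1: fit against the ICP, max 40 points."""
    return (SENIORITY[l.seniority] + SIZE[l.company_size_fit]
            + INDUSTRY[l.industry_fit] + (5 if l.uses_relevant_tech else 0))

def timing_score(l: Lead) -> int:
    """Tier 2: buying signals present right now, max 40 points."""
    return (15 * l.hiring_relevant_dept + 10 * l.recent_funding
            + 10 * l.leadership_change + 5 * l.growth_above_20pct)

def engagement_score(l: Lead) -> int:
    """Tier 3: activity as a capped, secondary signal (tier total capped at 20)."""
    raw = 10 * l.visited_pricing + 10 * l.requested_demo + min(5, l.content_downloads)
    return min(20, raw)

def total_score(l: Lead) -> int:
    return fit_score(l) + timing_score(l) + engagement_score(l)

# The VP from the example: perfect fit, hiring + funding signals, zero engagement.
vp = Lead("vp", "icp", "target", True, True, True, False, True, False, False, 0)
# The coordinator: IC at a small agency, no signals, maximum engagement.
coordinator = Lead("ic", "outside", "adjacent", False, False, False, False, False, True, True, 6)

print(total_score(vp))           # 70 — 40 fit + 30 timing, before any website activity
print(total_score(coordinator))  # 25 — engagement capped, no fit or timing to back it up
```

Note that the ranking is decided entirely by fit and timing here: the VP clears 40 points without a single page view, while the coordinator's six downloads collapse into the 5-point content cap.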

How AI Makes This Practical

The obvious objection to enrichment-first scoring is: "enriching every lead takes too long." And with manual research, that's true. You can't have reps Googling every inbound lead's company to check their funding stage and hiring activity. That's hours of work per day.

But this is exactly what AI agents are built for. When a new lead comes in — a form fill, a demo request, a free trial signup — an enrichment agent automatically pulls their LinkedIn profile, company data, recent news, hiring activity, and technology indicators. A decision-maker finder confirms whether this person has purchasing authority. The enrichment happens in minutes, not hours.

The result is that every single lead that enters your system is immediately scored on fit and timing, not just on whether they clicked a link. Your sales team gets a prioritized queue where the top leads are genuinely the most likely to buy — based on who they are and what's happening at their company — rather than who's been most active on your website.
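The intake flow described above can be sketched in a few lines. This is a hypothetical skeleton: `enrich()` stands in for whatever enrichment agent or data provider you run, stubbed here with static data so the sketch runs on its own, and the ranking key shows fit and timing driving the sort with engagement only breaking ties.

```python
def enrich(lead: dict) -> dict:
    # Stand-in for the enrichment agent: in production this would pull
    # firmographics, funding news, hiring activity, and tech-stack data.
    # Static lookup with made-up values so the sketch is self-contained.
    firmographics = {
        "vp_sales@saasco.example":   {"fit": 40, "timing": 25, "engagement": 0},
        "coord@agency.example":      {"fit": 5,  "timing": 0,  "engagement": 20},
    }
    return {**lead, **firmographics[lead["email"]]}

def prioritized_queue(raw_leads: list[dict]) -> list[dict]:
    enriched = [enrich(lead) for lead in raw_leads]
    # Fit + timing decide the ranking; engagement is only a tiebreaker.
    return sorted(enriched,
                  key=lambda l: (l["fit"] + l["timing"], l["engagement"]),
                  reverse=True)

queue = prioritized_queue([
    {"email": "coord@agency.example"},
    {"email": "vp_sales@saasco.example"},
])
print(queue[0]["email"])  # the VP lands at the top despite zero engagement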

The teams I've seen implement this approach consistently report two things: sales accepts a higher percentage of marketing-generated leads (because the quality is genuinely better), and the leads that sales does accept convert to pipeline at 2-3x the rate of activity-scored MQLs.

The "So What?"

Lead scoring is supposed to answer one question: "which leads should sales call first?" Traditional activity-based scoring answers a different question: "which leads have spent the most time on our website?" Those two questions produce very different answers.

Rebuild your scoring model around enrichment and signals — who the person is, what their company looks like, and what buying indicators are present — and cap engagement as a secondary tiebreaker. The result is a scoring system that sales actually trusts, because it finally reflects what every experienced rep already knows: a VP at a funded company who visited once is worth more than a marketing coordinator who visited a hundred times.
