
Build a lead scoring model (Profile + Engagement), add Deductions, visualize in a matrix, and export CSV/JSON for implementation.
If your pipeline feels noisy, you don't need more leads. You need a reliable way to separate sales-ready prospects from everyone else — without relying on gut feel or whoever shouted loudest in last week's sync.
This free lead scoring model builder helps you create a complete, exportable lead scoring framework in minutes. It works across two core scoring dimensions — Profile Fit (who they are) and Engagement (what they've done) — and lets you add Deductions to filter out false positives before they waste sales time.
When you're done, you'll have a working model with scoring criteria, weights, MQL/SQL thresholds, and a lead scoring matrix ready to hand off to your RevOps, marketing ops, or CRM admin for implementation in HubSpot, Salesforce, Marketo, Pardot, or your existing stack. If you're still evaluating which platform to implement in, our Eloqua vs HubSpot vs Braze buyer's guide breaks down how each handles lead scoring and routing.
A lead scoring model is a structured system for assigning point values to leads based on two things: how closely they match your ideal customer profile (ICP) and how strongly they're signaling buying intent. The output is a score — or pair of scores — that tells your team who to prioritize, who to nurture, and who to filter out.
A well-designed lead scoring model does three things:

- Prioritizes sales-ready leads so reps work the best accounts first
- Identifies leads worth nurturing until they show stronger intent
- Filters out poor-fit leads before they consume sales time
A good lead scoring model doesn't try to predict revenue with precision. Its job is to improve routing decisions consistently, at scale. When it's working, it feeds directly into the marketing automation workflows that handle follow-up, nurturing, and handoff to sales — so the right action happens automatically at the right time.
Use the builder above to configure your model, then follow this process to turn it into a repeatable system.
Before assigning a single point, agree on what qualified actually means for your team. A useful starting template: "A sales-ready lead is an ICP account with buying authority that has shown high-intent behavior in the last 14 days."
That one sentence prevents the most common scoring failure: rewarding activity that looks good on paper but has no correlation with pipeline.
Most lead scoring models fail because they collapse everything into a single blended number. Two-dimensional scoring — Fit and Engagement separately — is operationally clearer: Fit tells you whether an account is worth pursuing at all, while Engagement tells you whether it's ready to buy now.
This is why the matrix matters. You can treat "high-fit / low-engagement" very differently from "low-fit / high-engagement." A single score hides that distinction and makes routing decisions harder to defend.
If you're building your first model, use a preset (B2B SaaS, Services, or E-commerce) to move quickly. If you already have a process, start blank and rebuild it in a way that's exportable and shareable with your team.
Profile Fit captures stable, reliable signals about whether a lead matches your ideal customer. Start with the most defensible attributes:

- Industry and company size
- Role and seniority (buying authority)
- Geography or serviceable region
Keep Fit criteria short. If a sales rep wouldn't use a signal to qualify a lead in a discovery call, it probably shouldn't carry heavy weight. One useful proxy: review your closed-won accounts and look for the firmographic patterns that show up consistently. That's your Fit model in its earliest form.
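The rule-based approach above can be sketched in a few lines. The criteria and point values here are illustrative assumptions, not prescriptions; derive yours from the firmographic patterns in your own closed-won accounts.

```python
# Minimal sketch of rule-based Profile Fit scoring.
# Criteria and point values are illustrative examples only.

def fit_score(lead: dict) -> int:
    score = 0
    if lead.get("industry") in {"saas", "software"}:            # ICP industry
        score += 20
    if 50 <= lead.get("employees", 0) <= 500:                   # target company size
        score += 15
    if lead.get("seniority") in {"director", "vp", "c-level"}:  # buying authority
        score += 15
    if lead.get("region") in {"us", "ca", "uk"}:                # serviceable geography
        score += 10
    return score

lead = {"industry": "saas", "employees": 120, "seniority": "vp", "region": "us"}
print(fit_score(lead))  # 60
```

Each rule maps directly to a field your CRM already stores, which is what makes Fit criteria easy to defend and easy to implement.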
Engagement scoring captures behavioral signals that correlate with revenue. Layer these in after you've locked your Fit criteria. The key is tying engagement scoring to your actual marketing automation workflows — so point values fire automatically based on real behavior, not manual entry.
Avoid over-weighting vanity behavior. One blog read does not indicate purchase intent — unless your sales cycle is heavily content-led and you have data to support it.
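In practice, engagement scoring is a lookup from behavior to points, where each behavior is an event your marketing automation platform can fire automatically. The point values below are assumptions for illustration; note the deliberately low weight on a single blog read.

```python
# Illustrative engagement point rules keyed to automatable behaviors.
# Values are examples only; weight behaviors by their observed
# correlation with pipeline, not by how impressive they look.

ENGAGEMENT_POINTS = {
    "demo_request": 30,       # high intent
    "pricing_page_view": 10,
    "webinar_attended": 8,
    "email_click": 3,
    "blog_read": 1,           # low weight: weak intent on its own
}

def engagement_score(events: list[str]) -> int:
    return sum(ENGAGEMENT_POINTS.get(e, 0) for e in events)

print(engagement_score(["demo_request", "pricing_page_view", "blog_read"]))  # 41
```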
Deductions are the quality control layer of your model. They prevent high engagement from inflating the score of the wrong leads. Common deductions include:

- Wrong geography or unserviceable region
- Competitor email domains
- Spam or junk-submission behavior
A good deduction is stronger than any single engagement action. It exists to catch false positives before they reach Sales.
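Deductions can be modeled as negative rules applied after Fit and Engagement. The disqualifying signals below mirror the examples in the text (wrong geography, competitor domains, spam behavior); the penalty values and the competitor domain list are hypothetical.

```python
# Deductions as negative rules applied after Fit + Engagement scoring.
# Penalty values and the domain list are illustrative assumptions.

COMPETITOR_DOMAINS = {"rivalco.example", "otherco.example"}  # hypothetical

def deductions(lead: dict) -> int:
    penalty = 0
    if lead.get("region") not in {"us", "ca", "uk"}:  # wrong geography
        penalty -= 25
    domain = lead.get("email", "").split("@")[-1]
    if domain in COMPETITOR_DOMAINS:                  # competitor domain
        penalty -= 50
    if lead.get("bounce_rate", 0) > 0.5:              # spam-like behavior
        penalty -= 30
    return penalty

lead = {"region": "de", "email": "a@rivalco.example", "bounce_rate": 0.1}
print(deductions(lead))  # -75
```

Note that a single deduction (-50 for a competitor domain) outweighs any single engagement action, which is exactly the point: it catches false positives before they reach Sales.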
Thresholds convert your scores into workflow triggers:

- MQL threshold: the score at which a lead becomes marketing-qualified and enters the sales development queue
- SQL threshold: the score at which a lead is routed to sales for immediate follow-up
If you don't have historical data yet, set conservative thresholds and revise after 2–4 weeks of outcomes. A practical heuristic: a threshold is working when raising it reduces volume without reducing pipeline contribution.
Understanding where these thresholds land relative to revenue is also the foundation of measuring marketing automation ROI — because once scoring is live, you can tie MQL volume and MQL-to-SQL rates directly back to your automation investment.
The scoring matrix is where your lead scoring model becomes operational. Each quadrant maps to a different action:
| Profile / Engagement | High Engagement | Low Engagement |
|---|---|---|
| High Profile Fit | Immediate sales follow-up | Nurture + retarget, SDR light-touch |
| Low Profile Fit | Qualify carefully; consider self-serve | Suppress or long-term nurture |
Documenting routing logic by quadrant gives your RevOps team a clear spec — not just a number to interpret.
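The quadrant table above translates directly into routing logic. A minimal sketch, assuming 50-point cutoffs for "high" on each axis (substitute your own bands):

```python
# Routing from the Fit x Engagement matrix. The quadrant actions come
# from the table above; the 50-point cutoffs are illustrative.

def route(fit: int, engagement: int, hi_fit: int = 50, hi_eng: int = 50) -> str:
    if fit >= hi_fit and engagement >= hi_eng:
        return "Immediate sales follow-up"
    if fit >= hi_fit:
        return "Nurture + retarget, SDR light-touch"
    if engagement >= hi_eng:
        return "Qualify carefully; consider self-serve"
    return "Suppress or long-term nurture"

print(route(fit=70, engagement=20))  # Nurture + retarget, SDR light-touch
```

Keeping the two scores as separate inputs, rather than summing them first, is what lets this function treat "high-fit / low-engagement" differently from "low-fit / high-engagement."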
Export your model as CSV or JSON and hand it off as an implementation spec. The typical implementation path:

1. Create scoring properties or fields in your CRM or marketing automation platform
2. Translate each criterion and point value from the export into scoring rules
3. Configure the MQL/SQL thresholds as lifecycle stage triggers
4. Build the routing workflows that act on each threshold and matrix quadrant
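To make the handoff concrete, here is a sketch of what a JSON export spec might contain. The field names and schema here are assumptions for illustration; your builder's actual export format may differ.

```python
# Illustrative JSON implementation spec for a lead scoring model.
# Field names and schema are assumptions, not the builder's actual format.
import json

model = {
    "fit_criteria": [
        {"signal": "industry = SaaS", "points": 20},
        {"signal": "employees 50-500", "points": 15},
    ],
    "engagement_criteria": [
        {"signal": "demo_request", "points": 30, "cap": 30, "recency_days": 14},
        {"signal": "pricing_page_view", "points": 10, "cap": 20, "recency_days": 7},
    ],
    "deductions": [
        {"signal": "competitor_domain", "points": -50},
    ],
    "thresholds": {"mql": 50, "sql": 80},
}

print(json.dumps(model, indent=2))
```

A spec in this shape gives your RevOps or CRM admin everything needed to build scoring properties and workflow rules without interpreting a spreadsheet.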
If you're still evaluating which platform best fits your stack, our comparison of Eloqua, HubSpot, and Braze covers how each handles lead scoring natively, what requires workarounds, and which is best suited to different team sizes and sales motions.
If you want to keep implementation simple: build Fit and Engagement as separate fields and route from the matrix. That's often cleaner and more actionable than a single combined score — especially for teams that are implementing scoring for the first time.
Most scoring models fail because teams build them once and never revisit them. The model becomes stale as buyer behavior, product positioning, and channel mix evolve. Treat calibration as a standing process, not a one-time setup.
Run a 2–4 week calibration cycle. During the first month, monitor your MQL-to-SQL conversion rate, your SQL-to-meeting-booked rate, and — critically — identify false positives (high-score leads that never became pipeline) and false negatives (low-score leads that did).
Add guardrails to prevent score inflation. Cap points for repetitive behaviors so that 10 pricing page visits don't carry the same weight as a demo request. Use recency windows (typically 7–14 days) to ensure old intent doesn't keep a lead artificially warm.
Build a Sales feedback loop. If your sales team doesn't trust the model, they won't use it. A simple weekly question works: "Which scored leads felt wrong this week, and why?" Document changes so the model improves over time and the logic stays visible to everyone who relies on it.
One of the clearest signs your calibration is working: conversion rates by score band start stabilizing, and Sales stops going around the queue. If you're tracking the right metrics, this shows up directly in your marketing automation ROI reporting — you'll see MQL quality scores tighten alongside pipeline velocity.
Your exported model is a spec. The platform you implement it in determines how much of the logic you can automate and how much requires manual configuration, but the pattern is broadly the same everywhere: create the scoring fields, translate the exported rules and point values, then wire the thresholds into routing workflows.
Choosing the right platform matters here. Different tools handle lead scoring very differently — HubSpot's native scoring is straightforward for smaller teams, while Eloqua offers more granular control for complex enterprise setups. Our Eloqua vs HubSpot vs Braze comparison walks through exactly where each platform's scoring capabilities begin and end.
Once implemented, lead scoring doesn't live in isolation. It feeds into automated workflows that trigger nurture sequences, route to the right rep, and escalate based on score changes — which is why getting the scoring logic right before you build the automation matters.
Ready to turn this model into a live system? Portage's marketing services team can implement your lead scoring end-to-end — tracking setup, scoring rules, routing workflows, and reporting — so scoring becomes a real operational system, not just a spreadsheet.
A lead scoring model assigns point values to leads based on Profile Fit (ICP alignment) and Engagement (buying intent), so teams can prioritize follow-up and route leads consistently without relying on judgment calls.
Fit measures whether a lead matches your ideal customer profile based on stable attributes like role, company size, and industry. Engagement measures whether they're showing active buying signals right now. Strong models separate these into two scores and route from the intersection.
There's no universal number — the right threshold is one where your MQL-to-SQL conversion rate meaningfully improves above that score band, and your sales team agrees the leads coming through are workable. Set conservatively at first, then adjust with data.
A Fit × Engagement matrix is usually clearer operationally. It prevents "high engagement, wrong fit" leads from being treated as sales-ready, and gives your team a more actionable routing framework than a single blended number.
Deductions are negative point values assigned to disqualifying signals — wrong geography, competitor domains, spam behavior. They reduce false positives and improve the overall quality of what reaches Sales.
Score decay (or recency weighting) reduces point values over time so that old intent signals don't keep a lead perpetually warm. It's especially important in longer sales cycles where engagement from months ago has little bearing on current purchase intent.
Review weekly for the first 2–4 weeks, then move to monthly reviews. Any significant change to your product, pricing, ICP, or channel mix should trigger a fresh review.
Yes. Export the model as a CSV or JSON spec, then build scoring properties, workflow rules, and lifecycle stage triggers in your CRM or marketing automation platform based on the criteria and thresholds you've defined. See our platform comparison guide if you're evaluating which tool to use.
Not to start. Most teams get strong, measurable results with a well-designed Fit + Engagement model and a consistent calibration process. Predictive AI adds value later — once you have enough historical outcome data to train on — but it's not a substitute for getting the fundamentals right first.


