What gets measured gets managed. That principle applies to courtroom strategy, marketing spend, and staff performance. It applies with particular force to intake, the function that determines how many cases your firm signs. If you are not scoring intake calls, you are managing intake by intuition, and intuition is not a system.
A well-designed intake scorecard does three things: it defines what good performance looks like, creates a consistent standard across all coordinators, and gives managers a concrete basis for coaching. This article walks through the process of building one from scratch.
Before you build a rubric, you need a clear answer to this question: what does an excellent intake call look like at your firm?
Pull five to ten calls that ended in a signed client. Listen to them. What happened? How did the coordinator open? How did they transition between topics? How did they handle objections? How did they close? Now pull five to ten calls that ended without a signed client, where the caller seemed qualified and interested. What went differently?
The comparison will surface the patterns that your scorecard should capture. The behaviors that consistently appear in successful calls should be the behaviors you score. The absences that consistently appear in unsuccessful calls should be the failure modes you flag.
This analysis takes a few hours. It is time well spent because a scorecard built on your actual call data will be more relevant and actionable than a generic template.
A useful intake scorecard uses a simple, consistent rating scale; a 1-to-5 scale, anchored with concrete behavioral descriptions, works well.
For each criterion on the scorecard, define what each score level looks like in concrete behavioral terms. “The coordinator explained contingency fees” is a binary criterion. A rubric criterion is richer: a 5 means the coordinator explained contingency fees clearly, confirmed understanding, and linked the fee structure to the caller’s situation. A 3 means the coordinator mentioned contingency but did not confirm understanding. A 1 means the coordinator did not address fees at all or made the caller more confused.
Concrete behavioral descriptions make scoring consistent across different managers and reduce the subjective judgment in the process.
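To make the idea concrete, the contingency-fee rubric above can be represented as a small lookup table. This is only an illustrative sketch: the `FEE_RUBRIC` structure, the `anchor_for` helper, and the fall-back-to-the-nearest-lower-anchor rule are assumptions, not a prescribed tool.

```python
# Hypothetical rubric anchors for one criterion, taken from the
# contingency-fee example above; a scorer picks the closest description.
FEE_RUBRIC = {
    5: "Explained contingency fees clearly, confirmed understanding, "
       "and linked the fee structure to the caller's situation.",
    3: "Mentioned contingency but did not confirm understanding.",
    1: "Did not address fees, or left the caller more confused.",
}

def anchor_for(score: int) -> str:
    """Return the defined behavioral anchor at or below the given score."""
    defined = max(level for level in FEE_RUBRIC if level <= score)
    return FEE_RUBRIC[defined]

print(anchor_for(4))  # a 4 falls back to the level-3 anchor
```

Writing every criterion down this way is what keeps two different managers from scoring the same call two different ways.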
A good intake scorecard covers ten dimensions of call performance. Here is what each criterion measures:
**1. Opening Quality.** Did the coordinator open with warmth, clarity, and a clear firm identification? Did they invite the caller to share their situation? Did the first 15 seconds establish a tone of competence and care?

**2. Active Listening and Empathy.** Did the coordinator acknowledge the caller’s emotional state? Did they reflect back what they heard? Did the caller feel heard before the questioning began?

**3. Qualification Questions.** Did the coordinator ask the key qualifying questions for this practice area? Did they cover liability, damages, timeline, and insurance status? Did they gather the information needed to assess the case?
**4. Information Accuracy.** Did the coordinator provide accurate information about the firm’s services, fees, and process? Were there any misstatements or omissions that could create confusion or liability?

**5. Objection Handling.** When the caller raised objections (price, hesitation, need to think), did the coordinator address them directly and confidently? Did they use a structured response or did they back off?

**6. Contingency Fee Explanation.** Did the coordinator explain the fee structure clearly? Did they confirm the caller’s understanding? Did they present fees with confidence rather than apology?

**7. Close Attempt.** Did the coordinator ask for a commitment? Did they attempt to schedule a consultation before the call ended? Did they create a clear next step?

**8. Urgency Communication.** Did the coordinator communicate any time-sensitive elements of the case (statute of limitations, evidence preservation, medical treatment windows)? Did the caller leave with a sense that acting promptly matters?

**9. Follow-Up Plan.** If the caller did not sign or schedule, did the coordinator establish a specific follow-up plan? Did they confirm a date and time for the next contact?

**10. Call Professionalism.** Was the call free from distracting background noise, excessive filler words, or unprofessional tone? Did the coordinator stay calm under pressure? Did the call end cleanly with a clear next step stated?
Not all criteria are equally important. The close attempt, for example, is more directly tied to conversion than call professionalism. Consider weighting your criteria to reflect their relative importance.
A simple weighting approach: score most criteria at 1x, and double-weight the three most directly tied to conversion, which are qualification questions, objection handling, and the close attempt.
With this weighting, the maximum score on a ten-criterion, five-point-per-criterion scorecard with three double-weighted criteria is (7 × 5) + (3 × 10) = 65 points. A coordinator who scores 52 out of 65 is performing at 80%. That benchmark is concrete, comparable, and trackable over time.
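The arithmetic above can be sketched in a few lines. The criterion keys and the `weighted_total` helper are hypothetical names, but the weights mirror the sample scorecard in this article:

```python
# Weights mirror the sample scorecard: three conversion-critical
# criteria count double, the other seven count once.
WEIGHTS = {
    "opening_quality": 1, "active_listening": 1, "qualification": 2,
    "information_accuracy": 1, "objection_handling": 2, "fee_explanation": 1,
    "close_attempt": 2, "urgency": 1, "follow_up_plan": 1, "professionalism": 1,
}

def weighted_total(scores: dict) -> tuple:
    """Return (weighted points, percentage) for one scored call.

    Each raw score is 1-5; double-weighted criteria count twice,
    so the maximum is (7 * 5) + (3 * 10) = 65 points.
    """
    max_points = sum(5 * w for w in WEIGHTS.values())
    points = sum(scores[c] * w for c, w in WEIGHTS.items())
    return points, round(100 * points / max_points, 1)

# Example: straight 4s across the board land at 52 of 65 points.
scores = {criterion: 4 for criterion in WEIGHTS}
print(weighted_total(scores))  # (52, 80.0)
```

Keeping the computation in one place, whatever tool you use, means every manager derives the same percentage from the same raw scores.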
A scorecard is only as valuable as the coaching it enables. Here is how to use scores effectively:
Score at least two to three calls per coordinator per week. More is better, but two to three calls reviewed with specific feedback will produce more improvement than twenty calls reviewed superficially.
Feedback loses most of its impact after 48 hours. The coordinator needs to be able to connect the feedback to a specific call they remember. Delayed feedback feels abstract and lands without force.
Start every coaching session with what the coordinator did well. Name specific moments: “In the third minute, when the caller mentioned they were worried about the cost, you responded exactly the way we train for. That was excellent.” Then address the gaps with equal specificity.
Do not give a coordinator five things to work on at once. Pick the one behavior change that will have the biggest impact and focus there exclusively until it is consistently showing up in their scores. Then move to the next one.
A single call score is a data point. A trend over eight weeks is actionable information. A coordinator whose objection-handling scores are consistently low has a training need. A coordinator whose close attempt scores dropped in the last month may be experiencing burnout or demoralization. Trends tell you what individual scores cannot.
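As a sketch of that kind of trend check, assuming weekly average scores per criterion are kept in a simple history; the `flag_training_needs` helper, the four-week window, and the 3.0 threshold are illustrative assumptions, not fixed rules:

```python
# Hypothetical trend check: flag criteria whose recent average lags.
from statistics import mean

def flag_training_needs(history, recent_weeks=4, threshold=3.0):
    """history maps criterion -> list of weekly average scores (oldest first).

    Returns the criteria whose average over the last recent_weeks
    falls below threshold, with that recent average.
    """
    flags = {}
    for criterion, weekly in history.items():
        recent = weekly[-recent_weeks:]
        if recent and mean(recent) < threshold:
            flags[criterion] = round(mean(recent), 2)
    return flags

history = {
    "objection_handling": [3.2, 3.0, 2.8, 2.7, 2.5, 2.6, 2.4, 2.5],
    "close_attempt":      [4.1, 4.0, 4.2, 4.1, 4.0, 4.1, 3.9, 4.0],
}
print(flag_training_needs(history))  # {'objection_handling': 2.5}
```

In this example, objection handling has drifted below the threshold over the last four weeks and surfaces as a training need, while the stable close-attempt scores do not.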
| Criterion | Weight | Score (1-5) | Notes |
|---|---|---|---|
| Opening Quality | 1x | | |
| Active Listening and Empathy | 1x | | |
| Qualification Questions | 2x | | |
| Information Accuracy | 1x | | |
| Objection Handling | 2x | | |
| Contingency Fee Explanation | 1x | | |
| Close Attempt | 2x | | |
| Urgency Communication | 1x | | |
| Follow-Up Plan | 1x | | |
| Call Professionalism | 1x | | |
| **Total (weighted)** | | /65 | |
Firms that build scorecards often make the same implementation errors. The most common:
Scoring too infrequently. A quarterly review of ten calls per coordinator is not enough data to identify trends or validate improvement. Score weekly.
Failing to share scores with coordinators. Scores that live in a spreadsheet but are never discussed are useless. The coordinator needs to see their own data to understand their performance.
Using scores as punishment rather than coaching. If coordinators associate the scorecard with negative consequences rather than growth, they will perform for the scorecard rather than for the caller. Frame scores as a coaching tool, not a performance management weapon.
Not updating the scorecard as the firm evolves. Your practice areas, your scripts, and your conversion goals will change over time. The scorecard should evolve with them.
eNZeTi builds intake scoring into the live call process, surfacing real-time scores and coaching prompts without waiting for a weekly review. To see how automated scoring and coaching work together to improve conversion, visit enzeti.com.