Most email verification benchmarks publish vendor-reported accuracy claims and call it a day. We ran a real one.
We took 1,000 Real Estate decision-maker emails from Apollo, ran the identical list through six providers on the same day, then sent actual test emails to every "valid" verdict and counted the bounces. The results show a wide gap between what verifiers claim on their pricing pages and what they deliver on a B2B list with heavy catch-all coverage.
The full raw data is available as a live Google Sheet (and downloadable Excel) at the bottom of this post. Every number on this page comes from that file.
TL;DR
- WizLeads delivered the lowest bounce rate among the high-approval tools at 0.60%, with 852 of 1,000 emails classified as valid.
- ZeroBounce approved the most emails (888) but recorded 6.6x as many bounces as WizLeads (33 vs 5, a 3.70% rate vs 0.60%).
- LeadMagic came closest to WizLeads on accuracy (822 valid, 7 bounces) but costs roughly 5x more per verification.
- NeverBounce, MillionVerifier, and Debounce kept bounces low by rejecting roughly a third of the list (289 to 330 emails) as invalid. NeverBounce also flagged 79 emails as "unknown."
- The real differentiator is the combination of low bounces AND a high correctly-classified valid pool. Only two tools sit in that quadrant.
All six providers on the same 1,000 emails
| Provider | Total | Valid | Bounces | Bounce % | Invalid | Notes |
|---|---|---|---|---|---|---|
| WizLeads | 1,000 | 852 | 5 | 0.60% | 148 | Best-in-class catch-all and SEG resolution |
| ZeroBounce | 1,000 | 888 | 33 | 3.70% | 112 | Approves more, but bounces 6.6x more |
| LeadMagic | 1,000 | 822 | 7 | 0.90% | 178 | Close to WizLeads on accuracy, ~5x the price |
| NeverBounce | 1,000 | 711 | 4 | 0.60% | 289 | Plus 79 emails flagged "unknown" |
| MillionVerifier | 1,000 | 709 | 7 | 1.00% | 291 | Doesn't resolve catch-all to valid/invalid |
| Debounce | 1,000 | 670 | 1 | 0.10% | 330 | Lowest bounce rate, highest false-invalid rate |
Bounce counts come from real send tests, not vendor estimates. Invalid counts reflect each provider's own verdict.
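The bounce percentages follow directly from the raw counts; a quick sketch that reproduces the table's rates (rounded to one decimal place, as displayed above) from the valid and bounce counts:

```python
# Bounce rate = real-send bounces / emails the provider approved as valid.
# Counts taken from the comparison table above.
results = {
    "WizLeads":        (852, 5),
    "ZeroBounce":      (888, 33),
    "LeadMagic":       (822, 7),
    "NeverBounce":     (711, 4),
    "MillionVerifier": (709, 7),
    "Debounce":        (670, 1),
}

for name, (valid, bounces) in results.items():
    print(f"{name:16s} {bounces / valid * 100:.1f}%")
```

Note that the denominator is each provider's own approved pool, not the full 1,000, which is why two tools with the same bounce count can show different rates.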
Methodology
The source list was 1,000 decision-maker emails for US Real Estate agencies, exported from Apollo in April 2026. All six providers received the identical CSV, same day, within a two-hour window. None of them knew the list was a benchmark.
After receiving each provider's verdicts, we isolated the emails each one approved as valid and sent a plain-text test message from a clean Google Workspace account to every approved address. We counted hard bounces and mailbox-does-not-exist responses over 72 hours. Soft bounces that resolved inside the window were not counted.
The sending account had no prior history with any domain on the list, so the bounce rates reflect actual mailbox status rather than reputation-based filtering.
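The hard/soft distinction in the counting rule can be sketched as a simple classifier over SMTP reply codes. This is an illustration of the rule described above, not the exact script used in the benchmark:

```python
def is_hard_bounce(reply_code: int, enhanced_code: str = "") -> bool:
    """Permanent 5xx failures count as hard bounces; 5.1.1 is the
    classic 'mailbox does not exist' enhanced status. Transient 4xx
    deferrals (soft bounces) are excluded, matching the 72-hour rule."""
    if 500 <= reply_code <= 599:
        return True
    return enhanced_code.startswith("5.1.")

is_hard_bounce(550, "5.1.1")  # True: mailbox does not exist
is_hard_bounce(451, "4.7.1")  # False: temporary deferral
```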
Why a Real Estate list
Real Estate decision-makers are a hard test case for email verification. Many run Microsoft 365 tenants with catch-all forwarding, which means the domain accepts every address regardless of whether the mailbox exists. Standard verification tools handle catch-all domains poorly, which is exactly the scenario this benchmark surfaces.
Our list contained 212 catch-all addresses (21.2% of the sample) and 159 addresses behind a Secure Email Gateway (15.9%). Together, that is more than a third of the list sitting in the difficult-to-verify category. On a Fortune 500 list the catch-all rate would be lower and the spread between tools would compress; a typical SMB B2B list looks much closer to this one.
The catch-all gap
Catch-all domains are where verifiers split into two camps. Three tools resolve catch-all addresses into clean valid/invalid verdicts. Three tools do not.
| Provider | Catch-all handling |
|---|---|
| WizLeads | Resolves to valid/invalid |
| ZeroBounce | Resolves to valid/invalid |
| LeadMagic | Resolves to valid/invalid |
| NeverBounce | Flags 137 as "catchall," no resolution |
| MillionVerifier | Flags 170 as "catch_all," no resolution |
| Debounce | Flags 201 as "Accept All," no resolution |
The three tools that do not resolve catch-alls (NeverBounce, MillionVerifier, Debounce) compensate by being more aggressive on the invalid label. That is why their valid pools land in the 670 to 711 range while the other three sit between 822 and 888.
This is the single most important variable in the benchmark. If your list has catch-all coverage above 10%, picking a tool that resolves catch-alls is the difference between using your full pipeline and throwing a third of it away.
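To make the catch-all problem concrete: the standard detection trick (not any specific vendor's proprietary method) is to probe the domain's MX server with a random, almost certainly nonexistent address. The `probe_rcpt` helper and the `verify@example.com` sender below are illustrative assumptions:

```python
import smtplib
import uuid

def probe_rcpt(mx_host: str, address: str) -> int:
    """Ask the MX server whether it would accept `address`; returns
    the SMTP reply code to RCPT TO. Sketch only: real verifiers add
    MX lookup, retries, and greylisting handling."""
    with smtplib.SMTP(mx_host, 25, timeout=10) as smtp:
        smtp.helo()
        smtp.mail("verify@example.com")  # assumed probe sender
        code, _ = smtp.rcpt(address)
        return code

def random_probe(domain: str) -> str:
    """A mailbox that almost certainly does not exist on `domain`."""
    return f"{uuid.uuid4().hex}@{domain}"

def classify(code_for_target: int, code_for_random: int) -> str:
    """If the server accepts a random nonexistent mailbox, the domain
    is catch-all and a per-address SMTP check proves nothing."""
    if code_for_random == 250:
        return "catch-all"
    return "valid" if code_for_target == 250 else "invalid"
```

Resolving a catch-all beyond this point, i.e. deciding whether the specific mailbox exists, requires signals beyond a single SMTP probe, which is exactly where the three resolving tools differ from the three that stop at a "catch-all" flag.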
Per-provider breakdown
WizLeads
852 valid, 5 bounces (0.60%), 148 invalid.
Lowest bounce rate AND biggest usable list. WizLeads resolves every catch-all into a valid or invalid verdict (212 detected here). It also flags addresses behind five different SEGs that no other tool catches. Best for cold email teams who can't ship bounces but also can't afford to discard a third of their list.
ZeroBounce
888 valid, 33 bounces (3.70%), 112 invalid.
Approves the most emails of any tool, but bounces 6.6x more than WizLeads. The pattern: it marks Google catch-all mailboxes as valid based on the domain accepting mail, even when the specific mailbox doesn't exist. 33 bounces in one sample is enough to wreck sender reputation inside two or three days. Use only if you have spare warmed-up domains to burn.
LeadMagic
822 valid, 7 bounces (0.90%), 178 invalid.
Closest tool to WizLeads in this benchmark. Resolves catch-alls cleanly. Bounce rate is fully acceptable for cold email. The catch is the price: roughly 5x more per verification than WizLeads at the same volume tier. Use it if you're already on it; benchmark your own list against WizLeads before staying.
NeverBounce
711 valid, 4 bounces (0.60%), 289 invalid, 79 unknown.
Same low bounce rate as WizLeads, but it gets there by rejecting 141 more deliverable leads per 1,000 and flagging another 79 as "unknown," a verdict you paid for but cannot use. Use only when shipping one bad email costs more than throwing away roughly 14% of the list.
MillionVerifier
709 valid, 7 bounces (1.00%), 291 invalid, plus 170 flagged as catch_all.
Same shape as NeverBounce: low bounces by being aggressive on the invalid label. Doesn't resolve catch-alls. Pricing is competitive at volume. Fine for lists with low catch-all coverage; not ideal for B2B decision-makers.
Debounce
670 valid, 1 bounce (0.10%), 330 invalid, plus 201 flagged as Accept All.
Lowest bounce rate in the test by a clear margin. The cost: it hands back 531 of 1,000 addresses without a usable verdict (330 rejected as invalid plus 201 flagged "Accept All"). Use only if zero bounces matter more than half your list.
SEG detection (where the gap widens)
A Secure Email Gateway is a filtering layer that sits in front of a corporate inbox. Proofpoint, Mimecast, Barracuda, Appriver, and Sophos are the five common ones. SEGs respond to verification probes differently than the actual mailbox does, which is where most verifiers produce false positives on protected domains.
In our test list, 159 addresses (15.9% of the sample) sat behind one of the five major SEGs.
| SEG provider | Count | Share |
|---|---|---|
| Proofpoint | 66 | 6.6% |
| Barracuda | 38 | 3.8% |
| Appriver | 27 | 2.7% |
| Mimecast | 25 | 2.5% |
| Sophos | 3 | 0.3% |
| Total | 159 | 15.9% |
WizLeads identified the specific SEG provider for each of those 159 addresses. None of the other five tools in the benchmark surfaced SEG detection at all. They simply received the gateway's SMTP response and treated it as a normal verdict.
For a B2B campaign, that distinction matters because SEG-protected mailboxes are usually the highest-value targets on the list (mid-market and enterprise buyers, not solo operators). Treating those addresses as a normal "valid" or "risky" verdict misses the chance to route them differently or warm up the sending domain before sending.
The false-invalid trade-off
Bounce rate alone does not tell you which tool is best. The metric that actually matters is the combination of two numbers:
- Bounce rate: how often "valid" verdicts actually bounce on send
- False-invalid rate: how often "invalid" verdicts are actually deliverable
Most comparison posts publish only the first. Ignoring the second has a real cost. A low-bounce, high-false-invalid tool ships clean campaigns but throws away reachable prospects. A high-bounce, low-false-invalid tool uses the full list but damages sender reputation.
In this benchmark:
- Debounce: 0.10% bounce, 330 rejections. High false-invalid rate.
- NeverBounce / MillionVerifier: 0.60% to 1.00% bounce, 289 to 291 rejections. Same shape.
- ZeroBounce: 3.70% bounce, 112 rejections. Inverse trade-off.
- WizLeads: 0.60% bounce, 148 rejections. The only tool sitting in the good quadrant on both axes.
- LeadMagic: 0.90% bounce, 178 rejections. Second-best on this combined metric, with the price caveat.
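The quadrant argument reduces to a filter-then-rank rule: drop every tool whose real-send bounce rate exceeds your tolerance, then take the largest surviving valid pool. A sketch over the benchmark numbers, with a 1% tolerance as an assumed campaign limit:

```python
providers = [
    ("WizLeads",        852, 5),
    ("ZeroBounce",      888, 33),
    ("LeadMagic",       822, 7),
    ("NeverBounce",     711, 4),
    ("MillionVerifier", 709, 7),
    ("Debounce",        670, 1),
]

MAX_BOUNCE_PCT = 1.0  # assumed tolerance; see the FAQ on bounce targets

# Keep tools whose real-send bounce rate is inside tolerance,
# then rank the survivors by the size of the approved pool.
eligible = [
    (name, valid, bounces / valid * 100)
    for name, valid, bounces in providers
    if bounces / valid * 100 <= MAX_BOUNCE_PCT
]
best = max(eligible, key=lambda t: t[1])
print(best[0])  # -> WizLeads (852 valid at 0.59%)
```

Tightening the tolerance to 0.5% would leave only Debounce, which is the trade-off the next section's decision framework makes explicit.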
How to pick the right verifier
Use this decision framework based on what you optimize for:
Lowest possible bounce rate, smaller validated pool acceptable: Debounce (0.10% bounce, 670 valid) or NeverBounce (0.60% bounce, 711 valid). Expect to lose roughly 29 to 33% of your list to invalid verdicts.
Maximum validated volume, willing to absorb a 3 to 4% bounce rate: ZeroBounce (3.70% bounce, 888 valid). Plan for sender reputation impact and rotate domains aggressively.
Low bounces AND high valid-pool retention on B2B catch-all-heavy lists: WizLeads (0.60% bounce, 852 valid). The catch-all and SEG resolution is why this combination is achievable. LeadMagic (0.90% bounce, 822 valid) is the closest alternative if you do not mind paying ~5x more per verification.
Existing MillionVerifier workflow with low-catch-all lists: Stay on MillionVerifier. The low-catch-all part matters; on B2B decision-maker lists the trade-off is worse.
For side-by-side detail on any of these tools against WizLeads, see WizLeads vs ZeroBounce, WizLeads vs NeverBounce, WizLeads vs MillionVerifier, WizLeads vs Debounce, and WizLeads vs LeadMagic.
The broader point
The standard way to pick an email verifier is to read a vendor's accuracy claim on their homepage and trust it. ZeroBounce advertises 99% accuracy. NeverBounce advertises 99%. MillionVerifier advertises 99.9%. In a benchmark where ZeroBounce bounced 33 emails and NeverBounce rejected 289 valid ones, those three identical-looking marketing numbers translate into three completely different campaign outcomes for the same list.
The fix is to run your own test before committing. A 1,000-email benchmark on your actual list profile takes about two hours and resolves more uncertainty than six months of reading vendor pages. The reproducibility instructions are at the bottom of this post if you want to do it.
Frequently asked questions
Which email verifier had the lowest bounce rate in this benchmark?
Debounce had the lowest bounce rate at 0.10%, followed by WizLeads and NeverBounce tied at 0.60%, LeadMagic at 0.90%, MillionVerifier at 1.00%, and ZeroBounce at 3.70%. Bounce rate alone is misleading because three of those tools kept bounces low by rejecting roughly a third of the list as invalid.
Which email verifiers resolve catch-all domains?
WizLeads, ZeroBounce, and LeadMagic all resolve catch-all addresses into valid or invalid verdicts. NeverBounce, MillionVerifier, and Debounce flag them as risky or accept-all and leave the call to the user. WizLeads detected 212 catch-all addresses in the 1,000-email test set, the highest of any tool.
Is ZeroBounce accurate for cold email verification?
ZeroBounce approved 888 of 1,000 emails in our benchmark, more than any other provider. However, 33 of those 888 bounced, a 3.70% bounce rate. ZeroBounce performs poorly on Google catch-all domains where the domain accepts mail but the mailbox does not exist.
Why do NeverBounce and MillionVerifier reject so many valid emails as invalid?
NeverBounce and MillionVerifier flag any address that lands on a catch-all or ambiguous SMTP response as invalid or risky instead of resolving it. On a Real Estate decision-maker list with heavy catch-all coverage, this rejected roughly 290 emails out of 1,000 compared to 148 rejected by WizLeads. NeverBounce also returned "unknown" on 79 emails, a verdict you paid for but cannot use.
What bounce rate should I target for cold email campaigns?
Keep hard bounces under 2%, ideally under 1%. Sustained bounce rates above 5% trigger spam filter downgrades across Google Workspace and Microsoft 365 within 10 to 14 days of sending. The realistic target is sub-1% bounce rate without discarding valid leads.
What is a Secure Email Gateway and why does it matter for verification?
A Secure Email Gateway is a filtering layer like Proofpoint, Mimecast, Barracuda, Appriver, or Sophos that sits in front of corporate inboxes. SEGs respond to verification probes differently than the real mailbox does, which is where most verifiers produce false positives on protected domains. WizLeads detected 159 SEG-protected addresses (15.9% of the test list) and identified the specific gateway in each case. No other tool surfaced SEG detection.
How often should I re-verify my email list?
Re-verify cold email lists every 30 to 60 days if you are actively sending. Lists decay at roughly 2% per month as people change jobs and mailboxes deactivate. For prospecting lists pulled more than 90 days ago, re-verify before the next campaign, not after the first batch of bounces.
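The 2% monthly figure compounds, so a quick sketch shows how much of a list is still deliverable after n months:

```python
def list_remaining(months: int, monthly_decay: float = 0.02) -> float:
    """Fraction of a list still deliverable after `months`,
    assuming ~2% compounding monthly decay."""
    return (1 - monthly_decay) ** months

round(list_remaining(3), 2)   # 0.94 -> ~94% still deliverable
round(list_remaining(12), 2)  # 0.78 -> ~22% gone after a year
```

That compounding is why a 90-day-old list already sits around the 6% mark of dead addresses, well past the 2% bounce ceiling if sent unverified.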
Should I use more than one email verifier in a waterfall setup?
Waterfall verification across multiple providers helps if your primary tool returns a large share of risky or unknown verdicts that need a second check. If your verifier already resolves catch-all and SEG domains directly to valid or invalid, a waterfall adds cost without meaningful accuracy gain.
Reproduce this test on your own list
If you want to run the same benchmark on your list profile:
1. Pick a sample of 500 to 1,000 emails from the list type you actually send to. Real B2B decision-maker lists produce different catch-all rates than consumer or freelancer lists.
2. Run the same CSV through each tool on the same day, ideally within a two-hour window.
3. Separate each tool's "valid" output and send a plain-text test message from a clean Google Workspace or Microsoft 365 sender with no prior history on the target domains.
4. Count hard bounces over 72 hours. Do not count soft bounces that resolve inside the window.
5. Optional but recommended: send a sampled subset of each tool's "invalid" verdicts and measure how many actually deliver. This is the false-invalid check most benchmarks skip.
Step 5 is the one that reveals the real trade-off. Skipping it lets a tool with a low bounce rate and a high false-invalid rate look like the winner.
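The false-invalid check can be sketched as a small sampling routine; `send_and_check` is a hypothetical callable standing in for your actual send-and-wait-72-hours pipeline:

```python
import random

def false_invalid_rate(invalid_emails, send_and_check, sample_size=50):
    """Estimate the false-invalid rate: sample a provider's 'invalid'
    verdicts, send a test message to each, and count how many deliver.
    `send_and_check` is a hypothetical callable that returns True when
    the message lands without a hard bounce."""
    sample = random.sample(invalid_emails, min(sample_size, len(invalid_emails)))
    delivered = sum(1 for email in sample if send_and_check(email))
    return delivered / len(sample)
```

A sample of 50 per provider is enough to distinguish a 5% false-invalid rate from a 30% one, which is the scale of difference this benchmark found.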
Download or browse the raw data
Every email, every verdict, every bounce, every SEG flag from the 1,000-email test is in the spreadsheet below. Two sheets: a Summary tab with the comparison tables on this page, and a Raw Data tab with all 1,000 rows including the catch-all flag, the SEG provider per address, and every verifier's individual verdict.
Download the full benchmark dataset (Excel) or open the live Google Sheet.
This benchmark was run by Vatsal Nigam, Co-Founder of WizLeads. It is one of the six providers tested. The methodology was identical across all providers, the raw data is available above, and the source list is preserved so third parties can re-run verifications independently. Published April 2026.