Referring doctor profitability analysis reveals that the "keep the relationship" argument costs imaging centers tens of thousands per year in unrecoverable drug losses. Data from 433 specialty PET encounters across 5 imaging centers shows that zero out of 21 referring doctors who sent toxic-payer patients generated enough profitable volume to justify absorbing the losses.

Key Takeaway: When we analyzed 433 PET encounters by referring doctor, not a single doctor's profitable referrals covered the toxic-payer losses. The "relationship" argument fails because it confuses two independent decisions: blocking a payer (insurance policy) is not the same as blocking a doctor (relationship decision). A doctor can keep referring — just not patients whose insurance reimburses below drug cost.


The Sacred Cow Nobody Questions

The office manager leans in your doorway. "Dr. Smith sent us 50 patients this year. We can't tell him no on one managed care patient."

You've heard this before. Everyone nods. The front desk schedules the scan. A $3,000 radiotracer gets ordered. The payer reimburses $776. You lose $2,224 before anyone in billing touches the claim.

This is the sacred cow of referral-based medicine: the idea that high-volume referring doctors earn a pass on their toxic-payer patients. The assumption is that the profitable referrals subsidize the unprofitable ones, and the net relationship is worth protecting.

We tested that assumption. We pulled 433 PET encounters across 5 imaging centers, identified every referring doctor who had sent at least one toxic-payer patient, and calculated the net financial value of each referral relationship.

The result: 21 doctors had sent toxic-payer patients. Zero had a positive net relationship value. Not one.

The sacred cow isn't sacred. It's expensive. And nobody had ever done the math.

The $142,000 in toxic-payer losses wasn't a single write-off. It was the cumulative drag across all five centers — hundreds of encounters where a $3,000 drug was ordered for a patient whose insurance was going to reimburse $776. Every one of those encounters was scheduled by someone who knew the payer, knew the procedure, and had never been told to check whether the math worked.

Nobody had told them because nobody had done the analysis. The relationship assumption had replaced the analysis entirely.


What a Per-Referrer P&L Actually Looks Like

Most practices track referral volume. They know Dr. Smith sent 50 patients and Dr. Jones sent 12. They treat the first doctor as more valuable because the number is bigger.

But volume is not value. A doctor who sends 50 patients — 47 profitable and 3 toxic — might destroy more value than a doctor who sends 12 patients, all profitable. The question isn't how many. It's how much.

A per-referrer P&L answers that question. It takes every encounter attributed to a referring doctor, separates them into profitable and toxic categories, and calculates the net financial impact of the relationship.

Here's what it looks like in practice, using anonymized data from our analysis. Drug costs ranged from $2,800 to $6,500 per dose depending on the tracer. Toxic payers — those reimbursing below drug acquisition cost — included a major managed care plan, a regional HMO, and a Medicare Advantage product with deeply discounted radiology rates.

Metric                   Dr. A (19 referrals)   Dr. B (20 referrals)
Total scans              19                     20
Toxic-payer scans        3                      0
Non-toxic profit         -$9,013                $11,752
Toxic drug cost burned   $5,880                 $0
Net relationship value   -$14,893               $11,752
Verdict                  UNPROFITABLE           PROFITABLE

Dr. A sends comparable volume and looks like a strong referral source on paper. But three toxic-payer patients consumed $5,880 in drug costs that were never recovered — and the remaining 16 encounters ran at a combined loss of $9,013 on their own, leaving nothing to offset the burn.

Dr. B sends one more patient. Zero toxic encounters. Every scan contributes positively. The relationship is worth $11,752.

Volume said these doctors were equivalent. Margin said one was destroying $14,893 in value and the other was creating $11,752. A $26,645 spread between two doctors your front desk treats identically.

Now multiply that across your entire referral base. If you have 40 referring doctors and even five of them look like Dr. A, the cumulative drag is six figures. That's not a rounding error. That's a staffing position, an equipment upgrade, or half a year of rent — gone, because nobody separated the volume number from the margin number.

The per-referrer P&L makes it impossible to hide behind volume. It forces the conversation from "how many patients did they send" to "how much money did those patients make or lose." Those are different questions with different answers.


The Relationship Test — A Framework

This isn't a one-off analysis. It's a framework any practice can run monthly. Here's how:

Step 1: Pull all encounters by referring doctor for the analysis period. Every scan, every payer, every dollar.

Step 2: For each doctor, separate encounters into two buckets — non-toxic (payers that reimburse above procedure cost) and toxic (payers that reimburse below procedure cost, particularly below drug acquisition cost).

Step 3: Calculate non-toxic profit. Total revenue minus total cost on the non-toxic encounters. This is the relationship's gross contribution.

Step 4: Calculate toxic drug cost burned. The acquisition cost of drugs used on toxic-payer patients. In specialty imaging, this is the radiotracer — $2,800 to $6,500 per dose ordered and injected before anyone knows if the claim will cover it.

Step 5: Net Relationship Value = non-toxic profit minus toxic drug cost.

Step 6: If positive, the relationship mathematically covers the toxic losses. But even here, the better strategy is to block the payer and keep the relationship — more on that below.

Step 7: If negative, the relationship is net-destructive. The sacred cow is eating your margin.
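Steps 1 through 7 reduce to a few lines of arithmetic. Here's a minimal sketch in Python — the encounter records, payer names, and dollar amounts below are illustrative placeholders, not figures from the data set:

```python
from collections import defaultdict

# Hypothetical toxic-payer list (Step 2) — names are placeholders.
TOXIC_PAYERS = {"Managed Care Plan X", "Regional HMO Y"}

# Illustrative encounters: (referring_doctor, payer, reimbursement, total_cost, drug_cost)
encounters = [
    ("Dr. A", "Commercial PPO",      3900, 3400, 3038),
    ("Dr. A", "Managed Care Plan X",  776, 3400, 3038),
    ("Dr. B", "Medicare",            4100, 3400, 3038),
]

def net_relationship_value(rows):
    """Step 5: non-toxic profit minus toxic drug cost burned."""
    non_toxic_profit = sum(reimb - cost                      # Step 3
                           for _, payer, reimb, cost, _ in rows
                           if payer not in TOXIC_PAYERS)
    toxic_drug_cost = sum(drug                               # Step 4
                          for _, payer, _, _, drug in rows
                          if payer in TOXIC_PAYERS)
    return non_toxic_profit - toxic_drug_cost

# Step 1: group every encounter by referring doctor.
by_doctor = defaultdict(list)
for row in encounters:
    by_doctor[row[0]].append(row)

for doctor, rows in sorted(by_doctor.items()):
    nrv = net_relationship_value(rows)
    print(doctor, nrv, "PROFITABLE" if nrv > 0 else "UNPROFITABLE")
```

Even with one profitable PPO scan, the sketch's Dr. A comes out underwater: a single toxic encounter burns more drug cost than the clean encounter earns — the same pattern the analysis found 21 times over.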

We applied this framework to 433 encounters across 5 centers. Twenty-one doctors had sent at least one toxic-payer patient. Zero — not one — had a positive Net Relationship Value.

The pattern was consistent. The toxic drug costs were so high per encounter ($2,800 to $6,500 each) that even doctors with strong profitable volume couldn't offset a handful of toxic encounters. A single amyloid PET scan on a managed care plan that reimburses $776 against a $3,038 drug cost burns $2,262. That doctor needs five or six strongly profitable scans just to break even on one bad one.

Most didn't have them. And that's the fundamental problem with the relationship defense in high-cost procedure medicine: the losses per toxic encounter are so large — thousands of dollars each — that the profitable encounters would need extraordinary margins to compensate. In a business where even good payers produce margins of $500 to $2,000 per scan, the math rarely works.
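The break-even arithmetic above is worth making explicit. A quick sketch using the article's illustrative figures:

```python
import math

drug_cost = 3038           # amyloid tracer acquisition cost (example above)
toxic_reimbursement = 776  # managed care reimbursement
loss_per_toxic_scan = drug_cost - toxic_reimbursement  # $2,262 burned per bad scan

# How many profitable scans does it take to offset one toxic one,
# across the per-scan margin range cited in the text?
for margin in (400, 500, 2000):
    scans_needed = math.ceil(loss_per_toxic_scan / margin)
    print(f"${margin}/scan margin: {scans_needed} clean scans to offset one toxic scan")
```

At typical margins of $400 to $500 per scan, one toxic encounter eats five or six clean ones; only at the very top of the margin range does the ratio get close to manageable.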

One neurologist in our data set sent 31 total referrals over the analysis period. Twenty-eight were profitable, producing a combined margin of $18,340. Three were toxic, burning $9,114 in unrecoverable drug costs. Net Relationship Value: $9,226 in the hole. A 90% clean referral rate — and still underwater.

If you're running a multi-location imaging center and don't know your per-referrer margin, that's a conversation worth having.


Why Blocking the Payer Doesn't Lose the Doctor

This is where the myth collapses.

The "we'll lose Dr. Smith" argument treats two independent decisions as one:

Decision 1: Block a managed care plan or regional HMO from PET scheduling. This is a payer policy. It applies to all patients regardless of who referred them.

Decision 2: Stop accepting referrals from Dr. Smith. This is a relationship decision. It affects every patient that doctor sends, regardless of their insurance.

These are independent. Completely independent.

When you block a toxic payer, you're telling scheduling: "If a patient carries this insurance, we don't perform PET scans for them." It doesn't matter if Dr. Smith referred them or Dr. Jones referred them. The policy applies to the payer, not the doctor.

Dr. Smith can still refer 50 patients a year. Forty-seven of them have insurance that covers the drug cost and then some. Those patients get scheduled, scanned, and billed exactly as before. The relationship continues.

The only patients affected are the three whose insurance reimburses below drug cost. Those patients need to be redirected to a facility willing to absorb the loss — or the payer needs to renegotiate.

The doctor doesn't choose the patient's insurance. The patient arrives with whatever plan they enrolled in. Blocking the payer doesn't block the doctor. It blocks the financial grenade the doctor unknowingly attached to the referral.

Think of it this way: a restaurant doesn't lose a regular customer by taking one money-losing item off the menu. It just stops serving the dish that costs more to make than it sells for.

Dr. Smith is the regular customer. The toxic payer is the menu item priced below ingredient cost. Stop serving it. Keep the customer.

There's a second reason the myth persists: nobody has ever told the doctor. In our experience, referring doctors don't know which of their patients have toxic insurance for imaging. They write a referral. The patient calls to schedule. The practice performs the scan. The doctor gets a report. At no point does anyone inform the neurologist or oncologist that Patient X's managed care plan reimburses below drug cost.

If you block the payer and one of Dr. Smith's patients calls to schedule a PET scan, the scheduling desk says: "We're not in-network for that plan for this procedure. Here are two facilities that are." The patient reschedules elsewhere. Dr. Smith never hears about it — unless the patient mentions it at the next visit, which happens rarely.

The fear of losing the doctor is based on a phone call that almost never happens. The financial loss from not blocking the payer happens every single time.


The Data Quality Problem Nobody Talks About

Here's a finding that surprised us more than the $142,000: 49% of encounters in our data set had no referring doctor recorded.

Nearly half. Almost one scan in two couldn't be attributed to any referral relationship.

This means the practice couldn't even run the relationship test on half its patients. If the office manager says "we can't lose Dr. Smith," the obvious question is: which of these unattributed scans did Dr. Smith actually send? Nobody knows.

The "relationship" argument requires knowing the relationship exists. Without clean referral data, the argument is based on feeling, not evidence. And feelings don't show up on a P&L.

Three fixes, in order of priority:

1. Capture the referring doctor at scheduling for every encounter. Not optional. Not "if the patient mentions it." Every encounter needs a referring physician field populated before the appointment is confirmed.

2. Validate against a master physician directory. "Dr. Smith" doesn't cut it. You need the NPI, the practice name, and a consistent identifier. Otherwise "Dr. John Smith" from one referral becomes "J. Smith" from another and "Dr. Smith, Neurology" from a third. Same doctor, three records, useless data.

3. Run the relationship test monthly. Once the data is clean, the analysis takes 10 minutes. Group by referring doctor. Separate toxic from non-toxic. Calculate net value. Rank. This is not a quarterly project. It's a monthly habit.
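Fix #2 is the one most often skipped. A short sketch of what NPI-keyed grouping buys you — the names and NPI values below are hypothetical:

```python
from collections import defaultdict

# The same doctor recorded three different ways in a referral log (hypothetical data).
referrals = [
    {"name": "Dr. John Smith",       "npi": "1234567890"},
    {"name": "J. Smith",             "npi": "1234567890"},
    {"name": "Dr. Smith, Neurology", "npi": "1234567890"},
    {"name": "Dr. A. Jones",         "npi": "1098765432"},
]

# Grouping on the free-text name field yields four "doctors"...
by_name = defaultdict(int)
for r in referrals:
    by_name[r["name"]] += 1

# ...grouping on the NPI yields the true two.
by_npi = defaultdict(int)
for r in referrals:
    by_npi[r["npi"]] += 1

print(len(by_name), "name records vs", len(by_npi), "actual doctors")
```

Every downstream number in the relationship test — volume, margin, Net Relationship Value — is only as accurate as this grouping key.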

The Medical Group Management Association (MGMA) benchmarking data consistently shows that referral source tracking is among the most underutilized data points in practice management — most groups track referral counts but fewer than 30% tie referral sources to financial outcomes.

The irony is brutal: practices invest in referral marketing, physician liaison programs, and relationship dinners to grow referral volume — then fail to capture the data needed to know whether that volume is profitable. The CRM tracks how many referrals came in. Nobody tracks how much money those referrals made or lost.

Clean referral data isn't just an operational nicety. It's the prerequisite for every strategic question about your referral network. Which doctors should get priority scheduling? Which payer contracts should you renegotiate? Which locations attract the most profitable referral mix? None of those questions are answerable with a 49% data gap.


How to Build Your Own Referral Profitability Analysis

This works for any practice. Here's the step-by-step:

Step 1: Export encounter data with six fields: date, referring doctor, payer, procedure code, cost (including drug/consumable acquisition cost), and reimbursement received.

Step 2: Identify your toxic payers. Any payer where average reimbursement falls below the procedure's primary consumable cost is toxic. For PET imaging, the consumable is the radiotracer. For surgery centers, it's the implant. For infusion centers, it's the drug. CMS publishes Medicare reimbursement rates by procedure code — use these as a baseline to identify where commercial and managed care plans fall relative to cost.

Step 3: Group encounters by referring doctor.

Step 4: For each doctor, calculate two numbers. First: total margin on non-toxic encounters (revenue minus cost). Second: total drug cost burned on toxic encounters (acquisition cost of consumables used on patients whose insurance didn't cover them).

Step 5: Net Relationship Value = non-toxic margin minus toxic drug cost.

Step 6: Rank every referring doctor by Net Relationship Value. The list tells you exactly which relationships create value and which destroy it.

Step 7: For any doctor with a negative NRV, the relationship argument fails — for that doctor, specifically, by the numbers. The conversation shifts from "we can't lose them" to "we're paying to keep them."
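If the Step 1 export lands in a spreadsheet or a pandas DataFrame, the whole pipeline is a short script. A sketch, using hypothetical column names, payer labels, and dollar figures:

```python
import pandas as pd

# Illustrative export with the fields from Step 1 (schema is an assumption).
df = pd.DataFrame({
    "referring_doctor": ["Dr. A", "Dr. A", "Dr. B", "Dr. C"],
    "payer":            ["Commercial PPO", "Toxic HMO", "Medicare", "Commercial PPO"],
    "reimbursement":    [3900, 776, 4100, 3700],
    "cost":             [3400, 3400, 3400, 3400],
    "drug_cost":        [3038, 3038, 3038, 3038],
})

toxic = df["payer"].isin({"Toxic HMO"})  # Step 2: hypothetical toxic-payer list

# Step 4: each encounter contributes its margin if clean, minus its drug cost if toxic.
non_toxic_margin = (df["reimbursement"] - df["cost"]).where(~toxic, 0)
toxic_drug_burn = df["drug_cost"].where(toxic, 0)

# Steps 5-6: Net Relationship Value per doctor, ranked worst-first.
nrv = (df.assign(contribution=non_toxic_margin - toxic_drug_burn)
         .groupby("referring_doctor")["contribution"].sum()
         .sort_values())
print(nrv)
```

The doctor at the top of the printed ranking — the most negative value — is the one the office manager is defending.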

This framework is not limited to radiology. It works identically for any high-cost procedure business where the "raw material" costs more than some payers reimburse. Surgery centers with expensive implants. Oncology infusion centers with specialty drugs. Orthopedic practices with high-cost prosthetics. Specialty pharmacies with limited-distribution medications.

The economics are the same everywhere: when the consumable costs more than the reimbursement, volume makes it worse, not better. And a referring doctor who sends those patients isn't helping — regardless of how many other patients they send.

The output of Step 6 is a ranked list. Print it. The top of the list shows your most valuable referral relationships — the doctors whose patients consistently generate margin. Those are the relationships worth investing in: faster turnaround on reports, dedicated scheduling lines, quarterly lunch-and-learns. The bottom of the list shows the doctors whose referral patterns destroy value. Those aren't relationships to end — they're relationships to protect by blocking the payers that make them unprofitable.

The distinction matters. You're not ranking doctors by quality. You're ranking the financial impact of their referral mix. A great doctor with a bad payer mix still needs the payer fix, not a relationship fix.

Understanding these per-unit economics is the foundation of the assembly line approach to practice profitability. Every encounter is a unit. Every unit has a cost. Every payer has a reimbursement rate. The math either works or it doesn't.


The Monday After the Myth Died

He stared at the spreadsheet for a long time. Twenty-one doctors. Twenty-one relationships his office managers had protected for years. Zero justified.

The number that hit hardest wasn't the $142,000. It was the realization that the excuse had been costing real money for years while everyone nodded along. "We can't lose the relationship" had become a reflex — an answer that stopped the conversation before anyone did the math.

He didn't fire any doctors. He didn't burn any relationships. He didn't send a single letter.

He updated one scheduling policy: before ordering any radiotracer with an acquisition cost above $2,800, verify the patient's insurance against the toxic payer list. Seven payers. One checkbox.

The referring doctors never noticed. Their patients with commercial insurance kept coming. Their patients with Medicare kept coming. Nothing changed for the doctor. Nothing changed for 93% of the patients.

The only patients affected were the ones whose insurance was going to lose the practice money anyway — and those patients were redirected to facilities that had different payer contracts or were willing to absorb the loss.

That was a Tuesday. By Thursday, the front desk had already caught two managed care patients scheduled for amyloid PET scans. Two grenades defused before the drug was ordered. $6,000 saved in 48 hours.

All of it traced back to the Monday morning they stopped confusing volume with value.

Three months later, the front desk had intercepted 14 toxic-payer encounters. At an average drug cost of $3,200 per dose, that's $44,800 in losses that never happened. The referring doctors sent the same volume. The same patients with good insurance kept showing up. The relationships were intact.

The only thing that changed was a seven-line spreadsheet taped to the scheduling monitor.


Most accounting firms see a P&L and move on. This analysis required connecting three data sets that don't naturally talk to each other: the referral log (who sent the patient), the encounter data (what drug was used, what was billed), and the reimbursement record (what the payer actually paid). The insight didn't come from any one of those systems — it came from layering them together and asking a question nobody had asked: is this referral relationship actually worth money?

When your accounting function operates as an ROI center instead of a cost center, this is the kind of analysis that pays for itself in a single week.

The data was already in the billing system. It just needed someone to connect referring doctor names to payer reimbursement rates — a connection nobody had made because the "relationship" assumption was never questioned.

Running a multi-location imaging or specialty practice? Benefique builds referral profitability dashboards that connect your billing data to referring doctor performance — updated live, not quarterly. See what your data reveals →


FAQ — Referring Doctor Profitability

How do I calculate referring doctor profitability?

For each referring doctor, sum the profit from their non-toxic-payer encounters and subtract the drug cost burned on their toxic-payer encounters. The result is the Net Relationship Value. If negative, the doctor's referral volume doesn't justify absorbing the toxic losses. The calculation requires three data points per encounter: the referring doctor's name, the payer, and the margin (reimbursement minus cost including consumables).

Will blocking a toxic payer cause us to lose referring doctors?

No. Blocking a payer is an insurance policy decision, not a relationship decision. The referring doctor can still send patients with any other insurance. The doctor doesn't choose the patient's insurance plan. In our analysis of 433 encounters, no referring doctor exclusively sent toxic-payer patients — the toxic patients were scattered across the referring base. The doctor keeps referring. The only thing that changes is which insurance plans you accept for high-cost procedures.

What is a toxic payer in medical imaging?

A toxic payer is any insurance company that reimburses below the cost of the procedure's primary consumable — in PET imaging, that's the radiotracer drug ($2,800-$6,500 per dose depending on the tracer type). When a toxic payer's reimbursement doesn't cover the drug cost alone, the practice loses money on every scan regardless of efficiency, volume, or operational excellence. The loss is locked in at scheduling, before the drug is even ordered.

How often should we run a referral profitability analysis?

Monthly, after encounter data is complete for the prior month. The analysis takes minutes once data collection is clean. The critical prerequisite is capturing the referring doctor's name at scheduling for every encounter — without this, the analysis has gaps. In our data set, 49% of encounters had no referring doctor recorded, which means half the volume couldn't be analyzed at all. Fix the data capture first, then run the analysis on a monthly cycle.

Does this framework apply outside of radiology?

Yes. Any practice where a single procedure requires an expensive consumable — specialty drugs, implants, surgical supplies — faces the same economics. Surgery centers with high-cost orthopedic or cardiac implants, oncology infusion centers administering specialty biologics, specialty pharmacies with limited-distribution drugs, and wound care centers using advanced skin substitutes all have "toxic payer" risk. The per-referrer P&L framework works identically. Substitute your consumable cost for the drug cost, identify payers that reimburse below that cost, and calculate each referring doctor's Net Relationship Value.


Disclaimer: This article is for informational purposes only and does not constitute tax, legal, or financial advice. Tax situations vary — consult a qualified tax professional for advice specific to your circumstances. Practice examples are anonymized composites based on real client data; identifying details have been changed to comply with HIPAA Safe Harbor de-identification standards.