On March 25, 2026, at a Crain’s New York Business panel discussion of the city’s hospital sector, Mitchell H. Katz, MD, president and CEO of NYC Health + Hospitals, told the assembled executives what cost-cutting now sounds like in the largest public hospital system in the United States: “We could replace a great deal of radiologists with AI at this moment, if we are ready to do the regulatory challenge.” Sandra Scott, MD, who runs One Brooklyn Health, one of the city’s safety-net institutions operating on tight margins, replied that the move would be “a game-changer.” The exchange appeared in Crain’s coverage of the panel and was picked up by the radiology trade press within forty-eight hours.

The proposal reads as the second move in a strategy whose first move has been documented for fifteen years. American hospital systems built imaging volume on the back of a preventative-medicine apparatus that the American College of Cardiology’s own 2012 Choosing Wisely list identified as substantially overused, with up to 45% of stress cardiac imaging in low-risk asymptomatic patients flagged as inappropriate under the ACC’s appropriate-use criteria. That volume produced revenue. The same hospital systems now propose to automate away the labor cost of interpreting the revenue-producing volume. Imaging continues, billing continues, the radiologist disappears from the ledger, and the patient pays the same copay for a scan whose ordering was already questionable, now read by an algorithm whose performance varies by manufacturer, training data, patient population, and deployment context.
The strongest evidence base for AI in radiology supports a use case that the Katz proposal does not describe. The Mammography Screening with Artificial Intelligence trial, called MASAI, randomized over 100,000 Swedish women to either standard double reading by two radiologists or AI-supported single reading by one radiologist with the Transpara system from ScreenPoint Medical. Lead author Kristina Lång and colleagues at Lund University reported in The Lancet Oncology in 2023 that the AI-supported arm reduced radiologist workload by 44% while modestly increasing cancer detection. Follow-up data published in The Lancet in 2026 showed that AI-supported screening cut interval cancers, meaning cancers that emerge between screening rounds and carry a worse prognosis, by 12% compared with standard double reading. First author Jessie Gommers of Radboud University Medical Centre was direct in the press release: “Our study does not support replacing healthcare professionals with AI as the AI-supported mammography screening still requires at least one human radiologist to perform the screen reading.”
That distinction matters. AI-assisted reading, where a human radiologist works alongside an algorithm that flags suspicious findings and triages low-risk cases for single rather than double review, has been validated in randomized trials with hard outcome measures. What has been validated, in other words, is AI as triage and detection support, with one human radiologist still in the loop. AI-only reading, where no human reviews the image unless the algorithm flags an abnormality, has not been tested to the same standard. A Stanford working paper on so-called “AI mirages” in medical imaging, describing algorithms that perform well on benchmark datasets and fail in clinical deployment because the training distribution does not match the deployment distribution, was circulating at the time of the Katz panel and was awaiting peer review. Mohammed Suhail, MD, a radiologist at North Coast Imaging, quoted in coverage of the Katz statement, said that any attempt to implement AI-only reads “would immediately result in patient harm and death, and only someone with zero understanding of radiology would say something so naive.” That is a strong claim from a working radiologist, but the structural point underneath it is conservative. The trial that would justify AI-only reading on a population basis has not been run. The trial that would justify AI-assisted reading has been run, and it requires the radiologist.
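The structural difference is simple enough to sketch in a few lines of code. What follows is an illustration only: the score scale, the thresholds, and the function names are hypothetical assumptions, not the MASAI protocol or any vendor’s actual interface. The narrow point it makes is that in the validated model every exam still receives at least one human read, while in the AI-only model a low-scoring exam receives none.

```python
# Illustrative sketch only: the score scale, thresholds, and function names are
# hypothetical, not the MASAI protocol or any vendor's interface.

def ai_supported_reading(ai_score: float, high_risk_threshold: float = 9.0) -> int:
    """Validated deployment: every exam gets at least one human read; the AI
    score only decides whether a second radiologist is added for double reading."""
    human_reads = 1                      # a radiologist reads the exam regardless
    if ai_score >= high_risk_threshold:  # flagged as high risk -> add a second reader
        human_reads += 1
    return human_reads


def ai_only_reading(ai_score: float, abnormal_threshold: float = 5.0) -> int:
    """Proposed deployment: no human sees the image unless the algorithm flags it,
    so a false-negative score means the exam is never read by a person."""
    return 1 if ai_score >= abnormal_threshold else 0


if __name__ == "__main__":
    low_score = 2.3                          # hypothetical low AI risk score
    print(ai_supported_reading(low_score))   # 1 -- still one human read
    print(ai_only_reading(low_score))        # 0 -- no human ever looks at the image
```

The workload arithmetic falls out of the same shape: if, for illustration, roughly nine in ten exams are triaged to a single human read instead of two, total reads drop by about 45 percent, which is the order of the 44% reduction MASAI reported. The triage fraction here is an assumption for the arithmetic, not the trial’s published breakdown.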
Set the safety question aside for a moment and consider what the proposal does to the labor market. The radiologist has been a high-margin specialist for the same reason all specialists are high-margin: the supply is constrained by the length of training and the licensing apparatus, and the demand is set by imaging volume. Katz’s proposal substitutes capital for labor. If New York State relaxes the regulation requiring radiologist review, NYC Health + Hospitals saves the salary of every radiologist whose reads can be displaced to the algorithm; the imaging machine still runs, still bills, still produces a chargeable encounter on the patient’s account. The same logic generalizes to dermatology, where machine-learning skin-lesion classifiers have shown strong retrospective performance, and to pathology, ophthalmology, and any imaging-heavy specialty whose work product is a classification task on a digital image. A worsening shortage of breast imaging specialists, particularly in rural and underserved markets, is the legitimate operational pressure Katz is responding to, and the American College of Radiology has documented this shortage at length. Using that pressure to license a deployment model the trial evidence has not endorsed is the illegitimate response.
Two profits accrue to the hospital system. The first is the original imaging revenue, generated by the appropriate-use-violating ordering patterns that produced the screening volume in the first place. The second is the elimination of the labor cost of reading that imaging. A patient pays the copay, the insurer pays the technical fee, and the AI vendor takes a per-read or subscription fee that comes in well below the radiologist’s salary equivalent. Vendor and hospital split the gain. Radiologists are laid off or shifted to abnormality-only review, which substantially compresses earnings, since abnormal reads are a fraction of total reads. Care delivered to the patient may be equivalent in accuracy under the AI-supported model and inferior in accuracy under the AI-only model, where no individual professional license is held responsible for the read.
The regulatory politics will determine which model gets deployed. Katz himself flagged the regulatory challenge at the Crain’s panel, asking the assembled CEOs whether there was any reason they should not be lobbying New York State to permit AI-only reads. Lobbying for the relaxation is the hospital system facing margin pressure. Lobbying against it are the radiologists themselves, organized through the American College of Radiology. New York State legislators will decide. Patients do not have a seat at this table. A patient learns about the change when the mammogram comes back from the screening center read by Transpara version whatever and the bill arrives in the mail with no indication of who, if anyone, looked at the image.
Liability shifts. Under current regulation, a missed cancer on a mammogram exposes the reading radiologist to malpractice litigation, which is why the radiologist carries professional liability insurance and why the radiologist’s professional license is on the line for every read. Under the proposed AI-only model, in which a radiologist confirms only the abnormalities the algorithm flags, a cancer missed because the algorithm scored the image low and no human looked at it produces a liability question with no individual defendant. The plaintiff sues the institution, the institution sues the AI vendor, the AI vendor sues the training-data licensor or invokes its FDA clearance as a shield. Many degrees of separation now sit between the patient and the party with deep pockets. The structural change resembles the shift from the family doctor to the corporate practice in primary care: personal accountability disappears into the institutional defendant, and the patient learns that the system is the system.
The preventative-medicine apparatus that produced excess imaging volume and the AI-radiology apparatus that proposes to read it without human review are two faces of the same financial logic. Both extract value from patient bodies through technical interventions whose individual benefit is small or unproven on a population basis, both produce steady recurring revenue, and both depend on the patient being a passive substrate rather than an active agent in the care chain. One creates the imaging. The other eliminates the labor cost of reading it. The hospital system, which is the only party that crosses both moves, captures the margin on both.
AI-assisted radiology is a real technology with real performance data. The MASAI trial demonstrated that the right deployment, with the right oversight, in the right population, produces better cancer detection at lower radiologist workload. That is a legitimate technological gain, and the trial is one of the cleaner pieces of clinical evidence for AI in medicine to date. The question is who controls deployment, under what oversight, and to what end. If AI becomes a tool that radiologists use to read more imaging more accurately at lower cost per read, with patient outcomes that match or exceed the current standard, that is medicine. If AI becomes a license to eliminate the radiologist altogether, with the institutional savings flowing to hospital margins and the patient losing the only party in the imaging chain whose individual professional license is on the line for the read, that is bookkeeping. Mitchell Katz proposed the second model at a panel in March. The trial evidence supports the first. The next move belongs to the New York State legislature, which is to say, to whoever lobbies hardest in Albany.