From Promise to Practice:
What’s Holding Back AI in Radiology and How to Fix It
AI in radiology is stuck – not for lack of tools, but because of fragmented systems and unstructured workflows. Free-text reports, vendor silos, and poor integration block progress. Hospitals can’t scale AI without infrastructure reform. The solution: structured reporting, aligned teams, and interoperable systems.
The paradox of progress
More than 1,000 AI tools approved. Fewer than 2% of practices use them. What went wrong?
At radiology conferences, the future looks bright. Booths highlight promises of fully automated diagnostics, intelligent workflows, and seamless clinical decision support. More than 1,000 FDA-cleared AI tools – most of them for radiology – are showcased as if the future has already begun.
Yet the truth is that little has changed. Fewer than 2% of U.S. radiology practices use these tools. Most radiologists have never tried one, and fewer still trust them. This isn’t due to a lack of innovation, funding, or regulatory approval. The real barrier is more fundamental – and more deeply embedded: the everyday structure of how radiology works.
Radiology’s data infrastructure is fundamentally misaligned with the logic behind scalable AI solutions. Free-text reports, siloed systems, and absent standards create structural barriers that keep even the best algorithms from reaching clinical impact. This article argues that AI isn’t failing – our clinical environments are. If we want intelligence to scale, we need to start thinking structurally.
“Diagnosis”: Why AI isn’t scaling in radiology
If FDA clearance were the finish line, radiology AI would already be a success story. But the real race begins after regulatory approval – and most tools never make it out of the starting blocks.
The gap is striking: of the more than 1,000 AI tools cleared in the U.S., almost none see routine clinical use. Only two – targeting coronary artery disease and diabetic retinopathy – have exceeded 10,000 uses nationwide. In most hospitals, even those with advanced imaging systems, these tools remain shelfware: approved, available, and unused.
Why?
The main barriers to AI adoption aren’t technological – they’re infrastructural and cultural. AI tools often produce structured, high-quality data, yet must operate in workflows still dominated by free-text reporting. This mismatch creates friction: structured outputs require manual reconciliation, turning automation into additional work. In daily clinical practice, AI faces fragmented reporting environments, siloed systems, and incompatible formats – making seamless integration nearly impossible. These structural issues also limit AI’s long-term potential: without consistent, longitudinal data, models can’t improve over time. Legacy IT infrastructures further compound the problem, blocking scalable integration.
Moreover, trust remains a central issue. Radiologists are rightly skeptical of tools that function as black boxes, unable to explain their decisions. That skepticism deepens when AI tools are injected into workflows without alignment to the way radiologists actually read, report, and communicate findings.
The issue isn’t whether AI can read images – it’s that AI can’t read our clinical reality. Without reengineering the environment, scale will remain impossible.
The anatomy of an invisible barrier
If we look past the surface-level explanations – cost, time, resistance – we find a deeper, more systemic issue: radiology was never built for AI.
Hospital data ecosystems remain deeply fragmented. PACS, RIS, and EMR systems often operate in silos, limited by proprietary structures and weak adherence to standards like HL7 or FHIR. Even within a single hospital, departments may use separate vendors for imaging, speech recognition, and reporting – each with its own constraints. Vendor lock-in and missing APIs turn even basic data handoffs into friction points. Trying to insert an AI tool into such an environment is like fitting a jet engine onto a bicycle. The infrastructure can’t carry it.
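To make the interoperability point concrete, here is a minimal illustrative sketch (Python) of what a standards-based handoff can look like: a report expressed as a FHIR R4 DiagnosticReport resource. The resource fields follow the public FHIR specification; the patient reference and report wording are invented for this example.

# Illustrative sketch: a radiology report as a FHIR R4 DiagnosticReport.
# Field names follow the FHIR spec; identifiers and values are invented.
import json

report = {
    "resourceType": "DiagnosticReport",
    "status": "final",
    "code": {                    # LOINC 18748-4 = diagnostic imaging study
        "coding": [{"system": "http://loinc.org", "code": "18748-4"}]
    },
    "subject": {"reference": "Patient/example-123"},  # hypothetical ID
    "conclusion": "No acute intracranial abnormality."
}

# Any FHIR-capable system can consume this payload without vendor-specific glue.
print(json.dumps(report, indent=2))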
The problem becomes especially acute at the point of reporting. Most radiology reports today are written in free text, with inconsistent phrasing and individual reporting styles. For a human reader, this variability is manageable, if inefficient. For an AI system, it’s chaos. Extracting structured meaning from free-text prose requires error-prone NLP techniques that often introduce more uncertainty than insight.
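To see why, consider a deliberately toy sketch (Python; the report phrasings are invented): extracting even a single measurement from free text requires pattern-matching that breaks the moment a radiologist words the same finding differently.

# Illustrative sketch: three invented phrasings of the same kind of finding.
import re

reports = [
    "There is a 12 mm nodule in the right upper lobe.",
    "RUL nodule measuring approx. 1.2cm, unchanged.",
    "Stable subcentimeter right apical opacity.",  # no measurement stated
]

# A naive extraction rule: find "<number> mm" or "<number> cm".
pattern = re.compile(r"(\d+(?:\.\d+)?)\s*(mm|cm)")

for text in reports:
    match = pattern.search(text)
    # Each phrasing needs its own handling; some yield nothing at all.
    print(match.groups() if match else "no measurement found")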
Let’s imagine a different setup – one where radiology reports are created from structured templates: clearly labeled, standardized across vendors and sites, and machine-readable. Structured reporting forms the landing zone that lets AI integrate directly into the diagnostic workflow – actively, by inserting findings into predefined fields, and passively, by learning from the high-quality data this structure provides.
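As a hypothetical sketch of that landing zone (Python; the field names are invented for illustration), a template exposes predefined, machine-readable fields that an AI tool can populate directly and any downstream system can read without parsing:

# Illustrative sketch: a structured lung-nodule finding with predefined,
# machine-readable fields. Field names are invented for this example.
finding = {
    "finding_type": "pulmonary_nodule",
    "location": "right_upper_lobe",
    "size_mm": None,        # slot an AI tool fills in (the "active" role)
    "change": "unchanged",
}

# An AI detection tool inserts its measurement into the predefined slot...
finding["size_mm"] = 12.0

# ...and a consumer reads it directly - no NLP, no guesswork.
print(f"{finding['location']}: {finding['size_mm']} mm, {finding['change']}")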
It’s a shift in philosophy. Instead of building smarter AI to overcome noisy environments, we build smarter environments that allow even modest AI to function with precision. The barrier to AI isn’t code – it’s unstructured habits and disconnected systems.
What structured reporting can actually achieve
Skeptics often ask: if structured reporting is so effective, why isn’t it the norm?
The answer is that – quietly, and in select institutions – it already is. At NYU Langone Health, a structured template based on O-RADS MRI was introduced for adnexal mass MRI reporting. The result? Referring gynecologic oncologists rated the new reports as significantly clearer and more actionable. Patients, too, reported fewer misunderstandings. Structured reporting didn’t dilute the nuance of radiology – it amplified its communicative value.
At Cincinnati Children’s Hospital, the transition to structured reporting was completed in under two years. A cross-functional team – radiologists, IT, and operations – jointly selected templates and set integration milestones. Initial resistance faded as radiologists joined the design process. Reporting became more consistent, and referring physicians noted clearer communication and fewer follow-ups. Crucially, efficiency improved not by speeding up readings, but by reducing disruptions and delays in downstream care.
Structured reports benefit both human readers and AI training pipelines. Templates encode findings in a structured, machine-readable format. This provides clean, reliable data for training future models, reducing the reliance on costly and error-prone manual annotation. In short, structured reports enable a virtuous cycle: better data leads to better AI, which in turn improves diagnostic workflows.
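As a sketch of that cycle (Python; continuing the invented template fields from the example above), the labels a model needs simply fall out of the structured fields, with no separate annotation step:

# Illustrative sketch: structured reports double as labeled training data.
structured_reports = [
    {"image_id": "ct_001", "finding_type": "pulmonary_nodule", "size_mm": 12.0},
    {"image_id": "ct_002", "finding_type": "no_acute_finding", "size_mm": None},
]

# Labels come straight from template fields - no manual annotation needed.
training_set = [(r["image_id"], r["finding_type"]) for r in structured_reports]
print(training_set)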
Company contact
Neo Q Quality in Imaging GmbH
Karl-Josef Bohrer
Chief Revenue Officer
M +49 172 52 25 25 7
Email karl-josef.bohrer@neo-q.de
2strom – Die Healthcare Agentur
Anne-Gret Prasse-Wolff
Senior PR-Manager
Tel. +49 (0) 30 844 14 07 41
Email presse@2strom.de