From 15 Days to 60 Minutes: Neuberg Uses Agentic AI to Transform Lab Analytics

The diagnostic chain routinely computes Sigma metrics (essentially Six Sigma quality indicators) for around 130 assays, including glucose, creatinine, and lipid profiles. Earlier, this was an extremely time-intensive process: it would take nearly 15 days just to consolidate the previous month’s data.

By Abhishek Raval
Dr. Sujay Prasad, Chief Medical Director, Neuberg Diagnostics

Neuberg Diagnostics is a pan-India network formed through the merger of legacy laboratories, some of which have existed for 50 years or more. It operates 150+ laboratories across 250+ cities, processing more than 30 million tests annually.

Our AI journey began around 2015, when we set up our data science department. Since then, we have introduced multiple projects. Broadly, our AI initiatives are classified into three verticals:

Science of diagnostics: using data to add value in diagnosis and ensure correct interpretation of results

Laboratory processes: making processes less error-prone and more efficient using AI and data science

Business intelligence: using data to drive insights for operational and strategic decision-making

FE FUTECH speaks with Dr. Sujay Prasad, Chief Medical Director of Neuberg Diagnostics.

Edited Excerpts

How has your AI journey evolved so far?

At Neuberg Diagnostics, our AI strategy is anchored around three pillars—business intelligence, laboratory processes, and the science of diagnostics. These help us prioritise use cases and allocate resources effectively.

A critical starting point in any AI initiative is asking the right question. Without a clearly defined problem statement, AI has limited value.

Diagnostics remains a highly sensitive domain. We do not use AI or machine learning outputs to report directly to patients. Instead, AI functions strictly as a decision-support system for clinicians. It provides inputs, but the final diagnosis and report sign-off rest entirely with the doctor, who remains accountable.

From a regulatory standpoint, there are no clearly defined frameworks governing direct AI usage in patient reporting. Liability continues to rest with the signatory clinician.

At our Bengaluru laboratory, our core philosophy has been to eliminate errors. We define quality as the absence of errors. Accordingly, we focus on root-cause analysis and use technology—including AI—to address them. This has delivered incremental but meaningful gains over time.

Over the past six months, we have also begun exploring agentic AI—building autonomous systems that can handle tasks such as data management and workflow automation. This is an evolving capability, but one with significant potential to improve efficiency at scale.

AI outcomes depend heavily on data quality. What steps have you taken on data cleansing?

The Laboratory Information System (LIS) serves as our primary data backbone and is largely structured and clean. Our focus now is on improving data quality at the source.

Whether it is instrument-generated outputs—such as glucose or creatinine values—or pathology inputs like biopsy reports, we are standardising data capture to ensure objectivity and consistency. This remains a work in progress but will be a critical asset going forward.

Historically, data quality posed challenges. Based on those learnings, we now prioritise structured data capture at the point of entry—primarily within the LIS.

Earlier, much of the analysis was conducted using Excel. Today, it is performed directly on the database, improving both efficiency and reliability.

With fragmented data sources, how do you ensure standardisation?

Let me illustrate this with a recent use case.

We routinely compute Sigma metrics for approximately 130 assays. Earlier, this process took nearly 15 days, primarily due to data fragmentation. Inputs came from multiple instruments, each generating reports in different formats. A clinician had to manually consolidate these into a single dataset before analysis—making the process both time-consuming and error-prone.
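For context, the interview does not spell out how a Sigma metric is derived; the conventional formula in laboratory quality control is Sigma = (TEa − |bias|) / CV, where TEa is the total allowable error, bias is the systematic error against a reference, and CV is the observed coefficient of variation, all in percent. A minimal sketch, with illustrative (not Neuberg's actual) glucose values:

```python
def sigma_metric(tea_pct: float, bias_pct: float, cv_pct: float) -> float:
    """Sigma metric per the standard definition: (TEa - |bias|) / CV.

    All arguments are percentages; CV must be positive.
    """
    if cv_pct <= 0:
        raise ValueError("CV must be a positive percentage")
    return (tea_pct - abs(bias_pct)) / cv_pct

# Illustrative example: an assay with TEa 10%, bias 1.5%, CV 2%
# gives (10 - 1.5) / 2 = 4.25 sigma.
print(sigma_metric(10.0, 1.5, 2.0))  # 4.25
```

A Sigma value around 6 indicates world-class performance for an assay, while values below 3 typically trigger tighter QC rules.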

This changed after we adopted an agentic AI approach.

We collaborated with an engineering college to build a multi-agent system:

  • One agent extracts data from PDFs and converts it into structured database entries

  • Another aggregates the data into a unified format

  • A supervisory agent validates completeness, consistency, and quality

  • A final analytical agent computes the Sigma metrics

The entire process now takes under an hour.
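The four-agent flow described above can be sketched with plain Python functions standing in for the agents. The article does not name the framework or data schema used, so every name and field below is a hypothetical placeholder; PDF parsing is stubbed out:

```python
from dataclasses import dataclass

@dataclass
class AssayRecord:
    """Hypothetical per-assay record produced by the extraction stage."""
    assay: str
    cv_pct: float    # observed coefficient of variation (%)
    bias_pct: float  # observed bias vs. reference (%)
    tea_pct: float   # total allowable error for the assay (%)

def extract_agent(parsed_rows):
    """Agent 1: turn instrument PDF output into structured records (stubbed)."""
    return [AssayRecord(**row) for row in parsed_rows]

def aggregate_agent(records):
    """Agent 2: merge per-instrument records into one dataset keyed by assay."""
    return {r.assay: r for r in records}

def supervisor_agent(dataset, expected_assays):
    """Agent 3: validate completeness before analysis proceeds."""
    missing = expected_assays - dataset.keys()
    if missing:
        raise ValueError(f"missing assays: {sorted(missing)}")
    return dataset

def sigma_agent(dataset):
    """Agent 4: compute the Sigma metric, (TEa - |bias|) / CV, per assay."""
    return {name: (r.tea_pct - abs(r.bias_pct)) / r.cv_pct
            for name, r in dataset.items()}

rows = [{"assay": "glucose", "cv_pct": 2.0, "bias_pct": 1.5, "tea_pct": 10.0}]
data = supervisor_agent(aggregate_agent(extract_agent(rows)), {"glucose"})
print(sigma_agent(data))  # {'glucose': 4.25}
```

In a production agentic system each stage would be an autonomous LLM- or rule-driven agent rather than a function call, but the handoff boundaries are the same.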

Could a single agent deliver the same outcome instead of multiple sub-agents?

Agentic systems are built on modularity. Instead of a monolithic architecture, workflows are broken into smaller, well-defined tasks, each handled by a specialised agent.

This approach offers several advantages:

Functional clarity: Each agent operates within a defined scope

Error isolation: Issues can be traced and resolved at specific stages

Scalability: Components can be upgraded independently

A supervisory agent ensures task-level accuracy, while a higher-level orchestration layer manages the workflow end-to-end.

How are you addressing integration with legacy systems?

Integration is one of the most complex aspects of deploying AI in healthcare.

Our LIS remains the core system, housing structured and clinically validated data. The long-term objective is to embed AI capabilities directly within the LIS, enabling native analysis without data movement.

While that remains a work in progress, we currently follow a modular approach. APIs are used to extract data from the LIS, process it externally using AI models, and then reintegrate the output.

For unstructured or external data, we are developing ingestion layers that standardise inputs before feeding them into AI pipelines and subsequently into the LIS. 

What does your API architecture look like?

The API infrastructure has been built in-house as a reusable, organisation-wide layer.

For AI use cases, we define the data extraction and routing logic. The workflow is as follows:

- Data is extracted from the LIS into designated tables

- AI models process the data externally

- Outputs are standardised and pushed back into the LIS via APIs
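The three steps above can be sketched as a round trip over in-memory tables. The table names, fields, and the rule standing in for the AI model are all illustrative assumptions, not the actual LIS schema or API:

```python
# Hypothetical staging and results tables; in practice these sit behind
# the in-house API layer rather than in process memory.
LIS_STAGING = {"ai_lipid_inputs": [{"order_id": 101, "ldl": 162, "hdl": 38}]}
LIS_RESULTS = {}

def extract(table):
    """Step 1: pull rows the LIS has staged into designated tables."""
    return list(LIS_STAGING[table])

def process(rows):
    """Step 2: external AI model scores each row (stubbed as a simple rule)."""
    return [{**r, "flag": "review" if r["ldl"] >= 160 else "ok"} for r in rows]

def push_back(table, rows):
    """Step 3: standardised outputs are written back into the LIS via the API."""
    LIS_RESULTS[table] = rows

push_back("ai_lipid_outputs", process(extract("ai_lipid_inputs")))
print(LIS_RESULTS["ai_lipid_outputs"][0]["flag"])  # review
```

Because the write-back lands in the same LIS the clinician already uses, the model output surfaces inside the existing interface rather than in a separate tool.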

From the end-user’s perspective, AI-generated insights are seamlessly integrated within the LIS interface, eliminating the need for separate systems. This ensures AI augments—rather than disrupts—clinical workflows.

Where is AI being deployed across the diagnostics value chain?

We are taking measured, incremental steps, but the impact is already visible. At the entry stage, we are piloting AI systems that can read test request forms (TRFs) and convert them into structured orders, reducing manual data entry errors.

In diagnostics, we are using machine learning and deep learning models for use cases such as thalassemia screening. By analysing large volumes of CBC data, these models can flag cases that may appear normal but indicate underlying traits.

Operationally, AI has significantly improved efficiency—for instance, in Sigma metric computation.

At the reporting stage, AI can generate preliminary interpretations for numerical reports, such as lipid profiles or liver function tests. These insights are presented to clinicians as decision support.

The final responsibility, however, remains with the doctor. AI enhances speed, consistency, and analytical depth—but does not replace clinical judgment.
