AI Ethics: ‘Treat it as a Product, Not a Discussion,’ Allcargo Logistics CITO Kapil Mahajan

AI Ethics will solve real problems when viewed as a product. “You build it, test it, refine it, and integrate it into your systems just like any other critical capability. This shift in mindset ensures that ethics is not theoretical—it becomes operational, measurable, and scalable,” said Mahajan.

Kapil Mahajan, Global Chief Information & Technology Officer, Allcargo Logistics Ltd.

In the logistics industry, trust is not a soft concept—it is the backbone of the business. Logistics companies are responsible for moving high-value cargo across shipping, surface, and air networks. When a customer entrusts them with goods worth millions of dollars, trust becomes embedded in every transaction.

“In that context, ethical AI is not just about algorithms—it is about carrying forward that same trust into digital systems. A breach of trust does not just mean a technical failure; it directly translates into lost business and damaged credibility,” said Kapil Mahajan, Global Chief Information & Technology Officer at Allcargo Logistics. He was speaking at the AIConic Summit and Awards, organised by the Financial Express in Delhi.

So ethical AI is naturally aligned with principles like transparency and bias mitigation. “You ensure that models are explainable, and biases are identified and eliminated. But beyond these widely discussed principles, there are deeper structural considerations you must address,” said Mahajan.

Structural Considerations for Ethical AI

Companies in the logistics space operate across multiple countries, so their datasets must reflect that diversity. If the data is skewed toward certain geographies or conditions, the AI outputs will inherit those biases. Hence, companies should ensure their datasets are representative and that biases are removed at the foundational level, not as an afterthought.
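As a rough illustration of what a foundational representativeness check could look like, the sketch below flags regions that are underrepresented in a training dataset. The region names, the 5% threshold, and the dataset shape are all invented for the example; this is not Allcargo's actual tooling.

```python
from collections import Counter

# Hypothetical check: flag a shipment dataset whose records are skewed
# toward a few geographies before it is used for model training.
MIN_SHARE = 0.05  # each served region should contribute at least 5% of records

def underrepresented_regions(records, served_regions, min_share=MIN_SHARE):
    """Return regions whose share of the dataset falls below min_share."""
    counts = Counter(r["region"] for r in records)
    total = len(records)
    return sorted(
        region for region in served_regions
        if counts.get(region, 0) / total < min_share
    )

# Invented sample: 70% Europe, 28% Asia, 2% Middle East
shipments = (
    [{"region": "Europe"}] * 70
    + [{"region": "Asia"}] * 28
    + [{"region": "Middle East"}] * 2
)
flagged = underrepresented_regions(shipments, ["Europe", "Asia", "Middle East"])
# flagged -> ["Middle East"]: only 2% of records, below the 5% floor
```

Running a check like this before training, rather than after deployment, is one concrete way to address bias "at the foundational level".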

Secondly, the question of ownership and accountability is a major challenge. The following questions arise: “When you deploy AI systems, who owns them? Who is responsible when something goes wrong? If a model produces a biased outcome, who steps forward to fix it? In many organisations, this accountability remains a blurred responsibility shared across teams but owned by none,” said Mahajan. While it is natural to be enthusiastic about deploying AI at scale, post-deployment scenarios should also be defined. In a world where AI is meant to eliminate human error, is it acceptable to have a system that is only 91% or 95% accurate?
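One way to make ownership and accuracy expectations operational rather than rhetorical is a pre-deployment gate that refuses to ship a model without a named owner and a minimum accuracy. The sketch below is hypothetical: the field names, the 95% floor, and the model card structure are invented for illustration.

```python
# Hypothetical pre-deployment gate: a model may not ship without a named
# accountable owner and a minimum accuracy, making responsibility explicit
# rather than blurred across teams. Thresholds and fields are invented.
MIN_ACCURACY = 0.95

def release_check(model_card):
    """Return a list of blocking issues; an empty list means cleared to deploy."""
    issues = []
    if not model_card.get("owner"):
        issues.append("no accountable owner assigned")
    accuracy = model_card.get("accuracy", 0.0)
    if accuracy < MIN_ACCURACY:
        issues.append(f"accuracy {accuracy:.0%} below {MIN_ACCURACY:.0%} floor")
    return issues

card = {"name": "pricing-model-v2", "owner": "", "accuracy": 0.91}
blockers = release_check(card)
# blockers lists both the missing owner and the 91% accuracy shortfall
```

Whether 91% or 95% is acceptable is a business decision; the point of the gate is that the decision is made explicitly, before deployment, with a name attached.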

Third, companies should examine how models are trained. “Off-the-shelf large language models may offer convenience, but they often fall short at enterprise scale. You need systems that understand your data deeply—micro LLMs that are trained on your proprietary datasets and can speak your business language,” said Mahajan. This ensures contextual accuracy and operational relevance.

Ultimately, ethical AI is about embedding corporate governance directly into the technology stack. It cannot be an afterthought or a pre-launch checklist. It must be integrated into the engineering pipeline itself; that is how systems that are trustworthy by design are built.

Challenges in Implementing AI

Organisations tend to struggle along three key dimensions while integrating AI.

The first is data. “You must decide what data to use, how to select it, how to cleanse it, and how to eliminate biases before feeding it into your systems,” said Mahajan. Poor data quality inevitably leads to flawed outcomes, making this the most foundational challenge.

“The second is explainability. Whether you are using a third-party AI engine or building your own, you must be able to question its outputs. Why did the model arrive at a specific decision? Why was one pricing parameter selected over another? Why is the price set at $18 instead of $20? If you cannot interrogate your system, you cannot trust it,” said Mahajan. This ability to question outcomes builds trust first within your engineering teams and then among business users.
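The "$18 instead of $20" question becomes answerable when a pricing decision can be itemized feature by feature. The sketch below uses a deliberately simple additive pricing rule so every dollar of the final price traces back to a named input; the base price, coefficients, and feature names are invented for illustration, not Allcargo's pricing logic.

```python
# Hypothetical interpretable pricing rule: the output can be broken down
# into per-feature contributions, so "why $18 instead of $20?" has a
# concrete answer. All numbers and feature names are invented.
BASE_PRICE = 20.0
COEFFICIENTS = {
    "volume_discount": -1.5,  # dollars off per discount tier
    "fuel_surcharge": 0.5,    # dollars added per surcharge unit
}

def explain_price(features):
    """Return the final price and a per-feature contribution breakdown."""
    contributions = {
        name: COEFFICIENTS[name] * value for name, value in features.items()
    }
    price = BASE_PRICE + sum(contributions.values())
    return price, contributions

price, why = explain_price({"volume_discount": 2, "fuel_surcharge": 2})
# price = 20.0 - 3.0 + 1.0 = 18.0; `why` itemizes each adjustment
```

Real pricing engines are far more complex, but the principle carries over: a system whose outputs can be decomposed and interrogated earns trust in a way a black box cannot.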

Questioning Algorithms, Really?

However, in a corporate environment, there is a lack of consensus. “Not everyone is incentivised to question algorithms,” said Mahajan. Consider ride-sharing apps: surge pricing mechanisms are often opaque, yet widely accepted because they drive business value. Similarly, even decisions like product placement in retail can sit in a grey zone between ethical and non-ethical practices. These examples highlight that ethical AI is not always a clear-cut discussion; it is often a complex trade-off.

“A lot of those algorithms are tweaked to create those prices, and that is business value. Now, whether that is ethical or non-ethical is all a question mark,” stated Mahajan.

The third challenge is accountability. When something goes wrong—when even 1% of users are adversely affected—who is responsible? Is it the vendor who built the model, the IT team that deployed it, the business team that uses it, or the governance bodies like ethics or audit committees? “Most organisations do not have a clear answer.”

These challenges drive the need to adopt an ‘AI Ethics by Design’ approach. These three pillars—data integrity, explainability, and accountability—must be integrated into the engineering pipeline.

Enhancing AI Across Geographies

Companies operating in multiple geographies and governed by the respective laws of the land face the question of embedding fairness and transparency into their AI systems at scale. Applying AI to critical functions such as pricing, global talent allocation, vessel routing, and cargo movement carries the risk of embedded bias.

“For instance, your routing algorithms might unintentionally avoid certain regions—such as the Middle East—based on historical data patterns. Similarly, pricing engines could introduce bias by treating small and medium enterprises differently from large customers. Even when customers access your services digitally, disparities can emerge based on credit profiles or business size,” said Mahajan.

At the same time, companies may have business objectives, such as promoting environmentally sustainable practices. The use of low-emission vessels through green credits may be incentivised. “While this aligns with sustainability goals, it adds another layer of complexity to your AI decision-making frameworks,” said Mahajan.

To manage these competing priorities, organisations need a structured approach. “One effective method is to establish an enterprise-wide AI architecture. Every application must pass through this framework—much like a toll gate embedded within your engineering pipeline. This ensures that fairness, transparency, and compliance are not optional discussions but mandatory checkpoints.

“Until the end of June 2025, we were still doing it at an application level, but since July 1 we have rolled it out as a piece of architecture spanning our DevOps and SecOps workstreams,” informed Mahajan. AI systems should not be static; continuous improvement is essential. As AI is deployed, companies learn from real-world outcomes. Over time, the systems evolve, becoming more refined and aligned with both business objectives and ethical standards.
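A "toll gate" embedded in the engineering pipeline can be imagined as a check that every application must pass before it proceeds, presenting evidence for each mandatory checkpoint. The sketch below is a hypothetical rendering of that idea, with the checkpoint names taken from the three pillars discussed earlier; the data structure and evidence strings are invented.

```python
# Hypothetical pipeline "toll gate": every application must present
# evidence for each mandatory checkpoint, making fairness, transparency,
# and compliance required steps rather than optional discussions.
CHECKPOINTS = ("data_integrity", "explainability", "accountability")

class GateFailure(Exception):
    """Raised when an application lacks evidence for a checkpoint."""

def toll_gate(app):
    """Raise GateFailure unless the app carries evidence for every checkpoint."""
    evidence = app.get("evidence", {})
    missing = [c for c in CHECKPOINTS if not evidence.get(c)]
    if missing:
        raise GateFailure(f"{app['name']} blocked: missing {', '.join(missing)}")
    return True

app = {
    "name": "vessel-routing",
    "evidence": {
        "data_integrity": "bias audit, July 2025",
        "explainability": "feature attribution report",
        "accountability": "owner: ops-analytics team",
    },
}
toll_gate(app)  # passes; remove any evidence entry and GateFailure is raised
```

In a real rollout this would be wired into CI/CD and SecOps tooling rather than a single function, but the shape is the same: the gate blocks by default and passes only on evidence.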

In this journey, one principle stands out: just because AI can do something does not mean it should. “Your responsibility is to continuously evaluate that boundary.”

From an engineering perspective, the approach is to treat AI ethics as a product, not as a discussion.

“When you see it as a product, you design it to solve real problems. You build it, test it, refine it, and integrate it into your systems just like any other critical capability. This shift in mindset ensures that ethics is not theoretical—it becomes operational, measurable, and scalable,” said Mahajan.
