
60% of AI-Ready Firms Mature on Responsible AI, Gaps Persist: Nasscom Report

Essential brief

Key facts

Nearly 60% of Indian firms confident in scaling AI responsibly have mature Responsible AI frameworks.
Data quality issues remain a significant challenge for reliable and fair AI outcomes.
Regulatory uncertainty slows AI adoption and complicates compliance for businesses.
Emerging AI risks like hallucinations threaten trust and require robust mitigation strategies.
Collaborative efforts are needed to enhance data governance, clarify regulations, and manage AI risks effectively.

A recent report by Nasscom highlights that nearly 60% of Indian businesses confident in scaling artificial intelligence (AI) responsibly have developed mature Responsible AI (RAI) frameworks. This indicates a significant advancement in the adoption of ethical AI practices within the Indian corporate landscape. These frameworks are designed to ensure AI systems are developed and deployed in ways that are transparent, fair, and accountable, addressing concerns related to bias, privacy, and ethical use.

Despite this progress, the report identifies several persistent gaps that could hinder the effective implementation of RAI. One major challenge is the quality of data used in AI systems. High-quality, unbiased data is crucial for training AI models that perform reliably and fairly. Many firms still struggle with data inconsistencies, incomplete datasets, and biases that can lead to inaccurate or unfair AI outcomes. This issue underscores the need for improved data governance and management practices.

Another significant concern highlighted is regulatory uncertainty. As AI technologies evolve rapidly, regulatory frameworks often lag behind, leaving businesses unsure of their compliance obligations. This uncertainty can slow AI adoption and innovation, as firms may hesitate to fully deploy AI solutions without clear guidelines. The lack of standardized regulations also complicates cross-border AI applications and collaborations.

Emerging AI risks, such as hallucinations, where AI systems generate incorrect or fabricated information, pose additional challenges. These risks can undermine trust in AI technologies and carry serious consequences in critical sectors like healthcare, finance, and legal services. Addressing them requires continuous monitoring, robust validation mechanisms, and the integration of fail-safes within AI systems.

The Nasscom report suggests that while Indian firms are on the right track with their RAI initiatives, there is a pressing need to focus on enhancing data quality, advocating for clearer regulatory frameworks, and proactively managing emerging AI risks. Strengthening these areas will be essential to fully realize the benefits of AI while minimizing potential harms.

In conclusion, the maturity of Responsible AI frameworks among Indian businesses is a positive sign of ethical AI adoption. However, addressing gaps related to data quality, regulatory clarity, and AI-specific risks remains critical. Collaborative efforts between industry stakeholders, regulators, and technology developers will be key to advancing responsible AI practices and fostering sustainable AI growth in India.