Meity invites proposals for developing trusted AI ecosystem, tools

Meity, in the proposal, also seeks the creation of risk management tools and frameworks to enhance the safe deployment of AI in public services, and stress-testing tools to evaluate how AI models perform under extreme scenarios

Meity has invited proposals for watermarking and labelling tools to authenticate AI-generated content. | File Image
Press Trust of India, New Delhi
Last Updated: Dec 16 2024 | 10:10 PM IST

The IT ministry has invited proposals from entities for the development of technology tools to create a trusted AI ecosystem, including the detection of deepfakes, as per information published on Meity's website.

As part of the IndiaAI Mission, the Safe and Trusted AI pillar envisages the development of indigenous tools, frameworks and self-assessment checklists for innovators, among other measures, to put adequate guardrails in place and advance the responsible adoption of AI.

"To spearhead this movement, IndiaAI is calling for Expressions of Interest (EOI) from individuals and organisations that want to lead AI development projects to foster accountability, mitigate AI harms and promote fairness in AI practices," the note for proposal said.

Meity has invited proposals for watermarking and labelling tools to authenticate AI-generated content, ensuring it is traceable, secure, and free of harmful materials.
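The EOI does not specify how such labelling should be implemented. As a minimal illustrative sketch only (the metadata key and label payload below are hypothetical, not drawn from the proposal), one simple form of labelling is to embed a provenance tag in a generated image's metadata:

```python
# Illustrative only: labels AI-generated PNGs with a provenance tag in metadata.
# The EOI does not prescribe this approach; the key and value here are hypothetical.
from PIL import Image, PngImagePlugin

PROVENANCE_KEY = "ai-generated"          # hypothetical metadata key
PROVENANCE_VALUE = "true;model=example"  # hypothetical label payload

def label_image(src_path: str, dst_path: str) -> None:
    """Copy an image, embedding a plain-text 'AI-generated' label in its PNG metadata."""
    img = Image.open(src_path)
    meta = PngImagePlugin.PngInfo()
    meta.add_text(PROVENANCE_KEY, PROVENANCE_VALUE)
    img.save(dst_path, "PNG", pnginfo=meta)

def is_labelled(path: str) -> bool:
    """Check whether the provenance tag is present in the image's metadata."""
    return Image.open(path).info.get(PROVENANCE_KEY) is not None
```

Metadata labels of this kind are trivially stripped, which is why robust watermarking, embedding the signal in the content itself, is generally needed for the traceability the proposal describes.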

The proposal also calls for establishing AI frameworks that align with global standards, ensuring AI respects human values and promotes fairness.

The proposal includes the creation of "Deepfake Detection Tools to enable real-time identification and mitigation of deepfakes, preventing misinformation and harm for a secure and trustworthy digital ecosystem".

Meity, in the proposal, also seeks the creation of risk management tools and frameworks to enhance the safe deployment of AI in public services, as well as stress-testing tools to evaluate how AI models perform under extreme scenarios, detect vulnerabilities, and build trust in AI for critical applications.
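The proposal does not prescribe a stress-testing methodology. A minimal sketch of the idea, assuming a simple classifier and Gaussian-noise perturbations standing in for "extreme scenarios" (both assumptions are illustrative, not Meity's), could look like this:

```python
# Illustrative only: a toy "stress test" comparing a model's accuracy on clean
# inputs versus inputs perturbed with increasing Gaussian noise. Real stress-testing
# frameworks sought under the EOI would cover far broader failure modes.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=2000).fit(X_train, y_train)

rng = np.random.default_rng(0)
for noise_scale in (0.0, 1.0, 4.0, 8.0):
    # Perturb the held-out inputs and measure how quickly accuracy degrades.
    X_noisy = X_test + rng.normal(0.0, noise_scale, X_test.shape)
    acc = model.score(X_noisy, y_test)
    print(f"noise={noise_scale:>4}: accuracy={acc:.3f}")
```

A fuller evaluation would also probe distribution shift, adversarial inputs and other vulnerabilities relevant to critical applications.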

(Only the headline and picture of this report may have been reworked by the Business Standard staff; the rest of the content is auto-generated from a syndicated feed.)


Topics: Artificial Intelligence, IT Ministry, AI Models

First Published: Dec 16 2024 | 10:10 PM IST