FOSTERING A ‘TRUSTED ECOSYSTEM’
Both AIVF and IMDA said that generative AI has “significant transformative potential” beyond what traditional AI has been able to achieve, but noted this comes with risks.
“While it remains a dynamically developing space, there is growing global consensus that consistent principles are needed to create a trusted environment — one that enables end-users to use generative AI confidently and safely,” said the agencies.
To help foster a “trusted ecosystem”, the new framework aims to address nine dimensions:
- Accountability
- Data
- Trusted development and deployment
- Incident reporting
- Testing and assurance
- Security
- Content provenance
- Safety and alignment research and development (R&D)
- AI for public good
It integrates ideas from the discussion paper published by IMDA and technology firm Aicadium in June 2023, and draws on earlier work offering guidance on suggested practices for the safety evaluation of generative AI models.
Practical insights from ongoing evaluation tests will also be taken into account.
“Given the large volume of data involved in AI training, there is value in developing approaches to resolve these difficult issues in a clear and efficient manner,” said AIVF and IMDA.
The proposal also addresses the need for AI to be directed for the public good, outlining four concrete touchpoints where AI can have beneficial, long-term effects.
These include serving the public in “impactful ways” — AI already powers many public services, such as health management systems in hospitals, and helps to improve user experience.
AIVF and IMDA said that AI governance remains a nascent space and that building international consensus is key.
“As generative AI continues to develop and evolve, there is a need for global collaboration on policy approaches,” added the two agencies.
“We hope that this serves as a next step towards developing a trusted AI ecosystem, where AI is harnessed for the public good, and people embrace AI safely and confidently.” CNA