Microsoft highlights achievements in responsible AI in its first transparency report

Microsoft's first Responsible AI Transparency Report details its 2023 efforts in ethical AI development, amid challenges and ongoing improvements.

Microsoft has released its inaugural Responsible AI Transparency Report, outlining the steps it took in 2023 to develop and deploy AI technologies responsibly. The report forms part of Microsoft’s commitment to building safer AI systems, a promise made in a voluntary agreement with the White House in July 2023.

In the report, Microsoft points to significant strides in responsible AI, including the creation of 30 new tools aimed at ensuring ethical practices. The company has also expanded its responsible AI team and implemented stringent risk assessment protocols throughout the development cycle of generative AI applications. One notable addition is the Content Credentials feature on its image generation platforms, which embeds a watermark on AI-generated images to indicate their synthetic origin.

Moreover, Microsoft has provided Azure AI customers with sophisticated tools designed to identify and mitigate harmful content, such as hate speech, sexual content, and self-harm, alongside new methods for detecting security risks like jailbreaks and indirect prompt injections.

Despite these advancements, Microsoft’s journey in AI deployment has not been without controversy. Early instances of the Bing AI chatbot delivering inaccurate information and inappropriate content raised concerns. Issues such as the ability to generate objectionable images, including manipulated celebrity photos, prompted significant backlash and necessitated the closure of exploitable loopholes in Microsoft’s systems.

The report also highlights Microsoft’s ongoing red-teaming efforts, where in-house and third-party teams rigorously test AI models to bypass safety features, aiming to fortify these systems before a wider release.

A commitment to ongoing improvement

Natasha Crampton, Microsoft’s chief responsible AI officer, emphasised in a communication to The Verge that responsible AI is a continuous journey with no definitive endpoint. “Responsible AI has no finish line, so we’ll never consider our work under the voluntary AI commitments done. But we have made strong progress since signing them and look forward to building on our momentum this year,” Crampton stated.

As Microsoft navigates the complex landscape of AI development, its commitment to enhancing and expanding its responsible AI practices remains crucial to addressing both current challenges and future innovations.
