Microsoft has officially rolled out Azure AI Content Safety, a tool designed to detect and filter harmful content in both AI-generated and user-generated material. The system can flag sexual content, violence, hate, and self-harm, and its key features include multilingual proficiency, severity indication metrics, multi-category filtering, and text and image detection. The launch is a significant step toward more widespread digital literacy and online safety, felt particularly in regions such as Asia Pacific and Singapore, where heavy dependence on social media and messaging apps can affect public health and safety.
Azure AI’s safety measures reach from classrooms to chatrooms
Earlier this year, the education department in South Australia was looking to introduce generative AI into classrooms but faced a significant concern: how to ensure the technology would be used responsibly. The department's digital architecture director, Simon Chapman, pointed to the limitations of public versions of generative AI, particularly their lack of built-in safety measures.
To resolve this, the department turned to Azure AI Content Safety, drawn by its multilingual proficiency and severity indication metrics. These metrics assign each flagged category a severity rating from 0 to 7, allowing users to gauge the potential threat level quickly. The department recently concluded a pilot of EdChat, an AI-driven chatbot, in eight secondary schools. Tested by nearly 1,500 students and 150 teachers, the chatbot relied on Azure AI Content Safety's multi-category filtering and text and image detection to provide a safe educational experience.
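To give a sense of how these features surface in practice, here is a minimal sketch using the azure-ai-contentsafety Python SDK; the endpoint and key are placeholders, and exact field names may vary between SDK versions.

```python
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for your Content Safety resource.
client = ContentSafetyClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

# Analyse a piece of text; requesting eight severity levels yields the
# 0-7 scale described above (the default output uses four coarser levels).
response = client.analyze_text(
    AnalyzeTextOptions(
        text="Example user message to screen",
        output_type="EightSeverityLevels",
    )
)

# Each analysed category (hate, sexual, violence, self-harm) comes back
# with its own severity score, enabling multi-category filtering.
for result in response.categories_analysis:
    print(f"{result.category}: severity {result.severity}")
```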
Broader applications and ongoing development
Azure AI Content Safety was initially part of the Azure OpenAI Service but has now been spun out as a standalone system, allowing it to be used not only with Microsoft's own models but also with content from other companies' models and open-source ones. Eric Boyd, Corporate Vice President of Microsoft's AI Platform, said the standalone service will cater to broader business needs as generative AI becomes more prevalent.
Microsoft has pushed for responsible AI governance for years and now has nearly 350 staff dedicated to the effort. Sarah Bird, who leads responsible AI for foundational technologies at Microsoft, called Azure AI Content Safety vital to that commitment. The technology is also highly customisable, allowing businesses to adapt its policies to fit their own needs and operating environments.
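As a hypothetical illustration of that customisability, a business could map the per-category severity scores returned by the service onto its own blocking thresholds. The policy values and helper below are invented for the example, not Microsoft defaults.

```python
# Hypothetical per-category thresholds: content is blocked when a
# category's severity exceeds the configured limit. The numbers are
# illustrative only.
POLICY = {
    "Hate": 2,       # strict: block anything above low severity
    "SelfHarm": 0,   # strictest: block on any detection at all
    "Sexual": 4,
    "Violence": 4,
}

def is_allowed(categories_analysis) -> bool:
    """Apply the custom policy to a Content Safety analysis result."""
    for result in categories_analysis:
        threshold = POLICY.get(result.category, 0)
        if result.severity is not None and result.severity > threshold:
            return False
    return True
```

In a setup like this, stricter thresholds would suit a classroom deployment such as EdChat, while a platform serving adults might tolerate higher severities in some categories.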
The tool has already been integrated into various Microsoft products, including Bing, GitHub Copilot, and Microsoft 365 Copilot. Microsoft continues to invest in research and gather customer feedback to enhance the capabilities of Azure AI Content Safety, especially in detecting combinations of harmful images and text.
As we move further into the era of AI, Microsoft aims to bolster public trust by ensuring online safety. Azure AI Content Safety is a pivotal part of this mission, building on Microsoft’s long history of content moderation tools and leveraging advanced language and vision models to create a safer online world.