Microsoft has introduced a new Content Safety feature in its Azure cloud platform to combat inaccuracies in generative artificial intelligence. The tool automatically detects and corrects errors in AI model responses.
Content Safety: How It Works
Content Safety is available in preview as part of Azure AI Studio, a suite of tools designed to identify vulnerabilities, detect “hallucinations” in AI systems, and block problematic user prompts. The feature scans AI responses and compares them against the client’s source materials to identify inaccuracies.
When an error is detected, the system highlights it, explains why the information is incorrect, and rewrites the problematic content, all “before the user can see” the inaccuracies. It is important to note, however, that the feature does not fully guarantee the reliability of AI responses.
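For developers who want to experiment, below is a minimal sketch of calling the groundedness-detection preview API over REST. The endpoint path, API version, and request and response field names follow Microsoft’s public preview documentation as we understand it, and should be treated as illustrative assumptions rather than a definitive client.

```python
import requests

# Placeholder values -- replace with your own Content Safety resource.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_KEY = "<your-content-safety-key>"

# Endpoint path and API version per the public preview docs; both may change.
url = f"{ENDPOINT}/contentsafety/text:detectGroundedness"
params = {"api-version": "2024-09-15-preview"}
headers = {
    "Ocp-Apim-Subscription-Key": API_KEY,
    "Content-Type": "application/json",
}

payload = {
    "domain": "Generic",
    "task": "QnA",
    "qna": {"query": "When was the contract signed?"},
    # The AI-generated answer to be checked.
    "text": "The contract was signed on March 5, 2021.",
    # The client's source materials the answer is compared against.
    "groundingSources": [
        "The agreement between the parties was signed on March 5, 2020."
    ],
    # Ask the service to rewrite ungrounded content, not just flag it.
    "correction": True,
}

resp = requests.post(url, params=params, headers=headers, json=payload)
resp.raise_for_status()
result = resp.json()

# Response field names per the preview docs; they may differ across versions.
print("Ungrounded detected:", result.get("ungroundedDetected"))
print("Corrected text:", result.get("correctionText"))
```

With "correction" set to true, the service is expected to return a rewritten version of the ungrounded text alongside the detection result, matching the flag-explain-rewrite flow described above.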
Similar Features in the Industry
Google’s Vertex AI enterprise platform offers a similar “grounding” feature, in which AI model responses are checked against Google Search, the company’s internal data, and, potentially, third-party data sets in the future.
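As a rough illustration of the Google side, the sketch below wires a Google Search retrieval tool into a Vertex AI model call using the Python SDK. The project, location, and model name are placeholders, and the SDK surface shown reflects its documented preview interface, which may change.

```python
import vertexai
from vertexai.preview.generative_models import GenerativeModel, Tool, grounding

# Placeholder project and location -- adjust for your environment.
vertexai.init(project="my-project", location="us-central1")

# Ground responses against Google Search results (preview SDK surface).
search_tool = Tool.from_google_search_retrieval(grounding.GoogleSearchRetrieval())

model = GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    "What is the current population of Tokyo?",
    tools=[search_tool],
)

# Grounded responses also carry metadata linking claims to retrieved sources.
print(response.text)
```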
Microsoft’s Content Safety uses both large and small language models to compare AI-generated responses with the underlying documents, according to a company spokesperson. The company acknowledges, however, that the system is not entirely error-proof, adds NIXsolutions. “Grounding does not solve the ‘accuracy’ problem but helps match generative AI responses to the original documents,” Microsoft clarified.
We’ll keep you updated on any future improvements or changes to the Content Safety feature.