Microsoft Copilot in Bing represents a significant leap in AI-enhanced web search, offering users summarized search results, a conversational chat experience, and the ability to generate creative content. This article provides an in-depth look at how Microsoft approaches responsible AI in Copilot, covering key aspects like privacy, safety, transparency, and control.
Microsoft is deeply committed to responsible AI. The development of Copilot in Bing is guided by Microsoft's AI Principles and Responsible AI Standard. This commitment involves collaboration with responsible AI experts within Microsoft, including the Responsible AI Office, engineering teams, Microsoft Research, and Aether.
Key areas of focus include the principles set out in Microsoft's AI Principles:

- Fairness
- Reliability and safety
- Privacy and security
- Inclusiveness
- Transparency
- Accountability
To understand how Microsoft ensures responsible AI in Copilot in Bing, it's essential to understand the following key terms:

- Large language model (LLM): an AI model trained on large volumes of text that generates natural-language responses.
- Prompt: the text a user (or the system) sends to the model.
- Grounding: supplying the model with content retrieved from the web so that its responses stay anchored to source material.
- Red teaming: deliberate adversarial testing intended to surface potential harms before release.
Copilot in Bing leverages advanced technologies to deliver an innovative search experience. Here's a breakdown of the process:

1. The user submits a prompt in the chat interface.
2. Bing runs one or more web searches based on that prompt.
3. The search results are passed to the large language model (LLM) along with the prompt.
4. The LLM generates a response grounded in those results, with citations linking back to the source websites.
Within this process, grounded responses are favored: the information retrieved from the web supports the LLM's statements, so the response remains relevant to its sources. In contrast, a non-grounded response contains material generated by the LLM that cannot be found in any input source.
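The grounding idea can be sketched in code. The following is an illustrative sketch only (the function names and the word-overlap check are assumptions, not Microsoft's actual implementation): retrieved web snippets are placed in the prompt, and a crude check verifies that a response's sentences can be traced back to those snippets.

```python
# Illustrative grounding sketch (hypothetical names; not Microsoft's
# actual pipeline): web snippets retrieved for the user's query are
# embedded in the prompt, and responses are checked against them.

def build_grounded_prompt(query, snippets):
    """Combine retrieved web snippets with the user's query."""
    sources = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer using ONLY the sources below and cite them by number.\n"
        f"Sources:\n{sources}\n\nQuestion: {query}"
    )

def is_grounded(response, snippets):
    """Crude check: every sentence shares a word with some source."""
    source_words = {w.lower() for s in snippets for w in s.split()}
    sentences = [s for s in response.split(".") if s.strip()]
    return all(
        any(w.lower() in source_words for w in sent.split())
        for sent in sentences
    )
```

A real system would use far more robust entailment checks than word overlap, but the structure (retrieve, inject into the prompt, verify against sources) is the essence of grounding.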
Microsoft employs a rigorous approach to identify, measure, and mitigate potential risks associated with Copilot.
Initial work began in the summer of 2022 with exploratory analysis of GPT-4, carried out in partnership with OpenAI and including extensive red team testing. This rigorous exploration evaluates how the newest technologies behave while still in development, deliberately attempting to elicit harmful responses in order to identify potential avenues of abuse.
The team creates templates that capture common conversational structures, then reviews the resulting dialogues for harmful content by adapting and applying annotation guidelines. Aggregated into a responsible AI index, these reviews measure how effectively Copilot's mitigations work.
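The measurement step above can be sketched as a defect-rate calculation. This is a hypothetical illustration (the template names, record structure, and function are assumptions): annotators label templated conversations as harmful or not, and the fraction labeled harmful summarizes how often mitigations fail.

```python
# Hypothetical measurement sketch: annotators score templated
# conversations against guidelines, and a defect rate summarizes how
# often harmful content slipped through. Names are illustrative.

def defect_rate(annotations):
    """Fraction of conversations labeled harmful (0.0 when none scored)."""
    if not annotations:
        return 0.0
    return sum(1 for a in annotations if a["harmful"]) / len(annotations)

conversations = [
    {"template": "jailbreak-attempt", "harmful": True},
    {"template": "benign-recipe",     "harmful": False},
    {"template": "benign-weather",    "harmful": False},
    {"template": "self-harm-probe",   "harmful": False},
]
rate = defect_rate(conversations)  # 1 harmful out of 4 -> 0.25
```

Tracking such a rate across releases is one simple way an index like this can show whether mitigations are improving over time.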
Protecting user privacy is a core principle in the development of Copilot in Bing. The platform is designed to collect and use personal data only when necessary and not retain it longer than required. Microsoft protects user privacy through transparency, control and minimization of user data.
Copilot employs facial blurring when users upload images in chat prompts, protecting the privacy of the people pictured. Digital representations of people in uploaded images are also not stored or shared with third parties.
Microsoft takes extra precautions to protect children and young people using Copilot in Bing. All Microsoft accounts for users under 13 (or as defined by local laws) are blocked from accessing the experience. SafeSearch is set to "Strict" for these users to prevent exposure to inappropriate content.
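The age-gating policy described above can be expressed as a small decision function. This is a hypothetical sketch, not Microsoft's code: the 13-year threshold comes from the text (and may differ by local law), while the teen branch and the setting names are illustrative assumptions.

```python
# Hypothetical age-gating sketch of the policy described above.
# Threshold and setting names are illustrative, not Microsoft's API.

MIN_AGE = 13  # may differ by local law

def access_policy(age):
    """Return access and SafeSearch settings for a given account age."""
    if age < MIN_AGE:
        # Under-13 accounts are blocked from the experience entirely.
        return {"allowed": False, "safesearch": "Strict"}
    if age < 18:
        # Assumption: young users above the threshold get Strict SafeSearch.
        return {"allowed": True, "safesearch": "Strict"}
    return {"allowed": True, "safesearch": "Moderate"}
```

Centralizing the rule in one function makes the policy easy to audit and to adjust per jurisdiction.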
Microsoft believes that trust in technology depends on transparent access to data usage and user control. Copilot in Bing offers insights into how and why the product works, while keeping the user in control.
Microsoft continuously monitors Copilot in Bing and has processes for addressing potential abuse or violations of the Terms of Use. This includes tools for users to submit feedback and report issues to Microsoft's operations teams.
Microsoft's Copilot in Bing represents a significant advancement in AI-powered web search. By prioritizing responsible AI practices, Microsoft aims to deliver a powerful and beneficial tool while mitigating potential risks and ensuring a safe and trustworthy experience for all users. As the technology evolves, Microsoft remains committed to continuously improving its approach to responsible AI and adapting to new challenges.