OpenAI, which counts Microsoft among its major investors (though it is not a Microsoft subsidiary), is using its large language models, such as GPT-4, to build a scalable, consistent, and customizable content moderation system. The company claims that GPT-4 can not only assist with content moderation decisions but also aid policy development and iteration, cutting that cycle from months down to hours.
With GPT-4, OpenAI aims to achieve more consistent content labeling: the model parses the rules and nuances in a written content policy and adapts instantly to any updates. Because it can quickly grasp the intricacies of different policies and regulations, moderation judgments become more uniform.
OpenAI believes GPT-4 can significantly speed up the moderation process, helping companies accomplish roughly six months' worth of work in a single day. The company is also actively exploring ways to improve GPT-4's prediction quality, for example by incorporating chain-of-thought reasoning or self-critique.
Additionally, OpenAI is experimenting with ways to detect unknown risks, leveraging models inspired by Constitutional AI to flag potentially harmful content from only a high-level description of what counts as harm.
In a recent blog post, OpenAI described its vision of a future in which AI plays a central role in moderating online platforms according to platform-specific policies, relieving much of the burden on human moderators. The approach is available to anyone with OpenAI API access, enabling them to build their own AI-assisted moderation systems.
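In practice, the policy-as-prompt approach described in the post can be sketched with a chat-style API call: the moderation policy goes into the system message, and the content to be judged goes into the user message. The policy excerpt and helper function below are illustrative assumptions for demonstration, not OpenAI's published prompts or implementation.

```python
# Illustrative sketch of policy-as-prompt moderation labeling.
# POLICY and build_moderation_messages are hypothetical, not OpenAI's actual prompts.

POLICY = """\
Category S0: no violation.
Category S1: harassment or threats.
Category S2: instructions that facilitate wrongdoing.
"""  # hypothetical policy excerpt a platform might supply


def build_moderation_messages(policy: str, content: str) -> list[dict]:
    """Assemble a chat payload asking the model to label content against
    a written policy and reply with a category plus a short rationale."""
    return [
        {
            "role": "system",
            "content": (
                "You are a content moderator. Apply the policy below to the "
                "user's message and reply with a category label and a "
                "one-sentence rationale.\n\nPOLICY:\n" + policy
            ),
        },
        {"role": "user", "content": content},
    ]


messages = build_moderation_messages(POLICY, "Example post to review.")
# These messages would then be sent to a chat model through the OpenAI API,
# e.g. client.chat.completions.create(model="gpt-4", messages=messages)
```

Because the policy lives entirely in the prompt, updating a rule means editing text and re-running, which is what makes the "months to hours" iteration cycle plausible.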
In other news, OpenAI has expanded the “custom instructions” feature to all ChatGPT users, including those on the free tier. This feature allows users to have more control over how ChatGPT responds to their queries.
OpenAI continues to develop advanced AI systems for content moderation and chatbot capabilities, aiming for a safer, more efficient online environment that aligns with platform-specific policies.