OpenAI plans to remove all fake information on ChatGPT

ChatGPT, the OpenAI chatbot, continues to spread to a wider and wider audience thanks to its many uses, and it is becoming part of everyone's daily life, much like the mobile phone.

However, the tool remains imperfect: it frequently provides incorrect information, which the project's developers call "hallucinations". OpenAI intends to adjust the model quickly in order to offer an efficient and reliable AI.


Problematic hallucinations

OpenAI recently announced its intention to improve the mathematical problem-solving capabilities of its GPT-4 model in order to reduce the hallucinations that have become such a visible problem in the development of AI.

Last March, the release of GPT-4 further raised the profile of this remarkable tool that everyone is now using. However, ChatGPT, like all of these new AI tools, sometimes fails to provide reliable information and instead produces the content known as hallucinations.

You should know that although the model behind ChatGPT is complex, it ultimately comes down to probability: to produce its content and meet users' needs, it generates text by predicting, one word at a time, the most likely continuation of a sequence, based on patterns in its training data.
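The idea can be illustrated with a toy sketch (this is in no way OpenAI's actual model, and the vocabulary and probabilities below are invented): the program repeatedly samples a likely next word given the previous one, which is, in miniature, the probabilistic word-by-word generation described above.

```python
import random

# Toy next-word probabilities, standing in for patterns a real model
# would learn from vast amounts of existing text (values are made up).
bigram_probs = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"ran": 0.8, "sat": 0.2},
}

def next_word(word, rng):
    """Sample a next word in proportion to its learned probability."""
    candidates = bigram_probs.get(word)
    if not candidates:
        return None  # no continuation known: stop generating
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

rng = random.Random(0)
sentence = ["the"]
while (w := next_word(sentence[-1], rng)) is not None:
    sentence.append(w)
print(" ".join(sentence))
```

Nothing in this sampling loop checks whether the resulting sentence is *true*, only whether it is statistically plausible, which is exactly why hallucinations can occur.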


These hallucinations occur when the chatbot generates false information, inventing events or people that do not exist, or providing inaccurate statements on a particular subject.

In an attempt to improve ChatGPT's results, OpenAI has tested two feedback-based training approaches: on one side, supervision of the final result ("outcome supervision"), and on the other, supervision of each step of the reasoning process used to reach that result ("process supervision").
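The difference between the two approaches can be sketched as follows (a hypothetical illustration, not OpenAI's code): given a chain of reasoning steps for a math problem, outcome supervision rewards only the final answer, while process supervision rewards each intermediate step that checks out.

```python
# A chain of reasoning steps where the second step is wrong
# (5 * 4 is 20, not 21), so the final answer is also wrong.
steps = ["2 + 3 = 5", "5 * 4 = 21", "21 - 1 = 20"]
final_answer_correct = False
step_labels = [True, False, False]  # per-step correctness checks

def outcome_supervision(final_correct):
    """Reward depends only on whether the final result is correct."""
    return 1.0 if final_correct else 0.0

def process_supervision(labels):
    """Each intermediate step is checked and rewarded individually."""
    return sum(1.0 for ok in labels if ok) / len(labels)

print(outcome_supervision(final_answer_correct))  # 0.0
print(process_supervision(step_labels))           # 0.333...
```

Under outcome supervision the model only learns "this attempt failed", whereas under process supervision it receives a signal pinpointing *which* step went wrong, which is closer to how a human tutor corrects reasoning.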

A promising lead that calls for more research

After evaluating the two approaches in depth on sets of mathematical problems, OpenAI researchers concluded that "process supervision" produces better results and aligns more closely with human reasoning. Conversely, "outcome supervision" yields more inconsistent results, from which it is difficult to identify a clear trend.

OpenAI has also recognized that the implications of "process supervision" reach beyond mathematics and that more research is needed in many other areas. To facilitate that research, the company has released the full dataset, inviting anyone to run their own experiments on the subject.

Although OpenAI did not point to a specific case that prompted this work on hallucinations, we remember that Bard, the Google chatbot, made a serious factual error during its launch demonstration, at real cost to the company.

Even more recently, a lawyer used ChatGPT in a case against a Colombian airline, citing many supposedly similar precedents. The only problem: all of those cases were fake, invented outright by ChatGPT.

It is therefore easy to see why artificial intelligence must continue to evolve, so that these chatbots become more efficient and, above all, free of hallucinations in the data they generate.

