OpenAI aims to battle AI 'hallucinations' with new training method

OpenAI announced it is tackling the issue of AI "hallucinations" through a novel approach to training artificial intelligence models.

The research comes at a critical juncture, as the spread of misinformation generated by AI systems has become a topic of intense debate, particularly in light of the upcoming 2024 US presidential election and the ongoing generative AI boom.

OpenAI made waves in the industry last year with the release of ChatGPT, its chatbot powered by GPT-3.5 and later GPT-4, which quickly garnered over 100 million monthly users, setting a record as the fastest-growing app. Microsoft has demonstrated its confidence in OpenAI, investing more than $13 billion in the startup in a deal that valued it at approximately $29 billion.

AI hallucinations occur when models such as OpenAI's ChatGPT or Google's Bard fabricate information and present it as fact. In one example, Google's Bard made an inaccurate claim about the James Webb Space Telescope in a promotional video. More recently, lawyers used ChatGPT to prepare a New York federal court filing that cited nonexistent cases, potentially exposing them to sanctions.

In their report, the OpenAI researchers acknowledged that even state-of-the-art models are prone to producing falsehoods and exhibit a tendency to invent facts when faced with uncertainty. Such hallucinations pose significant challenges in domains that require multi-step reasoning, as a single logical error can derail an entire solution.

To combat these fabrications, OpenAI's proposed remedy is to reward AI models for each correct step of reasoning they take on the way to an answer, rather than only for the final conclusion. This approach, known as "process supervision" in contrast to "outcome supervision", could also make AI more explainable: by encouraging models to follow a more human-like chain of thought, OpenAI hopes to reduce logical errors and enhance the overall capabilities of AI systems.
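The distinction is easy to state in code. The sketch below is a minimal illustration, not OpenAI's implementation: in practice the checkers would be a learned reward model or human labellers, and the `check_answer`, `check_step` functions and the solution format are invented for the example.

```python
from typing import Callable, List

def outcome_supervised_reward(steps: List[str],
                              check_answer: Callable[[str], bool]) -> List[float]:
    """Outcome supervision: one reward signal, judging only the final answer."""
    return [1.0 if check_answer(steps[-1]) else 0.0]

def process_supervised_reward(steps: List[str],
                              check_step: Callable[[str], bool]) -> List[float]:
    """Process supervision: a reward signal for every individual reasoning step."""
    rewards: List[float] = []
    for step in steps:
        if check_step(step):
            rewards.append(1.0)
        else:
            rewards.append(0.0)
            break  # a single bad step derails the rest of the solution
    return rewards

# A worked solution with a slip in the second step (invented for the example).
solution = ["half of 48 is 24", "24 + 24 = 50", "final answer: 50"]
step_is_correct = {"half of 48 is 24": True,
                   "24 + 24 = 50": False,
                   "final answer: 50": False}

print(outcome_supervised_reward(solution, step_is_correct.__getitem__))  # [0.0]
print(process_supervised_reward(solution, step_is_correct.__getitem__))  # [1.0, 0.0]
```

The point of the second function is that the training signal pinpoints where the chain of thought broke, rather than only reporting that the final answer was wrong.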

Karl Cobbe, a mathgen researcher at OpenAI, explained that detecting and addressing logical mistakes or hallucinations is a crucial step toward building artificial general intelligence (AGI). While OpenAI did not originate the process-supervision approach, the company is actively contributing to its advancement. Cobbe emphasized that the research aims to address hallucinations and improve models' problem-solving abilities.

OpenAI has released an accompanying dataset of 800,000 human labels used to train the model mentioned in the research paper, according to Cobbe.
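The article does not describe the label format, but a step-level dataset of this kind presumably attaches a human judgment to each reasoning step rather than a single verdict per solution. The record below is a hypothetical sketch of one such entry; the field names and the rating scheme are assumptions for illustration, not OpenAI's published schema.

```python
# Hypothetical step-labelled record; field names and the 1/0/-1 rating
# scheme (correct / neutral / incorrect) are illustrative assumptions.
record = {
    "question": "A shop sells 48 apples on Monday and half as many on "
                "Tuesday. How many does it sell in total?",
    "steps": [
        {"text": "Half of 48 is 24, so Tuesday's sales are 24.", "rating": 1},
        {"text": "Total sales are 48 + 24 = 72.", "rating": 1},
    ],
}
```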
