OpenAI aims to battle AI 'hallucinations' with new training method


OpenAI announced it is tackling the issue of AI "hallucinations" through a novel approach to training artificial intelligence models.

The research comes at a critical juncture, as the spread of misinformation generated by AI systems has become a topic of intense debate, particularly in light of the upcoming 2024 US presidential election and the ongoing generative AI boom.

OpenAI made waves in the industry last year with the release of ChatGPT, its chatbot powered by GPT-3.5 and GPT-4, which quickly garnered over 100 million monthly users, setting a record as the fastest-growing consumer app. Microsoft has demonstrated its confidence in OpenAI's potential, having invested over $13 billion in the startup, thereby valuing it at approximately $29 billion.

AI hallucinations occur when models, such as OpenAI's ChatGPT or Google's Bard, fabricate information and present it as factual. For instance, Google's Bard made an inaccurate claim about the James Webb Space Telescope in a promotional video. More recently, a New York federal court filing cited nonexistent cases that ChatGPT had fabricated, potentially exposing the attorneys involved to sanctions.

In their report, the OpenAI researchers acknowledged that even state-of-the-art models are prone to producing falsehoods and exhibit a tendency to invent facts when faced with uncertainty. Such hallucinations pose significant challenges in domains that require multi-step reasoning, as a single logical error can derail an entire solution.

To combat these fabrications, OpenAI's potential solution involves training AI models to receive a reward for each correct step of reasoning on the way to an answer, rather than only for a correct final conclusion. This approach, known as "process supervision," as opposed to "outcome supervision," aims to promote more explainable AI. By encouraging models to follow a more human-like chain of thought, OpenAI hopes to mitigate logical errors and enhance the overall capabilities of AI systems.
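The paper itself does not publish training code, but the distinction is easy to sketch. Below is a minimal, hypothetical illustration in Python (the `Solution` structure and reward functions are assumptions made for this example, not OpenAI's actual implementation) of how a per-step reward signal differs from a single end-of-solution reward:

```python
# Minimal sketch of outcome vs. process supervision rewards.
# All names here (Solution, the reward functions) are illustrative
# assumptions, not OpenAI's actual training code.
from dataclasses import dataclass

@dataclass
class Solution:
    steps: list[str]          # the model's chain of thought, one entry per step
    step_correct: list[bool]  # a human label for each individual step
    final_correct: bool       # whether the final answer matches the ground truth

def outcome_reward(sol: Solution) -> list[float]:
    # Outcome supervision: one sparse signal at the very end.
    # Intermediate steps receive no feedback of their own.
    return [0.0] * (len(sol.steps) - 1) + [1.0 if sol.final_correct else 0.0]

def process_reward(sol: Solution) -> list[float]:
    # Process supervision: each reasoning step is rewarded on its own
    # merits, so a single logical slip is penalized where it happens
    # instead of silently corrupting the whole solution.
    return [1.0 if ok else 0.0 for ok in sol.step_correct]

sol = Solution(
    steps=["x + 3 = 7", "x = 7 - 3", "x = 5"],  # last step has an arithmetic error
    step_correct=[True, True, False],
    final_correct=False,
)
print(outcome_reward(sol))  # [0.0, 0.0, 0.0] -- only signals that the end failed
print(process_reward(sol))  # [1.0, 1.0, 0.0] -- pinpoints the faulty step
```

The practical upshot is that a reward model trained on step-level labels can locate exactly where a derivation goes wrong, whereas an outcome-only signal cannot distinguish a lucky guess from sound reasoning.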

Karl Cobbe, a mathgen (mathematical reasoning) researcher at OpenAI, explained that detecting and addressing logical mistakes or hallucinations is a crucial step toward building artificial general intelligence (AGI). While OpenAI did not originate the process-supervision approach, the company is actively contributing to its advancement. Cobbe emphasized that the research aims to address hallucinations and improve models' problem-solving abilities.

OpenAI has released an accompanying dataset of 800,000 human labels used to train the model mentioned in the research paper, according to Cobbe.
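Cobbe did not describe the dataset's schema in the announcement; purely as an illustration, step-level human labels of this kind might be stored as one JSON record per solution, with a rating attached to each step. The field names below are assumptions for this sketch, not the released dataset's actual format:

```python
import json

# Hypothetical record layout for step-level human labels; the actual
# released dataset defines its own schema.
record = json.loads("""
{
  "problem": "Solve for x: x + 3 = 7",
  "steps": [
    {"text": "Subtract 3 from both sides.", "rating": 1},
    {"text": "x = 7 - 3 = 4",               "rating": 1}
  ]
}
""")

# A rating of 1 marks a step the labeler judged correct; labels like
# these are what a process-supervised reward model learns to predict.
for step in record["steps"]:
    print(step["rating"], step["text"])
```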
