    OpenAI Launches the o1 Model with ‘Reasoning Capabilities’

    OpenAI unveiled a new AI model called o1 on September 12, 2024. According to the company, the model can work through complicated problems faster than a human can and offers reasoning capabilities superior to any AI model it has previously launched.

    Capable of Reasoning Faster than Humans

    Notably, the company has also released a smaller, cheaper variant called o1-mini. Both models are designed to spend more time “thinking” before responding, which improves their ability to handle complex reasoning tasks, particularly in science, coding, and mathematics.

    According to OpenAI, the o1 models have demonstrated impressive performance, solving problems at a level comparable to PhD students in challenging subjects such as physics, mathematics, astronomy, chemistry, and biology.

    How does OpenAI o1 work?

    The o1 models are trained to spend more time thinking through a problem, much like a person would, before giving a response. In the process, they learn to recognize and refine their mistakes and to try different strategies when an approach is not working.

    For instance, on a qualifying exam for the International Mathematical Olympiad, GPT-4o correctly solved only 13% of the problems, while the new reasoning model scored 83%, a significantly higher result than any previously launched model.

    However, the new models do not yet support many of ChatGPT’s familiar features, such as browsing the web for information and uploading files and images. For many common use cases, GPT-4o will remain the more capable option in the near term.
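
    For developers, the new models can be reached through OpenAI’s existing chat completions API. The snippet below is a minimal sketch, assuming the official OpenAI Python SDK, an OPENAI_API_KEY environment variable, and the announced “o1-mini” model identifier; exact availability and supported request options should be checked against OpenAI’s current documentation.

```python
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Ask the smaller reasoning model a multi-step problem. The o1 models spend
# extra time on hidden reasoning before answering, so requests typically take
# longer and use more tokens than a comparable GPT-4o call.
response = client.chat.completions.create(
    model="o1-mini",  # assumed model identifier from the launch announcement
    messages=[
        {
            "role": "user",
            "content": (
                "A train travels 120 km in 90 minutes. "
                "What is its average speed in km/h? Explain your steps."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```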

    OpenAI Addresses Safety Concerns

    The company has also developed a new safety training approach that draws on the models’ reasoning capabilities to make them adhere to safety and alignment guidelines.

    The company has bolstered its safety work, internal governance, and collaboration with the federal government. This includes rigorous testing and evaluation using its Preparedness Framework, best-in-class red teaming, and board-level review processes by its Safety and Security Committee.

    Further, to advance its commitment to AI safety, the company has formalized agreements with the US and UK AI Safety Institutes and has begun operationalizing them by granting the institutes early access to a research version of the model. This is an important first step in the partnership and helps establish a process for research, evaluation, and testing of future models prior to their public release.
