
    OpenAI Launches the o1 Model with ‘Reasoning Capabilities’


    OpenAI unveiled a new AI model called o1 on September 12, 2024. The company says the model can work through complicated problems and offers stronger reasoning capabilities than any AI model it has previously released.

    Capable of Reasoning Faster than Humans

    Notably, the company has also released a smaller, cheaper version of o1 called o1-mini. Both models are designed to spend more time “thinking” before responding, which enhances their ability to handle complex reasoning tasks, particularly in science, coding, and mathematics.

    According to OpenAI, the o1 models have demonstrated impressive performance, solving problems at a level comparable to PhD students in challenging subjects such as physics, mathematics, astronomy, chemistry, and biology.


    How does OpenAI’s o1 work?

    The o1 models are trained to spend more time thinking through a problem, much as a person would, before producing a response. Through this training, they learn to recognize and refine their mistakes and to try different strategies when an approach is not working.
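    To give a concrete sense of how this looks in practice, the sketch below shows how a developer might send a prompt to o1-mini using OpenAI’s official Python SDK. It is a minimal sketch, not a definitive implementation: the model name, account access, and environment setup (an OPENAI_API_KEY variable) are assumptions based on the launch announcement rather than details reported in this article.

    from openai import OpenAI

    # Minimal sketch: asking the o1-mini reasoning model a question via
    # OpenAI's Python SDK. Assumes the `openai` package is installed and an
    # OPENAI_API_KEY environment variable is set; model availability depends
    # on your account's access tier.
    client = OpenAI()

    response = client.chat.completions.create(
        model="o1-mini",  # the smaller, cheaper reasoning model mentioned above
        messages=[
            {
                "role": "user",
                "content": "Explain, step by step, how many prime numbers there are between 1 and 50.",
            }
        ],
    )

    # The extra "thinking" happens server-side before the answer is returned.
    print(response.choices[0].message.content)

    Because the additional reasoning happens on OpenAI’s servers, the request itself looks the same as a call to earlier chat models; only the model name changes.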

    For instance, on a qualifying exam for the International Mathematics Olympiad, GPT-4o correctly solved only 13% of the problems, while the new reasoning model scored 83%, significantly higher than any previously launched model.

    At launch, however, the o1 models lack many of the features that make ChatGPT useful, such as browsing the web for information and uploading files and images. For many common use cases, GPT-4o will remain the more capable option in the near term.

    OpenAI Addresses Safety Concerns

    The company has also developed a new safety training approach that leverages the models’ reasoning capabilities to make them adhere more closely to safety and alignment guidelines.


    The company has also bolstered its safety work, internal governance, and collaboration with federal governments. This includes rigorous testing and evaluation under its Preparedness Framework, best-in-class red teaming, and board-level review processes, including by its Safety and Security Committee.

    Further, to advance its commitment to AI safety, the company has formalized agreements with the U.S. and U.K. AI Safety Institutes. Under these agreements, the institutes receive early access to a research version of the model, an important first step in the partnership that will help establish a process for research, evaluation, and testing of future models prior to their public release.


    Kanishka Malhotra is a seasoned journalist with a deep passion for reporting and uncovering the truth. Specializing in research and investigative journalism, she has covered a wide range of topics, including social issues, travel, lifestyle, technology, and entertainment. She believes in sharing her creativity with the world through words, and her relentless pursuit of the truth continues to leave a mark in the world of journalism.
