
OpenAI Launches the o1 Model with 'Reasoning Capabilities'

27-09-2024


OpenAI unveiled a new AI model called o1 on September 12, 2024. The model is designed to work through complicated problems step by step, and OpenAI says it has stronger reasoning capabilities than any model the company has previously released.

Capable of Reasoning Through Complex Problems

Notably, the company has also released a smaller, cheaper version called o1-mini. These models are designed to spend more time thinking before responding, which enhances their ability to handle complex reasoning tasks, particularly in science, coding, and mathematics.

According to OpenAI, the o1 models solve problems at a level comparable to PhD students, especially in challenging subjects such as physics, mathematics, astronomy, chemistry, and biology.

How does OpenAI o1 work?

The o1 models are trained to spend more time thinking through a problem, much like a person would, before giving a response. Through this training they learn to recognize and correct their own mistakes, break hard problems into simpler steps, and try different strategies when an approach is not working.
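For illustration, here is a minimal sketch of how such a reasoning model might be queried through the OpenAI Python SDK. The model name, prompt, and parameter choices below are assumptions for the example, not details from this article; at launch the o1 models reportedly accepted only plain user messages, so no system message or temperature is set.

```python
# Illustrative sketch: calling a reasoning model via the OpenAI Python SDK.
# The model name "o1-mini" and the sample prompt are assumptions for this example.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

response = client.chat.completions.create(
    model="o1-mini",
    messages=[
        {
            "role": "user",
            "content": (
                "A bat and a ball cost $1.10 together. The bat costs $1.00 "
                "more than the ball. How much does the ball cost?"
            ),
        }
    ],
)

# The model reasons internally before answering; only the final answer is returned.
print(response.choices[0].message.content)
```

The extra "thinking" happens server-side before the reply is produced, which is why responses from these models tend to take longer than those from earlier chat models.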

For instance, on a qualifying exam for the International Mathematics Olympiad, GPT-4o correctly solved only 13% of problems, while the new reasoning model scored 83%, a significantly higher result than any previously launched model.

At launch, however, the o1 models lack many of ChatGPT's existing features, such as browsing the web for information and uploading files and images. For many common use cases, GPT-4o will remain the more capable option in the near term.

OpenAI Addresses Safety Concerns

The company has developed a new safety training approach that leverages the models' reasoning capabilities to make them adhere to safety and alignment guidelines.

The company has also bolstered its safety work, internal governance, and collaboration with the federal government. This includes rigorous testing and evaluations under its Preparedness Framework, best-in-class red teaming, and board-level review processes by the Safety and Security Committee.

Further, to advance its commitment to AI safety, the company recently formalized agreements with the US and UK AI safety institutes and has begun operationalizing them, granting the institutes early access to a research version of the model. This is an important first step in the partnership and will help establish a process for research, evaluation, and testing of future models prior to their public release.
