
Why OpenAI Says ‘Not Yet’ to GPT-5
by: Emily Rosemary Collins


In a recent dialogue concerning the potential risks of AI systems, Sam Altman, CEO and co-founder of OpenAI, firmly stated that the organization is not developing GPT-5, the anticipated successor to the AI language model GPT-4, which launched this past March.

💡 Recommended: GPT-4 is Out! A New Language Model on Steroids

Sam Altman on “The Letter”

The backdrop was an event at the prestigious Massachusetts Institute of Technology (MIT), where Altman addressed an open letter circulating within the tech community.

The letter, urging labs like OpenAI to halt the creation of AI systems more potent than GPT-4, underscored anxieties about the safety of such future systems. This plea, however, has been met with criticism from various industry professionals, including some of its own signatories.

Opinions on the potential threat posed by AI vary widely—some envision an existential crisis, while others see more ordinary challenges—and there’s no clear consensus on how to enact a “pause” in development.

Altman: GPT-5 Is Not Under Development

Altman, addressing the audience at MIT, said the letter lacked technical specifics about where a developmental pause would be necessary. He also corrected an early version of the letter that insinuated OpenAI was already working on GPT-5. “That is not the case, and it won’t be for a while,” clarified Altman, dismissing the assertion as somewhat misguided.

Nonetheless, OpenAI’s decision to hold off on GPT-5 doesn’t mean it’s halting the progression of GPT-4. As Altman emphasized, the company is considering the safety ramifications of its ongoing projects. “We are conducting other explorations atop GPT-4 that I believe carry a range of safety issues, which were overlooked in the letter,” he pointed out.

You can view the entire conversation here:

[Embedded YouTube video]

Altman’s remarks provide valuable insight—not necessarily about OpenAI’s future roadmap, but rather the complexities surrounding the AI safety debate and how we quantify and track advancement. While Altman confirms that OpenAI isn’t actively developing GPT-5, the statement is not as informative as it may seem.

The Fallacy of Version Numbers

This confusion partly arises from what could be termed the ‘fallacy of version numbers.’ This notion suggests that sequentially numbered tech upgrades equate to clear, linear enhancements in capability.

It’s a misconception rooted in consumer tech, where new models or operating systems often carry higher numbers as a marketing strategy. It’s tempting to infer that the iPhone 35 is superior to the iPhone 34 simply because the numerical value is larger. (Yeah, these version numbers don’t exist yet.)

This flawed reasoning has seeped into the realm of artificial intelligence, particularly when discussing models like OpenAI’s language processors.

Unfortunately, it’s not just tech enthusiasts sharing overzealous Twitter threads, prophesying the advent of superintelligent AI based on incrementing version numbers. Even seasoned commentators, often lacking falsifiable evidence to support their claims about AI superintelligence, fall into this trap.

They resort to drawing ambiguous graphs with “progress” and “time” axes, sketching an upward trending line and presenting it without critical evaluation.

This is not an attempt to dismiss the genuine concerns about AI safety or overlook the rapid evolution of these systems, which we are yet to fully control.

Instead, it emphasizes the necessity of distinguishing between well-founded and flawed arguments. Assigning a number to something—a new phone model or the concept of intelligence—doesn’t necessarily imply a comprehensive understanding.

The primary argument made here is to shift the spotlight from an arbitrary numbering system to the actual capabilities and improvements of the models.

Progress can often come from integrating existing AI models with other systems to enhance their functionality, rather than just focusing on developing newer versions.

Again: first, integrate existing, powerful AI systems into legacy systems before focusing on new flagship versions. There is plenty of juicy, low-hanging fruit to harvest quickly!

The distinction between development and training is another crucial point in this discussion. While the development of improvements is an ongoing process, the training of a new model is a separate, extensive procedure that requires considerable resources.

This is why Altman’s statement that OpenAI isn’t currently working on GPT-5 won’t necessarily provide reassurance to those apprehensive about AI safety.

The organization is still enhancing GPT-4’s potential (by connecting it to the internet, for instance), and other industry players are developing equally ambitious tools, allowing AI systems to act on behalf of users. Numerous efforts are undoubtedly underway to optimize GPT-4, and OpenAI might introduce an intermediate version, such as GPT-4.5 (similar to the GPT-3.5 release), further illustrating how version numbers can mislead.

Even if a hypothetical global moratorium on new AI developments were enforceable, it’s evident that society is already grappling with the capabilities of existing systems.

GPT-5 may not be on the horizon, but the question remains: does it matter when we are still coming to grips with the depth and breadth of GPT-4’s capabilities?


May 15, 2023 at 01:55AM

The original post is available on Be on the Right Side of Change by Emily Rosemary Collins.