GPT-5 "Disappoints": Has AI Hit a Wall?

Deep News
Aug 17

OpenAI's highly anticipated GPT-5 has failed to deliver revolutionary breakthroughs. While the path to Artificial General Intelligence (AGI) appears to have encountered bottlenecks, market focus is shifting toward leveraging existing technology to create broader commercial value at the product and service level.

Last week's release of GPT-5 should have been another highlight moment for OpenAI. Sam Altman had previewed the model as "an important step on the road to AGI." Instead, the launch quickly sparked disappointment. Users posted the model's basic errors on social media, such as mislabeled US maps, while longtime users complained about its performance and changes to its "personality," calling its benchmark results mediocre.

Intentionally or not, GPT-5's launch makes clear that the nature of the AI race has changed. Even without extraordinary progress toward AGI or so-called superintelligence, the shift may bring more innovation to products and services built on AI models.

The controversy has put a sharp question to Silicon Valley: after hundreds of billions of dollars of investment, has generative AI's technological progress approached the limits of its current phase? The question not only challenges the foundation of OpenAI's $500 billion valuation but also prompts outside observers to reassess AI's development trajectory.

Despite the doubts swirling around frontier-model development, enthusiasm in capital markets and industrial applications hasn't subsided. Investors appear to value AI's actual growth in commercial applications more than distant promises of AGI. This shift signals that the second half of the AI race will pivot from all-out sprints in model capability to more pragmatic, cost-effective commercialization.

**Gap Between Expectations and Reality**

Over the past three years, AI researchers, users, and investors have grown accustomed to the pace of rapid technological capability advancement. But GPT-5's release broke this momentum. GPT-5 appeared "clumsy" due to technical glitches, receiving widespread user complaints and even being considered inferior to previous versions. CEO Sam Altman acknowledged the "bumpy" launch, explaining that an underlying "automatic switcher" malfunction caused the system to call weaker models.

Thomas Wolf, co-founder and chief scientist of open-source AI startup Hugging Face, stated:

"People expected to discover something entirely new from GPT-5, but this time we didn't see it."

This sense of disappointment was particularly intense because before GPT-5's release, the industry was filled with optimistic predictions about AGI's imminent arrival, with Altman even predicting it would come during Trump's presidency. Gary Marcus, professor emeritus of psychology and neuroscience at New York University and prominent AI critic, stated:

"GPT-5 is a marker of the entire 'scaling to AGI' route, but it didn't succeed."

Meanwhile, the competitive landscape has quietly changed. Rivals such as Google, Anthropic, DeepSeek, and Musk's xAI have narrowed the gap with OpenAI at the frontier; its dominance is no longer assured.

**"Scaling Laws" Encounter Bottlenecks**

Behind GPT-5's underperformance lies the possibility that "scaling laws," the core logic driving large language model development, are approaching their limits. For the past five years, companies like OpenAI and Anthropic have followed a simple formula: pour in more data and more computing power to produce larger, better models.

However, this path faces two major constraints. First is data depletion, as AI companies have nearly exhausted all available free training data on the internet. Although they're seeking new data sources through deals with publishers and copyright holders, whether this is sufficient to drive frontier technological progress remains unknown.

Second are the physical and economic limits of computing power. Training and running large AI models requires enormous energy consumption. It's estimated that GPT-5's training utilized hundreds of thousands of NVIDIA's next-generation processors. Altman also admitted to reporters this week that while underlying AI models "are still rapidly improving," chatbots like ChatGPT "won't get better anymore."

**The Specter of AI Winter**

Signs of slowing technological progress have reminded some veteran researchers of historical "AI winters." Stuart Russell, computer science professor at UC Berkeley, warns that the current situation has similarities to the 1980s bubble burst, when technological innovations failed to deliver on promises and couldn't provide investment returns. "The bubble burst, systems couldn't make money, we couldn't find enough high-value applications," he stated:

"It's like musical chairs, everyone's fighting not to be the last one holding the AI baby."

Russell warns that inflated expectations leave investor confidence fragile: if investors decide the bubble has been overinflated, "they'll flee to the exit as quickly as possible, and things could collapse very, very, very fast."

However, capital continues flowing into AI startups and infrastructure projects. According to Bain & Company and Crunchbase data, AI has accounted for 33% of global venture capital investment this year.

**From AGI to Productization**

The nature of the race is changing. Rather than technological stagnation, it's more about focus shifting. Princeton University researcher Sayash Kapoor points out that AI companies "are slowly accepting the fact that they're building infrastructure for products."

Kapoor's team's assessment found that GPT-5 did not clearly lead on every task, but that it excelled in cost-effectiveness and speed. That could open doors for product and service innovation built on AI models, even if the model hasn't brought extraordinary progress toward AGI. Meta Chief Scientist Yann LeCun likewise believes that LLMs trained purely on text are hitting diminishing returns, but that "world models" trained on multimodal data such as video, aimed at understanding the physical world, still have enormous potential.

This trend is also reflected in corporate strategy. Companies like OpenAI have begun sending "forward-deployed engineers" to client companies to help integrate their models. Kapoor commented:

"If companies thought they were about to automate all human work, they wouldn't be doing this."

**Investors Bet on Application Prospects**

While experts debate the technological outlook, Silicon Valley investors seem unworried. AI-related stocks and startup valuations continue to soar, with NVIDIA's market cap climbing to $4.4 trillion, near historical highs. SoftBank Group, an OpenAI investor, has seen its stock price rise more than 50% in the past month.

What drives the investment enthusiasm is no longer AGI's grand narrative but strong growth from products like ChatGPT, which reportedly generates $12 billion in annual recurring revenue for OpenAI. David Schneider, partner at OpenAI investor Coatue Management, says the company's product has become "a verb," just as Google once did.

Many investors believe enormous value remains untapped in the current generation of models. Peter Deng, general partner at venture capital firm Felicis, stated:

"In business and consumer applications, startups and enterprises have barely scratched the surface of these models' potential."

As Hugging Face's Thomas Wolf said, even if AGI or superintelligence can't be achieved in the short term, "there are still many cool things that can be created." For the market, this may be the most important message at the current stage.

