Intel Signals Return to AI Race with New Chip to Launch Next Year

Reuters
10/15
  • Intel's Crescent Island chip targets AI market with energy efficiency

  • Intel faces challenges from AMD and Nvidia in AI chip market

  • Nvidia invests $5 billion in Intel for future chip development

Intel announced on Tuesday a new artificial intelligence chip for the data center that it plans to launch next year, in a renewed push to break into the AI chip market.

The new chip, a graphics processing unit (GPU), will be optimized for energy efficiency and support a wide range of uses such as running AI applications, a workload known as inference, Intel Chief Technology Officer Sachin Katti said at the Open Compute Summit on Tuesday.

"It emphasizes that focus that I talked about earlier, inference, optimized for AI, optimized, optimized for delivering the best token economics out there, the best performance per dollar out there," Katti said.

The new chip, called Crescent Island, is the struggling U.S. chipmaker's latest attempt to capitalize on the frenzy in AI spending that has generated billions in revenue for AMD and Nvidia.

The company's plans trail behind competitors and represent the significant challenge Intel's executives and engineers face to capture a meaningful portion of the market for AI chips and systems.

Intel CEO Lip-Bu Tan has vowed to restart the company's stalled AI efforts after the company effectively mothballed projects such as the Gaudi line of chips and Falcon Shores processor.

Crescent Island will feature 160 gigabytes of a slower form of memory than the high bandwidth memory (HBM) found on AMD's and Nvidia's data center AI chips. The chip will be based on a design that Intel has used for its consumer GPUs.

Intel did not disclose which manufacturing process Crescent Island would use. The company did not immediately respond to a request for comment.

MIX AND MATCH

Since the generative AI boom set off by the launch of OpenAI's ChatGPT in November 2022, startups and large cloud operators have rushed to grab GPUs that help run AI workloads on data center servers.

The demand explosion has led to a supply crunch and sky-high prices for chips designed or suited for AI applications.

Katti said at the San Jose trade show that the company would release new data center AI chips every year, which would match the annual cadence set by AMD, Nvidia and several of the cloud computing companies that make their own chips.

Nvidia has dominated the market for chips used to build large AI models such as the one behind ChatGPT. Tan has said the company plans to focus its design efforts on chips useful for running those AI models, the work that happens behind the scenes to make AI software operate.

"Instead of trying to build for every workload out there, our focus is increasingly going to be on inference," Katti said.

Intel has taken an open and modular approach in which customers can mix and match chips from different vendors, Katti said.

Nvidia said last month it would invest $5 billion in Intel, taking a roughly 4% stake and becoming one of its largest shareholders as part of a partnership to co-develop future PC and data center chips.

The deal is part of Intel's effort to ensure that its central processors (CPUs) are installed in every AI system that gets sold, Katti said.
