What Are the Hottest Tech Stocks for 2026? Here’s What ChatGPT and Gemini Had to Say

Dow Jones

In order to achieve artificial general intelligence, or AGI, a machine must be able to match or exceed human abilities in learning and applying knowledge.

While the technology isn’t quite there yet — and the existence of AGI itself is hotly debated — AI has mastered some tasks already. For one, computers have been beating humans at chess for almost three decades. And this year, both Alphabet’s Google Gemini and OpenAI’s ChatGPT scored gold-medal-level performance at the International Mathematical Olympiad, the world championship math competition for preuniversity students.

Could stock picking be the next frontier? 

Humans aren’t very good at stock picking — even those who do it for a living. According to a 2024 study from S&P Global, around 90% of active public-equity fund managers underperform their index.

So we decided to put the top large language models, or LLMs, to the test by asking them which tech stocks would fare the best in 2026.

MarketWatch gave the following prompt to OpenAI’s ChatGPT, Google’s Gemini, Anthropic’s Claude and xAI’s Grok chatbots: “You are a technology portfolio manager looking at opportunities for 2026. Please share five of your most high-conviction tech stocks for next year.”

We opted for a role-playing framework instead of a direct question to navigate around LLM guardrails. When asked for a list of “stock picks” without the hypothetical context, Claude refused to answer, insisting that it is “not a financial advisor” and cannot provide investment recommendations. 

Claude and Gemini both included a disclaimer that they were responding in a role-playing capacity and not actually offering investment advice, and encouraged doing independent research. Grok said to “always consult a financial advisor for personalized advice.”

Here’s how each LLM responded to our prompt:

The results didn’t offer much novelty. Instead of unearthing under-the-radar opportunities, the chatbots opted for a rather homogeneous selection of AI infrastructure and hyperscaler names that currently define the AI trade.

Nvidia was the unanimous top pick, an unsurprising outcome given its role as the dominant supplier of the AI chips powering the world’s most advanced models. The chatbots also flagged some of Nvidia’s fellow “Magnificent Seven” stocks, such as Microsoft, which are also the chip maker’s customers.

Claude doubled down with a “deliberate overweight to semiconductors,” responding that infrastructure names are “where the most durable moats and visible demand trajectories lie.” Like many people on Wall Street, Claude said that it expects the AI buildout to continue in 2026 and for AI companies to start focusing more on inference, or the process of running AI models after training, as well as on monetization efforts for their offerings.

Gemini, on the other hand, added some variety by including software companies Palantir Technologies and CrowdStrike Holdings, citing AI applications as the next big beneficiary of the market hype. “We are no longer just buying the shovel makers; we are buying the companies that are successfully turning electricity into intelligence at scale,” Gemini said of its 2026 investing strategy. 

ChatGPT gave specific portfolio allocations, recommending a 50% to 60% exposure to “core growth” names Nvidia, Amazon.com and Microsoft, and the remainder to chip manufacturers Advanced Micro Devices and Broadcom. It also flagged specialized AI infrastructure plays such as “emerging quantum/AI startups” as potential watchlist names. 

Claude took a different approach, saying that it was not “chasing momentum names without earnings,” and listing “quantum-computing plays” and “speculative AI software” as examples. 

Grok offered a bold choice with Oracle, whose stock has been hit heavily in recent months by fears surrounding its debt levels and its association with OpenAI. 

The process by which LLMs “reason” can explain how these chatbots arrived at their respective conclusions. During the initial training phase, developers feed the model massive amounts of data up to a specific cutoff date, which the model breaks into “tokens,” or pieces of words or phrases. The model repeatedly engages in a process called “next-token prediction,” where it learns to anticipate the most probable next word in a sequence. Over time, this process strengthens the mathematical connections, or “weights,” between related concepts — for example, strongly linking “Nvidia” with “AI.”
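To make that concrete, here is a deliberately simplified sketch of next-token prediction: a toy bigram model that counts which word follows which in a made-up corpus and turns those counts into probabilities. The corpus, the word-level “tokens” and the counting approach are all illustrative assumptions; production LLMs learn billions of neural-network weights over subword tokens rather than tallying word pairs.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for training data. Illustrative only: real models
# train on vast datasets and predict subword tokens, not whole words.
corpus = (
    "nvidia makes ai chips . nvidia sells ai accelerators . "
    "microsoft buys ai chips . nvidia powers ai models ."
)

tokens = corpus.split()  # crude word-level "tokenization"

# Count how often each token follows another (a bigram model, the simplest
# form of next-token prediction).
follow_counts = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    follow_counts[current][nxt] += 1

def next_token_distribution(token):
    """Return P(next token | current token) estimated from the toy corpus."""
    counts = follow_counts[token]
    total = sum(counts.values())
    return {word: round(count / total, 2) for word, count in counts.items()}

# "nvidia" is repeatedly followed by a verb and then "ai" in the corpus, so
# the learned probabilities end up tightly linking the two concepts.
print(next_token_distribution("nvidia"))  # {'makes': 0.33, 'sells': 0.33, 'powers': 0.33}
print(next_token_distribution("ai"))      # {'chips': 0.5, 'accelerators': 0.25, 'models': 0.25}
```

Run as written, the script prints a probability distribution over possible next words rather than a single right answer; an LLM does the same thing at vastly larger scale, which is why its output is a most likely answer rather than a guaranteed correct one.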

Although models are trained on similar datasets, they diverge when researchers adjust their weights to favor certain outcomes. Anthropic, which prioritizes safety, may reinforce Claude’s tendency to reject financial questions. Elon Musk’s xAI, on the other hand, has actively positioned Grok to be “maximally truth-seeking,” in contrast to other LLMs influenced by what Musk has called “the woke mind virus.”

“If you look at the math of these models or how they’re constructed, they’re just predictable probability distributions,” Sergey Gorbunov, a technologist and co-founder of blockchain-infrastructure platform Axelar, previously told MarketWatch about LLMs. 

That means an LLM-generated response isn’t necessarily the correct answer, just the most likely one. And if the LLM can’t browse the web, as more recent versions of ChatGPT can, its training data may also be outdated.

Take a recent paper by researchers at the University of Florida titled “The Memorization Problem: Can We Trust LLMs’ Economic Forecasts?” as an example. The researchers asked ChatGPT-4o to predict future economic events, such as interest rates and unemployment numbers — but cut its training data off at 2023, while conducting the study in 2025. They also did not allow the model to access the internet. 

The team found that ChatGPT-4o had near-random predictions when it could no longer tap its memorized knowledge base — the data it was trained on. 

The view that LLMs are simply prediction machines, and that scaling laws (the rules describing how LLMs improve when given more resources such as training data and computing power) are dead, has gained momentum in the industry.

Those who believe LLMs will hit a scaling limit argue that these are not the kind of AI models that will reach superintelligence, the point at which AI is believed to surpass human intelligence.
