JPMorgan Chase has raised its target for South Korea's benchmark Kospi index for the second time in less than a month. The rationale centers on the ongoing bull run driven by the "memory chip supercycle" fueled by AI infrastructure investment, alongside corporate governance reforms led by President Lee Jae-myung and growth in the industrial sector.
On Monday, the Kospi surged over 5% at the Asian market open, hitting a fresh all-time high and leading gains in the Asia-Pacific region despite rising oil prices and escalating U.S.-Iran tensions. The index has soared more than 85% year-to-date, ranking as the world's best-performing major equity market so far in 2026.
The Wall Street giant has raised its base-case target for the Kospi to 9,000 points and its bull-case target to the historic 10,000-point level, a bull case implying roughly 33% upside from Friday's close. In late April, the corresponding targets were 7,000 and 8,500 points, respectively. As of the latest update, the Kospi was trading near the 7,800 level.
Investors' fervent bullishness toward South Korean equities stems primarily from one theme: the unprecedented AI-driven memory chip supercycle. Last Wednesday, foreign retail and institutional investors bought over $2 billion worth of Korean stocks, directly or through cross-border ETFs, approaching the record set last October.
In this cycle, the two Korean memory chip titans, Samsung Electronics and SK Hynix, which together command nearly 50% of the Kospi's weighting, are the primary engines attracting global capital and driving the market's record-breaking outperformance.
The Kospi's year-to-date gain in 2026 has already surpassed its 76% surge from the previous year, which led global markets. Notably, this year's advance has occurred in less than five months.
Wall Street strategists are racing to upgrade their outlooks for Korean stocks, citing exponential profit growth in the memory sector driven by the global AI boom. In Monday's session, the Kospi jumped as much as 5.1% to a record intraday high of 7,876.60, extending its year-to-date gain to roughly 86%.
JPMorgan's upgrade follows a similar move by Goldman Sachs, which last week raised its Kospi target to 9,000 points, citing Korea's strongest earnings expansion momentum in Asia.
As the chart illustrates, the Kospi has surged over 80% this year, significantly outperforming the Philadelphia Semiconductor Index, a global benchmark for chip stocks. Note: Index performance is normalized to January 2, 2026.
While the Kospi's rally shows signs of overheating—its 14-day relative strength index (RSI) has remained in overbought territory every trading day this month—JPMorgan strategists, including Mixo Das, noted in a report that key fundamentals remain on track: "the optimistic memory cycle, corporate governance reforms, and strong thematic growth."
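For readers unfamiliar with the gauge, the 14-day RSI compares average gains to average losses over a rolling window; readings above 70 are conventionally read as overbought. A minimal sketch of Wilder's calculation (illustrative only, not JPMorgan's methodology):

```python
def rsi(closes, period=14):
    """Wilder's Relative Strength Index over a sequence of closing prices.

    Readings above 70 are conventionally treated as overbought,
    below 30 as oversold.
    """
    if len(closes) < period + 1:
        raise ValueError("need at least period + 1 closes")
    gains, losses = [], []
    for prev, curr in zip(closes, closes[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    # Seed with simple averages, then apply Wilder's exponential smoothing.
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0  # no down days in the window: maximally overbought
    rs = avg_gain / avg_loss
    return 100.0 - 100.0 / (1.0 + rs)

# A steadily rising series pins the gauge at the top of its range.
print(round(rsi([100 + i for i in range(20)]), 1))  # prints 100.0
```

A market like this one, rising nearly every session, keeps the average loss close to zero, which is exactly what holds the index's RSI pinned in overbought territory for weeks.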
"Under these unique conditions, we believe it remains appropriate to position for further upside rather than prematurely calling the end of the cycle," the strategists added. They suggested the next two years could mark the start of a new, sustained upcycle for memory chips, driven by both average selling prices and record shipments.
Samsung and SK Hynix, which account for about 50% of the Kospi's weighting, have contributed roughly 70% of the index's gains this year. Global investors are aggressively accumulating Korean chip stocks. The U.S.-listed iShares MSCI Korea ETF has skyrocketed 95% year-to-date, outperforming both the broader U.S. market and the Philly Semiconductor Index. Investors in Hong Kong are also actively buying leveraged ETFs tied to the Korean chip sector. The Hong Kong-listed 2x Long SK Hynix ETF has surged 503% this year, while the 2x Long Samsung Electronics ETF is up 340%. Additionally, the China-listed China-Korea Semiconductor ETF has gained 117%.
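Those triple-digit leveraged-ETF returns are amplified by daily resetting: a 2x product compounds twice each day's move, so in a steady uptrend its period return can far exceed twice the underlying's period return. A quick illustration with hypothetical daily returns (not actual SK Hynix or Samsung price data):

```python
def leveraged_path(daily_returns, leverage=2.0):
    """Cumulative return of a daily-reset leveraged product:
    each day the fund earns `leverage` times that day's underlying move."""
    total = 1.0
    for r in daily_returns:
        total *= 1.0 + leverage * r
    return total - 1.0

# A smooth, hypothetical 1% daily gain for 100 sessions:
smooth = [0.01] * 100
under = 1.01 ** 100 - 1      # the underlying's buy-and-hold return
lev = leveraged_path(smooth)  # the 2x daily-reset product's return
print(f"underlying {under:.0%}, 2x daily {lev:.0%}")
```

In a one-way rally the compounding works in holders' favor, which is how a 2x product can post gains several times the underlying's; in a choppy market the same mechanism erodes returns instead.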
Whether for Google's massive TPU AI clusters or Nvidia's AI GPU clusters, tightly integrated HBM memory systems are essential. Tech giants racing to build or expand AI data centers also need large-scale procurement of server-grade DDR5 memory and enterprise-class high-performance SSDs and HDDs. Samsung, SK Hynix, and Micron Technology are strategically positioned across these three critical memory domains: HBM, server-grade high-performance DRAM (including DDR5/LPDDR5X), and high-end data center SSDs, making them direct beneficiaries of the AI infrastructure wave.
The Kospi's approximately 85% surge this year, Samsung's market cap surpassing $1 trillion, and SK Hynix's stock repeatedly hitting new highs are not merely a local bull market but reflect a global bet that the "AI-driven memory supercycle" is far from over.
Research firm TrendForce initially projected in early January that Q1 2026 contract prices for general-purpose DRAM would rise 55–60% quarter-over-quarter and NAND Flash by 33–38%. By early February, due to worsening global supply-demand imbalances from AI and data center demand, TrendForce revised its Q1 DRAM price increase forecast sharply upward to 90–95% and NAND Flash to 55–60%, noting that PC DRAM could surge over 100% quarter-over-quarter, server DRAM by about 90%, and enterprise SSDs by 53–58%.
Regarding DRAM/NAND price increases, Goldman Sachs now believes the 2026 price surge will far exceed its prior optimistic forecasts. The bank recently raised its DRAM price increase projection from about 150% to 250–280% and its NAND price increase forecast from about 100% to 200–250%.
Goldman argues this is not a typical inventory recovery cycle but a "super supply shortage cycle" caused by unprecedented demand growth driven by AI computing power, HBM's complex manufacturing and packaging processes crowding out capacity, and insufficient supply elasticity for general-purpose DRAM/NAND.
GPUs generate intelligence, HBM/DRAM feeds data at high speed, enterprise NAND/eSSD handles hot data and caching, and HDDs store massive amounts of cold/warm data long-term. Therefore, Goldman contends the AI computing arms race led by cloud giants is transforming memory chips from cyclical commodities into scarce strategic assets. The 2026 price increases for DRAM/NAND are not the end but potentially the initial phase of a supercycle.
As Jeremy Werner, Senior Vice President and General Manager of Micron's Data Center Business Unit, highlighted in a recent interview, the underlying driver of this cycle is not simply "AI needs more compute chips." Instead, the era of AI inference, dominated by AI agents like Claude Cowork and OpenClaw, is turning memory/storage from a supporting component into a system bottleneck.
AI training relies heavily on large-scale parallel computing, while inference—especially with long context, multi-turn conversations, and agentic AI workflows—requires continuously preserving KV Cache, context states, and intermediate results. Insufficient memory/storage forces models to recompute historical states, reducing GPU utilization and increasing token generation costs.
Thus, HBM, DDR5, LPDDR, enterprise SSDs, and even HDDs/data lakes are forming an "AI memory chain" from GPU-proximate to distant storage, determining an AI system's throughput, latency, concurrency, and per-token economics.
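The memory arithmetic behind that bottleneck is straightforward. A rough sketch, assuming a hypothetical 70B-class decoder with grouped-query attention (the layer count, head sizes, and fp16 precision below are illustrative assumptions, not figures from the article):

```python
def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, batch,
                   bytes_per_elem=2):
    """Bytes needed to hold the KV cache: two tensors (K and V) per layer,
    each of shape [batch, n_kv_heads, seq_len, head_dim]."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * batch * bytes_per_elem

# Hypothetical 70B-class config: 80 layers, 8 grouped KV heads of
# dimension 128, fp16 (2 bytes), one user with a 128k-token context.
per_user = kv_cache_bytes(n_layers=80, n_kv_heads=8, head_dim=128,
                          seq_len=128_000, batch=1)
print(f"{per_user / 2**30:.1f} GiB per 128k-token context")  # prints 39.1 GiB per 128k-token context
```

At tens of GiB per long-context session, a handful of concurrent users can exhaust a single accelerator's HBM, which is why the cache spills outward to DRAM and SSD tiers, and why every tier of the "AI memory chain" feels the demand.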
This explains the synchronized surge in memory and data storage stocks like Micron, Samsung, SK Hynix, SanDisk, and Western Digital: demand is not concentrated solely on HBM but is spilling over across the entire AI server architecture to DRAM, NAND, SSDs, and HDDs.
More critically, AI CPUs are opening a second demand curve. While the market has largely equated AI computing power with GPUs and HBM, as inference workloads grow more complex, CPUs are evolving from supporting players alongside GPUs into "AI coordinators" that schedule multiple agents, manage context, and orchestrate workflows. This significantly boosts demand for server DDR5 and data center-class SSD configurations.
Simultaneously, HBM capacity is heavily allocated to AI GPUs, squeezing available capacity for general-purpose DRAM. DDR5 and DDR4 price trends are diverging, and the shortage is spreading from high-end HBM to the broader DRAM/NAND supply chain.
TrendForce also cited Micron's CEO, who stated that both traditional and AI server demand is robust but constrained by DRAM and NAND supply tightness. Samsung and SK Hynix recently warned that AI-driven memory shortages could persist until 2028 or beyond.
The AI boom is pushing memory chip demand to remain strong through the end of the decade (2030), according to a recent report from Melius Research led by star analyst Ben Reitzes. Counterpoint Research data indicates the memory market has entered a "super bull market" or "supercycle" phase, with current supply-demand dynamics and prices far exceeding the previous peak during the 2018 cloud computing boom.
With the concentrated arrival in 2026 of autonomously task-executing AI agent tools such as Anthropic's Claude Cowork and OpenClaw, a wave of AI agents is sweeping the globe. The bottleneck in AI computing architecture is shifting from GPUs, built around matrix-multiplication throughput, to the full-stack AI system that agents drive. In this narrative shift, data center CPUs and memory chips could be the biggest winners.
In other words, the AI computing bull market is expanding from "compute systems centered on AI GPU/ASIC chips" to central processors and the "data storage foundation."
Media reports citing informed sources indicate that SK Hynix, the dominant leader in HBM memory, is receiving unprecedented "alternative" offers from global tech giants. Microsoft, Google, Amazon, and other major cloud providers have proposed large-scale investments in its new production lines and even plan to directly fund the purchase of increasingly expensive chip manufacturing equipment—including ASML lithography machines and advanced HAR etch and thin-film deposition tools—to expand capacity. This collaborative approach aims to secure as much long-term HBM, DRAM, and NAND supply as possible amid fierce competition.
Such offers to invest in and fund capacity are unprecedented in the global memory chip industry, highlighting the extreme severity of the component shortage worldwide. As unparalleled AI fervor drives a surge in computing infrastructure demand, memory chip manufacturers are struggling to keep pace with exponentially expanding needs.
Three sources said another proposal involves customers providing substantial funding for semiconductor manufacturing equipment, such as ASML's extreme ultraviolet (EUV) lithography machines or the even more expensive high-NA lithography tools. These machines print circuit patterns onto silicon wafers; together with the etch, thin-film deposition, CMP, and other cutting-edge process tools that accompany them, the equipment runs to billions of dollars in aggregate.
However, two sources noted that the Korean chipmaker, with its strong cash position, is cautious about accepting such financial commitments from customers. Such deals could tie it to specific buyers and potentially require supplying chips at below-market prices in exchange for longer-term, stable revenue guarantees.
Memory chip makers have argued in recent years that multi-year contracts help smooth volatile demand and reduce the massive investment risks inherent in this cyclical industry, which often requires billions in capital to significantly expand capacity.
The unprecedented AI infrastructure wave and memory supercycle have pushed semiconductors into a new phase characterized by greater material intensity, process-control intensity, and advanced-packaging integration. Three forces are at play: 3D structures and new materials on the logic side; HBM stacking and interconnect upgrades on the memory side; and packaging such as CoWoS and hybrid bonding, which converts system performance into manufacturing complexity. Together, these forces increase the value density of key steps like deposition, etch, CMP, advanced packaging, and core metrology, and are shifting semiconductor equipment demand from cyclical fluctuation toward a structural mega-expansion cycle.
The critical semiconductor equipment foundation for memory chip capacity expansion includes not only ASML lithography machines but also expensive, high-end tools required for HBM/DRAM/NAND, such as high-aspect-ratio (HAR) etch/deposition, CMP (chemical mechanical polishing), metrology/inspection, and hybrid bonding equipment.
However, the two Korean memory suppliers are proceeding cautiously in allocating scarce capacity to avoid antitrust scrutiny or perceptions of favoring specific clients. "They don't want to 'bet on one horse' in the AI race and back the wrong one," one source said.