AI Pioneer Andrej Karpathy on the Dawn of Software 3.0: A Paradigm Shift Beyond 10x Engineering

Deep News
Apr 30

OpenAI founding member Andrej Karpathy said in a recent interview that large language models are fundamentally reshaping computing architecture as a "new kind of computer." On April 29th, Karpathy, who led the development of Tesla's Autopilot and played a pivotal role at OpenAI, gave an in-depth analysis of the current leap in AI agents and its profound impact on software and hardware ecosystems at an event hosted by AI Sent.

Karpathy said that since last December he had come to realize that agent-centric workflows have become genuinely usable, a shift that marks the substantive arrival of the Software 3.0 era. Many people's impression of AI last year, he noted, was still centered on ChatGPT, but a fundamental change has taken place, especially since December.

He also introduced the new concept of "agentic engineering" to distinguish it from what he termed "vibe coding" last year; the former refers to carrying professional software development's quality standards forward at accelerated speed. He said bluntly that a significant amount of existing code and applications "should not exist" under this new paradigm, and that most organizations' hiring processes, development tools, and infrastructure are still designed for humans, not agents.

**The Dawn of Software 3.0: A Transfer of Power in Foundational Computing Architecture** The technology industry stands at a crossroads, transitioning from quantitative to qualitative change, and December was a critical turning point. Karpathy admitted feeling profound shock when confronting the latest AI models. The code blocks the system generates, he noted, are increasingly flawless; he can barely remember the last time he had to modify one, which has led to growing trust in the system and a sense of being left behind as a programmer.

This impact represents a complete upending of the computing paradigm. In Karpathy's view, the market currently underestimates the depth of this change. He pointed out that we are moving beyond "Software 1.0" (writing code) and "Software 2.0" (curating datasets to train neural networks) and formally entering the "Software 3.0" era. In this new epoch, the large language model itself is a "new kind of computer": programming now means writing prompts, the content of the context window is the lever that controls the LLM, and the LLM acts as an interpreter performing computation in digital information space.
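The "prompt as program, LLM as interpreter" framing can be illustrated with a minimal sketch. Everything here is hypothetical: `run_llm` stands in for any chat-completion API and is faked deterministically so the example runs on its own.

```python
# Software 3.0 sketch: the "program" is natural-language text placed in the
# context window; the LLM acts as the interpreter that executes it.
# run_llm() is a hypothetical stand-in for a real model endpoint.

def run_llm(context: str) -> str:
    """Hypothetical LLM call, faked deterministically for this demo."""
    # A real implementation would send `context` to a model API.
    if "Classify the sentiment" in context and "loved" in context:
        return "positive"
    return "unknown"

def build_program(task: str, examples: list[tuple[str, str]], query: str) -> str:
    # "Programming" in Software 3.0: assembling text, not writing logic.
    lines = [task]
    for text, label in examples:
        lines.append(f"Input: {text}\nOutput: {label}")
    lines.append(f"Input: {query}\nOutput:")
    return "\n".join(lines)

program = build_program(
    "Classify the sentiment of each input as positive or negative.",
    [("terrible service", "negative"), ("great food", "positive")],
    "I loved every minute of it",
)
print(run_llm(program))  # → positive
```

The point of the sketch is that the entire "program" is the assembled text; changing the examples or the task sentence reprograms the system without touching any control flow.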

More notable still for the market are his bold predictions about the evolution of the underlying hardware architecture. Today, neural networks run virtualized on existing computers, but he believes this host-guest relationship will reverse: one can imagine the neural network becoming the main process, with the CPU turning into a kind of coprocessor while the network handles the bulk of the heavy lifting. This implies that "intelligent computing power," which already dominates overall market capital expenditure, will see its strategic core position further solidified.

**Next-Generation Infrastructure: Rebuilding an "Agent-Native" Ecosystem** With execution and coding being handled by machines, where will human core value and future infrastructure head? Karpathy stated bluntly that "everything must be rewritten." He expressed frustration that documentation for current internet frameworks and libraries is still "written for humans": it gives instructions on how to do things, when all he wants is the text to copy and paste to his AI agent.

The significant future market opportunity lies in building "agent-first" infrastructure. In this world, systems are decomposed into "sensors" that perceive the world and "actuators" that modify it. Data structures need to be highly readable for LLMs, with machine agents representing individuals and organizations interacting in the cloud. In such a highly automated future, humanity's core scarcity will revert to aesthetics, judgment, and the deepest commercial understanding. Karpathy concluded by quoting a line he often reflects on: "You can outsource your thinking, but you cannot outsource your understanding."

**Agentic Engineering: A Productivity Explosion Far Exceeding the "10x Engineer"** On the dimension the market cares about most, productivity, Karpathy distinguished between two core concepts: "vibe coding" and "agentic engineering." "Vibe coding," he said, raises the floor of everyone's ability to develop software, while "agentic engineering" aims to maintain the quality ceiling of professional software. Agentic engineering is not just about speed; it requires developers to coordinate AI agents that are "somewhat error-prone, stochastic, but extremely powerful," moving at full speed without sacrificing quality. This will also greatly expand what companies can produce. People used to talk about "10x engineers," Karpathy noted, but 10x is insufficient to describe the acceleration achieved; in his view, the peak output of those who excel in this field far exceeds ten times.

Facing this productivity explosion, corporate organizational structures and talent-screening logic must be rebuilt. He suggested that companies abandon traditional algorithmic problem-solving interviews and instead assess candidates on their ability to marshal multiple AI agents to collaboratively build large projects and to defend against attacks from other AI agents.

**Focus Points for AI Commercialization** For entrepreneurs and investors eager to find AI application scenarios, Karpathy offered a highly practical evaluation framework: verifiability. Current AI capabilities present an extremely peculiar "jagged" profile. The most advanced models, he noted, can refactor a 100,000-line codebase or find zero-day vulnerabilities, yet might suggest walking to a car wash 50 meters away, a suggestion he called crazy.

This disparity arises because frontier labs (like OpenAI) pour massive reinforcement learning resources into domains where results are easily verified, such as "math" and "code." Therefore, AI can exert tremendous power as long as it is placed in business scenarios with verifiable outcomes. Karpathy hinted that there are still many high-value, verifiable reinforcement learning environments not yet prioritized by leading labs, representing a vast blue ocean for startups to perform fine-tuning and achieve commercial success.
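Why verifiable domains suit reinforcement learning can be shown with a toy reward function: when a candidate answer can be checked automatically, the reward needs no human judge. This is a minimal sketch under assumed names (`verifier_reward`, `my_abs` are illustrative), not any lab's actual training code.

```python
# Toy verifier: reward 1.0 if a model-proposed implementation of abs()
# passes all unit checks, 0.0 otherwise. Verifiability means the reward
# signal can be computed mechanically, which is what makes code and math
# such productive RL domains.

def verifier_reward(candidate_src: str) -> float:
    """Score a candidate solution by executing it against fixed checks."""
    namespace: dict = {}
    try:
        exec(candidate_src, namespace)      # run the proposed code
        fn = namespace["my_abs"]
        checks = [(3, 3), (-3, 3), (0, 0)]
        return 1.0 if all(fn(x) == y for x, y in checks) else 0.0
    except Exception:
        return 0.0                          # broken code earns no reward

good = "def my_abs(x):\n    return x if x >= 0 else -x"
bad  = "def my_abs(x):\n    return x"

print(verifier_reward(good))  # → 1.0
print(verifier_reward(bad))   # → 0.0
```

Domains without such a mechanical check (taste, aesthetics, the car-wash common sense above) give the training loop no clean signal, which is one way to read the "jagged" capability profile described earlier.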

During the interview, Karpathy elaborated on his personal experience of the shift around December, moving from helpful AI coding assistants to systems generating consistently correct code, which led him to fully trust the system and dive deep into agentic workflows. He contrasted building a menu image generator application ("MenuGen") the old way with a Software 3.0 approach in which simply prompting a model like Gemini with the menu photo yielded the desired result directly, making his entire application obsolete. The change, in other words, is not just acceleration: it enables fundamentally new things that were not possible before, like having an LLM generate a personalized wiki from a set of documents.

Looking forward, he envisioned a future where neural networks become the primary computing substrate, with classic CPUs as coprocessors, leading to potentially very alien and exotic system architectures. He discussed the concept of "jagged intelligence," attributing it to the specific reinforcement learning environments labs choose to focus on during training. Capabilities spike in areas with heavy RL training (like code, math) but remain weak elsewhere (like the car wash logic). This means success depends on whether an application falls within a well-trained "circuit" or not; if not, significant fine-tuning is necessary.

Regarding human skills, Karpathy emphasized that taste, judgment, architectural oversight, and deep understanding remain critical. AI agents currently act like powerful but unreliable interns; humans must provide high-level direction, ensure correct specifications, and make key design decisions. He hopes models will improve in producing elegant code, but currently, human oversight is dominant as aspects like aesthetics aren't heavily rewarded in RL training. He concluded by reiterating the importance of understanding, quoting, "You can outsource your thinking, but you cannot outsource your understanding," suggesting that tools enhancing human understanding are a profoundly exciting direction.

Disclaimer: Investing carries risk. This is not financial advice. The above content should not be regarded as an offer, recommendation, or solicitation to acquire or dispose of any financial product, and any associated discussions, comments, or posts by the author or other users should not be considered as such either. It is for general information purposes only and does not take into account your investment objectives, financial situation, or needs. TTM assumes no responsibility or warranty for the accuracy or completeness of the information; investors should do their own research and may wish to seek professional advice before investing.
