When production logic regains dominance, the implementation of AI depends not only on how "smart" the model is, but also on its ability to execute within controlled boundaries. The Agent model is the core driver of this shift. The future leaders will be the organizations that can organize scattered data, rules, and business details into sustainably accessible production resources while maximizing data flow within secure boundaries. As the next technological revolution following information technology, AI's prospects should not be assessed on model parameters alone; they must also account for the degree of commercialization and the practical absorption capacity of organizations. Unlike traditional Kondratiev wave theory, which identifies cyclical fluctuations mainly through aggregate indicators such as GDP and prices, the economist Carlota Perez, in "Technological Revolutions and Financial Capital," describes the diffusion of technological revolutions as "Great Surges of Development" and explains it through the "techno-economic paradigm" [1]. This article adopts Perez's stage division and extends it to discuss how AI progresses from early attention and fervor, through the bubble deflation at the turning point, into the mid-to-late stages of institutional diffusion and long-term operational stabilization, ultimately becoming a universal underlying capability for organizations.

1. The Diffusion Logic of Technological Revolutions: From Installation to Deployment

Perez divides the diffusion of general-purpose technologies into the Installation Period, the Turning Point, and the Deployment Period. Her core insight is the motivational difference between financial capital and production capital: the former chases narratives and paper gains, while the latter is concerned with sustainable profits (Figure 1).
Figure 1: Different stages of technology diffusion. Source: Perez, C. (2002). Technological Revolutions and Financial Capital: The Dynamics of Bubbles and Golden Ages. Edward Elgar Publishing.

1.1 Installation Period: Irruption, Then Frenzy

The Installation Period divides into two stages, Irruption and Frenzy. During the Irruption phase, before the launch of ChatGPT in November 2022, capital and tech giants had already begun positioning themselves, but AI was largely seen as an optional tool for localized efficiency gains and was not yet widely adopted. The arrival of ChatGPT pushed AI into the Frenzy stage. The sharply lowered adoption barrier triggered a global "AI + X" wave, with applications such as copywriting and creative generation emerging rapidly. Although talk of an AI bubble grew from the second half of 2025, funding remained strong: global private AI company funding reached approximately $225.8 billion in 2025 [2], far above the $100.4 billion of 2024 [3]. Meanwhile, the expansion of AI supply clearly outpaced corporate adoption. This misalignment between narrative and delivery signals an impending shift towards the Turning Point.

1.2 Turning Point: Returning from Narrative Logic to Production Logic

The Turning Point is a phase of adjustment and institutional restructuring within a technological revolution. Here, the frenzied bubble driven by financial capital bursts because it has become disconnected from production realities. The focus shifts from PowerPoint visions to actual production delivery, and technology diffusion is no longer purely financially driven but is constrained by production logic and organizational absorption capacity. As AI implementation moves from conceptual demonstrations to tested delivery loops, Agents become the key lever at this Turning Point.

1.3 Deployment Period: Institutional Diffusion and Maturation

The Deployment Period divides into the Synergy and Maturity stages.
During the Synergy phase, AI, institutions, and demand structures reinforce one another, achieving scaled penetration within enterprises. In the Maturity phase, AI diffusion stabilizes, excess returns diminish, the competitive focus shifts to cost control, and financial capital turns towards the next new narrative.

2. The Turning Point: Why AI Agents Are Poised to Be Key

In Perez's view, turning points are rarely gentle; they manifest as violent turbulence and crashes after prolonged disconnects between financial frenzy, asset bubbles, and real production. In this AI diffusion cycle, however, this article leans towards interpreting the turning point as the replacement of probabilistic narrative logic by deterministic delivery logic, and a clearing-out of AI for AI's sake: when production logic regains dominance, implementation depends not just on model intelligence but on the model's execution loop within controlled boundaries. The Agent model is precisely the core driver of this shift. It is not merely a tool that responds to commands; it can decompose goals, invoke tools, and deliver continuously within defined boundaries. Coupled with the specialized capabilities provided by Skills, and with the connectivity, permissions, and feedback supported by middleware like OpenClaw, AI moves from single-point demonstrations to standardized production. At this juncture, three criteria indicate whether AI has crossed the turning point: connectivity, delivery capability, and ROI. The first two determine whether AI can embed into operational processes; the third determines whether that embedding is viable and sustainable.

2.1 Criterion One: Lowering the Connectivity Barrier

Connection costs have long been a core obstacle to enterprise AI implementation: adding each system or data source meant additional engineering investment and maintenance costs.
Since late 2024, with the formation of agent interoperability protocols, integration work has been shifting from one-off engineering to a protocol-based system. Mechanisms like MCP and A2A promote more unified access methods across models, frameworks, and external systems and support more complex multi-agent collaboration: the former standardizes data connections, the latter standardizes agent-to-agent communication. Enterprises can thus move away from inefficient, repetitive integration development towards call management and risk control under a unified connectivity framework. Only once connection costs fall can multi-agent systems progress from localized access to broader business diffusion.

2.2 Criterion Two: Enhancing Delivery Capability

Enhancing delivery capability means AI transitioning from a "nice-to-have assistant" to a "teammate bearing performance metrics." CB Insights categorizes agents into two tiers: "Agents with Guardrails," which operate in constrained environments using structured workflows for specific goals, with their decision space bounded by processes and permissions; and "Fully Autonomous Agents," capable of more complex decisions, greater adaptation, and more complete task execution with less human intervention [4]. In early implementation, delivery capability manifests primarily as controlled automation: by invoking a small number of highly deterministic Skills to handle high-frequency repetitive tasks, AI first gains a foothold in real business operations. As implementation scenarios multiply, Skills will continue to accumulate, enabling agents to handle more complex tasks and gradually evolve towards Fully Autonomous Agents. For enterprises, the more feasible path is a "light first, then heavy" strategy: establish stable delivery loops in low-risk, relatively simple scenarios first, then progress gradually to higher-level planning and decision-making.
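The "Agent with Guardrails" tier can be made concrete with a minimal sketch: Skills are deterministic, registered functions, and the agent's decision space is bounded by an explicit permission scope rather than by model output. All names here (`Skill`, `GuardrailedAgent`, the `crm:read` permission string) are illustrative assumptions, not the API of any real framework.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class Skill:
    """A deterministic, registered capability with a declared permission."""
    name: str
    handler: Callable[[dict], dict]
    required_permission: str

@dataclass
class GuardrailedAgent:
    """An agent whose decision space is bounded by an explicit permission set."""
    permissions: set
    skills: Dict[str, Skill] = field(default_factory=dict)

    def register(self, skill: Skill) -> None:
        self.skills[skill.name] = skill

    def invoke(self, name: str, payload: dict) -> dict:
        skill = self.skills.get(name)
        if skill is None:
            raise KeyError(f"unknown skill: {name}")
        # Guardrail: the boundary is enforced by permissions, not model output.
        if skill.required_permission not in self.permissions:
            raise PermissionError(f"agent lacks '{skill.required_permission}'")
        return skill.handler(payload)

# Usage: a high-frequency, deterministic task the agent is allowed to run.
agent = GuardrailedAgent(permissions={"crm:read"})
agent.register(Skill(
    "lookup_order",
    lambda p: {"order": p["order_id"], "status": "shipped"},
    "crm:read",
))
print(agent.invoke("lookup_order", {"order_id": "A-1001"}))
```

On this sketch, "evolving towards full autonomy" amounts to widening the permission set and skill catalogue rather than rewriting the agent, which is one way to read the "light first, then heavy" strategy.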
2.3 Criterion Three: Rationalizing ROI

As AI shifts from bubble narratives back to hard financial constraints, the standard of judgment is not merely ROI turning positive but ROI becoming rational. Internet platforms were often willing to burn money first and monetize later because, under Metcalfe's law, each additional connected user strengthened network effects while the marginal cost of serving that user approached zero. AI is different: it is consumed in real time, and each interaction corresponds to real computing power and operational expense. If user demand remains stuck in non-essential, low-value scenarios like "tell me a joke," such traffic not only fails to accumulate as an asset but becomes a computing burden for enterprises. The scale benefits of AI therefore no longer come from simply expanding user numbers, but from standardizing and stabilizing delivery mechanisms. A McKinsey survey in June 2025 revealed a "high adoption, low conversion" dilemma: about 80% of companies had not yet gained substantive benefits [5]. The turning point clears out enterprises that lack clear implementation scenarios or still operate on old-era logic. Specifically, ROI rationalization manifests in the development of two capabilities. First, engineering economics: embedding AI into structured workflows transforms the delivery process from one-off trial and error into standardized digital production units. As task chains are further modularized, the reusability of Skills increases; when entering a new scenario, enterprises can focus on differentiated configuration rather than development from scratch, thereby reducing costs. Second, governance and compliance: introducing AI often brings a "responsibility vacuum": who is accountable when problems occur?
During the turning point, enterprises must reshape their governance frameworks for authority and responsibility, ensuring that AI execution paths are traceable and auditable, that anomalies can be located, and that rollbacks are possible when necessary. Only when management uncertainties are brought under control can AI become long-term, accountable productivity.

3. Deployment Period, Synergy Stage: From Local Success to Scaled Penetration

If the turning point marks a qualitative change in trust, solving the problem of *whether* AI can be implemented, then the Synergy stage of the Deployment Period initiates a quantitative change and fission in scale. Enterprise valuation returns from paper narratives to real value, financial and production capital reintegrate, and the proven AI paradigm is replicated continuously, penetrating the industry's capillaries at scale. In Perez's theory, the Synergy stage means that technology, institutions, and demand structures begin to reinforce one another, initiating broader prosperity. In the AI wave, however, the speed of diffusion is often determined not by an external demand explosion but by internal organizational integration: if an organization cannot first establish internal closed loops for delivery, governance, and reuse, any amplification on the demand side struggles to become sustainable, scalable expansion. This article therefore focuses this stage's discussion on diffusion mechanisms and challenges at the organizational level, which are not just about local efficiency gains but are the prerequisite for synergistic prosperity to unfold. In practice, diffusion is not a random blooming everywhere but follows a point-to-area pattern: it first succeeds in a few areas with clear, measurable, process-oriented demands, then spreads gradually along adjacent processes to peripheral departments.
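The traceability, auditability, and rollback requirements described at the start of this section can be sketched as a thin execution wrapper: every agent action is logged, and each step registers a compensating action so the execution path can be unwound when necessary. This is an assumed design for illustration, not a reference implementation; all names are hypothetical.

```python
import time
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AuditedExecutor:
    """Wraps agent actions so they are traceable, auditable, and reversible."""
    log: List[dict] = field(default_factory=list)
    _undo: List[Callable[[], None]] = field(default_factory=list)

    def run(self, action: str, do: Callable[[], object],
            undo: Callable[[], None]) -> object:
        result = do()
        # Audit trail: every step is recorded with a timestamp.
        self.log.append({"ts": time.time(), "action": action,
                         "result": repr(result)})
        self._undo.append(undo)
        return result

    def rollback(self) -> None:
        # Unwind in reverse order, logging each compensating step.
        while self._undo:
            compensate = self._undo.pop()
            compensate()
            self.log.append({"ts": time.time(), "action": "rollback-step"})

# Usage: a reversible state change with a full audit trail.
state = {"credit_limit": 100}
ex = AuditedExecutor()
ex.run("raise_limit",
       lambda: state.update(credit_limit=200),
       lambda: state.update(credit_limit=100))
ex.rollback()
print(state["credit_limit"])  # restored, with every step preserved in ex.log
```

The point of the sketch is that accountability is a property of the execution harness, not of the model: anomalies can be located in the log, and the compensating actions make "rollback when necessary" an operation rather than an aspiration.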
Simultaneously, cluster collaboration among Agents not only improves accuracy on non-standard tasks but also reduces overall risk through division of labor, propelling AI towards scaled business deployment. Entering the Synergy stage, the challenge often shifts from the models themselves to data. High-quality data is the foundation of AI model training and operation, yet in real deployments enterprises commonly face unstable data quality, inconsistent standards, and "data silos." Gartner, in its assessment of abandoned GenAI projects, explicitly listed poor data quality as a key reason [6]. Meanwhile, a global Splunk survey indicated that about 55% of the data within organizations is "dark data" [7], often buried in emails, recordings, contracts, and unstructured documents scattered across systems and departments. In other words, AI does not lack "more data"; it lacks "data that can enter the production loop": one kind is dark data not yet effectively utilized, the other is low-quality data riddled with ambiguity, errors, outliers, and unusable records. Furthermore, as Agents proliferate, the logic of data usage changes: when employees use AI to query or draft, AI's output in turn settles into knowledge bases and becomes a new data asset [8]. While more open data makes AI "smarter," once data flows continuously through invocation, generation, and write-back, the risks of overstepping, misuse, and leakage also rise. The future leaders will therefore be the organizations that can organize scattered data, rules, and business details into sustainably callable production resources while maximizing data flow within secure boundaries. The Synergy stage will exhibit two characteristics:

3.1 Increase in Localized and Private Deployments

In sectors like finance and high-end manufacturing, core systems and sensitive data often require private deployment to achieve a secure AI loop.
The widespread adoption of such deployments signifies that agents have been formally granted execution authority and have begun to undertake high-frequency, high-pressure production tasks in isolated environments, completing the transformation from external plugin to native capability.

3.2 Proliferation of Vertical Applications

Verticalization is a practical path to scaled AI implementation on the premise of coexisting with both human and AI "hallucinations." In finance or healthcare, for example, an erroneous decision suggestion can trigger significant compliance risk or life-threatening consequences. Verticalization confines uncertainty to verifiable, controllable ranges by narrowing task boundaries and introducing domain rules and knowledge constraints. It also makes targeted control mechanisms easier to establish, such as manual review before high-risk actions, interception of sensitive intents and unauthorized requests, and the requirement that key conclusions carry evidence citations, thereby increasing the feasibility of AI entering actual business processes. In highly constrained industries like finance and healthcare, AI often first enters specific processes in verticalized form. As Agents gradually become the new invocation portals, the old paradigm of "people finding software" shifts towards "Agents finding software," with software receding into the background to provide data, functions, or workflows to Agents. In this context, the value of horizontal software focused on UI and interaction experience will diminish, while vertical software with professional data, industry know-how, or mature workflows will see its value enhanced.

4. Deployment Period, Maturity Stage: Becoming Infrastructure and Competing over Existing Share

The Maturity stage as Perez describes it concerns the later evolution of the technological paradigm, emphasizing diminishing returns, competitive convergence, and capital moving on.
In contrast, this article focuses on the impact after AI is deeply embedded within organizations: deployment dividends recede, and AI gradually sinks into the foundational layer of production and management, becoming a universal capability akin to electricity. This also means that AI is no longer an additional advantage for leaders but increasingly a basic condition for enterprises to stay at the table. The competitive focus shifts accordingly: enterprises compete on who can switch from "seeking incremental dividends" to "protecting the existing profit base" through superior cost structures and more robust governance systems. Precisely because of this, financial capital begins to withdraw from this technological paradigm, turning instead to the next round of more imaginative new narratives. Meanwhile, Physical AI has long since moved past its early spillover phase, frequently achieving scaled implementation in more complex, non-standard scenarios. Continuously declining unit costs and highly predictable delivery quality are the key conditions for AI's sustained commercialization in such scenarios. OpenAI board chairman Bret Taylor has pointed out that AI Agents represent a new software paradigm, yet in practice enterprises still position them as "advanced assistants," limited to auxiliary work like content generation and information summarization. This cognitive bottleneck prevents enterprises from delegating execution authority, so AI never touches the core of governance and responsibility. By the Maturity stage, this misalignment between cognition and application is corrected. Enterprises no longer view AI as a localized efficiency tool but incorporate it into the responsibility system as an "organizational unit" capable of owning results. Correspondingly, transformation is not "adding a segment of automation to the existing process" but redesigning the division of labor, processes, and delivery loops around AI.
At this point, the enterprise operational system can be decomposed into three layers: Direction, Execution, and Consolidation. In the Direction layer, human managers act as controllers of long-term variables: working on quarterly, annual, or longer cycles, they set strategic goals and compliance red lines, retain human adjudication rights over key decisions, and take responsibility for risk control and global correction. In the Execution layer, AI acts as the executor of short-term variables: focusing on daily, weekly, or real-time tasks, it carries the full process from response to delivery and pushes past the efficiency boundaries of organizational operations. In the Consolidation layer, humans and AI together carry the mid-term variables: on bi-weekly or monthly review-and-optimization cycles, they distill the gains and losses of execution into reusable assets such as operational manuals and exception-handling rules, and iterate continuously. Under this new operational system, human-machine relations also change: from assistant to teammate, from single-point collaboration to multi-agent cluster collaboration, where "one person directing a group of agents" becomes the norm. As Ma Jie, co-founder of 01.AI, describes it, humans are responsible for strategic decisions and goal setting, acting more like goal architects, while AI becomes the execution engine, with a multi-agent collaboration network forming the execution system and closing the full process loop [9]. Kai-Fu Lee offered a concrete scenario: recruitment Agents autonomously integrate omni-channel resources and conduct initial screenings and interviews; after employees onboard, performance Agents feed evaluation results back, guiding the recruitment Agents to identify "super employees" more accurately in the future, so that the entire organization keeps evolving within a closed loop [10].
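As a rough illustration (my own encoding, not a formal model from this article), the three layers, their owners, and their cadences can be written down as plain data, which makes the division of responsibility easy to inspect:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Layer:
    """One layer of the Direction / Execution / Consolidation decomposition."""
    name: str
    owner: str    # "human", "ai", or "human+ai"
    cadence: str  # the review or execution cycle for this layer

LAYERS = [
    # Direction: strategic goals, red lines, human adjudication rights.
    Layer("direction", "human", "quarterly-to-annual"),
    # Execution: the full response-to-delivery loop.
    Layer("execution", "ai", "daily-to-realtime"),
    # Consolidation: distilling execution into manuals and exception rules.
    Layer("consolidation", "human+ai", "biweekly-to-monthly"),
]

for layer in LAYERS:
    print(f"{layer.name}: owned by {layer.owner}, cycle {layer.cadence}")
```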
With Physical AI, human-machine relations may iterate further: robots in scenarios like elderly care take on not just "assistant" or "teammate" roles but also the more emotionally attuned role of "companion." By the Maturity stage, persistent memory and multimodal interaction are routine capabilities; safety boundaries, supervision mechanisms, and accountability chains are basic configuration; and such human-machine relations gradually become normalized and widespread.

5. Current State and Future Outlook

The AI industry is currently at the tail end of the Frenzy period, accelerating towards the turning point. Connectivity issues are largely resolved, but delivery capability and ROI still need breakthroughs. Take "Lobster Fever" (referencing a hypothetical scenario or product like OpenClaw) as an example: although installations surged, the frequent exposure of issues such as software mis-uninstallation and security vulnerabilities means users dare not truly delegate authority, so AI's actual delivery capability remains limited. Coupled with still-high token prices, true implementation remains some distance away. The more likely future is the parallel advance of price reductions, capability expansion, and supporting control mechanisms. Products like OpenClaw push AI from "able to talk" towards "able to do," which aligns with the direction of AI evolution but remains at an early stage. Moreover, many today are not using such products to create new productivity but are monetizing through paid installation; when results fall short of expectations, a business around paid uninstallation quickly emerges. That is the portrait of the Frenzy period. For enterprises, while waiting for implementation conditions to mature, attention should be brought forward to a long-term key variable: the assetization of tacit knowledge.
Looking at the overall AI progression, implementation and scaling after the turning point depend not only on model capabilities but also on whether enterprises can be first to transform the tacit knowledge accumulated in their employees into data that AI can call upon. Experienced salespeople, for example, know when to push for a close and when to step back; seasoned customer-service staff, reading a user's tone and subtext, know immediately when a case needs escalation. Such experience involves a great deal of hard-to-articulate judgment and is difficult to fully digitize and consolidate. Future competition will therefore differentiate along two dimensions. First, a contest over the breadth of assetization: whoever can convert unstructured knowledge into AI-callable assets faster and more widely will gain a leading advantage. Second, a contest of high-level judgment: after AI absorbs the bulk of general experience and standard processes, the remaining small portion that is difficult to digitize will become even more scarce. AI will depress the scarcity value of general-purpose abilities while elevating the value of high-level judgment; enterprises can cultivate and stockpile such talent in advance. From "can we use AI?" to "how does the organization reorganize around AI?": under the great surge of development, the growth paradigms of individuals and organizations are being redefined. Looking back at history, every technological revolution is a civilizational transition: old skill maps gradually disintegrate while new knowledge territories emerge at speed. Ultimately, what determines the position of individuals and organizations in the new industrial ecosystem of this AI wave is the ability to stand on the "high starting point" that AI provides, to push into deeper cognitive territory at greater speed, to reach the capability boundaries of human-machine collaboration with greater penetration, and, beyond those boundaries, to be the first to build one's own cognitive depth.