Major Breakthrough in Efficient Large Model Fine-tuning: Qifu Technology's PrAd Framework Selected for EMNLP 2025

Deep News
08/28

As global competition in artificial intelligence intensifies and large models are adopted across industries at an accelerating pace, efficient fine-tuning has become a key enabler of practical deployment. Recently, Qifu Technology's latest research on parameter-efficient fine-tuning of large models, "PrAd: Prompt Adaptive Tuning for Decoder-only Language Models," was accepted to the Findings of EMNLP 2025, a top-tier international NLP academic conference. The acceptance marks another instance of a Chinese technology company earning international academic recognition for fundamental AI research.

EMNLP (Conference on Empirical Methods in Natural Language Processing) is one of the most prestigious and influential international academic conferences in the natural language processing field, forming the "Big Three conferences" alongside ACL and NAACL. Known for its extremely rigorous review process and low acceptance rate, EMNLP serves as the preferred platform for global NLP researchers to publish findings and exchange ideas. The acceptance of Qifu Technology's research by EMNLP 2025 marks the company's continued significant progress in fundamental AI research and technological innovation.

As large language models are deployed across a growing range of real-world business scenarios, adapting them to many tasks efficiently and at low cost has become a shared industry challenge. Traditional full-parameter fine-tuning is effective but incurs very high computation and storage costs, while existing parameter-efficient methods such as Prompt Tuning and Adapter Tuning still suffer from training instability, added inference latency, and expansion of the input sequence.

To address these pain points, Qifu Technology's research team proposed PrAd, a novel fine-tuning framework designed specifically for decoder-only large language models. The method ties structural adaptation to the inference process itself: lightweight adapter modules transform the prompt's features only during the prefill stage, while the decoding stage keeps the original model structure intact and adds no extra computation.
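
The paper's exact formulation is not reproduced in this article, but the idea can be illustrated with a minimal sketch in PyTorch, assuming a standard residual bottleneck adapter; the names `PromptAdapter`, `prefill`, and `decode_step` are illustrative, not the authors' actual API:

```python
import torch
import torch.nn as nn


class PromptAdapter(nn.Module):
    """Hypothetical bottleneck adapter applied only to prompt hidden states."""

    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        # Near-identity initialization: the adapter starts as a no-op,
        # one plausible reason training would stay stable.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Residual bottleneck transformation of the prompt representation.
        return hidden + self.up(torch.relu(self.down(hidden)))


def prefill(base_block: nn.Module, adapter: PromptAdapter,
            prompt_hidden: torch.Tensor) -> torch.Tensor:
    """Prefill stage: run the frozen base block, then let the adapter
    reshape the prompt features once."""
    return adapter(base_block(prompt_hidden))


def decode_step(base_block: nn.Module, token_hidden: torch.Tensor) -> torch.Tensor:
    """Decode stage: the frozen base block runs unchanged; no adapter,
    no extra computation per generated token."""
    return base_block(token_hidden)
```

Because the adapter touches only the prompt representation computed during prefill, every decoded token reuses the unmodified base model, which is where the inference savings described below would come from.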

The PrAd framework achieves significant breakthroughs in three key areas:

High Training Efficiency: PrAd does not lengthen the input, is simple to initialize, and trains stably, with performance comparable to or exceeding mainstream baselines.

Efficient Inference: PrAd adds only minimal latency when the first token is generated and no overhead during subsequent decoding. It also supports multi-task shared batch inference, with measured speedups of up to 10x over LoRA in multi-task scenarios (see the sketch after this list).

Substantially Reduced Operational Costs: the number of adapters to manage and the memory they occupy can be cut by up to 50%, simplifying deployment and batch inference for multi-task models.
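
The multi-task batching claim follows from the same design: only the one-off prefill pass differs per task, while decoding uses the identical frozen model for every request. Below is a minimal sketch under that assumption, again with hypothetical names (`prefill_per_task`, `decode_shared`) and one small adapter per task:

```python
import torch
import torch.nn as nn


def prefill_per_task(base_block: nn.Module,
                     adapters: dict[str, nn.Module],
                     prompt_hidden: dict[str, torch.Tensor]) -> dict[str, torch.Tensor]:
    """Each task's prompts pass through that task's adapter exactly once."""
    return {task: adapters[task](base_block(h)) for task, h in prompt_hidden.items()}


def decode_shared(base_block: nn.Module, batch_hidden: torch.Tensor) -> torch.Tensor:
    """Requests from different tasks are decoded together: no per-task
    weights are involved at this stage, so one batch can serve all tasks."""
    return base_block(batch_hidden)


# Usage sketch: adapt each task's prompts at prefill, then merge everything
# into a single decode batch served by the shared frozen model.
# adapted = prefill_per_task(base_block, adapters, prompt_hidden)
# mixed_batch = torch.cat(list(adapted.values()), dim=0)
# out = decode_shared(base_block, mixed_batch)
```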

Experimental results show that PrAd matched or outperformed the strongest baseline methods on six diverse NLP tasks while holding clear advantages in inference efficiency and resource usage, making it particularly well suited to the multi-task, high-concurrency, low-latency scenarios common in finance.

Fei Haojun, Chief Algorithm Scientist at Qifu Technology, stated: "PrAd is not only a technical breakthrough but also a concrete implementation of Qifu's philosophy of 'technology empowering finance.' We are committed to promoting efficient, reliable, and scalable applications of large models in financial scenarios." Looking ahead, Qifu Technology will continue to increase R&D investment in AI foundation models, efficient fine-tuning, and trusted computing, turning more research results into real-world productivity and supporting the intelligent upgrade of the financial industry.
