Why Wealth Management Firms Need an AI Acceptable Use Policy -- Barrons.com

Dow Jones
2025/07/04

By John O'Connell

If your wealth management firm hasn't yet established an AI acceptable use policy, it's past time to do so.

Once a futuristic concept, artificial intelligence is now an everyday tool used in all business sectors, including financial advice. A Harvard University research study found that approximately 40% of American workers now report using AI technologies, with one in nine using it every workday for tasks such as enhancing productivity, performing data analysis, drafting communications, and streamlining workflows.

The reality for investment advisory firms is straightforward: The question is no longer whether to address AI usage, but how quickly a comprehensive policy can be crafted and implemented.

The widespread adoption of artificial intelligence tools has outpaced the development of governance frameworks, creating an unsustainable compliance gap.

Your team members are already using AI technologies, whether officially sanctioned or not, making retrospective policy implementation increasingly challenging. Without explicit guidance, the use of such tools presents potential risks related to data privacy, intellectual property, and regulatory compliance -- areas of particular sensitivity in the financial advisory space.

What it is. An AI acceptable use policy helps team members understand when and how to appropriately leverage AI technologies within their professional responsibilities. Such a policy should provide clarity around:

-- Which AI tools are authorized for use within the organization, including: large language models such as OpenAI's ChatGPT, Microsoft Copilot, Anthropic's Claude, Perplexity, and more; AI notetakers, such as Fireflies, Jump AI, Zoom AI, Microsoft Copilot, Zocks, and more; AI marketing tools, such as Gamma, Opus, and others.

-- Appropriate data that can be processed through AI platforms. Include: restrictions on client data, such as personally identifiable information (PII); restrictions on team member data, such as team member PII; restrictions on firm data, such as investment portfolio holdings.

-- Required security protocols when using approved AI technologies.

-- Documentation requirements for AI-assisted work products, for instance when team members must document AI use for regulatory, compliance, or firm standard reasons.

-- Training requirements before using specific AI tools.

-- Human oversight expectations to verify AI results.

-- Transparency requirements with clients regarding AI usage.

Prohibited activities. Equally important to outlining acceptable AI usage is explicitly defining prohibited activities. By establishing explicit prohibitions, a firm creates a definitive compliance perimeter that keeps well-intentioned team members from inadvertently creating regulatory exposure through improper AI usage. For investment advisory firms, these restrictions typically include:

-- Prohibition against inputting client personally identifiable information (PII) into general-purpose AI tools.

-- Restrictions on using AI to generate financial advice without qualified human oversight, for example, generating financial advice that isn't reviewed by the advisor of record for a client.

-- Prohibition against using AI to circumvent established compliance procedures, for example, using a personal AI subscription for work purposes or entering client information into a personal AI subscription.

-- Ban on using unapproved or consumer-grade AI platforms for firm business, such as free AI models that may use data entered to train the model.

-- Prohibition against using AI to impersonate clients or colleagues.

-- Restrictions on allowing AI to make final decisions on investment allocations.

Responsible innovation. By establishing parameters now, firm leaders can shape AI adoption in alignment with their values and compliance requirements rather than attempting to retroactively constrain established practices.

This is especially crucial given that regulatory scrutiny of AI use in financial services is intensifying, with agencies signaling increased focus on how firms govern these technologies.

Furthermore, an AI acceptable use policy demonstrates to regulators, clients, and team members your commitment to responsible innovation -- balancing technological advancement with appropriate risk management and client protection. We recommend using a technology consultant whose expertise can help transform this emerging challenge into a strategic advantage, ensuring your firm harnesses AI's benefits while minimizing associated risks.

John O'Connell is founder and CEO of The Oasis Group, a consultancy that specializes in helping wealth management and financial technology firms solve complex challenges. He is a recognized expert on artificial intelligence and cybersecurity within the wealth management space.

This content was created by Barron's, which is operated by Dow Jones & Co. Barron's is published independently from Dow Jones Newswires and The Wall Street Journal.

(END) Dow Jones Newswires

July 03, 2025 15:44 ET (19:44 GMT)

Copyright (c) 2025 Dow Jones & Company, Inc.
