Why AI Chatbots Can't Be Trusted for Financial Advice: They're Sociopaths -- Journal Report

Dow Jones

By Peter Coy

Should you use AI for financial advice?

Andrew Lo, a finance professor at the Massachusetts Institute of Technology's Sloan School of Management, says not yet. Chatbots built on large language models, such as Copilot and ChatGPT, aren't suited to serve as financial advisers because they are the digital equivalent of sociopaths -- smooth, persuasive and devoid of empathy.

If an adviser powered by artificial intelligence "is able to communicate both good and bad financial advice with the same pleasant and convincing affect, its clients will rightfully view this as a problem," Lo and one of his graduate students, Jillian Ross, wrote in 2024 in an article for the Harvard Data Science Review. (Today's robo advisers, like those offered by Betterment and Wealthfront, predate the era of large language models and for the most part aren't built on them, so Lo's critiques don't apply.)

Still, people are turning to AI tools for help with their finances. This past August, a survey of 11,000 individual investors in 13 countries, commissioned by the trading and investment platform eToro, found that 19% were already using ChatGPT-style AI tools to manage their portfolios, up from 13% in 2024.

That worries Lo. "The AI people are using now can be dangerous, especially if the user isn't fully aware of the biases, inaccuracies and other limits" of large language models, he wrote in an email.

Understanding ethics

Despite his reservations about current AI models, Lo believes that large language models will eventually be able to help investors -- especially people with small accounts and limited experience with investing. In fact, he is working to build one that is specialized for financial advice. He doesn't plan to charge for it, he says.

Lo's goal is to develop an AI financial adviser that is a true fiduciary -- an entity that always puts the client's interests first and tailors its advice to the client's particular needs, including emotional needs. He thinks getting there will take less than four more years.

To get there, it will need a rich understanding of financial ethics, he says. For that, he proposes feeding the model all the laws, regulations and court cases involving questions of financial ethics in the U.S., from the Securities Act of 1933 up to the latest fraud trial.

"This rich history can be viewed as a fossil record of all the ways that bad actors have exploited unsuspecting retail and institutional clients, " he and Ross wrote. The hope is that the large language model will learn from its training what not to do.

Lo acknowledges that a large language model might use its newfound knowledge of financial rights and wrongs to choose the wrongs, since LLMs don't have ethics built in. To counter such misuse, he says, authorities will need to fight fire with fire, developing AI models that detect crime by auditing users' tax returns, for example.

Large language models aren't good at math, which is a problem when it comes to financial planning. Lo says the models will need to hand off the number-crunching part of the job to specialized financial-planning software.
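
To picture that division of labor, here is a bare-bones sketch in Python (a generic illustration of the handoff pattern, not Lo's software; the function names and figures are hypothetical). The model's only job is to emit a structured request; a deterministic function does the arithmetic:

    # A minimal sketch of the "hand off the math" idea: the language model
    # decides WHAT to compute, while deterministic code computes it.
    # All names and numbers here are hypothetical illustrations.

    def future_value(principal: float, annual_rate: float, years: int) -> float:
        """Compound interest, computed exactly rather than by the model."""
        return principal * (1 + annual_rate) ** years

    # Imagine the model emitting a structured tool call instead of a number:
    tool_call = {"tool": "future_value",
                 "args": {"principal": 10_000, "annual_rate": 0.07, "years": 30}}

    TOOLS = {"future_value": future_value}

    result = TOOLS[tool_call["tool"]](**tool_call["args"])
    print(f"${result:,.2f}")  # $76,122.55

In this pattern the chatbot never generates the dollar figure itself; the exact number comes from ordinary software.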

The human touch

But knowledge is only part of the solution. An AI financial adviser will also need digital equivalents of empathy, humility and a sense of fairness, Lo says.

Those humanlike qualities won't emerge simply by making AI more powerful, he says. Instead, AI models will require specialized modules that produce "analogs" of empathy (since as machines they can't actually be empathetic). These modules would correspond to specialized bits of the human brain, Lo says.

Lo has taught generations of MIT students who went on to careers on Wall Street. He also developed what he calls the adaptive markets hypothesis, which uses the principles of evolution to explain behaviors such as loss aversion and overconfidence.

Evolution occurs through random variation and natural selection: The strong survive and reproduce; the weak perish. Lo wants to use a kind of computer-accelerated natural selection to spur the development of better AI models.
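
As a rough illustration of what computer-accelerated selection looks like (a generic evolutionary-algorithm toy, not Lo's actual method), the loop below repeatedly keeps the best-scoring candidates, perturbs them at random, and discards the rest:

    import random

    # Generic evolutionary loop: random variation plus selection.
    # The fitness function is a stand-in; it is not Lo's scoring method.

    def fitness(params: list[float]) -> float:
        # Higher is better; the peak sits at all-zeros.
        return -sum(p * p for p in params)

    def mutate(params: list[float]) -> list[float]:
        # Random variation: small Gaussian perturbations.
        return [p + random.gauss(0, 0.1) for p in params]

    population = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(20)]
    for generation in range(100):
        population.sort(key=fitness, reverse=True)  # the "strong" rank first
        survivors = population[:5]                  # selection: keep the top quarter
        population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

    print(max(fitness(p) for p in population))  # approaches 0 over the generations

Each pass through the loop is a generation: scoring plays the role of the environment, mutation supplies the random variation, and the cut to survivors is the natural selection.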

Peter Coy is a writer in New York. Follow him at petercoy.substack.com. He can be reached at reports@wsj.com.

(END) Dow Jones Newswires

February 09, 2026 13:00 ET (18:00 GMT)

Copyright (c) 2026 Dow Jones & Company, Inc.
