Harry Geels: Different investor mistakes because of AI?

This column was originally written in Dutch. This is an English translation.

By Harry Geels

Behavioral finance argues that people and investors make mistakes in their thinking and behavior (“biases”). Factor investing is based on a number of these biases. But what if investors soon start making (investment) decisions along with AI, or worse, only with AI? Will the biases disappear, or will they change?

AI is rapidly gaining popularity. Many people have Copilot or ChatGPT switched on as a matter of course. When you Google something nowadays, you first get an AI-generated answer and only then links to other relevant websites. I now use ChatGPT as an extra proofreader. However, I refuse (for now) to let AI write my columns, because I am afraid of losing my writing skills and creativity. If we are not careful, AI will soon stop people from ever learning to write or to master a language.

But AI goes beyond language. It is also being used more and more in my field. This raises some very important questions. Will AI make markets function more efficiently? Will AI, rather than investors, determine stock prices in the future? What does that mean for the allocation of capital in the economy? And how will it influence certain forms of analysis, such as technical analysis and factor investing, which are based on all kinds of errors in investors' thinking and behavior?

The human investor: a barrel full of biases

Investors are notoriously irrational. They buy at peaks (or all-time highs) because those attract attention. They sell in a panic. They extrapolate: something that has risen 100% is bound to double again. They overestimate their own knowledge. They are overconfident (men especially; women less so, which makes women, on average, slightly better investors). Well-known biases include confirmation bias, anchoring, hindsight bias, the recency effect, and so on. These errors are not incidental but systematic. But will that remain the case?

It is mainly technical analysts and factor investors who say they ‘exploit’ investors' biases. Value stocks, for example, should perform better over the longer term because investors pay too much for growth (and for the underlying story of getting rich quick). Shares that have been rising sharply for a few months keep rising for a while longer because of herd behavior and naive extrapolation by investors: a possible explanation for why momentum works.

AI: rational salvation or new illusion?

More and more investment decisions are being made by algorithms, from quant funds to robo-advisors, and AI is increasingly being used in this process. The idea is that AI is more rational than humans and therefore does not make emotional or logical errors. But is that really the case? Apple's recent paper, “The Illusion of Thinking”, shows that LLMs (Large Language Models) and LRMs (Large Reasoning Models) fail as problem and data complexity increases. See also Figure 1.

Figure 1

Source: Apple Research / via Mark Worrall

The Apple paper shows that large language models (LLMs) and large reasoning models (LRMs) have three major shortcomings: (1) their performance declines significantly as questions become more complex, (2) they often give confident but incorrect answers to difficult tasks, and (3) LLMs invent answers (“hallucinate”), while LRMs tend to drag in irrelevant information. AI, too, can thus suffer from an illusion of understanding, similar to human overconfidence and errors in reasoning.

A new form of market dynamics?

The big question: if AI plays an increasingly important role in determining stock prices, will the rhythm of the market change as well? After the Apple paper, a definitive conclusion is hard to reach. Other ‘thinking errors’ are likely to emerge, such as model bias, data drift, and ‘algorithmic echo chambers’ (if that is even an established term).

New, self-reinforcing ‘feedback loops’ between humans and machines may also emerge, comparable to what we see in flash crashes or over-optimization.

Cautious conclusion: errors in reasoning will not disappear, they will mutate

What I am certain of is that AI is changing our skill set. Writing and speaking foreign languages will probably become ‘boomer skills’ (as my son would say). As far as market efficiency and the investment analyses and strategies that use it are concerned, I do not think, at least for the time being, that errors in thinking will disappear when the (human) thinker disappears. AI seems rational, but it is only as good as its training, the data used, and its design. Biases will take on different forms and names.

The latter conclusion is not particularly bold, incidentally. In behavioral finance we have already seen that biases can change. They change because people become aware of them and adjust their behavior, with or without models. The world itself also changes, leading to regime shifts. Nobel Prize winner Daniel Kahneman admitted that some of the biases he identified (together with Amos Tversky) have been overtaken by time or held only within certain (artificial) experimental settings.

To paraphrase Nietzsche: “Where human error disappears, a mechanical variant emerges; equally invisible, but just as decisive.”
 

This article contains the personal opinion of Harry Geels.