Today, almost everyone uses GPS in their cars to find driving directions. If you use GPS regularly, you have probably experienced the system returning incorrect directions. While perhaps not totally surprised, you were still disappointed as you recovered from the errant routing.
Now consider a different situation: you are driving, get stuck in traffic, and decide to take an alternative route. If that turns out to be a poor decision and you lose a lot of time, you are equally disappointed with the bad call you made.
With the GPS, your advisor was a zillion complex algorithms working to make a good routing forecast. In the second situation, the advisor was a trusted human making a rerouting forecast based on experience. Which advisor should be trusted more? How do we react to the errors made by these two advisors – man versus machine?
A recent study by Berkeley Dietvorst(1) at The Wharton School provided some interesting results, and they have been supported by other studies. Reflecting on the example above, the study showed that people quickly lost confidence in the GPS system (i.e., the algorithm). With humans, however, trust is either not lost or is quickly regained. And it did not matter whether the human was the actual decision maker or someone else; this higher trust in humans over machines was material.
The two principal conclusions were:
- Algorithms consistently outperform humans
- Humans consistently prefer other humans (even when conclusion #1 is known)
While there are reasons for this human behavior, it is not logical. It leads to inferior forecasting decisions; in other words, it is predictably expensive. And if you are in the research and investment management business, this illogical bias could be exceptionally expensive.
Dietvorst labeled this phenomenon Algorithm Aversion. Most human-versus-algorithm studies compare the eventual accuracy of forecasts. The types of forecasting decisions tested span numerous disciplines, and the studies have shown that even simple linear forecasting algorithms outperform human predictions. When someone is given data proving that an algorithm’s predictions will be more beneficial than an expert’s, why would a decision maker still choose the expert? It doesn’t seem logical. Here are some of the reasons cited in the various studies:
- A Perfect Forecast – We ever-hopeful humans are always looking for the perfect forecast. Although algorithms may outperform humans on average, studies have shown that people believe an expert in the field has a better chance of making a perfect forecast.
- Human Learning – Perceived advancement toward that perfect forecast may come from the knowledge that people get better with practice.
  - Humans learn from their mistakes. Here again, algorithms are underappreciated: in reality, evolving models/algorithms learn, retain and grow much faster. Algorithms benefit from consistency of application, research and selection of data items, model weightings, etc.
  - It was also interesting that participants consistently lost confidence in the model after seeing it make relatively small mistakes, yet after seeing a human make relatively large mistakes, confidence did not consistently fall. Decision makers were more forgiving and tolerant of mankind than of the output of the more accurate model.
- Qualitative Absence – This concern stems more from a misunderstanding of the qualitative analysis process. Qualitative analysis is used in all forecasts and decisions.
  - Traditional qualitative analysis is when one or more individuals create rules and make decisions based on their experience, judgment and assessment of the data.
  - In building algorithms, analysts do the same thing. However, the algorithm is coded, run over multiple time periods and conditions, and enhanced over time. The qualitative analysis is quantified and modified, evolving toward more optimal decisions.
- Dehumanizing & Ethicality – This concern relates to how decisions are controlled. Simple controls with direct human oversight are more comforting than complex automated controls … the fear of machines run amok. We trust our fellow humans more than inhuman creations, like Frankenstein’s monster.
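The studies’ core finding – that even a simple linear rule beats an inconsistent expert – can be illustrated with a small sketch. This is purely hypothetical: the data are synthetic, the "human expert" is stylized as someone who weighs the same cues but applies the weights inconsistently from case to case, and the linear rule is a crude unit-weighting model. None of this reflects any actual forecasting system.

```python
import random

# Hypothetical forecasting task with synthetic data (illustration only).
# The "true" outcome depends linearly on two cues plus noise.
random.seed(42)

def make_case():
    x1, x2 = random.uniform(0, 10), random.uniform(0, 10)
    outcome = 2.0 * x1 + 1.0 * x2 + random.gauss(0, 2)
    return x1, x2, outcome

train = [make_case() for _ in range(200)]
test = [make_case() for _ in range(200)]

# "Simple linear algorithm": equal weights on the two cues, scaled so the
# average forecast matches the average training outcome. Deliberately crude,
# but applied with perfect consistency.
mean_y = sum(y for _, _, y in train) / len(train)
mean_cues = sum(x1 + x2 for x1, x2, _ in train) / len(train)
scale = mean_y / mean_cues

def model_forecast(x1, x2):
    return scale * (x1 + x2)

# Stylized "human expert": knows roughly the right weights but applies them
# inconsistently (extra random error in the weights on every judgment).
def human_forecast(x1, x2):
    w1 = 2.0 + random.gauss(0, 1.0)
    w2 = 1.0 + random.gauss(0, 1.0)
    return w1 * x1 + w2 * x2

def mse(forecaster, cases):
    # Mean squared forecast error over a set of cases.
    return sum((forecaster(x1, x2) - y) ** 2 for x1, x2, y in cases) / len(cases)

print("model MSE:", round(mse(model_forecast, test), 2))
print("human MSE:", round(mse(human_forecast, test), 2))
```

Even though the consistent rule uses the "wrong" (equal) weights and the expert knows the right ones, the expert’s inconsistency produces the larger average error – the same pattern the linear-model literature reports.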
Why is this Important to You?
CornerCap is in the investment business – making assessments of future events and making significant financial commitments to those assessments. It would be exceptionally costly for an investment professional to make decisions based on a bias that consistently underperforms. Human emotions, like excitement and fear, cause extreme herd-like biases that can devastate an investment portfolio. Much of our quantitative algorithm-laced research is specifically designed to not only avoid those costly biases but to place contra-wagers that will take advantage of other investors’ naivety.
Our extensive research requires that we be in the algorithm business. With our Fundametrics® equity research system, we have dozens of return- and risk-measuring algorithms (Attributes) with which we test thousands of stocks … by security, size, sector, industry, style, etc. We have over 30 years of real-time weekly data (not back-tested). We know which algorithms have worked, how much they worked, the cycles they tended to follow, and generally how they behaved in combination with other algorithms (Composite Attributes).
CornerCap is also in the human business – convincing clients and potential clients of the logic, quality and consistency of our process. The natural bias of the buying public for the human guru over our “brilliant machine” (Fundametrics®) presents a communication challenge. Our only bias must always be toward doing what we believe will provide the greatest long-term risk-adjusted return for our clients.
In summary, one of CornerCap’s significant investment advantages over other firms is the human bias that most of these firms exhibit. Investment firms are prone to exaggerate their subjective expertise and will confidently override the algorithmic recommendations. Statistically, the probabilities will always favor our tested models over the investment experts.
Past performance is no guarantee of future results, and all investments are subject to risk of loss.
(1) Dietvorst, Berkeley J., Simmons, Joseph P., and Massey, Cade, “Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err” (July 6, 2014). Journal of Experimental Psychology: General. Available at SSRN: https://ssrn.com/abstract=2466040 or https://dx.doi.org/10.2139/ssrn.2466040