The DIY Financial Advisor 1 – Experts

Today’s post is our first visit to a new book – The DIY Financial Advisor from the team at Alpha Architect.

The book has three authors: Wes Gray, Jack Vogel and David Foulke.

At the time it was written (2015), all three were working at Alpha Architect (AA), but Foulke has since left.

  • AA is an asset manager and a consultancy to family offices, and the book acts as a synopsis of their research findings in carrying out this work.

AA has a broad definition of a family office, which could extend to almost any individual investor.

The key advantage of the family office – or as we’ll call them, the DIY Investor – is that they can make long-term decisions designed to maximise risk-adjusted performance after fees and taxes.

  • The role of the fund manager can create short-term incentives which conflict with this long-term goal.

The themes of the book are that you can do better than the “experts”, but that you need to avoid psychological traps and stick to an investing framework.

  • The one they present in the book is called FACTS (fees, access, complexity, taxes and search).

Beating the experts

The first part of the book explains why you can beat the experts.

  • AA detail the performance of experts, explain their self-interest and susceptibility to the same behavioural biases as the rest of us, and show how they rely on stories rather than facts.

They start with the story of Victor Niederhoffer, a noted hedge fund manager and academic who blew up his first fund with a volatile options strategy, and his second a decade later during the 2007 crisis.

  • Outside of these two blowups, he delivered high returns without obvious excessive downside risk.

But it’s the blow-ups which matter.

  • No DIY investor could afford to run his portfolio that way.

Later in the book they tell the similar story of Jon Corzine, who bankrupted MF Global through repo-to-maturity bond transactions leveraged at 40-1.

Contrasting Niederhoffer with co-author Wes Gray – who has a similar academic and financial background – they say the main difference is Wes’s scepticism about his own ability to beat the market.

  • This protects him from overconfidence.

The next section looks at the checkerboard illusion, where a shadow makes a square appear to be lighter in colour than another square which in fact is exactly the same shade of grey.

It’s part of a wide body of evidence that:

Humans are prone to poor decision-making across a broad range of situations.

Or as Dan Ariely puts it:

The human mind is predictably irrational.

And systematic decision-making (which uses models) outperforms discretionary decision-making (which relies on experts).

Incentives

AA look at the incentives of experts:

  1. They are short-term focused rather than long-term.
  2. Fund managers are motivated by clients and assets rather than outperformance.
    • So they may implement sub-optimal strategies with less chance of underperforming a benchmark.
  3. They use complicated jargon to make things seem more difficult for “normal people” to achieve.
    • Complexity supports higher fees.
The process

AA detail a three-stage process:

  1. Research and development (build the system)
  2. Systematic implementation
  3. Evidence-based assessment (review)

Stages 1 and 3 need human experts (or at least the products of their labour) whereas AA suggest that humans should not be involved in stage 2.

  • AA reference the development of standard operating procedures (SOPs) within the US Marines (where Wes Gray served as an officer for four years).

Execution should always be as systematic as possible, to minimise human error.

Decision myths

The (erroneous) idea that experts can outperform models relies on three incorrect assumptions:

  1. That qualitative information increases forecast accuracy.
  2. That more information increases forecast accuracy.
  3. That experience and intuition increase forecast accuracy.

In fact, the opposite is true:

  • All of these things make forecasts worse.

Evidence

In Chapter 2, AA present some of the evidence for models beating experts:

  1. The Pennsylvania Parole Board found that a simple model – based on offense type, number of convictions and prison rule violations – outperformed expert parole officers in predicting recidivism rates.
    • Algorithmic parole decisions are now the norm rather than the exception.
  2. Grove et al. carried out a meta-analysis of 136 published studies in areas such as academic performance, advertising sales, medical diagnosis, business failure, university admissions and wine quality.
    • Models beat experts 46% of the time.
    • Models were at least as good as experts 94% of the time.
    • Experts beat models just 6% of the time.
  3. When provided with the results from a (superior) model, experts improve, but they still underperform the model.
    • That’s because on occasion they use human “judgement” to overrule it.
    • Models are not a floor on performance, but rather a ceiling from which experts detract.
  4. Joel Greenblatt (famous for his Little Book that Beats the Market) runs Formula Investing, a firm which provides formula-based accounts but also allows clients to override the system.
    • The formula accounts beat the index, but the discretionary accounts lag both the system and the index.
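The “ceiling” point lends itself to a toy simulation. The sketch below is purely illustrative – the factor weights, the 80% model accuracy and the 30% override rate are all invented numbers, not the parole board’s actual model or AA’s figures:

```python
# Toy illustration of "models as a ceiling": a fixed scoring rule applied
# consistently, versus the same rule with occasional discretionary
# overrides. All weights and rates are invented for illustration.
import random

def model_decision(case, threshold=5):
    # Simple additive score: each factor gets a fixed weight.
    score = 2 * case["convictions"] + 3 * case["violations"]
    return "deny" if score >= threshold else "grant"

rng = random.Random(42)
cases = [{"convictions": rng.randint(0, 5), "violations": rng.randint(0, 3)}
         for _ in range(1000)]

# Assume (hypothetically) the model's decision matches the correct
# outcome 80% of the time.
truth = [model_decision(c) if rng.random() < 0.8
         else ("grant" if model_decision(c) == "deny" else "deny")
         for c in cases]

def expert_decision(case, override_rate=0.3):
    # The "expert" sees the model's output but overrides it some of the
    # time on the basis of judgement that carries no extra information.
    d = model_decision(case)
    if rng.random() < override_rate:
        return "grant" if d == "deny" else "deny"
    return d

model_acc = sum(model_decision(c) == t for c, t in zip(cases, truth)) / len(cases)
expert_acc = sum(expert_decision(c) == t for c, t in zip(cases, truth)) / len(cases)
```

Under these assumptions the model scores around 80% while the overriding expert drifts down towards 0.7 × 0.8 + 0.3 × 0.2 = 62% – uninformed overrides can only pull performance away from the model’s ceiling.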

Why experts fail

To explain why experts fail to beat models, AA take us back to Kahneman’s work on the two decision making systems in his book Thinking, Fast and Slow.

  • System 1 is fast, instinctual and based around heuristics.
  • System 2 is calculated and analytical.

System 1 is good for running away from tigers, but not so good at choosing the best investments.

Here are some of the key problems:

  1. Inconsistency
    • Different humans will produce different results from the same data, and even the same human will produce different results at different times.
    • Issues include anchoring, framing and the availability bias.
    • Models always produce the same result.
  2. Overconfidence
    • This can be due to hindsight bias and self-attribution bias.
    • We believe events were more predictable than they seemed at the time.
    • And we take credit for good outcomes, but blame poor ones on bad luck.
  3. Reliance on stories
    • Humans like stories that fit the facts, but don’t necessarily explain them.
    • This is particularly the case where probabilities are concerned.
    • If “necessary”, we will invent a story to explain the facts.

Conclusions

That’s it for today.

  • We’re about a fifth of the way through the book, so I expect another four articles in this series (plus a summary at the end).

I like it so far – it’s clear and mostly based on academic research, with a few anecdotes thrown in for flavour.

  • This is not a surprise to me, as I’m a regular reader of the AA blog.

So far, AA have made a convincing case that a systematic DIY investor – armed with a model – can beat the experts.

  • But the proof of the pudding will be in the second part of the book, when their particular DIY solution is presented.

Until next time.

Mike is the owner of 7 Circles, and a private investor living in London. He has been managing his own money for 40 years, with some success.


by Mike Rawson