Systematic Trading 2 – Fitting and Allocation

Today’s post is our second visit to Robert Carver’s book Systematic Trading.

Fitting

Chapter three of Rob’s book is about creating trading rules from data.

  • My plan (initially at least) is to borrow proven rules from other sources, rather than build my own.
  • So this section is not directly relevant to me.

Rob describes the process of choosing rules that work and discarding those that don’t as fitting.

  • The danger to be avoided is over-fitting, where the rules fit the past data too well and are unlikely to work so well in the future.

Note that Rob classifies my use of proven rules as a form of over-fitting.

Ideas first

One way to end up with over-fitting is to sift a few apparently profitable rules from thousands of possible variations.

  • Rob prefers the “ideas first” approach to rule generation.

If the basic idea looks good against the historical data, it can be taken through the process of calibration, where small variations on the idea are tested to see if they make things better or worse.

  • Note that taking this calibration too far will lead to over-fitting.

Rob uses calibration to find rules that behave in a particular way, such as trading at a given speed, or that have certain correlations with other rules, rather than simply looking for variations with better returns.

Sharpe ratio

Rob selects between rules and variations using the Sharpe Ratio (SR), which we’ve written about before.

When we looked at one of Rob’s other books (Smart Portfolios), I found the focus on SR less than ideal, because the max SR portfolio doesn’t have high enough returns for my purposes.

  • The way around this is to allocate more funds to higher risk, higher return assets (stocks – usually by capping the allocation to bonds).

This time around, things are different.

  • I will implement my trading rules as part of a core and satellite approach.

My passive allocation will be the core, and all my active strategies will together form the satellite.

  • In the satellites, I’m not looking for higher returns so much as a different pattern of returns.

As such, SR is a fine test for a rule (I might prefer Sortino, but that’s more complicated to calculate).

  • And the higher the SR the better.
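
Since SR crops up throughout, here’s a minimal sketch of how it might be computed from a stream of daily returns. The function name, the 252-day annualisation and the simulated returns are my own illustrative assumptions, not anything from Rob’s book.

```python
import numpy as np

def sharpe_ratio(daily_returns, risk_free_daily=0.0, periods_per_year=252):
    """Annualised Sharpe ratio of a series of daily returns."""
    excess = np.asarray(daily_returns) - risk_free_daily
    return np.sqrt(periods_per_year) * excess.mean() / excess.std(ddof=1)

# Example: five years of noisy daily returns with a small positive edge
rng = np.random.default_rng(42)
returns = rng.normal(loc=0.0004, scale=0.01, size=252 * 5)
print(round(sharpe_ratio(returns), 2))
```
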
Time periods

Fitting usually uses two time periods:

  1. the time (data) used to fit the rule
  2. the (usually later) period used to test it

These are known as in sample and out of sample data periods.

If you have a limited sample of data (say 10 years), it is bad practice to use the entire 10 years to fit the rule.

  • The issue is that at the start of the 10 years, you wouldn’t have had the data from the subsequent years, so this approach will flatter your rules.

A common alternative is to split the data into two periods.

  • In one sense, this “wastes” half of your data.
  • And your rule won’t take into account any changes between the two periods (though your re-test should show the effects of this).

Rob prefers to use an expanding window.

  • For each year of data, you test on that year using a rule fitted to all of the preceding years.

This approach has the opposite issue of data-splitting.

  • If the data changes half-way through, you will still be including the less relevant data (though your re-tests should gradually account for this).

The final approach to get around this is a rolling window.

  • The issue here is getting the window long enough to produce statistically significant results (which can take decades).
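
To make the three schemes concrete, here is a rough sketch (my own illustration, not code from the book) of how each one carves, say, 10 years of data into fitting and testing periods.

```python
def fit_test_splits(n_years, scheme="expanding", window=5):
    """Yield (fit_years, test_year) pairs for a given fitting scheme."""
    for test_year in range(1, n_years):
        if scheme == "expanding":
            fit = list(range(0, test_year))                    # all preceding years
        elif scheme == "rolling":
            fit = list(range(max(0, test_year - window), test_year))
        else:  # simple split: fit on the first half, test on the second
            half = n_years // 2
            if test_year < half:
                continue
            fit = list(range(0, half))
        yield fit, test_year

for fit, test in fit_test_splits(10, scheme="expanding"):
    print(f"fit on years {fit} -> test on year {test}")
```
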
Rule variations

Rob prefers to run a blend of rule variations since we rarely have enough data to prove that one is better than another.

  • Whatever threshold (e.g. an SR hurdle) we set for their selection, a few bad rules will sneak past.

The lower the SR target, the more rules tested and the fewer years of data we have, the easier it is for bad rules to be included.

Finding a good rule

Rob’s next step is to assume a positive SR for a rule, and generate random daily returns, to produce a distribution of annual SRs.

  • He wants to know how confident he can be that the SR really is greater than zero.

The T-test is traditionally used for this, with a two-sigma (95%) threshold.

  • An average SR that is two standard deviations above zero has only a 2.5% chance of being produced by a rule with a negative SR.

With an SR of 0.5, the mean quickly converges to the true value, but it takes 10 years of data before the lower two-sigma band moves above zero.

  • With a “better” rule (one with a higher SR of 1.0), you only need a few years of data.

For the average rule with an SR of 0.3, you need 37 years of history to be reasonably confident!
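
The experiment, as I understand it, can be reproduced with a quick simulation: assume a true SR, generate random daily returns, and check how many years of data are needed before the lower two-sigma band of the estimated SR clears zero. All the parameters below are illustrative assumptions.

```python
import numpy as np

def years_to_confidence(true_sr, max_years=40, n_sims=2000, seed=0):
    """First sample length (in years) where mean(SR) - 2*std(SR) > 0."""
    rng = np.random.default_rng(seed)
    daily_mean = true_sr / np.sqrt(252)      # daily returns with unit daily vol
    for years in range(1, max_years + 1):
        rets = rng.normal(daily_mean, 1.0, size=(n_sims, years * 252))
        srs = np.sqrt(252) * rets.mean(axis=1) / rets.std(axis=1, ddof=1)
        if srs.mean() - 2 * srs.std(ddof=1) > 0:
            return years
    return None

print(years_to_confidence(0.5))   # higher SR: fewer years of data needed
print(years_to_confidence(0.3))   # lower SR: many more years needed
```
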

Two rules

Next Rob compares a rule with SR 0.3 to a rule with SR 0.8.

  • It takes three decades of data before the lower two-sigma band of the difference between the two SRs goes above zero.

The initial test assumed no correlation between the rules.

  • Later on, Rob looks at higher correlations and different levels of SR advantage.

It’s still pretty hard to distinguish between rules unless they are fairly similar (highly correlated) and one is much better than the other (large SR gap).
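
A similar simulation (again my own sketch, with illustrative parameters) shows why: generate two correlated return streams with different true SRs and look at where the lower two-sigma band of the SR difference sits.

```python
import numpy as np

def sr_difference_lower_band(sr_a, sr_b, corr, years, n_sims=1000, seed=1):
    """Lower two-sigma band of the estimated SR difference between two rules."""
    rng = np.random.default_rng(seed)
    n_days = years * 252
    means = np.array([sr_a, sr_b]) / np.sqrt(252)
    cov = np.array([[1.0, corr], [corr, 1.0]])
    diffs = np.empty(n_sims)
    for i in range(n_sims):
        rets = rng.multivariate_normal(means, cov, size=n_days)
        srs = np.sqrt(252) * rets.mean(axis=0) / rets.std(axis=0, ddof=1)
        diffs[i] = srs[0] - srs[1]
    return diffs.mean() - 2 * diffs.std(ddof=1)

print(sr_difference_lower_band(0.8, 0.3, corr=0.0, years=30))   # uncorrelated: marginal
print(sr_difference_lower_band(0.8, 0.3, corr=0.9, years=30))   # highly correlated: clearer
```
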

Instruments

Many traders fit a different rule for each instrument, even those which are closely related.

  • Rob is against this and sees it as another version of over-fitting.

You should design trading rules that are generic and can work with any instrument.

Rob uses the same rules across all instruments.

  • This makes them easier to distinguish (since the SRs over a set of instruments are much higher).

How many rules

Rob uses eight rules which map to five different themes.

  • But he would start with just two: trend following and carry.

Price momentum needs several rules (Rob suggests four or five) to catch moves with different speeds.

Diversification across instruments is more important, as different rules within the same instrument are usually more closely correlated than the same rule across instruments.

  • Any pair of variations with a correlation of 0.95 or higher should be pruned back to a single rule.

You can also remove variations whose trading costs will be too high, or which trade extremely slowly and so are unlikely to give you significant returns.
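
A greedy pruning pass along these lines might look like the sketch below (my own illustration; the 0.95 threshold is the one quoted above, everything else is assumed).

```python
import pandas as pd

def prune_correlated_variations(returns: pd.DataFrame, threshold: float = 0.95):
    """Keep one variation from any pair correlated at the threshold or above.

    returns: DataFrame with one column of daily returns per rule variation.
    """
    corr = returns.corr()
    kept = []
    for name in returns.columns:
        if all(abs(corr.loc[name, other]) < threshold for other in kept):
            kept.append(name)
    return kept

# e.g. prune_correlated_variations(variation_returns) -> surviving variation names
```
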

Rob won’t drop rules simply based on historic performance; he saves this data for deciding on forecast weights.

I suppose, in theory, it might even be possible to use rules with negative expected performance if their correlations with the other rules are low (or negative enough).

  • In asset allocation, holding gold (in moderation) is a good example of this principle.

In practice, Rob rules this out, since you can’t short trading rules.

  • The minimum weight is zero.

Portfolio allocation

This process of deciding how to share out your trading capital between instruments and trading rules is known as portfolio allocation.

  • And it’s the subject of the next chapter in Rob’s book.

Systems traders also need to decide what “forecast weights” to use when combining rules together to forecast the price of an instrument.

We’ve come across this problem before in one of Rob’s other books (Smart Portfolios), when we needed to allocate capital across assets, geographies and styles to create a passive portfolio.

So we already know about:

  1. the dangers of formal optimisation (a portfolio weight equivalent of over-fitting)
  2. the bootstrapping technique that Rob uses to “fix” the standard optimisation process (sketched below), and
  3. the benefits of Rob’s hand-crafting approach (something close to which I had been using for decades without having a name for it).
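
As a refresher, here is a rough sketch of the bootstrapping idea as I understand it: resample the return history many times, optimise weights on each resample, and average the results so that no single noisy history dominates. The function and its parameters are my own illustrative assumptions.

```python
import numpy as np

def bootstrap_weights(returns, n_boot=100, seed=0):
    """returns: array of shape (n_days, n_rules) of daily rule returns."""
    rng = np.random.default_rng(seed)
    n_days, n_rules = returns.shape
    collected = []
    for _ in range(n_boot):
        sample = returns[rng.integers(0, n_days, size=n_days)]       # resample days
        mu = sample.mean(axis=0)
        cov = np.cov(sample, rowvar=False) + 1e-8 * np.eye(n_rules)  # keep invertible
        w = np.linalg.solve(cov, mu)     # unconstrained mean-variance direction
        w = np.clip(w, 0.0, None)        # trading rules can't be shorted
        if w.sum() > 0:
            collected.append(w / w.sum())
    return np.mean(collected, axis=0)    # average weights over all resamples
```
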

Rob uses volatility standardisation (“risk weights”) to ensure that trading rules have an identical expected standard deviation of returns.

  • Which means that you only need to use expected Sharpe ratios and correlations to work out your weights.
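
In code, volatility standardisation might look something like this sketch (the 20% annual volatility target is an illustrative assumption):

```python
import numpy as np

def vol_standardise(returns, target_annual_vol=0.20, periods_per_year=252):
    """Scale each rule's return column to the same target annual volatility.

    returns: array of shape (n_days, n_rules).
    """
    realised_vol = returns.std(axis=0, ddof=1) * np.sqrt(periods_per_year)
    return returns * (target_annual_vol / realised_vol)
```
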
Handcrafting

We’ve looked at the handcrafting process in some detail before, but there are a couple of useful charts and tables in this earlier book.

Rob assumes equal volatility and equal SRs between assets (or instruments/rules if you prefer), which means that correlations are the variable driving allocation weights.

The basic process is to group similar (highly correlated) assets together, and then to allocate within and between groups according to correlations.

  • Top-level groups can be broken down into lower-level groups as required.
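
At its simplest, the allocation step reduces to something like the sketch below: equal weights between groups, then equal weights within each group. The group names and members are purely illustrative.

```python
def handcraft_weights(groups):
    """groups: dict mapping group name -> list of asset/rule names."""
    n_groups = len(groups)
    weights = {}
    for members in groups.values():
        for name in members:
            weights[name] = (1.0 / n_groups) / len(members)
    return weights

print(handcraft_weights({
    "trend": ["fast_momentum", "slow_momentum"],
    "carry": ["carry"],
}))
# {'fast_momentum': 0.25, 'slow_momentum': 0.25, 'carry': 0.5}
```
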
SR adjustments

Handcrafting assumes equal SRs, since you usually don’t have enough data to prove otherwise.

  • Higher costs are one situation that might produce different SRs.

Big differences in rule performance are another.

  • Rob describes a process for adjusting weights to reflect SRs, and provides a table of adjustments.

Rob notes that backtested SRs are unlikely to be repeated in the future.

  • He quotes a “pessimism factor” of 65% for the backtested SR where variable SRs are used to produce handcrafted weights – the future SR will probably be only 65% as high.

The pessimism factor is only 70% where all SRs are assumed to be equal.
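
As a toy example of the adjustment (the backtested SR of 0.4 is my own illustrative number):

```python
backtested_sr = 0.4
print(round(backtested_sr * 0.65, 2))   # 0.26 - handcrafting with variable SRs
print(round(backtested_sr * 0.70, 2))   # 0.28 - handcrafting with equal SRs
```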


Note that any rule with a positive SR could be helpful, depending on its correlation with (say) an existing passive portfolio.

  • Residual (future, post-pessimism) SRs above 0.2 are very likely to be useful.
Conclusions

Considering that fitting is a process I am unlikely to use directly, and that hand-crafting is a process with which I am familiar from Rob’s other book, I found today’s chapters surprisingly entertaining.

  • We can now move on to Rob’s framework for building a trading system.

Until next time.
