This blog post is the fourth in a series “Be skeptical of what you read in the paper – question everything” which profiles incredibly idiotic stories in the popular press.  While this doesn’t fit exactly into that category, I feel that the reporter missed an incredible opportunity to write a really provocative story by not looking at the “negative space”.  In this post, we will look at what isn’t there rather than what is there.

The Wall Street Journal ran a story on August 27th titled “Transamerica Reaches Pact with SEC”.  The article stated that Transamerica agreed to pay a $97.6 million fine as part of a settlement with the US Securities and Exchange Commission. The story goes on to say that the Transamerica portfolios under scrutiny were based on quantitative models and that, “The models… …were developed solely by an inexperienced junior analyst, contained numerous errors and didn’t work as promised…” (emphasis added)

In this type of case, the SEC typically issues an order and SEC orders are usually entertaining reading.  Here is the link:

The basic fact pattern was that between July 2011 and June 2015, AEGON Investment Management (a subsidiary of Transamerica) was marketing 15 funds saying that they were “managed using a proprietary quant model”. Then, “During the summer of 2013, AUIM (the subadviser of the Products and Strategies) discovered that certain of the models contained errors and concluded that these errors rendered at least one of the models “to not be fit for purpose.” (emphasis added)

Quoting from the SEC order:

Starting in 2010, AUIM tasked the Analyst, who had recently earned his MBA, but had no experience in portfolio management or any formal training in financial modeling, with developing quantitative models for use in managing investment strategies (i.e., models making investment allocation and models making trading decisions). AUIM ultimately used these models to manage each of the Products and Strategies. The Analyst did not follow any formal process to confirm the accuracy of his work, and AUIM failed to provide him meaningful guidance, training, or oversight as he developed the models or to confirm that the models worked as intended before using them to manage client assets.

During the summer of 2013, AUIM determined that its allocation models used to manage the TTI Fund and AAA Portfolios contained material errors. For example, AUIM found that the TTI Fund asset allocation model contained “numerous errors in logic, methodology, and basic math” and concluded that these errors rendered it to “not be fit for purpose.”

While the reporter focused on the sensational aspects of the story, I thought there might be something even more provocative under the surface.  So my first question was: how did the fund actually perform from inception until September of 2013?

Looking at data from the Morningstar website, the performance of the Fund was actually not that bad.  Below is a chart of IGTAX (the TTI Fund) alongside a Morningstar category of funds holding 15% to 30% in equities.  The TTI Fund prospectus stated, “Under normal circumstances, the fund’s equity allocation will generally vary between 20% and 35% of its net assets.” So this category would seem to be an appropriate benchmark for our purposes.

To look at performance a different way, by comparing the TTI Fund to its peers, we see that the fund ranked 52nd in its category in 2012 and 50th in 2013. What we don’t know is exactly how many funds were in the category in those years, as Morningstar doesn’t list it.  But for five-year returns Morningstar lists 154 funds, and for ten-year returns it lists 83. If we estimate that there were about 110 funds in the category six or seven years ago, that would put the TTI Fund right in the middle.
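The back-of-the-envelope math here is easy to check. A minimal sketch in Python, where the category size of 110 is my own rough estimate (not a published Morningstar figure):

```python
def percentile_rank(rank, category_size):
    """Fraction of peer funds this fund beat, where rank 1 is the best performer."""
    return (category_size - rank) / category_size

# Ranks reported by Morningstar; category size is an assumed estimate.
assumed_size = 110
for year, rank in [(2012, 52), (2013, 50)]:
    beat = percentile_rank(rank, assumed_size)
    print(f"{year}: rank {rank} of ~{assumed_size} peers -> beat roughly {beat:.0%} of them")
```

A rank of 52 out of roughly 110 means the fund beat a little over half of its peers, which is exactly the "right in the middle" result described above.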

So that means that half of the funds in the category performed better and half performed worse.  Wait a second… …The TTI Fund was managed by someone without experience and its model contained numerous errors… …yet it outperformed half of the funds in its category.  Who was managing the funds that performed worse?  Presumably, they were run by people who were competent and knew (or thought they knew) what they were doing. What kinds of models were they using? I find that shocking, like the realization that the proverbial monkey throwing darts at the stock pages of a newspaper will pick stocks better than a professional.  Not good news for “active” management.

This example shows that there is randomness in all performance, yet we tend to ascribe skill to fund managers who outperform their peers.  This cobbled-together model outperformed half of its peers, yet its performance later turned out to be a fluke.  There is a lesson here: be careful about ascribing skill to any manager who has outperformed their peers over a short period of time.

To read more about our views on active management, please refer to pages 150 to 155 in our book Pitch the Perfect Investment. There are two boxes “Sobering Advice to Students” and “Swimming with Sharks in Las Vegas and Wall Street” which are particularly interesting.

For more information, please visit our website or order our book using this link to Amazon.

