Optimal portfolio management means managing a portfolio in real time: taxes, rebalancing, risk, and circumstantial variables like cashflows. It's our job to fine-tune these decisions for our clients, and it's essential that those decisions be robust to the widest possible array of potential futures clients might face.
We recently re-optimized our portfolio to include more complex asset allocations and risk models (and it will soon be available). Next up was optimizing our portfolio management algorithms, which manage cashflows, rebalances, and tax exposures. It’s as if we optimized the engine for a car, and now we needed to test it on the race track with different weather conditions, tires, and drivers.
Normally, this is a process that can take years (which may explain why legacy investing services are slow to switch to algorithmic asset allocation and advice). But we did things a little differently, which saved us thousands of computing hours and hundreds of thousands of dollars.
First, the Monte Carlo
The testing framework we used to assess our algorithmic strategies needed to fulfill a number of criteria to ensure we were making robust and informed decisions. It needed to:
Include many different potential futures
Include many different cash-flow patterns
Respect path dependence (taxes you pay this year can’t be invested next year)
Accurately test how the algorithm would perform if run live
To test our algorithms-as-strategies, we simulated the thousands of potential futures they might encounter. Each set of strategies was confronted with both bootstrapped historical data and novel simulated data. Bootstrapping is a process by which you take random chunks of historical data and re-order them. This made our results robust to the risk of solely optimizing for the past, a common error in the analysis of strategies.
We used both historic and simulated data because they complement each other in making future-looking decisions:
The historical data allows us to include important aspects of return movements, like auto-correlation, volatility clustering, correlation regimes, skew, and fat tails. It is bootstrapped (sampled in chunks) to help generate potential futures.
The simulated data allows us to generate novel potential outcomes, like market crashes bigger than previous ones, and generally, futures different than the past.
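The bootstrapping described above can be sketched in a few lines. This is a minimal illustration of block (chunk) bootstrapping, not our production code; the function name, block size, and sample returns are all illustrative:

```python
import random

def block_bootstrap(returns, block_size, horizon, rng=None):
    """Resample a return series in contiguous chunks, preserving
    short-range structure like volatility clustering."""
    rng = rng or random.Random(42)
    path = []
    while len(path) < horizon:
        # Pick a random starting point and copy a whole contiguous block
        start = rng.randrange(0, len(returns) - block_size + 1)
        path.extend(returns[start:start + block_size])
    return path[:horizon]

historical = [0.001, -0.002, 0.0015, 0.003, -0.001, 0.002, -0.0005, 0.0012]
future = block_bootstrap(historical, block_size=3, horizon=10)
```

Because whole chunks are kept intact, features that live across consecutive days (auto-correlation, volatility clustering) survive the resampling, which a simple one-day-at-a-time bootstrap would destroy.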
The simulations were detailed enough to replicate how they'd run in our live systems. They included, for example, annual tax payments on net capital gains, and cashflows from dividends and from clients saving or withdrawing. They also showed how an asset allocation would perform over the lifetime of an investment.
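Path dependence is the reason each future has to be simulated day by day rather than summarized: taxes paid on one year's gains shrink the balance that compounds in every later year. Here is a deliberately toy sketch of that loop; real tax-lot accounting, dividends, and rebalancing are far more involved, and every parameter name here is illustrative:

```python
def simulate_path(daily_returns, initial=100_000.0, annual_saving=5_000.0,
                  tax_rate=0.20):
    """Path-dependent wealth simulation: taxes paid on each year's gains
    reduce the balance that compounds in later years."""
    balance = initial
    cost_basis = initial
    for year_start in range(0, len(daily_returns), 252):
        # Compound one year of daily returns
        for r in daily_returns[year_start:year_start + 252]:
            balance *= 1.0 + r
        gain = balance - cost_basis
        if gain > 0:
            balance -= tax_rate * gain    # annual tax due on realized gains
        balance += annual_saving          # client cashflow at year end
        cost_basis = balance
    return balance

final = simulate_path([0.0] * 504)  # two flat years: savings only
```

With zero returns the toy model just accumulates two years of savings, which makes the cashflow logic easy to check in isolation.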
During our testing, we ran over 200,000 simulations of daily-level returns for our 12 asset classes, each covering 20 years' worth of returns. We included realistic dividends at the asset-class level. In short, we tested a heckuva lot of data.
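For the novel simulated futures mentioned above, fat tails matter: a normal distribution almost never produces crashes bigger than history's. One common way to get heavier tails is a Student-t draw, which the standard library can produce from a normal and a chi-squared variate. This is a sketch of that idea under assumed parameters (degrees of freedom, daily volatility), not our actual return model:

```python
import math
import random

def fat_tailed_return(rng, df=4, daily_vol=0.01):
    """One Student-t daily return: heavier tails than a normal draw,
    so simulated futures can include crashes bigger than any in the sample."""
    z = rng.gauss(0.0, 1.0)
    chi2 = rng.gammavariate(df / 2.0, 2.0)   # chi-squared(df) via the gamma dist
    return daily_vol * z / math.sqrt(chi2 / df)

rng = random.Random(7)
path = [fat_tailed_return(rng) for _ in range(252 * 20)]  # one 20-year path
```

One path like this per asset class, times 200,000 simulations, is the scale of data the section above describes.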
Normally, running this Monte Carlo would have taken nearly a full year to complete on a single computer, but we created a far more nimble system by piecing together a number of existing technologies. By harnessing the power of Amazon Web Services (specifically EC2 and S3) and a cloud-based message queue called IronMQ we reduced that testing time to just six hours—and for a total cost of less than $500.
How we did it
1. Create an input queue: We created a queue containing every simulation—more than 200,000—we wanted to run. We used IronMQ to manage the queue, which allows individual worker nodes to pull inputs themselves instead of relying on a central system to monitor worker nodes and push work to them. This solved the problem found in traditional systems where a single node acts as the gatekeeper, which can get backed up, either breaking the system or leading to idle testing time.
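Each queue entry only needs to describe one simulation compactly. Setting IronMQ's client API aside, the enumeration step might look like the following sketch, where every field in the message is an assumption made for illustration:

```python
import json

def make_jobs(n_sims):
    """Enumerate every simulation as a small JSON message a worker can pull."""
    jobs = []
    for sim_id in range(n_sims):
        jobs.append(json.dumps({
            "sim_id": sim_id,
            "seed": sim_id,        # deterministic seed makes each run reproducible
            "years": 20,
            "asset_classes": 12,
        }))
    return jobs

jobs = make_jobs(200_000)
```

A deterministic seed per message is what later makes granular audits possible: any single simulation can be re-run bit-for-bit from its job description.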
2. Create 1,000 worker instances: With Amazon Web Services, we signed up to access time on 1,000 virtual machines. This increased our computing power a thousandfold, and buying time on these machines is cheap. We employed m1.small instances, relying on quantity rather than the horsepower of any single machine.
3. Each machine pulls a simulation: Thanks to the maturation of modern message queues, it is simpler and more robust to orchestrate jobs in a pull-based fashion than with the old push model, as mentioned above. In this model there is no single controller. Instead, each worker acts independently: when it is idle and ready for more work, it takes it upon itself to go out and find some. When there's no more work to be had, the worker shuts itself down.
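The pull model can be demonstrated locally with a thread-backed queue standing in for IronMQ and squaring a number standing in for running one simulation. This is a structural sketch of the worker loop, not the distributed system itself:

```python
import queue
import threading

def worker(jobs, results):
    """Each worker pulls its own work; when the queue is empty it stops."""
    while True:
        try:
            job = jobs.get_nowait()
        except queue.Empty:
            return                      # no more work: worker shuts itself down
        results.append(job * job)       # stand-in for running one simulation
        jobs.task_done()

jobs = queue.Queue()
for i in range(1000):
    jobs.put(i)

results = []
threads = [threading.Thread(target=worker, args=(jobs, results))
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Note what is absent: no scheduler, no bookkeeping of which worker has which job. Faster workers simply pull more often, so the load balances itself.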
4. Store results in central location: We used another Amazon Cloud service called S3 to store the results of each simulation. Each file — with detailed asset allocation, tax, trading and returns information — was archived inexpensively in the cloud. Each file was also named algorithmically to allow us to refer back to it and do granular audits of each run.
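"Named algorithmically" means the S3 key for any result can be reconstructed from the simulation's parameters, so audits never require scanning the bucket. The scheme below is illustrative, not the production naming convention:

```python
def result_key(sim_id, strategy, seed):
    """Deterministic S3-style key so any individual run can be
    located and audited later. Naming scheme is illustrative."""
    return f"results/strategy={strategy}/seed={seed:06d}/sim_{sim_id:06d}.json.gz"

key = result_key(sim_id=42, strategy="tax-aware", seed=42)
```

Zero-padded numeric fields keep keys lexicographically sortable, which makes prefix listings over a range of simulations cheap.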
5. Download results for local analysis: From S3, we could download the summarized results of each of our simulations for analysis on a “regular” computer. The resulting analytical master file was still large, but small enough to fit on a regular MacBook Pro.
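Building the analytical master file is then just concatenating per-simulation summaries. A stdlib-only sketch, with the column names invented for illustration:

```python
import csv
import io

def merge_summaries(summary_csvs):
    """Concatenate per-simulation summary rows into one master table."""
    master = []
    for text in summary_csvs:
        master.extend(csv.DictReader(io.StringIO(text)))
    return master

files = [
    "sim_id,final_value\n0,181000.5\n",
    "sim_id,final_value\n1,94500.0\n",
]
rows = merge_summaries(files)
```

Keeping only summary statistics per simulation is what shrinks 200,000 runs down to something a laptop can analyze.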
We ran the Monte Carlo simulations over two weekends. Keeping our overhead low while delivering top-of-the-line portfolio analysis and optimization is a key way we keep investment fees as low as possible. This is just one more example of where our quest for efficiency—and your happiness—paid off.
This post was written with Dan Egan.