Sign up now before the month’s end: free monthly trade email alerts have been implemented, with both position entry and position exit reminders. The emails are sent out automatically by the backend a day before each entry or exit. Positions are exited on the last trading day of every month; new positions are entered on the first trading day of the next month.

Note: the historical performance results on adaptivwealth are based on using market-on-close orders, an order type that lets you buy or sell stocks right as the market closes.

Other features include the ability to view the adaptive Minimum Variance Portfolio’s historical allocations. One can see the benefits of being dynamic (vs. static, such as wealthfront.com or betterment.com) during the last few months of 2007, going into 2008: the MVP during this period was around 80% in US intermediate-term bonds (IEF), which largely protected the portfolio from the precipitous losses the stock market experienced in the next year.

The MVP vs. VTI performance table (shown below, or when you mouse over the performance time series chart on the main adaptivwealth page) also shows the benefits of an adaptive/dynamic allocation model.

The Minimum Variance Portfolio has a comparable compound annualized growth rate (since June 2006, when the ETFs it uses came online) to that of VTI, the Vanguard Total Stock Market ETF, a proxy for the overall US stock market. The MVP has a much lower maximum drawdown (-16.5% compared to VTI’s -55%), and almost double the Sharpe Ratio (0.62 vs. 0.35): in essence, it seems that the adaptive Minimum Variance Portfolio achieves stock-market-like returns over the long run with much lower volatility than the stock market.
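These summary statistics are straightforward to reproduce from a monthly return series. A minimal sketch in Python; the function names and the sample return series are hypothetical, and the risk-free rate is assumed to be zero for the Sharpe Ratio:

```python
import numpy as np

def cagr(returns, periods_per_year=12):
    """Compound annualized growth rate from periodic returns."""
    years = len(returns) / periods_per_year
    return np.prod(1 + returns) ** (1 / years) - 1

def max_drawdown(returns):
    """Worst peak-to-trough decline of the cumulative equity curve."""
    equity = np.cumprod(1 + returns)
    peaks = np.maximum.accumulate(equity)
    return (equity / peaks - 1).min()

def sharpe_ratio(returns, periods_per_year=12):
    """Annualized Sharpe Ratio, risk-free rate assumed zero."""
    return np.sqrt(periods_per_year) * returns.mean() / returns.std(ddof=1)

# hypothetical monthly return series, for illustration only
monthly = np.array([0.02, -0.01, 0.03, 0.01, -0.04, 0.015] * 20)
print(round(cagr(monthly), 4), round(max_drawdown(monthly), 4),
      round(sharpe_ratio(monthly), 2))
```

The same three functions applied to the MVP’s and VTI’s monthly returns would produce the table above.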


# adaptivwealth: the new web app that I made to bring adaptive asset allocation to the masses

I recently finished the beta version of a web app I’ve been building that brings adaptive asset allocation to the masses.

I’ve written about adaptive asset allocation in several previous posts. Essentially, it’s the idea that traditional Markowitz mean-variance asset allocation can be improved–generating portfolios that have better risk-adjusted performance–by making the models more adaptive to market changes.

What’s the point of the web app?

adaptivwealth’s goal is to make models that try to improve upon the weaknesses of traditional asset allocation more accessible to individual investors.

Asset allocation–allocating one’s money to different asset classes such as equities, bonds, and commodities–often produces more diversified portfolios than, for example, just picking stocks. Portfolios constructed using asset allocation can have decreased risk and increased returns (see the above screenshot of the performance of the Minimum Variance Portfolio vs. the performance of the S&P 500 for an example). A portfolio’s holdings can be optimized such that return is maximized given a level of risk. Asset allocation is powerful: the famous Brinson, Hood, and Beebower study showed that asset allocation explains 91.5% of the variation in pension funds’ returns. Not stock selection, not market timing.

Asset allocation is traditionally not very accessible to individual investors. Individual investors have data, computation, knowledge, and/or time constraints that prevent them from running asset allocation algorithms to optimize their portfolios; asset allocation services are usually performed by financial advisers for individual investors, and large institutions like pension funds and hedge funds obviously have the resources to do it for themselves. Companies like https://www.wealthfront.com/ are closing this gap, taking out the middlemen (financial advisers) and lowering the costs of implementing asset allocation for the individual investor.

Companies like wealthfront implement traditional asset allocation algorithms. adaptivwealth differentiates itself by using models that try to improve upon the weaknesses of traditional asset allocation, and by making these models more accessible to individual investors. One approach to addressing these weaknesses is making the models more adaptive to market changes.

A call for help

adaptivwealth is still very rough around the edges, and I have a whole list of features that I want to implement, ideas for growth, etc. But I wanted to get a minimum viable product out there and collect feedback as quickly as possible. Let me know your thoughts! Questions, suggestions for features, advice, criticisms, anything and everything helps. Thank you.


# Adaptive Asset Allocation: update to reflect investor data constraints

I realized that the portfolios presented so far would be pretty difficult for the individual retail investor to implement due to data constraints.

The problem

Say today is January 31, and the market has just closed. The adaptive asset allocation portfolios I constructed assume that the investor exits at the close of the last day of the month, which is definitely reasonable assuming a brokerage account at a place like Interactive Brokers with market-on-close orders. However, the algorithms also assumed that we would enter the new positions on January 31. This could be possible if live streaming quotes were used, the weights were calculated seconds before the actual close, and the positions were entered right before the close, but it’s definitely not possible for a normal retail investor.

The solution

So I decided to test the effect on returns of delaying the entry by one day; specifically, entering the new positions on the close of February 1 in the example above (and still exiting the positions on January 31). Again, this is a reasonable simulation for what a retail investor would actually do: he would exit his old positions on January 31, calculate new portfolio weights on February 1, and enter the new positions on the market close of February 1.
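The delayed-entry schedule can be sketched as follows. This is a hypothetical toy, not the actual backtest code: it assumes the convention that the weight held on day t earns that day’s close-to-close return, and `lag=1` reproduces the retail workflow above (exit at the rebalance close, one close-to-close period in cash, then enter).

```python
import numpy as np

def lagged_entry_returns(daily_returns, target_weights, rebalance_days, lag=1):
    """Daily portfolio returns when the weights computed at each rebalance
    close take effect `lag` trading days later (lag=0 means trading at the
    same close the weights were computed on). Old positions are always
    exited at the rebalance close, so with lag=1 the portfolio spends one
    close-to-close period in cash before entering the new positions."""
    n_days, n_assets = daily_returns.shape
    weights = np.zeros((n_days, n_assets))
    for i, day in enumerate(rebalance_days):
        start = day + 1 + lag  # first close-to-close return earned
        end = rebalance_days[i + 1] + 1 if i + 1 < len(rebalance_days) else n_days
        if start < end:
            weights[start:end] = target_weights[i]
    return (weights * daily_returns).sum(axis=1)
```

Running the same weight stream through `lag=0` and `lag=1` isolates the cost of the one-day delay.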

Sharpe Ratio drops from around 0.8 to 0.62. CAGR is only 7.6%. It’s interesting that performance deteriorates so much from delaying entry by just one day. Perhaps the performance decrease represents the cost of the original backtest (unrealistically) both exiting and entering positions at the same close.

An interesting caveat

I wanted to test the next logical variation: what if we, instead of entering the new positions one day later, exited the old positions one day earlier? Using our example, the investor would exit his positions on January 30, calculate the new portfolio weights on January 31 (using data looking back from January 30, not 31), and then enter the new positions on February 1. Below are the results.

Both CAGR and Sharpe Ratio are higher than if we entered new positions one day late: CAGR is 2% higher, and the Sharpe Ratio is 0.77 compared to 0.62. It seems we miss out on a lot more of the returns if we skip the first day of each month instead of the last day. Is this evidence of the end-of-month/first-of-month effect (basically, that returns on the first day of a month are significantly higher than average)? Maybe, but for now, I need to move forward with my project. Creating the adaptive asset allocation algorithms is only the first part… more to come.


# Adaptive Asset Allocation: minimum variance portfolios

This is a continuation of my previous post on adaptive asset allocation.

Introduction to Mean-Variance Optimization

Mean-variance optimization (the implementation of Markowitz’s modern portfolio theory) basically allows one to find the optimal weights of assets in a portfolio that maximize expected return given a level of risk/variance, or equivalently, minimize risk/variance given a level of expected return. The biggest ingredient in mean-variance optimization is the covariance matrix of the assets’ returns. The covariance matrix contains information on not only how volatile the assets are (their variance) but also how they move with each other (covariance). Covariance adds a piece to the adaptive asset allocation puzzle that we did not have before: how the asset classes move with each other, how correlated they are, whether they’re good hedges for each other. The minimum variance portfolio is the set of asset class weights that minimizes the variance of the portfolio (regardless of our expectations of future returns).

This is a step up from risk parity, which assigns each asset class a weight such that all asset classes in the portfolio contribute the same amount of variance to the portfolio variance; the overall portfolio variance could still be relatively high. Now our portfolios are being optimized to have the smallest variance possible.

I won’t get too deep into the details of the math, but there is a closed form solution to finding the set of optimal portfolio weights. It’s minimizing the quadratic function $w^{T} \Sigma w - q R^{T} w$ where

• $w$ is a vector of holding weights such that $\sum w_i = 1$
• $\Sigma$ is the covariance matrix of the returns of the assets
• $q \ge 0$ is the “risk tolerance”: $q = 0$ works to minimize portfolio variance and $q = \infty$ works to maximize portfolio return
• $R$ is the vector of expected returns
• $w^{T} \Sigma w$ is the variance of portfolio returns
• $R^{T} w$ is the expected return on the portfolio

For the minimum variance portfolio, $q = 0$, so we’re actually just minimizing $w^{T} \Sigma w$. Actual implementation was done with Python’s cvxopt (convex optimization) library.
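For the $q = 0$ case there is a shortcut if short positions are allowed: the unconstrained minimum variance weights have the closed form $w = \Sigma^{-1}\mathbf{1} / (\mathbf{1}^{T}\Sigma^{-1}\mathbf{1})$. A NumPy-only sketch (the long-only version, which is what a QP solver like cvxopt is needed for, adds the constraint $w_i \ge 0$):

```python
import numpy as np

def min_variance_weights(returns):
    """Unconstrained minimum variance weights: w = inv(Σ)·1 / (1ᵀ·inv(Σ)·1).
    Can produce negative (short) weights; the long-only portfolio requires
    a quadratic programming solver such as cvxopt."""
    cov = np.cov(returns, rowvar=False)   # sample covariance matrix Σ
    ones = np.ones(cov.shape[0])
    raw = np.linalg.solve(cov, ones)      # inv(Σ)·1 without forming the inverse
    return raw / raw.sum()
```

For uncorrelated assets this reduces to weighting each asset by the inverse of its variance, which is a useful sanity check on any implementation.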

## Results

(like last time, all portfolios are rebalanced monthly)

Minimum Variance Portfolio

tl;dr: CAGR lower, max drawdown less severe, Sharpe Ratio slightly higher than that of the momentum + risk parity portfolio.

Where all ETFs are traded, and are weighted in the portfolio such that the variance of the portfolio is minimized.

Momentum and Minimum Variance Portfolio

tl;dr: compared to the momentum + risk parity portfolio (highest Sharpe Ratio so far, discussed in the previous post): CAGR about the same, max drawdown less severe (what’s even more impressive is that the max drawdown during the recent financial crisis was only -13%), Sharpe Ratio slightly higher.

Where only the top five ETFs are selected based on their momentum, and are weighted in the portfolio such that the variance of the portfolio is minimized.

Summary

The last portfolio dominates all the other portfolios tested: it has the highest CAGR, smallest max drawdown, and highest Sharpe Ratio. This is because it includes all three pieces of the asset allocation puzzle: returns, variance, and correlation/covariance. We’ve made asset allocation more adaptive by filtering assets by momentum (with the expectation that high momentum assets will continue to perform well in the near term) and using shorter timeframes for variance and correlation/covariance–through this, the portfolios are more responsive to more recent asset price action.


# Adaptive Asset Allocation: momentum and risk parity

Asset allocation is powerful: the famous Brinson, Hood, and Beebower study showed that asset allocation explains 91.5% of the variation in pension funds’ returns. Not stock selection, not market timing.

Also, I need to put my money to work. I don’t have time for frequent trading. I don’t trust my fundamental analysis, and I know that if I don’t have a quantitative, rule based system my emotions will get the best of me and I will make bad decisions.

Asset allocation should be easy these days, with low-cost, liquid ETFs tracking everything from gold to international REITs.

The million-dollar question is, as always: how do we determine how much of our money to allocate to which asset classes?

I decided to implement what’s known as Adaptive Asset Allocation, an intuitive extension of the traditional Markowitz mean-variance model. Essentially, it makes traditional portfolio optimization more “adaptive” by using shorter term metrics as inputs instead of long run averages/standard deviations.

The portfolios are rebalanced monthly. The universe is only 10 ETFs (gold, bonds, REITs, equities, the usual). So trading and actually implementing these portfolios should be easy.

A strategy’s ease of use is worthless if it doesn’t make money. So how does it perform? To help answer that question, I tested several portfolio construction methods to use as comparison. Here are the (incomplete) results:

Equal Weighted Portfolio

Where all 10 ETFs are given an equal weight.

Momentum Portfolio

tl;dr: compared to equal weighted, there is a higher CAGR, slightly higher Sharpe Ratio, and a much worse max drawdown.

Where only the top 5 ETFs ranked by momentum are selected to be traded (equal weighted). The momentum effect has been shown to exist across asset classes and countries.
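The momentum filter can be sketched in a few lines. The 126-day (roughly six-month) lookback here is an assumption; the post doesn’t state the exact window used:

```python
import numpy as np

def top_momentum(prices, lookback=126, top_n=5):
    """Indices of the `top_n` assets by trailing total return.
    `prices` is a (days x assets) array of daily closes; the 126-day
    (~6-month) lookback is an assumed parameter."""
    momentum = prices[-1] / prices[-1 - lookback] - 1   # trailing return
    return np.argsort(momentum)[::-1][:top_n]           # best first
```

The selected assets would then be equal weighted for this portfolio, or passed on to risk parity or minimum variance weighting for the combined portfolios below.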

Risk Parity Portfolio

tl;dr: compared to equal weighted, CAGR is slightly higher, max drawdown is smaller, Sharpe Ratio is higher

Where all 10 ETFs are included in the universe, but are weighted such that each position contributes the same amount of volatility to the portfolio (the entire portfolio has 100% exposure, i.e. the sum of the position weights equals one).
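A common shortcut to this kind of risk parity weighting is inverse-volatility weighting, which equalizes each position’s volatility contribution when cross-correlations are ignored. A sketch (the exact method behind these results may differ):

```python
import numpy as np

def inverse_vol_weights(returns):
    """Inverse-volatility weights: each asset is weighted by 1/sigma and
    the weights are normalized to sum to one (100% exposure, no leverage),
    so each position contributes roughly equal volatility if
    cross-correlations are ignored."""
    vols = returns.std(axis=0, ddof=1)  # per-asset return volatility
    inv = 1.0 / vols
    return inv / inv.sum()
```

A full risk parity solution accounting for correlations has no closed form and is typically found numerically.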

Momentum and Risk Parity Portfolio

tl;dr: compared to equal weighted, there is a much higher CAGR, smaller max drawdown, much better Sharpe Ratio

Where only the top five ETFs are selected every rebalance based on their momentum, and then weighted according to risk parity.

Momentum and Minimum Variance Portfolio

Where the top five ETFs are selected by momentum, then weighted with a minimum variance optimization (weights that minimize the variance of the portfolio).

To be continued


# Naive Bayes classification and Livingsocial deals

Problem: I was planning my trip to Florida and looking for fun things (“adventure” activities like jet ski rentals, kayaking, and go karting) to do in Orlando and Miami. I like saving money, so I subscribed to Groupon, Livingsocial, and Google Offers for those cities. Those sites then promptly flooded my inbox with deals for gym membership, in-ear headphones, and anti-cellulite treatment. Not useful. Going to each site and specifying my deal preferences took a while. Plus, if I found a deal that I liked, I had to copy-paste the link to that deal in another document so that I had it for future reference (in case I wanted to buy it later). Too many steps, too much hassle, unhappy email inbox.

Solution: So I wanted to build a site that scraped the fun/adventure deals automatically from these deal sites. Example use case: if a person plans to visit a new city (e.g. Los Angeles), he or she could just visit the site and see in one glance a list of the currently active adventure deals (e.g. scuba diving) in that city. Sure, it seems that aggregator sites like Yipit solve this, but almost all aggregation sites like Yipit require users to give them their email address before showing them any deals (most are also difficult to navigate). More unnecessary steps for the user. Plus, I found that the Yipit deals weren’t the same as the ones displayed on the actual Groupon/Livingsocial/Google Offers sites.

“pre” minimum viable product: I gathered feedback for my idea to see if other people besides me would actually use it. This time, I just made a few quick posts on reddit (in the city subreddits), and got many comments. People said they would use it. Next.

MVP: The site I built scrapes Livingsocial. Groupon generates its pages dynamically with ajax… can’t scrape that without a JS engine, a big PITA to set up. Google Offers didn’t have very many quality deals. So I thought I’d simplify by making the MVP only for Livingsocial for now.

## Applying the Naive Bayes classifier

After scraping all the deals, they need to be classified as “adventure” or not. Obviously, doing this by hand is not scalable if I wanted to scrape deals for more than a couple cities. So I implemented the Naive Bayes classifier. Naive Bayes is often used in author text identification, e.g. finding out if Madison or Hamilton wrote certain unidentified essays in the Federalist Papers.

At a high level, Naive Bayes treats each “document” or block of text as a “bag of words”, meaning that it doesn’t care about the order of the words. When given a new “document” to classify, Naive Bayes asks and answers the question, “given each classification/category, what is the probability that this new document belongs to that classification/category?” The category with the highest probability is then the category that Naive Bayes has “predicted” the new document should belong to.

The site currently uses the deal “headline” (e.g. “Five Women’s Fitness Classes” or “Chimney Flue Sweep”) as the document text that Naive Bayes uses. I also tried using the actual deal description (i.e. the paragraph or two of text that Livingsocial writes to describe the deal), and from eyeballing the predictions, it looked like both gave similar prediction accuracy. Using the deal headline is a lot faster though.

Prediction accuracy is still pretty bad. I didn’t want Naive Bayes to automatically assign its predicted categories to the deals, so I decided to keep categorizing the deals manually, but with the help of Naive Bayes’s recommendations. I also decided to make its binary classification decisions more “fuzzy”. Here’s a screenshot of the admin page that shows me the predicted deal type of the scraped deals, with a column called “prediction confidence”: a score derived from the Naive Bayes output that signifies how strong its prediction is.
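The classify-with-confidence step can be sketched with a from-scratch multinomial Naive Bayes. Everything here is illustrative, not the site’s actual implementation: the class names, the toy training headlines, the add-one smoothing, and the confidence score (defined as the winning class’s normalized posterior probability) are all assumptions.

```python
import math
from collections import Counter

class NaiveBayes:
    """Bag-of-words multinomial Naive Bayes with add-one smoothing."""

    def fit(self, docs, labels):
        self.priors = Counter(labels)                        # class counts
        self.word_counts = {c: Counter() for c in self.priors}
        self.vocab = set()
        for doc, label in zip(docs, labels):
            words = doc.lower().split()
            self.word_counts[label].update(words)
            self.vocab.update(words)
        return self

    def _log_posterior(self, doc, c):
        total = sum(self.word_counts[c].values())
        score = math.log(self.priors[c] / sum(self.priors.values()))
        for w in doc.lower().split():
            # add-one (Laplace) smoothing for unseen words
            score += math.log((self.word_counts[c][w] + 1)
                              / (total + len(self.vocab)))
        return score

    def predict(self, doc):
        """Return (best_class, confidence): confidence is the winning
        class's posterior probability, normalized across classes."""
        scores = {c: self._log_posterior(doc, c) for c in self.priors}
        best = max(scores, key=scores.get)
        z = sum(math.exp(s - scores[best]) for s in scores.values())
        return best, 1.0 / z

# toy training set of deal headlines (made up for illustration)
nb = NaiveBayes().fit(
    ["two hour jet ski rental", "kayak tour for two",
     "five fitness classes", "laser hair removal session"],
    ["adventure", "adventure", "other", "other"],
)
label, confidence = nb.predict("jet ski and kayak package")
```

A low confidence score is exactly the signal used to flag a deal for manual review instead of trusting the automatic prediction.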

## No better way to learn than to do

Doing is the best way to learn, because working on your own projects forces you to engage in deliberate practice (Cal Newport’s key to living a remarkable life). Not only do you practice your skills, but you also learn about learning: when faced with an obstacle while working on a personally initiated project, you have only yourself and your own resourcefulness–no boss telling you what to do or professor giving guidelines. For example, this time I encountered the issue of my requests timing out in production on Heroku, since Heroku has a max request time of 30 seconds and some of my requests were taking up to a few minutes (when my Naive Bayes implementation was inefficient). I googled my problem, found a stackoverflow post, and learned about worker queues and the Ruby library delayed_job, which fixed my problem by allowing more time-intensive requests to be run in the background.