Designing Experiences
Featured articles
How We Engineered Betterment’s Tax-Coordinated Portfolio™
For our latest tax-efficiency feature, Tax Coordination, Betterment’s solver-based portfolio management system enabled us to manage and test our most complex algorithms.

Tax efficiency is a key consideration of Betterment’s portfolio management philosophy. With our new Tax Coordination feature, we’re continuing the mission to help our customers’ portfolios become as tax efficient as possible. While new products can often be built using our existing engineering abstractions, TCP brought the engineering team a new level of complexity that required us to rethink how parts of our portfolio management system were built. Here’s how we did it.

A Primer on Tax Coordination

Betterment’s TCP feature is our very own, fully automated version of an investment strategy known as asset location. If you’re not familiar with asset location, it is a strategy designed to optimize after-tax returns by placing tax-inefficient securities into more tax-advantaged accounts, such as 401(k)s and Individual Retirement Accounts (IRAs).

Before we built TCP, each Betterment customer’s account was managed as a separate, standalone portfolio. For example, customers could set up a Roth IRA with a portfolio of 90% stocks and 10% bonds to save for retirement. Separately, they could set up a taxable retirement account invested likewise in 90% stocks and 10% bonds. Now, Betterment customers can turn on TCP, and their holdings in multiple investment accounts will be managed as a single portfolio allocation, rearranged in such a way that the holdings across those accounts seek to maximize the overall portfolio’s after-tax returns.

To illustrate, let’s suppose you’re a Betterment customer with three different accounts: a Roth IRA, a traditional IRA, and a taxable retirement account. Let’s say that each account holds $50,000, for a total of $150,000 in investments.
Now assume that the $50,000 in each account is invested in a portfolio of 70% stocks and 30% bonds. For reference, consider the diagram: the circles represent various asset classes, and the bar shows the allocation for all the accounts added together. Each account has a 70/30 allocation, and the accounts add up to 70/30 in the aggregate, but we can do better when it comes to maximizing after-tax returns. We can maintain the aggregate 70/30 asset allocation, but use the available balances of $50,000 each to rearrange the securities, placing the most tax-efficient holdings into the taxable account and the most tax-inefficient ones into the IRAs. Here’s a simple animation, solely for illustrative purposes:

Asset Location in Action

The result is the same 70/30 allocation overall, except TCP has now redistributed the assets unevenly to reduce future taxes.

How We Modeled the Problem

The fundamental questions the engineering team tried to answer were: How do we get our customers to this optimal state, and how do we maintain it in the presence of daily account activity? We could have attempted to construct a procedural-style heuristic solution, but the complexity of the problem led us to believe that approach would be hard to implement and challenging to maintain. Instead, we opted to model our problem as a linear program. This made the problem provably solvable and quick to compute, on the order of milliseconds per customer. Let’s consider a hypothetical customer account example.

Meet Joe

Joe is a hypothetical Betterment customer. When he signed up for Betterment, he opened a Roth IRA account. As an avid saver, Joe quickly reached his annual Roth IRA contribution limit of $5,500. Wanting to save more for his retirement, he decided to open up a Betterment taxable account, which he funded with an additional $11,000. Note that the contribution limits mentioned in this example are as of the time this article was published.
Limits are subject to change from year to year, so please defer to IRS guidelines for current limits. See IRA limits here and 401(k) limits.

Joe isn’t one to take huge risks, so he opted for a moderate asset allocation of 50% stocks and 50% bonds in both his Roth IRA and taxable accounts. To make things simple, let’s assume that both portfolios are invested in only two asset classes: U.S. total market stocks and emerging markets bonds. In his taxable account, Joe holds $5,500 worth of U.S. total market stocks in VTI (Vanguard Total Stock Market ETF), and $5,500 worth of emerging markets bonds in VWOB (Vanguard Emerging Markets Bond ETF). Let’s say that his Roth IRA holds $2,750 of VTI and $2,750 of VWOB. Below is a table summarizing Joe’s holdings:

Account Type       | VTI (U.S. Total Market) | VWOB (Emerging Markets Bonds) | Account Total
Taxable            | $5,500                  | $5,500                        | $11,000
Roth               | $2,750                  | $2,750                        | $5,500
Asset Class Total  | $8,250                  | $8,250                        | $16,500

To begin to construct our model for an optimal asset location strategy, we need to consider the relative value of each fund in both accounts. A number of factors are used to determine this, most importantly each fund’s tax efficiency and expected returns. Let’s assume we already know that VTI has a higher expected value in Joe’s taxable account, and that VWOB has a higher expected value in his Roth IRA.

To be more concrete, let’s define some variables. Each variable represents the expected value of holding a particular fund in a particular account. For example, we represent the expected value of holding VTI in Joe’s taxable account as E(VTI, Taxable), which we’ve defined to be 0.07. More generally, let E(F, A) be the expected value of holding fund F in account A. For this example, assume E(VTI, Taxable) = 0.07, E(VWOB, Taxable) = 0.04, E(VTI, Roth) = 0.06, and E(VWOB, Roth) = 0.05.

Circling back to the original problem, we want to rearrange the holdings in Joe’s accounts in a way that’s maximally valuable in the future. Linear programs try to optimize the value of an objective function.
In this example, we want to maximize the expected value of the holdings in Joe’s accounts. The overall value of Joe’s holdings is a function of the specific funds in which he has investments. Let’s define that objective function:

V = E(VTI, Taxable)·B(VTI, Taxable) + E(VWOB, Taxable)·B(VWOB, Taxable) + E(VTI, Roth)·B(VTI, Roth) + E(VWOB, Roth)·B(VWOB, Roth)

You’ll notice the familiar E(F, A) terms measuring the expected value of holding each fund in each account, but also variables of the form B(F, A). Precisely, such a variable represents the balance of fund F in account A. These are our decision variables, the variables we’re trying to solve for. Let’s plug in some balances to see what the expected value V is with Joe’s current holdings:

V = 0.07 × 5,500 + 0.04 × 5,500 + 0.06 × 2,750 + 0.05 × 2,750 = 907.5

Certainly, we can do better. However, we cannot just assign arbitrarily large values to the decision variables, because two restrictions cannot be violated:

1. Joe must maintain $11,000 in his taxable account and $5,500 in his Roth IRA. We cannot assign Joe more money than he already has, nor can we move money between his Roth IRA and taxable accounts.
2. Joe’s overall portfolio must also maintain its allocation of 50% stocks and 50% bonds, the risk profile he selected. We don’t want to invest all of his money in a single fund.

Mathematically, it’s straightforward to represent the first restriction as two linear constraints:

B(VTI, Taxable) + B(VWOB, Taxable) = 11,000
B(VTI, Roth) + B(VWOB, Roth) = 5,500

Simply put, we’ve asserted that the sum of the balances of every fund in Joe’s taxable account must remain at $11,000. Similarly, the sum of the balances of every fund in his Roth IRA must remain at $5,500.

The second restriction, maintaining the portfolio allocation of 50% stocks and 50% bonds, might seem straightforward, but there’s a catch. You might guess that you can express it as follows:

B(VTI, Taxable) + B(VTI, Roth) = 8,250
B(VWOB, Taxable) + B(VWOB, Roth) = 8,250

The above statements assert that the sum of the balances of VTI across Joe’s accounts must be equal to half of his total balance. Similarly, we’re also asserting that the sum of the balances of VWOB across Joe’s accounts must be equal to the remaining half of his total balance.
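At this size, the model can be solved directly. Here is a minimal sketch using SciPy’s linprog (an illustrative solver choice, not necessarily the one Betterment uses in production), with the expected-value coefficients taken from the example above:

```python
from scipy.optimize import linprog

# Decision variables, in order:
# [B(VTI, Taxable), B(VWOB, Taxable), B(VTI, Roth), B(VWOB, Roth)]
# Objective: maximize V = sum of E(F, A) * B(F, A).
# linprog minimizes, so we negate the expected-value coefficients.
c = [-0.07, -0.04, -0.06, -0.05]

A_eq = [
    [1, 1, 0, 0],  # taxable account must hold exactly $11,000
    [0, 0, 1, 1],  # Roth IRA must hold exactly $5,500
    [1, 0, 1, 0],  # VTI across accounts = 50% of $16,500
    [0, 1, 0, 1],  # VWOB across accounts = 50% of $16,500
]
b_eq = [11_000, 5_500, 8_250, 8_250]

# Balances cannot be negative.
result = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 4)

print(result.x)     # optimal balance for each fund/account pair
print(-result.fun)  # maximized expected value V
```

With these inputs, the solver concentrates VTI in the taxable account and VWOB in the Roth IRA, raising V from 907.5 (Joe’s current holdings) to 962.5.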
While this will certainly work for this particular example, enforcing that the portfolio allocation is exactly on target when determining optimality turns out to be too restrictive. In certain scenarios, it’s undesirable to buy or to sell a specific fund because of tax consequences. These restrictions require us to allow for some portfolio drift, that is, some deviation from the target allocation. We made the decision to maximize the expected after-tax value of a customer’s holdings after having achieved the minimum possible drift.

To accomplish this, we need to define new decision variables: D_above(AC) is the dollar amount above the target balance in asset class AC, and D_below(AC) is the dollar amount below the target balance in asset class AC. For instance, D_above(EM Bonds) is the dollar amount above the target balance in emerging markets bonds, the asset class to which VWOB belongs. Let’s add them to our objective function, penalized by a coefficient P:

V = Σ E(F, A)·B(F, A) − P·Σ [D_above(AC) + D_below(AC)]

We still want to maximize our objective function V. However, with the introduction of the drift terms, we want every dollar allocated toward a single fund to incur a penalty if it moves the balance for that fund’s asset class above or below its target amount. To do this, we relate the B(F, A) terms to the drift terms using linear constraints:

B(VTI, Taxable) + B(VTI, Roth) − D_above(Stocks) + D_below(Stocks) = 8,250
B(VWOB, Taxable) + B(VWOB, Roth) − D_above(EM Bonds) + D_below(EM Bonds) = 8,250

As shown above, we’ve asserted that the sum of the balances in funds including U.S. total market stocks (in this case, only VTI), plus some net drift amount in that asset class, must be equal to the target balance of that asset class in the portfolio (which, in this case, is 50% of Joe’s total holdings). Similarly, we’ve done this for emerging markets bonds. This way, if we can’t achieve perfect allocation, we have a buffer that we can fill, albeit at a penalty.

Now that we have our objective function and constraints set up, we just need to solve these equations. For this we can use a mathematical programming solver.
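To make the drift mechanics concrete, here is the same toy problem extended with the four drift variables. The penalty weight below is an arbitrary illustrative choice (it only needs to dominate the expected-value coefficients so that minimizing drift takes priority), not a coefficient from the actual system:

```python
from scipy.optimize import linprog

# Variables, in order: the four fund balances from before, then
# [D_above(Stocks), D_below(Stocks), D_above(EM Bonds), D_below(EM Bonds)].
P = 1.0  # illustrative drift penalty, large relative to the E(F, A) values

# Maximize V = sum(E * B) - P * sum(drift); negate for linprog's minimizer.
c = [-0.07, -0.04, -0.06, -0.05, P, P, P, P]

A_eq = [
    [1, 1, 0, 0, 0, 0, 0, 0],   # taxable account total = $11,000
    [0, 0, 1, 1, 0, 0, 0, 0],   # Roth IRA total = $5,500
    # asset-class balance - D_above + D_below = target balance
    [1, 0, 1, 0, -1, 1, 0, 0],  # U.S. stocks target = $8,250
    [0, 1, 0, 1, 0, 0, -1, 1],  # EM bonds target = $8,250
]
b_eq = [11_000, 5_500, 8_250, 8_250]

result = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 8)

balances, drift = result.x[:4], result.x[4:]
print(balances, drift)
```

In this small example the target allocation is reachable, so every drift variable comes back zero and the fund balances match the earlier solution; the drift terms only come into play when a constraint (say, a tax-driven restriction on selling a lot) makes the exact target unattainable.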
Here’s the optimal solution:

Account Type       | VTI (U.S. Total Market) | VWOB (Emerging Markets Bonds) | Account Total
Taxable            | $8,250                  | $2,750                        | $11,000
Roth               | $0                      | $5,500                        | $5,500
Asset Class Total  | $8,250                  | $8,250                        | $16,500

Managing Engineering Complexity

Reaching the optimal balances would require our system to buy and sell securities in Joe’s investment accounts. It’s not always free for Joe to go from his current holdings to the optimal ones, because buying and selling securities can have tax consequences. For example, if our system sold something at a short-term capital gain in Joe’s taxable account, or bought a security in his Roth IRA that had been sold at a loss in the last 30 days, triggering the wash-sale rule, we would be negatively impacting his after-tax return.

In the simple example above, with two accounts and two funds, there are a total of four constraints. Our production model is orders of magnitude more complex, and considers each Betterment customer’s individual tax lots, which introduces hundreds of individual constraints to our model. Generating these constraints, which ultimately determine buying and selling decisions, can involve tricky business logic that examines a variety of data in our system. In addition, we knew that as our work on TCP progressed, we would need to iterate on our mathematical model. Before diving head first into the code, we made it a priority to be cognizant of the engineering challenges we would face. As a result, we wanted to make sure that the software we built respected four key principles:

1. Isolation from third-party solver APIs.
2. The ability to keep pace with changes to the mathematical model; e.g., adding, removing, and changing the constraints and the objective function must be quick and painless.
3. Separation of concerns between how we access data in our system and the business logic defining algorithmic behavior.
4. Easy and comprehensive testing.

We built our own internal framework for modeling mathematical programs that was not tied to our trading system’s domain-specific business logic.
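To give a flavor of that separation, here is an illustrative sketch of what such framework-level objects might look like. This is hypothetical Python (the names, apart from TradingConstraintGenerator, are invented for illustration, and Betterment’s system is not written in Python): business logic builds solver-agnostic model objects, constraint generators each own one concern, and a translation-layer interface isolates third-party solvers.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class LinearConstraint:
    coefficients: dict  # variable name -> coefficient
    operator: str       # "==", "<=", or ">="
    bound: float

@dataclass
class Objective:
    coefficients: dict  # variable name -> coefficient to maximize

@dataclass
class TradingModel:
    objective: Objective
    constraints: list = field(default_factory=list)

class Solver(ABC):
    """Translation layer: one implementation per third-party solver.
    Swapping solvers means swapping implementations of this interface."""

    @abstractmethod
    def solve(self, model: TradingModel) -> dict:
        """Return optimal variable values keyed by variable name."""

class TradingConstraintGenerator(ABC):
    """One implementation per business concern (account balances,
    allocation drift, tax lots, ...)."""

    @abstractmethod
    def generate(self) -> list:
        """Return the LinearConstraints for this concern."""

def build_model(objective: Objective, generators: list) -> TradingModel:
    # Generators arrive via dependency injection; adding a new set of
    # constraints means adding a generator to this list.
    model = TradingModel(objective=objective)
    for generator in generators:
        model.constraints.extend(generator.generate())
    return model

class AccountBalanceGenerator(TradingConstraintGenerator):
    """Hypothetical example: pins each account's total balance."""

    def __init__(self, balances):
        self.balances = balances  # account name -> required total

    def generate(self):
        return [
            LinearConstraint(
                {f"B({fund}, {account})": 1 for fund in ("VTI", "VWOB")},
                "==", total)
            for account, total in self.balances.items()
        ]

model = build_model(
    Objective({"B(VTI, Taxable)": 0.07, "B(VWOB, Taxable)": 0.04}),
    [AccountBalanceGenerator({"Taxable": 11_000, "Roth": 5_500})],
)
print(len(model.constraints))
```

The point of the sketch is the dependency direction: model-building code and generators know only about framework objects like LinearConstraint, never about a particular solver’s API.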
This gave us the flexibility to switch easily between a variety of third-party mathematical programming solvers. Our business logic that generates the model knows only about objects defined by our framework, and not about third-party APIs. To incorporate a third-party solver into our system, we built a translation layer that received our system-generated constraints and objective function as inputs, and utilized those inputs to solve the model using a third-party API. Switching between third-party solvers simply meant switching implementations of the interface below.

We wanted that same level of flexibility in changing our mathematical model. Changing the objective function and adding new constraints needed to be easy to do. We did this by providing well-defined interfaces that give engineers access to the core system data needed to generate our model. This means that an engineer implementing a change to the model only needs to worry about implementing algorithmic behavior, and not about how to retrieve the data needed to do so. To add a new set of constraints, engineers simply provide an implementation of a TradingConstraintGenerator. Each TradingConstraintGenerator knows about all of the system-related data it needs to generate constraints. Through dependency injection, the new generator is included among the set of generators used to generate constraints. The sample code below illustrates how we generated the constraints for our model.

With hundreds of constraints and hundreds of thousands of unique tax profiles across our customer base, we needed to be confident that our system made the right decisions in the right situations. For us, that meant having clear, readable tests that were a joy to write. Below is a test, written in Groovy, which sets up fixture data that mimics the exact situation in our “Meet Joe” example.
We not only had unit tests such as the one above to cover simple scenarios where a human could calculate the outcome, but we also ran the optimizer in a simulated, production-like environment, through hundreds of thousands of scenarios that closely resembled real ones. During testing, we often ran into scenarios where our model had no feasible solution, usually due to a bug we had introduced. As soon as the bug was fixed, we wanted automated tests to catch a similar issue in the future. However, with so many sources of input affecting the optimized result, writing tests to cover these cases was very labor-intensive. Instead, we automated the test setup by building tools that could snapshot our input data as of the time the error occurred. The input data was serialized and automatically fed back into our test fixtures.

Striving for Simplicity

At Betterment, we aim to build products that help our customers reach their financial goals. Building new products can often be done using our existing engineering abstractions. However, TCP brought a new level of complexity that required us to rethink the way parts of our trading system were built. Modeling and implementing our portfolio management algorithms using linear programming was not easy, but it ultimately resulted in the simplest possible system needed to reliably pursue optimal after-tax returns.

To learn more about engineering at Betterment, visit the engineering page on the Betterment Resource Center. All return examples and return figures mentioned above are for illustrative purposes only. For much more on our TCP research, including additional considerations on the suitability of TCP to your circumstances, please see our white paper. See full disclosure for our estimates and Tax Coordination in general.
How We Develop Design Components in Rails
Learn how we use Rails components to keep our code D.R.Y. (Don’t Repeat Yourself) and to implement UX design changes effectively and uniformly.

A little over a year ago, we rebranded our entire site, and we’ve even written about why we did it. We were able to achieve a polished and consistent visual identity under a tight deadline, which was pretty great, but when we held our project retrospective, we realized there was a pain point that still loomed over us: we lacked a good way to share markup across all our apps.

We repeated multiple styles and page elements throughout the app to keep the experience consistent, but we didn’t have a great way to reuse the common elements. We used Rails partials in an effort to keep the code DRY (Don’t Repeat Yourself) while sharing the same chunks of code, and that got us pretty far, but it had its limitations. There were aspects of the page elements (our shared chunks) that needed to change based on their context, or the page where they were being rendered. Since these contexts change, we found ourselves either altering the partials or copying and pasting their code into new views, where additional context-specific code could be added. This resulted in app code (the content-specific code) becoming entangled with “system” code (the base HTML). Aside from partials, there was corresponding styling, or CSS, that was being copied, and sometimes changed, when these shared partials were altered. This meant that when the designs changed, we needed to find all of the places this code was used in order to update it. Not only was this frustrating, it was inefficient.

To find a solution, we drew inspiration from the component approach used by modern design systems and JavaScript frameworks. A component is a reusable building block of code. Pages are built from a collection of components that are shared across pages, but can be expanded upon or manipulated in the context of the page they’re on.
To implement our component system, we created our internal gem, Style Closet. This system solves a few other problems and brings some advantages, too:

- We’re able to make global changes in a pretty painless way. If we need to change our brand colors, say, we can just change the CSS in Style Closet instead of scouring our codebase and making sure we catch every occurrence.
- Reusable parts of code remove the burden of things like CSS from engineers, leaving time to focus on and tackle other problems.
- Engineers and designers can be confident they’re using something that’s been tested and validated across browsers.
- We’re able to write tests specific to a component without worrying about its use case or increasing testing time for our apps.
- Every component is on brand and consistent with every other app, feels polished and high quality, and requires less effort to implement.
- It allows room for the future growth that will inevitably happen. The need for new elements in our views is not going to simply vanish because we rebranded, so this leaves us better prepared for the future.

How does it work?

Below is an example of one of our components, the flash. A flash message or warning is something you may use throughout your app in different colors and with different text, but you want it to look consistent. In our view, or the page where we write our HTML, we would write the following to render what you see above. Here’s a breakdown of how that one line translates into what you see on the page.

The component consists of three parts: structure, behavior, and appearance.

The view (the structure): a familiar html.erb file that looks very similar to what would exist without a component, but a little more flexible, since it doesn’t have its content hard-coded in. These views can also leverage Rails’ view yield functionality when needed. In the view partial from Style Closet, you can see how component.message is passed into the dedicated space/slot, keeping this code flexible for reuse.
A Ruby class (the behavior, aside from any JavaScript): the class holds the “props” the component allows to be passed in, as well as any methods needed for the view, similar to a presenter model. The props are a fancier attr_accessor with the bonus of being able to assign defaults. Additionally, all components can take a block, which is typically the content for the component. This allows the view to be reusable.

CSS (the appearance): in this example, we use it to set things like the color, alignment, and the border.

A note on behavior: currently, if we need to add some JS behavior, we use unobtrusive JavaScript, or UJS sprinkles.

When we add new components or make changes, we update the gem (as well as the docs site associated with Style Closet) and simply release the new version. As we develop and experiment with new types of components, we test these bigger changes out in the real world by putting them behind a feature flag using our open-source split testing framework, Test Track.

What does the future hold?

We’ve used UJS sprinkles in similar fashion to the rest of the Rails world over the years, but that approach has its limitations as we begin to design more complex behaviors and elements in our apps. Currently, we’re focusing on building more intricate and interactive components using React. A bonus of Style Closet is how well it’s able to host these React components, since they can simply be incorporated into a view by being wrapped in a Style Closet component. This allows us to continue composing a UI with self-contained building blocks.

We’re always iterating on our solutions, so if you’re interested in expanding on or solving these types of problems with us, check out our careers page!

Additional information

Since we introduced our internal Rails component code, a fantastic open-source project has emerged, Komponent, as well as a really great and in-depth blog post on component systems in Rails from Evil Martians.
Supporting Face ID on the iPhone X
We look at how Betterment’s mobile engineering team developed Face ID support for the latest phones, like the iPhone X.

Helping people do what’s best with their money requires providing them with responsible security measures to protect their private financial data. In Betterment’s mobile apps, this means including trustworthy but convenient local authentication options for resuming active login sessions. Three years ago, in 2014, we implemented Touch ID support as an alternative to PIN entry in our iOS app. Today, on its first day, we’re thrilled to announce that the Betterment iOS app fully supports Apple’s new Face ID technology on the iPhone X.

Trusting the Secure Enclave

While we’re certainly proud of shipping this feature quickly, a lot of credit is due to Apple for how seriously the company takes device security and data privacy as a whole. The Secure Enclave hardware feature, included on iPhones since the 5S, makes for a readily trustworthy connection to the device and its operating system. From an application’s perspective, this relationship between a biometric scanner and the Secure Enclave is simplified to a boolean response: when requested through the Local Authentication framework, the biometry evaluation either succeeds or fails, separate from any given state of the application, via the “reply” completion closure of evaluatePolicy(_:localizedReason:reply:). This made testing from the iOS Simulator a viable option for gaining a reasonable degree of certainty that our application would behave as expected when running on a device, thus allowing us to prepare a build in advance of having a device to test on.

LABiometryType

Since we’ve been securely using Touch ID for years, adapting our existing implementation to include Face ID was a relatively minor change.
Thanks primarily to the simple addition of the LABiometryType enum newly available in iOS 11, it’s easy for our application to determine which biometry feature, if any, is available on a given device. This is such a minor change, in fact, that we were able to reuse all of the same view controllers we had built for Touch ID, with only a handful of string values now determined at runtime.

One challenge we share with most existing iOS apps is the need to still support older iOS versions. For this reason, we chose to wrap LABiometryType behind our own BiometryType enum. This allows us to encapsulate both the need to use an iOS 11 compiler flag and the need to call canEvaluatePolicy(_:error:) on an instance of LAContext before accessing its biometryType property into a single calculated property. See the Gist.

NSFaceIDUsageDescription

The other difference with Face ID is the new NSFaceIDUsageDescription privacy string that should be included in the application’s Info.plist file. This is a departure from Touch ID, which does not require a separate privacy permission, and which uses the localizedReason string parameter when showing its evaluation prompt.

Touch ID evaluation prompt displaying the localized reason

While Face ID does not seem to make use of that localizedReason string during evaluation, without the privacy string the iPhone X will run the application’s Local Authentication feature in compatibility mode. This informs the user that the application should work with Face ID but may do so imperfectly.

Face ID permissions prompt without (left) and with (right) an NSFaceIDUsageDescription string included in the Info.plist

This compatibility-mode prompt is undesirable enough on its own, but it also clued us into the need to check for potential security concerns opened up by this forwards-compatibility-by-default from Apple.
Thankfully, the changes to the Local Authentication framework were made in such a way that we determined there wasn’t a security risk, but they did leave a problematic user experience: a potentially inescapable screen reached by selecting “Don’t Allow” on the privacy permission prompt. Since we believe strongly in our users’ right to say “no,” resolving this design issue was the primary reason we prioritized shipping this update.

Ship It

If your iOS app also displays sensitive information and uses Touch ID for biometry-based local authentication, join us in making the easy adaptation to delight your users with full support for Face ID on the iPhone X.