Meet our writer
The Betterment engineering teams have come together to share how we built things and what we learned along the way.
Articles by Betterment Engineering
CI/CD: Standardizing the Interface
How We Develop Design Components in Rails
Engineering the Launch of a New Brand for Betterment
In 2017, Betterment set out to launch a new brand to better define the voice and feel of our product. After months of planning across all teams at the company, it was time for our engineering team to implement new and responsive designs across all user experiences. The key to the success of this project was to keep the build simple, maintain a low risk of regressions, and ensure a clear path to remove the legacy brand code after launch. Our team learned a lot, but a few key takeaways come to mind.

Relieving Launch Day Stress with Feature Flags

Embarking on this rebrand project, we wanted to keep our designs under wraps until launch day. This would entail a lot of code changes, but as an engineering team we believe deeply in carving up big endeavors into small pieces. We're constantly shipping small, vertical slices of work hidden behind feature flags, and we've even built our own open-source system, TestTrack, to help us do so. This project would be no exception.

On day one, we created a feature flag and started shipping rebranded code to production. Our team could then use TestTrack's browser plugin to preview and QA the new views along the way. When the day of the big reveal arrived, all that would be left to do was toggle the flag to unveil the code we'd shipped and tested weeks before. We then turned to the challenge of rebranding our entire user experience.

Isolating New Code with ActionPack Variants

ActionPack variants provide an elegant solution for rolling out significant front-end changes. Typically, variants are prescribed to help render distinct views for different device types, but they are equally powerful when rendering distinct HTML/CSS for any significant redesign. We created a variant for our rebrand, which would be exposed based on the status of our new feature flag. Our variant also required a new CSS file, where all our new styles would live. Rails provides rich template resolver logic at every level of the view hierarchy, and we were able to hook into it simply by modifying the extensions of our new layout files. The rebranded version of our application's core layout imported the new CSS file and, just like that, we were in business.

Implementing the Rebrand without a Spaghetti of "IF" Statements

Our rebranded experience would become the default at launch time, so another challenge we faced was maintaining two worlds without creating unneeded complexity. The "rebrand" variant and its corresponding template files helped us avoid a tangled web of conditionals, and instead boiled the overhead down to a toggle in our ApplicationController. This created a clean separation between the old world and the new, and protected us against regressions between the two. Rebranding a feature involved adding new styles to application_rebrand.css and implementing them in new rebrand view files. Anything that didn't get a new, rebranded template stayed in the world of plain old production. This freedom from legacy stylesheets and markup was critical to building and clearly communicating the new brand and value proposition we wanted to share with the world.
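To make that concrete, here is a minimal sketch of what the ApplicationController toggle could look like. The helper name is hypothetical; in reality the flag check runs through TestTrack.

```ruby
class ApplicationController < ActionController::Base
  before_action :set_rebrand_variant

  private

  # `rebrand_enabled?` stands in for however the TestTrack feature flag
  # is actually read. When the flag is on, every request resolves to a
  # `+rebrand` template wherever one exists.
  def set_rebrand_variant
    request.variant = :rebrand if rebrand_enabled?
  end
end
```

With the variant set, Rails' template resolver prefers files like app/views/layouts/application.html+rebrand.erb and falls back to the plain application.html.erb when no rebranded template exists.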
De-scoping with a Lightweight Reskin

To rebrand hundreds of pages in time, we had to iron out the precise requirements of what it meant for our views to be "on brand". Working with our product team, we determined that the minimum amount of change to consider a page rebranded was adoption of the new header, footer, colors, and fonts. These guidelines constituted our "opted out" experience — views that would receive this lightweight reskin immediately but not the full rebrand treatment. This light coat of paint was applied to our production layer, so any experience that couldn't be fully redesigned within our timeline would still get a fresh header and the fonts and colors that reflected our new brand.

As we neared the finish line, the rebranded world became our default and this opt-out world became a variant. A controller-level hook let us designate which views should display in opt-out mode with a single line of code, updating the variant and rendering the reskinned layout files (a sketch of this hook appears at the end of this post). Using a separate CSS manifest with only the core changes enumerated above, we felt free to dedicate resources to more thoroughly rebranding our high-traffic experiences, deferring improvements to pages that received the initial reskin until after launch. As we've circled back to clean up these lower-traffic views and give them the full rebrand treatment, we've come closer to deleting the opt_out CSS manifest and deprecating our legacy stylesheets for good.

Designing an Off-Ramp

Just as we are committed to rolling out large changes in small portions, we are careful to avoid huge changesets on the other side of a release. Fortunately, variants made removing legacy code quite straightforward. After flipping the feature flag and establishing "rebrand" as the permanent variant context, all that remained was to destroy the legacy files that were no longer being rendered and remove the variant name from the file extensions of the new primary view templates. Controllers utilizing the opt_out hook made their way onto a to-do list for this work without the stress of a deadline.

The Other Side of the Launch

When the big day arrived, we enjoyed a smooth rebrand launch thanks to the thoughtful application of our existing tools and techniques. We leveraged the ActionPack variants built into Rails and feature flags from TestTrack in new ways, ensuring we didn't need to make any architecture changes. The end result: a completely fresh set of views and a new brand we're excited to share with the world at large.
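As a footnote, here is a minimal sketch of the single-line opt-out hook described above. The module, method, and controller names are hypothetical.

```ruby
module OptOutReskin
  extend ActiveSupport::Concern

  class_methods do
    # A controller that can't get the full rebrand before launch opts
    # its views into the lightweight reskin with one line.
    def opt_out_of_rebrand
      before_action { request.variant = :opt_out }
    end
  end
end

class LegacyReportsController < ApplicationController
  include OptOutReskin
  opt_out_of_rebrand
end
```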
Supporting Face ID on the iPhone X
We look at how Betterment's mobile engineering team added support for Face ID on Apple's latest phone, the iPhone X.

Helping people do what's best with their money requires providing them with responsible security measures to protect their private financial data. In Betterment's mobile apps, this means including trustworthy but convenient local authentication options for resuming active login sessions. Three years ago, in 2014, we implemented Touch ID support as an alternative to PIN entry in our iOS app. Today, on the iPhone X's first day of availability, we're thrilled to announce that the Betterment iOS app fully supports Apple's new Face ID technology.

Trusting the Secure Enclave

While we're certainly proud of shipping this feature quickly, a lot of credit is due to Apple for how seriously the company takes device security and data privacy as a whole. The Secure Enclave, a hardware feature included on iPhones since the 5S, makes for a readily trustworthy connection to the device and its operating system. From an application's perspective, this relationship between a biometric scanner and the Secure Enclave is simplified to a boolean response: when requested through the Local Authentication framework, the biometry evaluation either succeeds or fails, separate from any given state of the application.

[Image: the "reply" completion closure of evaluatePolicy(_:localizedReason:reply:)]

This made testing from the iOS Simulator a viable option for gaining a reasonable degree of certainty that our application would behave as expected when running on a device, thus allowing us to prepare a build in advance of having a device to test on.

LABiometryType

Since we've been securely using Touch ID for years, adapting our existing implementation to include Face ID was a relatively minor change. Thanks primarily to the simple addition of the LABiometryType enum newly available in iOS 11, it's easy for our application to determine which biometry feature, if any, is available on a given device. This is such a minor change, in fact, that we were able to reuse all of the same view controllers we had built for Touch ID, with only a handful of string values now determined at runtime.

One challenge we share with most existing iOS apps is the need to still support older iOS versions. For this reason, we chose to wrap LABiometryType behind our own BiometryType enum. This allows us to encapsulate both the need to use an iOS 11 compiler flag and the need to call canEvaluatePolicy(_:error:) on an instance of LAContext before accessing its biometryType property into a single calculated property:
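The original gist isn't embedded here, so what follows is a minimal reconstruction under stated assumptions: the BiometryType wrapper is our own type, the Local Authentication calls are real framework API, and a runtime availability check stands in for the compiler flag mentioned above.

```swift
import LocalAuthentication

// Our own abstraction over LABiometryType (hypothetical reconstruction).
enum BiometryType {
    case touchID
    case faceID
    case none
}

extension LAContext {
    var availableBiometryType: BiometryType {
        // canEvaluatePolicy(_:error:) must be called before reading
        // biometryType, or the property's value is undefined.
        guard canEvaluatePolicy(.deviceOwnerAuthenticationWithBiometrics, error: nil) else {
            return .none
        }
        if #available(iOS 11.0, *) {
            switch biometryType {
            case .faceID:  return .faceID
            case .touchID: return .touchID
            default:       return .none
            }
        } else {
            // Pre-iOS 11 has no biometryType; a passing policy check
            // implies Touch ID on those devices.
            return .touchID
        }
    }
}
```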
NSFaceIDUsageDescription

The other difference with Face ID is the new NSFaceIDUsageDescription privacy string that should be included in the application's Info.plist file. This is a departure from Touch ID, which does not require a separate privacy permission and which uses the localizedReason string parameter when showing its evaluation prompt.

[Image: Touch ID evaluation prompt displaying the localized reason]

While Face ID does not seem to make use of that localizedReason string during evaluation, without the privacy string the iPhone X will run the application's Local Authentication feature in compatibility mode. This informs the user that the application should work with Face ID but may do so imperfectly.

[Image: Face ID permissions prompt without (left) and with (right) an NSFaceIDUsageDescription string included in the Info.plist]

This compatibility mode prompt is undesirable enough on its own, but it also clued us in to the need to check for potential security concerns opened up by this forwards-compatibility-by-default from Apple. Thankfully, the changes to the Local Authentication framework were done in such a way that we determined there wasn't a security risk, but it did leave a problematic user experience: a potentially inescapable screen when the user selects "Don't Allow" on the privacy permission prompt. Since we believe strongly in our users' right to say "no", resolving this design issue was the primary reason we prioritized shipping this update.

Ship It

If your iOS app also displays sensitive information and uses Touch ID for biometry-based local authentication, join us in making the easy adaptation to delight your users with full support for Face ID on the iPhone X.
Modern Data Analysis: Don’t Trust Your Spreadsheet
To conduct research in business, you need statistical computing that you can easily reproduce, scale, and make accessible to many stakeholders.

Just as the Ford Motor Company created efficiency with assembly line production and Pixar opened up new worlds by computerizing animation, companies now are innovating and improving the craft of using data to do business. Betterment is one of them. We are built from the ground up on a foundation of data.

It's only been about three decades since companies started using any kind of computer-assisted data analysis. The introduction of the spreadsheet defined the beginning of the business analytics era, but the scale and complexity of today's data has outgrown that origin. To avoid time-consuming manual processes, and the human error typical of that approach, analytics has become a programming discipline. Companies like Betterment are hiring data scientists and analysts who use software development techniques to reliably answer business questions that have quickly expanded in scale and complexity. To do good data work today, you need a system that is reproducible, versionable, scalable, and open. Our analytics and data science team at Betterment uses these best practices to quickly produce reliable and sophisticated insights that drive product and business decisions.

A Short History of Data in Business

First, a step back in the business time machine. With VisiCalc, the first-ever spreadsheet program, in 1979 and Excel in 1987, the business world stepped into a new era in which any employee could manage large amounts of data. The bottlenecks in business analytics had been the speed of human arithmetic or the hours available on corporate mainframes operated by only a few specialists. With spreadsheet software in every cubicle, analytical horsepower was commoditized and Excel jockeys were crowned as the arbiters of truth in business.

But the era of the spreadsheet is over. The data is too large, the analyses are too complex, and mistakes are too dangerous to trust to our dear old friend the spreadsheet. Ask Carmen Reinhart and Kenneth Rogoff, two Harvard economists who published an influential paper on sovereign debt and economic growth, only to find out that the results rested in part on the accidental omission of five cells from an average. Or ask the execs at JPMorgan who lost $6 billion in the "London Whale" trading debacle, also due in part to poor data practices in Excel. More broadly, a 2015 survey of large businesses in the UK reported that 17% had experienced direct financial losses because of spreadsheet errors. It's a new era with a new scale of data, and it's time to define new norms around the management of and inferences from business data.

Requirements for Modern Data Analysis

Spreadsheets fundamentally lack the properties essential to modern data work. To do good data work today, you need to use a system that is:

Reproducible

It's not personal, but I don't trust any number that comes without supporting code. That code should take me from the raw data to the conclusions. Most analyses contain too many important detailed steps to plausibly communicate in an email or during a meeting. Worse yet, it's impossible to remember exactly what you've done in a point-and-click environment, so doing it the same way again next time is a crapshoot. Reproducible also means efficient.
When an input or an assumption changes, updating the analysis should be as easy as re-running the whole thing.

Versionable

Code versioning frameworks, such as git, are now a staple in the workflow of most technical teams. Teams without versioning are constantly asking questions like, "Did Jim send the latest file?", "Can I be sure that my teammate selected all columns when he re-sorted?", or "The bottom-line numbers are different in this report; what exactly changed since the first draft?" These inefficiencies in collaboration and uncertainties about the calculations can be deadly to a data team. Sharing code in a common environment also enables the reuse of modular analysis components. Instead of four analysts each inventing their own method for loading and cleaning a table of users, you can share the utils/LoadUsers() function as a group and ensure you are talking about the same people at every meeting.

Scalable

There are hard technical limits to how large an analysis you can do in a spreadsheet. Excel 2013 is capped at just over 1 million rows. It doesn't take a very large business these days to collect more than 1 million observations of customer interactions or transactions. There are also feasibility limits. How long does it take your computer to open a million-row spreadsheet? How likely is it that you'll spot a copy-paste error at row 403,658? Ideally, the same tools you build to understand your data when you're at 10 employees should scale and evolve through your IPO.

Open

Many analyses meet the above ideals but have been produced with expensive, proprietary statistical software that inhibits sharing and reproducibility. If I do an analysis with open-source tools like R or Python, I can post full end-to-end instructions that anyone in the world can reproduce, check, and expand upon. If I do the same in SAS, only people willing to spend $10,000 (or more if particular modules are required) can review or extend the project. Platforms that introduce compatibility problems between versions and save their data in proprietary formats may limit access to your own work even if you are paying for the privilege. This may seem less important inside a corporate bubble where everyone has access to the same proprietary platform, but it is at the very least a turnoff to most new talent in the field. I don't hear anyone saying that expensive proprietary data solutions are the future.

What to Use, and How

Short answer: R or Python. Longer answer: Here at Betterment, we use both. We use Python more for data pipeline processes and R more for modeling, analyses, and reporting. But this article is not about the relative merits of these popular modern solutions. It is about the merits of using one of them (or any of the smaller alternatives). To get the most out of a programmatic data analysis workflow, it should be truly end-to-end, or as close as you can get in your environment. If you are new to one or both of these environments, it can be daunting to sort through all of the tools and figure out what does what.
These are some of the most popular tools in each language, organized by their layer in your full-stack analysis workflow:

Environment: RStudio (R); IPython / Jupyter, PyCharm (Python)
Sourcing data: RMySQL, rpostgresql, rvest, RCurl, httr (R); MySQLdb, requests, bs4 (Python)
Cleaning, reshaping, and summarizing: data.table, dplyr (R); pandas (Python)
Analysis, model building, learning: see CRAN Task Views (R); NumPy, SciPy, Statsmodels, Scikit-learn (Python)
Visualization: ggplot2, ggvis, rCharts (R); matplotlib, d3py, Bokeh (Python)
Reporting: RMarkdown, knitr, shiny, rpubs (R); IPython notebook (Python)

Sourcing Data

If there is any ambiguity in this step, the whole analysis stack can collapse on its foundation. It must be precise and clear where you got your data, and I don't mean conversationally clear. Whether it's a database query, a web-scraping function, a MapReduce job, or a PDF extraction, script it and include it in your reproducible process (a minimal sketch appears at the end of this post). You'll thank yourself when you need to update the input data, and your successors and colleagues will be thankful they know what you're basing your conclusions on.

Cleaning, Reshaping, Summarizing

Every dataset includes some amount of errant, corrupted, or outlying observations. A good analysis excludes them based on objective rules from the beginning and then tests for sensitivity to these exclusions later. Dropping observations is also one of the easiest ways for two people doing similar analyses to reach different conclusions. Putting this process in code keeps everyone accountable and removes ambiguity about how the final analysis set was reached.

Analysis, Model Building, Learning

You'll probably only present one or two of the scores of models and variants you build and test. Develop a process where your code organizes and saves these variants rather than discarding the ones that didn't work. You never know when you'll want to circle back. Try to organize analyses in a structure similar to how you present them so that the connection from claims to details is easy to make.

Visualization, Reporting

Careful, a trap is looming. So many times, the chain of reproducibility is broken right before the finish line, when plots and statistical summaries are copied onto PowerPoint slides. Doing so introduces errors, breaks the link between claims and process, and generates huge amounts of work in the inevitable event of revisions. R and Python both have great tools to produce finished reports as static HTML or PDF documents, or even interactive reporting and visualization products. It might take some time to convince the rest of your organization to receive reports in these more modern formats.

Moving your organization towards these ideals is likely to be an imperfect and gradual process. If you're the first convert, absolutism is probably not the right approach. If you have influence in the hiring process, try to push for candidates who understand and respect these principles of data science. In the near term, look for smaller pieces of the analytical workflow that would benefit especially from the efficiencies of reproducible, programmatic analysis and reporting. Good candidates are reports that are updated frequently, require extensive collaboration, or are constantly hung up on discussions over details of implementation or interpretation. Changing workflows and acquiring new skills is always an investment, but the dividends here are better collaboration, efficient iteration, transparency in process, and confidence in the claims and recommendations you make. It's worth it.
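To make the sourcing and cleaning steps concrete, here is a minimal sketch in Python. The URL, column names, and exclusion rule are all hypothetical; the point is that every step from raw data to analysis set lives in code.

```python
import pandas as pd

RAW_URL = "https://example.com/transactions.csv"  # hypothetical source

def load_transactions(url: str = RAW_URL) -> pd.DataFrame:
    """Source the raw data -- scripted, not downloaded by hand."""
    return pd.read_csv(url, parse_dates=["created_at"])

def clean_transactions(raw: pd.DataFrame) -> pd.DataFrame:
    """Apply exclusion rules objectively, in one auditable place."""
    out = raw.dropna(subset=["amount"])
    # Drop outliers by rule, not by eye; sensitivity to this threshold
    # can be tested later by re-running with a different value.
    return out[out["amount"].abs() < 1_000_000]

if __name__ == "__main__":
    analysis_set = clean_transactions(load_transactions())
    print(analysis_set.describe())
```

When an input changes, re-running this one script regenerates the analysis set, which is exactly the reproducibility-as-efficiency point made above.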
Women Who Code: An Engineering Q&A with Venmo
Betterment recently hosted a Women in Tech meetup with Venmo developer Cassidy Williams, who spoke about impostor syndrome.

Growing up, I watched my dad work as an electrical engineer. Every time I went with him on Take Your Child to Work Day, it became more and more clear that I wanted to be an engineer, too. In 2012, I graduated from the University of Portland with a degree in computer science and promptly moved to the Bay Area. I got my first job at Intel, where I worked as a Scala developer. I stayed there for several years until last May, when I uprooted my life to New York for Betterment, and I haven't looked back since.

As an engineer, I not only love building products from the ground up, but I'm passionate about bringing awareness to diversity in tech, an important topic that has soared to the forefront of social justice issues. People nationwide have chimed in on the conversation. Most recently, Isis Wenger, a San Francisco-based platform engineer, sparked the #ILookLikeAnEngineer campaign, a Twitter initiative designed to combat gender inequality in tech. At Betterment, we're working on our own set of initiatives to drive the conversation. We've started an internal roundtable to voice our concerns about gender inequality in the workplace, we've sponsored and hosted Women in Tech meetups, and we're starting to collaborate with other companies to bring awareness to the issue.

Cassidy Williams, a software engineer at mobile payments company Venmo, recently came in to speak. She gave a talk on impostor syndrome, a psychological phenomenon in which people are unable to internalize their accomplishments. The phenomenon, Williams said, is something that she has seen particularly among high-achieving women, where self-doubt becomes an obstacle to professional development. For example, they think they're "frauds," or unqualified for their jobs, regardless of their achievements. Williams' goal is to help women recognize the characteristic and empower them to overcome it. Williams has been included as one of Glamour Magazine's 35 Women Under 35 Who Are Changing the Tech Industry and listed in the Innotribe Power Women in FinTech Index. As an engineer myself, I was excited to speak with her after the event about coding, women in tech, and fintech trends.

[Image: Cassidy Williams, Venmo engineer, said impostor syndrome tends to be more common in high-achieving women. Photo credit: Christine Meintjes]

Abi: Can you speak about a time in your life where impostor syndrome was limiting in your own career? How did you overcome that feeling?

Cassidy: For a while at work, I was very nervous that I was the least knowledgeable person in the room, and that I was going to get fired because of it. I avoided commenting on projects and making suggestions because I thought that my insight would just be dumb, and not necessary. But at one point (fairly recently, honestly), it just clicked that I knew what I was doing. Someone asked for my help on something, and then I discussed something with him, and suddenly I just felt so much more secure in my job.

Can you speak to some techniques that have personally proven effective for you in overcoming impostor syndrome?

Asking questions, definitely. It does make you feel vulnerable, but it keeps you moving forward. It's better to ask a question and move forward with your problem than it is to struggle over an answer.
As a fellow software engineer, I can personally attest to experiencing this phenomenon in tech, but I've also heard from friends and colleagues that it can be present in non-technical fields as well. What are some ways we can all work together to empower each other in overcoming impostor syndrome?

It's cliché, but just getting to know one another and sharing how you feel about certain situations at work is such a great way to empower yourself and empower others. It gets you both vulnerable, which helps you build a relationship that can lead to a stronger team overall.

Whose Twitter feed do you religiously follow?

InfoSec Taylor Swift. It's a joke feed, but they have some great tech and security points and articles shared there.

In a few anecdotes throughout your talk, you mentioned the importance of having mentors and role models. Who are your biggest inspirations in the industry?

Jennifer Arguello - I met Jennifer at the White House Tech Inclusion Summit back in 2013, where we hit it off talking about diversity in tech and her time with the Latino Startup Alliance. I made sure to keep in touch because I would be interning in the Bay Area, where she's located, and we've been chatting ever since.

Kelly Hoey - I met Kelly at a women in tech hackathon during my last summer as a student in 2013, and then she ended up being on my team at the British Airways UnGrounded Thinking hackathon. She and I both live in NYC now, and we see each other regularly at speaking engagements and chat over email about networking and inclusion.

Rane Johnson - I met Rane at the Grace Hopper Celebration for Women in Computing in 2011, and then again when I interned at Microsoft in 2012. She and I started emailing and video chatting during my senior year of college, when I started working with her on the Big Dream Documentary and the International Women's Hackathon at the USA Science and Engineering Festival.

Ruthe Farmer - I first met Ruthe back in 2010 during my senior year of high school when I won the Illinois NCWIT Aspirations Award. She and I have been talking with each other at events and conferences and meetups (and even just online) almost weekly since then about getting more girls into tech, working, and everything in between.

One of the things we chatted about after the talk was how empowering it is to have the resources and movements of our generation to bring more diversity to the tech industry. The solutions that come out of that awareness are game-changing. What are some specific ways in which companies can contribute to these movements and promote a healthier and more inclusive work culture?

Work with nonprofits: Groups like NCWIT, the YWCA, the Anita Borg Institute, the Scientista Foundation, and several others are so great for community outreach and company morale.

Educate everyone, not just women and minorities: When everyone is aware and discussing inclusion in the workplace, it builds and maintains a great company culture.

Form small groups: People are more open to talking closely in smaller groups than in a large discussion roundtable. Building those small, tight-knit groups promotes relationships that can help the company over time.

It's a really exciting time to be a software engineer, especially in fintech. What do you think are the biggest trends of our time in this space?

Everyone's going mobile!

What behavioral and market shifts can we expect to see from fintech in the next five to 10 years?
I definitely think that even though cash is going nowhere fast, fewer and fewer people will ever need to make a trip to the bank again, and everything will be on our devices.

What genre of music do you listen to when you're coding?

I switch between 80s music, Broadway show tunes, Christian music, and classical music. Depends on my feelings about the problem I'm working on. ;)

IDE of choice?

Vim!

iOS or Android?

Too tough to call.
Engineering the Trading Platform: Inside Betterment’s Portfolio Optimization
To complete the portfolio optimization, Betterment engineers needed to enhance the code in our existing trading platform. Here's how they did it.

In just a few weeks, Betterment is launching an updated portfolio — one that has been optimized for better expected returns. The optimization will be partly driven by a more sophisticated asset allocation algorithm, which will dynamically vary individual asset allocations within the stock and bond baskets based on a goal's overall allocation.

This new flexible set of asset allocations significantly affects our current trading processes. Until now, we executed transactions based on fixed weights, or a precise allocation of assets for every level of risk. Now, in our updated portfolio with a more sophisticated way to allocate, we are using a matrix to manage asset weights — and that requires more complex trading logic. From an engineering perspective, this means we needed to enhance the code in our existing trading platform to accommodate dynamic asset allocation, with an eye towards future enhancements in our pipeline. Here's how we did it.

1. Build a killer testing framework

When dealing with legacy code, one of our top priorities is to preserve existing functionality. Failure to do so could mean anything from creating a minor inconvenience to blocking trades from executing. So our first step was to build a killer testing framework. The novelty of our approach was to essentially build partial, precise scaffolding around our current platform. This scaffolding allowed us to go in and out of the current platform to capture and store precise inputs and outputs, while isolating them from anything that wasn't relevant to the core trading processes.

2. Isolate the right information

With this abstraction, we were able to isolate the absolute core objects needed to perform trades, and ignore the rest. This did two things: it took testing off the developers' plates early in the process, allowing them to focus on writing production code, and it helped isolate the central objects that required most of their attention.

The parent object of any activity inside the Betterment platform is a "user transaction" — that includes deposits or withdrawals to a goal, dividends, allocation changes, transfers of money between goals, and so on. These were our inputs. In most cases, a user transaction will eventually be the parent of several trade objects. These were our outputs. In our updated portfolio, the number of possible transaction types did not change. What did change, however, was how each transaction type was translated into trading activity, which is what we wanted to test exhaustively. We captured a mass of user transaction objects from production for use in testing. However, a user transaction object contains a host of data that isn't relevant to the trades that will eventually be created, and is associated with other objects that are also not relevant. So stripping out all non-trading data was the key to focusing on the right things to test for this project.
3. Use a SQLite database to be efficient

The best way to store the user transaction objects was as JSON, a human-readable text format. To do this, we used GSON, which lets you convert Java objects into JSON, and vice versa. We didn't want to store the JSON in a MySQL database, because managing it would be unnecessary overhead for this purpose. Instead, we stored it in a flat SQLite database. On the way into SQLite, GSON allowed us to "flatten" the objects, leaving only the bits that pertained to trading and discarding the rest. Then, we could rearrange these chunks to replicate all sorts of trading activity patterns. On the way out, GSON would re-inflate the JSON back into Java objects, using dummy values for the irrelevant fields, providing us with test inputs ready to be pushed through our system (a minimal sketch of this round trip appears at the end of this post). We did the same for outputs, which were also full of "noise" for our purposes. We'd shrink the expected results we got from production, then re-inflate and compare them to what our tests produced.

4. Do no harm to others' work

At Betterment, we are constantly pushing through new features and enhancements, some visible to customers, but many not. Development on these is concurrent, sometimes impacting global objects and schemas, and it was essential to insulate the team working on core trading functionality from all other development being done at the company. The portfolio transition work alone includes significant new code for front-end enhancements that have nothing to do with trading. The GSON/JSON/SQLite testing framework helped the trading team maintain laser focus on their task as they worked under the hood. Otherwise, we'd be putting a sweet new set of tires on a car that won't start!
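To make the GSON round trip from step 3 concrete, here is a minimal sketch. The class and field names are hypothetical stand-ins for Betterment's internal objects, and the SQLite storage step is omitted for brevity.

```java
import com.google.gson.Gson;

public class TradingFixtureExample {
    // A "flattened" user transaction: only the fields that matter to trading.
    static class UserTransactionFixture {
        long goalId;
        String transactionType; // e.g. "DEPOSIT", "DIVIDEND"
        long amountCents;
    }

    public static void main(String[] args) {
        Gson gson = new Gson();

        // On the way in: serialize just the trading-relevant fields to JSON
        // (stored as a row in a flat SQLite table in the real framework).
        UserTransactionFixture fixture = new UserTransactionFixture();
        fixture.goalId = 42L;
        fixture.transactionType = "DEPOSIT";
        fixture.amountCents = 100_00L;
        String json = gson.toJson(fixture);

        // On the way out: re-inflate the JSON into an object; any fields
        // absent from the JSON keep their default (dummy) values.
        UserTransactionFixture reloaded =
                gson.fromJson(json, UserTransactionFixture.class);
        System.out.println(json + " -> goalId=" + reloaded.goalId);
    }
}
```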