1. Introduction to Feature Flagging
Hi, my name is Diani Yedono, and I've been developing software for over a decade. In this talk I'd like to share an effective practice called feature flagging, which I use to keep continuously integrating and deploying my code. Even a small code change can unexpectedly affect other parts of a software system, and one common safeguard against such regressions, long-lived feature branches, brings problems of its own.
Hi, my name is Diani Yedono, and I've been developing software for over a decade. I would like to share one effective practice called feature flagging that I've been using to keep integrating and deploying my code.
Let's start by thinking back in time. Can you remember a time when a system failure happened right after a big release? If you've never experienced that, I hope you never have to, because it's just awful. This comic shows a person asking another person to imagine they deployed a new version of their customer management system, and 15 minutes later the online shop crashes.
Sometimes changes to code don't affect any other part of the software system. However, in many cases even a little change can unexpectedly affect other parts, and that's quite normal. There are many processes people use to protect software from regressions introduced by code changes. One of them is to have long-lived feature branches. Each branch is extensively reviewed and tested before being merged. However, this can lead to another type of problem. This comic shows a developer who's trying to merge their branch into main. First, they pull the changes from main into their branch. Once they're happy, they get ready to merge their branch back into main. However, another developer has just merged a big chunk of changes into main. Merging big changes can lead to merge conflicts, and that can keep developers up at night. It's not a sustainable practice.
2. Introduction to Virtuflex
Continuous integration is a popular practice in software development, but merging half-baked code can be challenging. Thankfully, Virtuflex offers a solution. This talk focuses on using Virtuflex for continuous integration and deployment. Let's explore an example web application that utilizes Virtuflex to serve random comics to users, with the ability to switch between a math randomizer and an AI-based randomizer.
These pains have led people to start practicing continuous integration, where developers continuously integrate their code changes into main, little by little and regularly. But what if the new feature takes rather long to develop? How can we merge half-baked code without affecting the users or consumers of the software?
Thankfully, Virtuflex can help us. Virtuflex has been used for many purposes, some of which are not part of this talk. So this talk is not about using Virtuflex for canary releases, A/B testing, or kill switches. This talk is about using Virtuflex for continuous integration, which is continuously merging code changes into main (or trunk), and continuous deployment, which is continuously and safely deploying code changes to production.
Now let's take a look at an example web application that is going to use Virtuflex. This web app stores a collection of digital comics. It has one feature, which is to serve a random comic to the user. Initially, the website uses a math randomizer. Then it's decided to add a new feature and use an AI to serve the random comic, one that could pick a comic depending on the weather, for example, or on what's trending in the world. I'm just making up some features here. The randomizer needs to be changed to use AI, and that can take a while to do.
3. Using Feature Flags for AI System Development
While under development, we use feature flags to control the behavior of the AI system. By introducing a feature flag called 'AI randomizer,' we can switch between the math randomizer and the AI randomizer based on the environment. The flag is stored as an environment variable, allowing us to safely build and deploy the AI randomizer code little by little. Automated testing covers both flag states. Once the feature is complete and has proven itself in production, we can remove the old code path and the flag.
While the feature is under development, we want to show the progress of the AI system, for example on a development, UAT, or QA environment. However, we don't want to release this half-baked feature to production yet. So we need a way to use either the math randomizer or the AI randomizer, depending on the environment.
This is the initial code that uses a math randomizer to serve a random comic. Now, let's see how a feature flag can make the web app serve a comic from either the math randomizer or the AI randomizer. On line six, we check a feature flag called 'AI randomizer.' When it's on, we use the AI randomizer to serve the comic. When it's off, we use the math randomizer instead. The feature flag is stored as an environment variable. In the development environment, the value is set to on, while in production, the value is set to off. This way we can safely build and deploy the AI randomizer code little by little, regularly, to both dev and production. It can be continuously tested on dev without affecting any production users. Once the AI randomizer feature is complete, we can turn the flag on on prod. That is when we release the feature. The code itself, however, has been lying dormant on prod for a little while. So that's one way to do feature flagging. This example is backend feature flagging. One nice thing about this technique is that it's very versatile and can be used in other parts of the software, for example in a web app front-end or a mobile app front-end.
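The code shown in the talk is on a slide, but the flag check can be sketched roughly like this. This is a minimal sketch, assuming the flag lives in an environment variable called FEATURE_FLAG_AI_RANDOMIZER and assuming the function names; the talk's actual identifiers may differ.

```python
import os
import random

def math_randomizer(comics):
    # Existing, stable behaviour: a plain pseudo-random pick.
    return random.choice(comics)

def ai_randomizer(comics):
    # Stand-in for the half-baked AI-based pick (e.g. weather- or
    # trend-aware); while under development it just delegates.
    return math_randomizer(comics)

def serve_random_comic(comics):
    # The flag is stored as an environment variable:
    # set to "on" in dev, "off" in production.
    if os.environ.get("FEATURE_FLAG_AI_RANDOMIZER") == "on":
        return ai_randomizer(comics)   # new path, still in progress
    return math_randomizer(comics)     # existing path, unchanged
```

Because the decision is a plain environment-variable lookup, deploying the same code everywhere is safe: only the environment configuration decides which path runs.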
Right, so how about automated testing? How are we going to test features with flags? Let's check the test for this code. This is the initial test from when the web app was only using the math randomizer: when the random API is called, it should return a fake comic, which is the only comic stored in the test data storage. When the flag is introduced, we explicitly set the flag to off for the existing test. Then we add a test for when the flag is on. This way we have automated tests for both when the flag is on and when it's off. For this particular example, the tests are similar: they return the fake comic whether the flag is on or off, because we only have one fake comic in our test data storage. In many cases, a new feature has different behavior and therefore different tests for when the flag is on and off.
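The two tests described above might look like this. This is a sketch, not the talk's actual test code: the flag name, the test names, and the minimal stand-ins for the application code are all assumptions.

```python
import os
import random

# Minimal stand-ins for the application code under test
# (names and behaviour are assumptions, not the talk's code).
def math_randomizer(comics):
    return random.choice(comics)

def ai_randomizer(comics):
    return random.choice(comics)  # placeholder while under development

def serve_random_comic(comics):
    if os.environ.get("FEATURE_FLAG_AI_RANDOMIZER") == "on":
        return ai_randomizer(comics)
    return math_randomizer(comics)

# The pre-existing test, now with the flag explicitly set to off.
def test_serves_fake_comic_when_flag_off():
    os.environ["FEATURE_FLAG_AI_RANDOMIZER"] = "off"
    assert serve_random_comic(["fake-comic"]) == "fake-comic"

# The new test for when the flag is on; with only one fake comic in
# the test data, the expected result happens to be the same.
def test_serves_fake_comic_when_flag_on():
    os.environ["FEATURE_FLAG_AI_RANDOMIZER"] = "on"
    assert serve_random_comic(["fake-comic"]) == "fake-comic"
```

Setting the flag explicitly inside each test keeps the two states independent, so the suite exercises both code paths on every run regardless of the environment it runs in.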
So, we just went through the secret art of feature flagging. To sum up: we start by introducing the flag; we test for both when the flag is on and when it's off; and we write the implementation code for both states. We make sure the flag is on on DEV and off on PROD. Then we deploy the code to DEV and to production as well, where it's going to be dormant because the flag is off. Once the feature is complete and we want to release it, we turn the flag on on PROD. And once we're happy that the feature has been behaving in the expected way on PROD for a little while, we can remove the implementation code for when the flag is off and remove the test for when the flag is off. Then, once we're sure the flag isn't used anywhere, we remove the flag. So this is the secret art of feature flagging.
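After that final cleanup step, the code is flag-free again. Continuing the sketch from before (same assumed function names, not the talk's actual code), the result might look like this:

```python
import random

# After the release has proven stable on PROD, the flag check and the
# old math-randomizer path are deleted; only the new path remains.
def serve_random_comic(comics):
    return ai_randomizer(comics)

def ai_randomizer(comics):
    # Stand-in for the completed AI-based pick.
    return random.choice(comics)
```

Removing the dead branch and its tests matters: flags that outlive their release accumulate as hard-to-reason-about configuration, which is why the removal is part of the practice rather than an afterthought.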