Streamline your development life-cycle with git branch environments using Azure DevOps and Cloud resources

David Lee
9 min read · May 23, 2019


Feature branching is a key concept when using git as your source control system. It lets developers code in the comfort of their own space and pull in new changes from other branches for integration as necessary. With frequent integrations from the main branch, a developer can feel confident when opening a pull request (PR) back to the main branch.

With feature branching, developers can also expect to spin up a new environment based on their feature branch. To be clear, an environment contains the full set of resources necessary to run your product/application independently. This means if your application is a web application with a back-end database, all of these resources are used exclusively by this environment and not (typically) shared. There can be a few caveats, such as a shared Single Sign-On (SSO) system, but if we are just talking about authentication, this has minimal impact on the environment’s independence. With this approach, developers have the ability to add, update or remove resources — i.e. manage them — as they see fit to meet the requirements.

Why create an environment per feature branch?

One of the core tenets of agile is the practice of fast and frequent feedback loops so we can improve the process continuously. The Product Owner and/or QA analyst now has an opportunity to comment and provide feedback to the developer before the change even reaches a static environment such as Development, QA or Staging. In some cases, we can even loop in end customers to provide additional feedback, which helps improve the usability of the product.

Secondly, we can perform integration and/or functional tests early in the life-cycle and continue running those tests as code is pulled in from the main branch. Finding issues earlier in the development phase further improves the quality of the code. In fact, we can add new integration/functional tests as necessary without having to wait for the code to be available in the main branch.

Thirdly, the development team has full control over the management of resources. When a new resource needs to be added (or removed), the developer is free to manage that resource without affecting other feature branches. For example, if we were to adopt a new caching mechanism such as Redis, it would initially be leveraged by only that single feature branch. This means if there are issues with using Redis, they would not affect ongoing efforts in other branches. By the time the Redis code is merged into the main branch, there are already sufficient integration and/or functional tests.

Considerations

With this approach, cost and effort may be a concern. Let’s consider the following constraints:

  • It is not easy to spin up a new environment. The product/application has several moving parts/components and tight dependencies, which make it difficult to stand up independently. There may also be many manual steps needed to stand up the environment.
  • Cost for the hardware/software dependencies has already been allocated. Dynamically standing up new environments is not cost effective given we may have to purchase additional hardware for those short-term environments.
  • The organization and the development team(s) may not be comfortable owning the responsibility of the CI/CD process, given that they may not have the skillsets or desire. In traditional development shops, there is typically a group (also known as DevOps) that handles the CI/CD process when development is completed: they develop the CI/CD pipeline and manage resources. A developer may therefore require additional skillsets in order to be productive, and it may seem best left to that specialized DevOps team.

Some constraints are related to the architecture of the product/ application but some are related to infrastructure. This is where the (power of the) Cloud comes in, with services such as Azure DevOps. Let’s consider the following features in Azure DevOps.

Infrastructure-as-code

In traditional development, the developer may have a CI/CD process on the local development environment (local machine) that is different from the CI/CD process used in a higher environment (remote servers). This difference has potential consequences when an issue occurs, because of differences in configuration management, security context, and so on. We can eliminate this difference by having a “local” environment in the Cloud, which gives us a consistent way to perform the CI/CD process throughout the environments.

Infrastructure-as-code is a practice where we write code to describe and execute the CI/CD process. In Azure DevOps, this takes the form of a YAML file. Within a YAML file, you can define variables, steps and other actions. Within each step, you can define tasks and even specify conditions for when that particular step should run. Azure provides plenty of pre-built tasks for building, testing and deploying our product/application.

By creating a yaml file named azure-pipelines.yml in the root of the git repository, Azure DevOps will automatically create an Azure Pipeline and execute it. Because the yaml file is not tied to a specific branch, we gain the powerful ability to have a CI/CD pipeline for every branch.
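As a rough illustration, a minimal azure-pipelines.yml might look like the sketch below. The build commands and variable names are illustrative assumptions, not taken from a specific project; the key points are the branch-wide trigger, the variables section, and a step guarded by a condition.

```yaml
# Minimal azure-pipelines.yml sketch (commands and names are illustrative)
trigger:
  branches:
    include:
      - '*'     # run the pipeline for every branch, including feature branches

variables:
  buildConfiguration: 'Release'

pool:
  vmImage: 'ubuntu-latest'

steps:
  - script: dotnet build --configuration $(buildConfiguration)
    displayName: 'Build'

  - script: dotnet test
    displayName: 'Run unit tests'

  # Conditional step: only deploy a feature environment for non-master branches
  - script: echo "Deploying feature environment for $(Build.SourceBranchName)"
    displayName: 'Deploy feature environment'
    condition: and(succeeded(), ne(variables['Build.SourceBranch'], 'refs/heads/master'))
```

Because the trigger includes every branch, pushing a new feature branch is enough to get a full pipeline run for it with no extra setup.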

ARM Template and Service Connections

We can also define a task to deploy specific resources to our environment as defined in an ARM Template file. In Azure, we can define an environment in the context of a Resource Group. We can create a service connection associated with a Resource Group in our Azure subscription. The service connection has a name which we can use in the YAML file; we can further leverage the pipeline library, which we will discuss shortly, to make that a variable rather than hard coding it. Note that we may have different subscriptions to represent each tier of environment, such as Sandbox and Production. This separation keeps a layer of security so that we can have different groups of users. For example, developers can only access Sandbox while Administrators have access to both Azure subscriptions. Thus, we may have two different service connections in this specific scenario to represent the two different Azure subscriptions.

We can also create a service connection without a Resource Group name. This fits our idea of having a separate environment/Resource Group per feature branch. As we are done with each feature branch, i.e. it is merged into the main branch, the specific environment/Resource Group should be removed. To do this, we may consider having an Azure runbook that runs nightly to compare git branches in Azure Repos and remove the Resource Groups corresponding to branches that have been merged.
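One way to sketch that nightly cleanup is a scheduled pipeline with an Azure CLI step. Everything here is an illustrative assumption: the service connection name, the convention that each feature environment is a Resource Group named after its branch, and the use of a scheduled pipeline rather than an Automation runbook.

```yaml
# Nightly cleanup sketch (connection name and naming convention are illustrative)
schedules:
  - cron: '0 3 * * *'        # run at 03:00 UTC every night
    displayName: Nightly environment cleanup
    branches:
      include:
        - master
    always: true

trigger: none                 # run only on the schedule, not on pushes

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: AzureCLI@2
    displayName: 'Remove Resource Groups for merged branches'
    inputs:
      azureSubscription: 'sandbox-service-connection'   # hypothetical service connection
      scriptType: bash
      scriptLocation: inlineScript
      inlineScript: |
        # Assumes each feature environment is a Resource Group named after its branch.
        # Delete the Resource Group for every branch already merged into master.
        for branch in $(git branch -r --merged origin/master | grep -v 'master' | sed 's|.*origin/||'); do
          az group delete --name "$branch" --yes --no-wait || true
        done
```

The same logic could live in an Azure Automation runbook instead; the pipeline version simply keeps the cleanup code in the same repository as everything else.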

Pipeline library, your Configuration Management

With the pipeline based on our checked-in yaml file, we need a way to manage the various configurations specific to each environment. This is accomplished with the pipeline library. We can define a variable group to store a common set of configurations; secrets such as connection strings or keys can be set here. Note that we should never expose secrets in checked-in code, even if we control the source control system on-premises, as this exposes potential attack vectors. As a security best practice, we should back the library with Azure Key Vault: use the “Link secrets from an Azure key vault as variables” option when setting this up.

Each variable group can be managed with a specific security role group. This means Administrators can create a new variable group containing sensitive configuration settings and share only the name of the variable group with the development team, who can then reference it in the yaml file. The development team may still keep a separate library variable group for non-sensitive configuration settings.
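In the yaml file, referencing a variable group takes one line per group. The group names below are illustrative; any name agreed between Administrators and the development team works the same way.

```yaml
# Referencing pipeline library variable groups (group names are illustrative)
variables:
  - group: Deploy            # non-sensitive settings managed by the development team
  - group: Postman           # secrets, backed by Azure Key Vault
  - name: buildConfiguration # inline variables can be mixed in alongside groups
    value: 'Release'
```

Each variable in a group is then available to steps as `$(VariableName)`, just like an inline variable.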

Putting it all together, an example

Let’s consider the following yaml as an example: https://github.com/seekdavidlee/Eklee-Exams-Api/blob/master/azure-pipelines.yml

In this example, we have defined a convention of using the name of the branch as the name of the environment. We call this the stack name. It is passed as a global variable used throughout the pipeline. This approach makes it easy to derive a unique and sensible name for the Resource Group, as well as for the Azure resources.
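Under this convention, the stack name can be derived from a predefined pipeline variable. The variable name `stackName` is the convention described above; `Build.SourceBranchName` is a standard Azure Pipelines variable holding the final segment of the branch name.

```yaml
# Deriving the stack name from the branch (variable name per the convention above)
variables:
  stackName: $(Build.SourceBranchName)   # e.g. 'feature-redis-cache' for refs/heads/feature-redis-cache
```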

The AzureResourceGroupDeployment@2 task lets us deploy the ARM Template into a new (or existing) Resource Group named after the branch. If you are familiar with ARM Templates, there is a template parameters file we can pass in. However, with this task we can instead leverage overrideParameters to pass in the values used by the ARM Template.
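A sketch of that step might look like the following. The file path, location, connection variable and parameter names are illustrative assumptions rather than excerpts from the linked pipeline; the shape of the task inputs is what matters.

```yaml
# Resource Group deployment sketch (paths, location and parameters are illustrative)
- task: AzureResourceGroupDeployment@2
  displayName: 'Deploy ARM Template'
  inputs:
    azureSubscription: '$(serviceConnection)'   # set via the pipeline library
    action: 'Create Or Update Resource Group'
    resourceGroupName: '$(stackName)'           # one Resource Group per branch
    location: 'East US'
    templateLocation: 'Linked artifact'
    csmFile: 'deploy/azuredeploy.json'
    overrideParameters: '-stackName $(stackName) -environment sandbox'
    deploymentMode: 'Incremental'
```

Passing `overrideParameters` inline avoids maintaining a separate parameters file per environment, since the values already live in the pipeline library.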

For integration testing purposes, we are using Postman to run the tests. Notice that the API service requires a JWT token from Azure Active Directory. In order to acquire the JWT token, we need to pass in credentials, which are stored in the library variable group named Postman. Other configuration settings are stored in the library variable group named Deploy. The results of the integration tests are stored and published so we have a nice report showing successes and failures.
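One way to run Postman collections in the pipeline is through newman, Postman’s command-line runner. The collection path and credential variable names below are illustrative; the credential values themselves would come from the Postman variable group as described above.

```yaml
# Integration-test sketch with newman (collection path and variable names are illustrative)
- script: |
    npm install -g newman
    newman run tests/integration.postman_collection.json \
      --env-var "clientId=$(ClientId)" \
      --env-var "clientSecret=$(ClientSecret)" \
      --reporters cli,junit \
      --reporter-junit-export $(Common.TestResultsDirectory)/newman-results.xml
  displayName: 'Run Postman integration tests'

- task: PublishTestResults@2
  displayName: 'Publish integration test results'
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '$(Common.TestResultsDirectory)/newman-results.xml'
  condition: always()   # publish the report even when tests fail
```

The JUnit export is what feeds the test report in Azure DevOps, so failures show up per test rather than as one failed script step.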

Finally, we are leveraging a practice known as Blue-green deployment. In Azure Web App, we have the ability to deploy to a slot, which we can think of as a sub-environment. Here, we have defined a Staging slot. Initially, we deploy to the Staging slot and run the integration tests against that slot. When all the integration tests pass, we can easily swap the Production slot with the Staging slot. If issues are found later, we can roll back the deployment by doing a swap again. This technique further ensures the quality of our release for each environment, including the actual production environment.
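The deploy-then-swap sequence can be sketched with two tasks. The app name convention, package path and connection variable are illustrative assumptions; the integration tests from the previous section would run between these two steps.

```yaml
# Blue-green sketch: deploy to Staging, then swap into Production (names are illustrative)
- task: AzureWebApp@1
  displayName: 'Deploy to Staging slot'
  inputs:
    azureSubscription: '$(serviceConnection)'
    appName: '$(stackName)-api'
    deployToSlotOrASE: true
    resourceGroupName: '$(stackName)'
    slotName: 'staging'
    package: '$(Pipeline.Workspace)/drop/*.zip'

# ... run the integration tests against the staging slot here ...

- task: AzureAppServiceManage@0
  displayName: 'Swap Staging into Production'
  inputs:
    azureSubscription: '$(serviceConnection)'
    action: 'Swap Slots'
    webAppName: '$(stackName)-api'
    resourceGroupName: '$(stackName)'
    sourceSlot: 'staging'
```

Because a swap is symmetric, running the same swap task again is also the rollback path mentioned above.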

A consistent, predictable and cost effective CI/CD process

Let’s consider the constraints discussed earlier. There is certainly going to be a learning curve for YAML and ARM Templates. Azure DevOps has a good number of pre-built tasks, which should reduce that curve by allowing most common tasks to be leveraged directly. In addition, the documentation on Azure DevOps is quite extensive, and training modules are also available. All this, coupled with the ease of tooling, will help get a developer up to speed.

By having a single YAML file, the idea of documentation-as-code ensures every developer is familiar with how code and resources are managed in every environment, up to and including Production. This consistency and predictability reduces the knowledge gap that might exist in today’s setup, where the DevOps team controls and maintains that process.

The DevOps team does not have to be intricately involved with each individual development team’s CI/CD process, as it is now managed by the development team. This means the DevOps team is freed up to be responsible for higher-level concerns such as common requirements across all teams. This may include Security, Configuration management or Automation tooling. For the organization, this also means more time for the DevOps team to work on other key areas such as automation of operational concerns (which is a really good topic for down the road).

Given we have the ability to stand up and tear down environments, we can even consider the possibility of having no static environments except for Production as we become more mature. We can be really cost effective by controlling the environments we actually need, on an on-demand basis. Because we are guaranteed the consistency of each environment, we do not have to worry about dynamically standing up environments on the fly.

However, be aware that this approach might not be suitable for every architecture. For example, it would be hard for a monolithic application/product, given its hard dependencies. We also have to consider the culture of the organization and team(s). We may not be able to take this leap without a focus on developers pivoting from purely software development to also learning the new tooling available on the cloud. I strongly believe this will be the trend going forward, given the innovation happening on the Cloud to help us do more for less.
