Monday, June 27, 2016

git flow with bamboo and docker

git flow is a well-known branching model for git repositories. The challenge, however, is making it work with the build, release, and deployment process. This post describes one simple recipe for implementing the end-to-end (E2E) process. The gist of the recipe is to use the software version of the package as the docker image tag.

Git flow branching model

The original post on the git flow branching model describes the model in detail. I also found a summary that walks through the git commands for implementing the branching model very useful.

If you are using the Atlassian suite of products, it is best to name branches after JIRA tickets for better integration and traceability.
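The feature-branch lifecycle of the model can be exercised with plain git commands. Below is a minimal sketch in a throwaway repository; the branch name feature/PROJ-123 stands in for a hypothetical JIRA ticket:

```shell
#!/bin/bash
set -e
# Throwaway repo with the two long-lived branches: master and develop.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name dev
git commit -q --allow-empty -m "initial"          # master
git checkout -q -b develop                        # integration branch
# Start a feature branch named after the JIRA ticket.
git checkout -q -b feature/PROJ-123
git commit -q --allow-empty -m "PROJ-123: work"
# Finish the feature: merge back to develop with a merge commit.
git checkout -q develop
git merge -q --no-ff -m "merge PROJ-123" feature/PROJ-123
git branch -d feature/PROJ-123
git log --oneline -2
```

Release and hotfix branches follow the same pattern, except they are merged into both master and develop when finished.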

Bamboo build process

For every repository, create the following three plans:

CI and CD Plan

This plan builds from the develop branch and creates a docker image tagged "latest". The bamboo plan can deploy the image automatically to the CD environment. In addition, the QualDev (QA) team can request deployment to the QualDev environment.

Release Plan

This plan builds from the master, release*, and hotfix* branches. The docker images are tagged with the package version (the npm package version or the maven version). Deployment of images from this build is typically on demand.

Feature Plan

This plan builds from feature* branches. It does not generate a docker image; it is primarily for running unit and integration tests.
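The branch-to-tag mapping across the three plans can be sketched in a build script. This is a hedged illustration: bamboo_planRepository_branchName is the branch variable Bamboo exposes to scripts, and it is defaulted to develop here so the sketch runs standalone:

```shell
#!/bin/bash
# Pick the docker tag from the branch being built, mirroring the plans above.
# bamboo_planRepository_branchName is Bamboo's branch variable; the default
# value "develop" is only so this sketch runs outside Bamboo.
BRANCH=${bamboo_planRepository_branchName:-develop}
case "$BRANCH" in
  develop)                 TAG="latest" ;;   # CI and CD plan
  master|release*|hotfix*) TAG=$(node -p "require('./package.json').version") ;;  # release plan
  feature*)                TAG="" ;;         # feature plan: no image is built
esac
echo "tag=$TAG"
```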


Bamboo plan and Docker Image

Following is a sample job in the bamboo plan that creates a docker image and pushes it to AWS ECR. It is based on a nodejs project. The project source includes a build.json file with placeholders for the build key and build number. The Dockerfile replaces them with the values passed via the --build-arg parameters to the docker build command. build.json, along with the npm package version, provides the complete context of the build currently deployed in a given environment.

#!/bin/bash
# Configure the aws CLI with the Bamboo-provided credentials
echo $bamboo_AWS_AKEY > 1.txt
echo $bamboo_AWS_SKEY >> 1.txt
echo "" >> 1.txt
echo "" >> 1.txt
aws configure < 1.txt
rm -f 1.txt   # don't leave credentials on disk
# Login to AWS ECR
LOGIN_STRING=`aws ecr get-login --region us-east-1`
${LOGIN_STRING}
PRODUCT=      # set per project (product name)
COMPONENT=    # set per project (service name)
# Extract the version field from package.json
PACKAGE_VERSION=$(cat package.json | grep version | head -1 | awk -F: '{ print $2 }' | sed 's/[",]//g' | tr -d '[[:space:]]')
TAG=          # "latest" for the CI/CD plan, $PACKAGE_VERSION for the release plan
# Bamboo exposes ${bamboo.buildKey} and ${bamboo.buildNumber} to shell
# scripts as environment variables with underscores instead of dots
BUILDKEY=${bamboo_buildKey}
BUILDNUMBER=${bamboo_buildNumber}
REPOURL=      # AWS ECR registry URL
# Build and push docker image
docker build --build-arg BUILD_KEY=$BUILDKEY --build-arg BUILD_NUMBER=$BUILDNUMBER -t $PRODUCT/$COMPONENT:$TAG -f dockerbuild/Dockerfile --no-cache=true .
docker tag $PRODUCT/$COMPONENT:$TAG $REPOURL/$PRODUCT/$COMPONENT:$TAG
docker push $REPOURL/$PRODUCT/$COMPONENT:$TAG

The following command in the Dockerfile updates build.json:
# Update build key and number
RUN sed -i -- "s/BUILDKEY/$BUILD_KEY/g; s/BUILDNUMBER/$BUILD_NUMBER/g" ./build.json
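To put that RUN command in context, the Dockerfile and build.json might look like the following. This is a hypothetical fragment: the base image and layout are assumptions, while the ARG names match the --build-arg flags in the build script above.

```dockerfile
FROM node:6
# Build arguments supplied by the bamboo job via --build-arg
ARG BUILD_KEY
ARG BUILD_NUMBER
WORKDIR /app
COPY . .
# build.json ships with literal placeholders, e.g.:
#   { "buildKey": "BUILDKEY", "buildNumber": "BUILDNUMBER" }
# which the sed command rewrites with the actual build values:
RUN sed -i -- "s/BUILDKEY/$BUILD_KEY/g; s/BUILDNUMBER/$BUILD_NUMBER/g" ./build.json
```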

Further, an API like the following (a hapi.js route) can expose the details of the running service to internal users.
    const build = require('./build.json');

    {
      method: 'GET',
      path: '/about',
      config: {
        handler: function (request, reply) {
          var about = {
            "name": process.env.npm_package_name,
            "version": process.env.npm_package_version,
            "buildKey": build.buildKey,
            "buildNumber": build.buildNumber,
            "config": conf.getProperties()  // conf: the app's configuration module
          };
          return reply(about);
        }
      }
    }

