Monday, January 30, 2017

Controller Area Network - CAN

Found a couple of interesting links on CAN:

What is CAN?

Automotive CAN Bus System

Thursday, January 5, 2017

Keycloak - Update user's username

A user in Keycloak has both a username and an email attribute. The username (used for login) is distinct from the user's email address. In a lot of applications, however, the email address is used as the username. This brings up the use case where a user changes his/her email address and the username in Keycloak must also be updated.

Keycloak by default doesn't allow an admin to update a user's username, either via the UI or via the API. To update a username, one first needs to enable it at the realm level. Following are the steps:

1. Authenticate as admin

URL: POST https://<host>/auth/realms/<realm>/protocol/openid-connect/grants/access

2. Get realm

URL: GET https://<host>/auth/admin/realms/<realm>

{
  "id": "bbb4b7eb-ea1e-4ca2-a925-896763cef01a",
  "realm": "<realm>",
  "notBefore": 0,
  "accessTokenLifespan": 300,
  "ssoSessionIdleTimeout": 1800,
  "ssoSessionMaxLifespan": 36000,
  "accessCodeLifespan": 60,
  "accessCodeLifespanUserAction": 300,
  "accessCodeLifespanLogin": 1800,
  "enabled": true,
  "sslRequired": "external",
  "registrationAllowed": false,
  "registrationEmailAsUsername": false,
  "rememberMe": false,
  "verifyEmail": false,
  "resetPasswordAllowed": true,
  "editUsernameAllowed": false,
  "userCacheEnabled": true,
  "realmCacheEnabled": true,
  "bruteForceProtected": false,
  "maxFailureWaitSeconds": 900,
  "minimumQuickLoginWaitSeconds": 60,
  "waitIncrementSeconds": 60,
  "quickLoginCheckMilliSeconds": 1000,
  "maxDeltaTimeSeconds": 43200,
  "failureFactor": 30,
  "publicKey": "",
  "certificate": "",
  "requiredCredentials": [ ... ],
  "otpPolicyType": "totp",
  "otpPolicyAlgorithm": "HmacSHA1",
  "otpPolicyInitialCounter": 0,
  "otpPolicyDigits": 6,
  "otpPolicyLookAheadWindow": 1,
  "otpPolicyPeriod": 30,
  "browserSecurityHeaders": {
    "contentSecurityPolicy": "frame-src 'self'",
    "xFrameOptions": "SAMEORIGIN"
  },
  "smtpServer": {},
  "eventsEnabled": false,
  "eventsListeners": [ ... ],
  "enabledEventTypes": [],
  "adminEventsEnabled": false,
  "adminEventsDetailsEnabled": false,
  "identityFederationEnabled": false,
  "internationalizationEnabled": false,
  "supportedLocales": [],
  "browserFlow": "browser",
  "registrationFlow": "registration",
  "directGrantFlow": "direct grant",
  "resetCredentialsFlow": "reset credentials",
  "clientAuthenticationFlow": "clients"
}

3. Update realm to allow updating username

URL: PUT https://<host>/auth/admin/realms/<realm>

 "editUsernameAllowed": true,

4. Get user

URL: GET https://<host>/auth/admin/realms/<realm>/users/a552d630-a696-43ea-9c56-9fe132e5a9a4

{
  "id": "a552d630-a696-43ea-9c56-9fe132e5a9a4",
  "createdTimestamp": 1483624857856,
  "username": "test",
  "enabled": true,
  "totp": false,
  "emailVerified": true,
  "requiredActions": []
}

5. Update user's username

URL: PUT https://<host>/auth/admin/realms/<realm>/users/a552d630-a696-43ea-9c56-9fe132e5a9a4

{
  "username": "test1",
  "enabled": true,
  "emailVerified": true
}

Note: Keycloak 1.5.0 resets the enabled and emailVerified attributes to false upon update when they are not explicitly passed. I haven't checked whether there are other such attributes.

6. Get user

URL: GET https:///auth/admin/realms//users/a552d630-a696-43ea-9c56-9fe132e5a9a4

  "id": "a552d630-a696-43ea-9c56-9fe132e5a9a4",
  "createdTimestamp": 1483624857856,
  "username": "test1",
  "enabled": true,
  "totp": false,
  "emailVerified": true,
  "requiredActions": []

7. Validate by performing login with username test1 and password.
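The steps above can be scripted with curl. Below is a minimal sketch; the host, realm, and token value are hypothetical placeholders (not from this post), and by default it only prints the requests (DRY_RUN=1) instead of sending them:

```shell
#!/bin/sh
# Sketch only -- HOST, REALM and TOKEN are hypothetical placeholders.
# With DRY_RUN=1 (default) the requests are printed, not sent.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

HOST="keycloak.example.com"                      # placeholder
REALM="demo"                                     # placeholder
USER_ID="a552d630-a696-43ea-9c56-9fe132e5a9a4"   # from step 4
TOKEN="<admin access token from step 1>"         # placeholder

# Step 3: enable username editing at the realm level
run curl -X PUT "https://$HOST/auth/admin/realms/$REALM" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"realm": "demo", "editUsernameAllowed": true}'

# Step 5: update the username; pass enabled and emailVerified explicitly,
# since omitted attributes may be reset to false (see the note in step 5)
run curl -X PUT "https://$HOST/auth/admin/realms/$REALM/users/$USER_ID" \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"username": "test1", "enabled": true, "emailVerified": true}'
```

Set DRY_RUN=0 only after filling in a real host, realm, and admin token.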

Feel free to leave a comment.

Monday, June 27, 2016

git flow with bamboo and docker

git flow is a well-known branching model for git repositories. The challenge, however, is how to make it work when integrating with the build, release, and deployment process. This post describes one simple recipe to implement the E2E process. The gist of the recipe is to use the software version of the package as the docker image tag.

Git flow branching model

git flow branching model is the original post that describes the branching model in detail. I also found this link very useful; it provides a good summary of git flow along with the commands for implementing the branching model.

If you are using the Atlassian suite of products then it is best to name branches after JIRA tickets for better integration and traceability.
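As a sketch of the branching model, here is the basic git flow dance with plain git commands (the repo, ticket ID, and version number are made-up examples; the git-flow extension wraps these same steps):

```shell
# Scratch repo to demonstrate git-flow-style branches with plain git.
cd "$(mktemp -d)"
git init -q
git config user.email "ci@example.com" && git config user.name "ci"
git commit -q --allow-empty -m "initial commit"
git checkout -q -B master        # ensure a master branch regardless of git defaults
git checkout -q -b develop       # long-lived integration branch

# Feature branch off develop, named after a (made-up) JIRA ticket
git checkout -q -b feature/PROJ-123
git commit -q --allow-empty -m "PROJ-123: add feature"
git checkout -q develop
git merge -q --no-ff -m "merge PROJ-123" feature/PROJ-123

# Release branch off develop, merged to master and tagged
git checkout -q -b release/1.0.0
git checkout -q master
git merge -q --no-ff -m "release 1.0.0" release/1.0.0
git tag 1.0.0
```

The --no-ff merges keep the branch history visible, which is what makes the model traceable back to tickets.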

Bamboo build process

For every repository create 3 plans as follows:

CI and CD Plan

This plan builds from the develop branch and creates a docker image with the tag "latest". The bamboo plan can deploy the image automatically to the CD environment. In addition, the QualDev (QA) team can request deployment to the QualDev environment.

Release Plan

This plan builds from the master and release*/hotfix* branches. The docker images are created with the npm package version (or maven version) as the tag. Deployments of images from this build are typically on demand.

Feature Plan

This plan builds from feature* branches. It doesn't generate any docker image; it is primarily for running unit and integration tests.

Bamboo plan and Docker Image

Following is a sample job in the bamboo plan to create a docker image and push it to AWS ECR. This is based on a nodejs project. The project source includes a build.json file with placeholders for the build key and build number. The dockerfile replaces them with the values passed in the build-arg parameters to the docker build command. build.json along with the npm package version provides complete context of the build currently deployed in a given environment.

# Configure aws cli with the build agent's credentials
echo $bamboo_AWS_AKEY > 1.txt
echo $bamboo_AWS_SKEY >> 1.txt
echo "" >> 1.txt
echo "" >> 1.txt
aws configure < 1.txt
rm 1.txt
# Login to AWS ECR (get-login returns a docker login command, so run it)
LOGIN_STRING=`aws ecr get-login --region us-east-1`
eval $LOGIN_STRING
# Use the npm package version as the docker image tag
PACKAGE_VERSION=$(cat package.json | grep version | head -1 | awk -F: '{ print $2 }' | sed 's/[",]//g' | tr -d '[[:space:]]')
TAG=$PACKAGE_VERSION
# Build and push the docker image
# ($PRODUCT is assumed to include the ECR registry hostname)
docker build --build-arg BUILD_KEY=$BUILDKEY --build-arg BUILD_NUMBER=$BUILDNUMBER -t $PRODUCT/$COMPONENT:$TAG -f dockerbuild/Dockerfile --no-cache=true .
docker push $PRODUCT/$COMPONENT:$TAG

The following command in the dockerfile updates build.json:
# Update build key and number
RUN sed -i -- "s/BUILDKEY/$BUILD_KEY/g; s/BUILDNUMBER/$BUILD_NUMBER/g" ./build.json
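For illustration, here is a minimal build.json with the placeholders, and the same sed substitution run outside docker. The file layout and values are assumptions, not taken from the project:

```shell
cd "$(mktemp -d)"
# Hypothetical build.json checked into the project source
cat > build.json <<'EOF'
{
  "buildKey": "BUILDKEY",
  "buildNumber": "BUILDNUMBER"
}
EOF

BUILD_KEY="PROJ-PLAN1"   # made-up bamboo build key
BUILD_NUMBER="42"        # made-up bamboo build number
# Same substitution the Dockerfile performs at image build time
sed -i -- "s/BUILDKEY/$BUILD_KEY/g; s/BUILDNUMBER/$BUILD_NUMBER/g" ./build.json
cat build.json
```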

Further, an API like the following (hapi-style) can expose the details of the running service to internal users:

    const build = require('./build.json');

    // `server` is the hapi server instance and `conf` the application's
    // config object (e.g. nconf); both are defined elsewhere in the project.
    server.route({
      method: 'GET',
      path: '/about',
      config: {
        handler: function (request, reply) {
          var about = {
            "name": process.env.npm_package_name,
            "version": process.env.npm_package_version,
            "buildKey": build.buildKey,
            "buildNumber": build.buildNumber,
            "config": conf.getProperties()
          };
          return reply(about);
        }
      }
    });

Friday, April 22, 2016

Polymer 1.0 Vulcanize Error

I recently started getting the following error when running "gulp" on the polymer starter kit:

Starting 'vulcanize'...
ERROR finding /starter-kit/app/elements/bower_components/bower_components/promise-polyfill/Promise.js
ERROR finding /starter-kit/app/elements/bower_components/whenever.js/whenever.js
ERROR finding /starter-kit/app/elements/bower_components/bower_components/bower_components/bower_components/bower_components/web-animations-js/web-animations-next-lite.min.js
ERROR finding starter-kit/app/elements/bower_components/paper-datatable/weakCache.js
ERROR finding starter-kit/app/elements/weakCache.js

I spent multiple hours searching on Google but couldn't find a concrete answer. The same code and command worked on my colleagues' machines.

I figured out that one difference between their machines and mine was the npm repository. They were using the external repo while I was using our internal repo.

The Polymer/vulcanize team had released 1.14.9, and it had a critical bug. As soon as they found it, they unpublished the 1.14.9 version. However, before they could unpublish, our internal repo had cached it.

To resolve this I had to manually downgrade to 1.14.8, which I did by changing the repo path to the public npm repo.
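Another way to guard against a bad cached release is to pin the dependency to the last known-good version in package.json; a minimal made-up example:

```shell
cd "$(mktemp -d)"
# Made-up minimal package.json pinning vulcanize to the last good release
cat > package.json <<'EOF'
{
  "name": "starter-kit",
  "version": "1.0.0",
  "devDependencies": {
    "vulcanize": "1.14.8"
  }
}
EOF
grep '"vulcanize"' package.json
```

An exact version (no ^ or ~ prefix) keeps npm from silently picking up the next patch release.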

Saturday, April 2, 2016

SSL Error - SSL_VERIFY_CERT_CHAIN:certificate verify failed


Recently (last week), we installed a new SSL certificate on the Tomcat instances in production. The process involved:

  1. Create a new Java Keystore
  2. Generate a new CSR
  3. Obtain the certificate for our domain along with certificate chain
  4. Import the certificate with the certificate chain in the keystore
  5. Update Tomcat server.xml to point to new keystore
  6. Restart Tomcat process
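Steps 1, 2, and 4 can be sketched with keytool; every alias, password, file name, and domain below is a hypothetical placeholder:

```shell
command -v keytool >/dev/null 2>&1 || exit 0   # requires a JDK
cd "$(mktemp -d)"

# Steps 1-2: new keystore with a fresh key pair, then a CSR
keytool -genkeypair -alias tomcat -keyalg RSA -keysize 2048 \
  -dname "CN=www.example.com, O=Example, C=US" \
  -keystore example.jks -storepass changeit -keypass changeit
keytool -certreq -alias tomcat -file example.csr \
  -keystore example.jks -storepass changeit

# Step 4: import the chain first, then the issued certificate
# (the .crt files come from the CA in step 3, so these are commented out)
# keytool -importcert -trustcacerts -alias root -file root.crt \
#   -keystore example.jks -storepass changeit -noprompt
# keytool -importcert -alias intermediate -file intermediate.crt \
#   -keystore example.jks -storepass changeit -noprompt
# keytool -importcert -alias tomcat -file www.example.com.crt \
#   -keystore example.jks -storepass changeit
```

Importing the certificate under the same alias as the key pair is what attaches the chain to the private key.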

The Tomcat instance hosts a SOAP web service. The verification steps involved:

  1. Checking the certificate details in multiple browsers
  2. Verifying SOAP API invocation using SOAP-UI tool

The verification was successful and we applied the change in production.


Within a few hours, a couple of customers reported that they were not able to access the API. One customer shared the error log:

Caused by: nested fault: SSL protocol error
error:140CF086:SSL routines:SSL_VERIFY_CERT_CHAIN:certificate verify failed
error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed

The first reaction was that we had messed up something in the deployment. Without reviewing the error and understanding the root cause, the decision was to restore the service, and the change was rolled back. Since the old certificate was valid for a few more weeks, it was a good decision.


Later in the day I analyzed the error and concluded that there was no issue with the certificate or the deployment. I compared the new certificate chain against the old one, both viewed in the browser (the screenshots are omitted here).

While the root CA is the same, the intermediate CA changed from "Verisign Class 3 Secure Server CA - G3" to "Symantec Class 3 Secure Server CA - G4". This change happened because the new certificate we requested was SHA2. The Verisign class 3 certificate is SHA1 whereas the Symantec class 3 certificate is SHA2. Symantec has issued new intermediate CA certs after acquiring VeriSign's security business in 2010.

Clients that don't have the Symantec class 3 intermediate certificate in their truststore will fail with the SSL_VERIFY_CERT_CHAIN error.


To overcome this error, customers must import the intermediate certificate from the following link into their truststore:

The following page has instructions for installing the certificate on various platforms:


Lessons learned:

  1. When installing new certificates, notify customers in advance (a few weeks). Do this even if the change is limited to just an extension of the expiry date or a domain name change.
  2. Any change in hashing algorithm, i.e. SHA1 to SHA2 or SHA2 to SHA3, should be announced well in time to all customers. Different browsers have different timelines when it comes to migrating from SHA1 to SHA2. The biggest risk of such seemingly minor changes is to API integrations.
  3. Observe the certificate chain carefully. Just seeing the green icon in the browser bar is not sufficient. Share the chain with customers if it is different from the existing certificate chain.

Update - 04/20/2016

I missed one important part in my analysis. I verified the certificate chain using the browser but never bothered to look at the chain in the keystore. It turns out the keystore didn't have the full certificate chain, and that caused clients to fail. If clients had had the intermediate certificate in their truststore, it would not have mattered. So the fix on our side was to import the root CA into the keystore.

Update - 04/20/2016

Today we went through another issue related to the SHA1 to SHA2 update. One of our key customers was not prepared, and post-update they were not able to access our services. The client software was running on a Windows 2003 server that had never been patched and lacked support for SHA2. They were seeing the following error while connecting to our service:

The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

While this key customer was trying to figure out how to patch their system (which is not easy), we put together a workaround so that they could continue to use the service. Here is what we did:
  1. Asked the customer to use the non-secure port. Since the customer connects to our APIs over VPN, it was okay to use the non-secure port. However, this was not possible because the URL was hardcoded in the client code and nobody knew where the source was or how to build it. So we went to option #2.
  2. We set up a new server:
    1. Installed the required software (Java + Tomcat + the WAR file)
    2. Created a new self-signed SHA1 certificate for the domain
    3. Configured Tomcat to use the new keystore and self-signed certificate
    4. Shared the certificate with the customer to import into their truststore
    5. Asked the customer to update /etc/hosts (or the Windows equivalent) on their machine to point the domain name to the IP of this new server. This avoided the need to change the hardcoded URL in the code.
The following links were especially useful when troubleshooting and recommending a solution for the customer to patch their Windows 2003 server:
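The self-signed SHA1 certificate from step 2.2 (and its export for step 2.4) can be produced with keytool roughly as follows; the domain, passwords, and file names are hypothetical, and newer JDKs may warn about or refuse SHA1 signatures:

```shell
command -v keytool >/dev/null 2>&1 || exit 0   # requires a JDK
cd "$(mktemp -d)"

# Step 2.2: self-signed key pair, explicitly signed with SHA1
keytool -genkeypair -alias legacy -keyalg RSA -keysize 2048 \
  -sigalg SHA1withRSA -validity 365 \
  -dname "CN=api.example.com, O=Example, C=US" \
  -keystore legacy.jks -storepass changeit -keypass changeit

# Step 2.4: export the certificate to share with the customer
keytool -exportcert -alias legacy -file legacy.crt \
  -keystore legacy.jks -storepass changeit
```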

Friday, April 1, 2011

Cassandra 0.7.x - Understanding the output of nodetool cfhistograms

Command - Usage and Output
Cassandra provides the nodetool cfhistograms command to print statistics histograms for a given column family. Following is the usage:
./nodetool -h <host> -p <port> cfhistograms <keyspace> <cfname>

The output of the command has the following 6 columns:
  • Offset
  • SSTables
  • Write Latency
  • Read Latency
  • Row Size
  • Column Count

Interpreting the output
  • Offset: This represents the series of values to which the counts in the 5 columns below correspond. It corresponds to the X-axis values in a histogram. The unit depends on which column you are reading.
  • SSTables: This represents the number of SSTables accessed per read. For example, if a read operation involved accessing 3 SSTables, you will find a positive value against offset 3. The values are recent, i.e. for the duration elapsed between two calls.
  • Write Latency: This shows the distribution of the number of operations across the range of offset values, where the offset represents latency in microseconds. For example, if 100 operations each took about 5 ms, you will find a positive value against the bucket offset closest to 5000 microseconds.
  • Read Latency: This is similar to write latency. The values are recent, i.e. for the duration elapsed between two calls.
  • Row Size: This shows the distribution of rows across the range of offset values, where the offset represents size in bytes. For example, if you have 100 rows of about 2000 bytes each, you will find a positive value against the bucket offset closest to 2000.
  • Column Count: This is similar to row size; here the offset values represent column counts.

Some additional details
  • Typically in a histogram the values are plotted over discrete intervals. Similarly, Cassandra defines buckets. The number of buckets is 1 more than the number of bucket offsets; the last bucket holds values greater than the last offset. The values you see in the Offset column of the output are the bucket offsets.
  • The bucket offsets start at 1 and grow by a factor of 1.2 each time (rounding and removing duplicates). They go from 1 to around 36M by default (creating 90+1 buckets), which gives timing resolution from microseconds to 36 seconds, with less precision as the numbers get larger (see the EstimatedHistogram class).
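The bucket offsets can be reproduced with a few lines of awk. This is a sketch of the rule just described (multiply by 1.2, round, and bump by 1 when rounding would repeat a value), printing the first 20 offsets:

```shell
offsets=$(awk 'BEGIN {
  last = 1; s = last;
  for (i = 1; i < 20; i++) {
    n = int(last * 1.2 + 0.5);   # grow by 1.2 and round to nearest
    if (n == last) n = last + 1; # step up by 1 instead of repeating
    s = s " " n; last = n;
  }
  print s;
}')
echo "$offsets"
```

This prints 1 2 3 4 5 6 7 8 10 12 14 17 20 24 29 35 42 50 60 72 — the early offsets advance by 1 until the 1.2 growth starts to dominate.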

Friday, March 11, 2011

Schema Management in Cassandra 0.7

Schema Management in Cassandra

Starting with Cassandra 0.7, schema management in Cassandra is very easy. It is as good as centralized schema management, with no SPoF. Typical schema operations involve loading the schema initially, making changes to the existing schema (like adding a CF or modifying existing CF attributes), and dropping schema elements like CFs and keyspaces.

There are 3 ways these operations can be performed:

Load schema from cassandra.yaml using schematool or the JMX console: This option can be used to load the schema only once. Running it twice in a cluster won't have any impact, so it is good for loading the initial schema.

schematool <host> <port> import
JConsole:MBeans->org.apache.cassandra.db->StorageService -> Operations -> loadSchemaFromYAML

Create/Modify schema using the Thrift APIs: This provides high flexibility and is good for applications that wish to create/drop keyspaces and column families on the fly. You cannot modify existing column families using these APIs. Refer to Cassandra Wiki - API for details. The following APIs are available:
  • describe_keyspace
  • describe_keyspaces
  • system_add_column_family
  • system_drop_column_family
  • system_add_keyspace
  • system_drop_keyspace

Create/Modify schema using cassandra-cli: This is the most flexible option available. It allows practically everything that options #1 and #2 allow collectively. The following commands are supported. You can see them by entering the "help;" command in cassandra-cli. For details of a specific command, type "help <command>;", e.g. "help create keyspace;".
  • Describe keyspace
  • Show list of keyspaces
  • Add a new keyspace with the specified attribute(s) and value(s)
  • Update a keyspace with the specified attribute(s) and value(s)
  • Create a new column family with the specified attribute(s) and value(s)
  • Update a column family with the specified attribute(s) and value(s)
  • Delete a keyspace
  • Delete a column family
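As a sketch, a batch file for cassandra-cli covering a few of the commands above (the keyspace and column family names are made up, and exact attribute syntax varies a bit across 0.7.x releases):

```shell
cd "$(mktemp -d)"
# Write the statements to a file; run it later with:
#   ./cassandra-cli -h <host> -f schema.cli
cat > schema.cli <<'EOF'
create keyspace DemoKS with replication_factor = 3;
use DemoKS;
create column family Users with comparator = UTF8Type;
update column family Users with rows_cached = 10000;
describe keyspace DemoKS;
EOF
wc -l schema.cli
```

Keeping schema statements in a file like this also makes them easy to version-control alongside the application.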

Under the hood

The Cassandra Wiki - Schema Updates page describes the operations in good detail. Following is a high-level summary:

  • Cassandra uses the Schema and Migrations column families in the system keyspace for maintaining the schema and changes to the schema, respectively.
  • Schema changes done on one node are propagated to the other nodes in the cluster.
  • The Migrations CF tracks individual changes to the schema; the Schema CF contains a reference to the latest version in use.
  • Some manual cleanup may be needed if a node crashes while schema changes are being applied to the cluster.
  • To avoid concurrency issues, always push schema changes through one node.


Dropping a Keyspace

  • Connect to cassandra-cli on a node and run drop keyspace command.

[root@rwc-sb6240-1 bin]# ./cassandra-cli
Welcome to cassandra CLI.

Type 'help;' or '?' for help. Type 'quit;' or 'exit;' to quit.
[default@unknown] connect;
Connected to: "NarenCluster072" on
[default@unknown] drop keyspace KeyspaceMigration;
[default@unknown] exit;
[root@rwc-sb6240-1 bin]#

  • The logs on the node will show the following events (DEBUG mode):

DEBUG [pool-1-thread-151] 2011-03-09 11:21:03,334 (line 759) drop_keyspace
DEBUG [MigrationStage:1] 2011-03-09 11:21:03,343 (line 397) applying mutation of row 35666261336631662d346138322d313165302d623865652d663930663861336635653166
DEBUG [CompactionExecutor:1] 2011-03-09 11:21:04,146 (line 109) Checking to see if compaction of Schema would be useful
DEBUG [MigrationStage:1] 2011-03-09 11:21:04,146 (line 106) Announcing my schema is 5fba3f1f-4a82-11e0-b8ee-f90f8a3f5e1f
DEBUG [CompactionExecutor:1] 2011-03-09 11:21:04,147 (line 109) Checking to see if compaction of Migrations would be useful
DEBUG [ReadStage:14] 2011-03-09 11:21:04,150 (line 87) Their data definitions are old. Sending updates since d052796e-4a80-11e0-b8ee-f90f8a3f5e1f
DEBUG [ReadStage:15] 2011-03-09 11:21:04,151 (line 87) Their data definitions are old. Sending updates since d052796e-4a80-11e0-b8ee-f90f8a3f5e1f
DEBUG [pool-1-thread-151] 2011-03-09 11:21:05,629 (line 628) My version is 5fba3f1f-4a82-11e0-b8ee-f90f8a3f5e1f
DEBUG [pool-1-thread-151] 2011-03-09 11:21:05,629 (line 659) Schemas are in agreement.

  • On the other nodes, the log entries will look like:

DEBUG [ReadStage:9] 2011-03-09 11:12:19,250 (line 82) My data definitions are old. Asking for updates since d052796e-4a80-11e0-b8ee-f90f8a3f5e1f
DEBUG [ReadStage:9] 2011-03-09 11:12:19,253 (line 106) Announcing my schema is d052796e-4a80-11e0-b8ee-f90f8a3f5e1f
DEBUG [MigrationStage:1] 2011-03-09 11:12:19,273 (line 36) Received schema check request.
DEBUG [MigrationStage:1] 2011-03-09 11:12:20,681 (line 106) Announcing my schema is 5fba3f1f-4a82-11e0-b8ee-f90f8a3f5e1f