Sunday, August 31, 2014

How to sync between forked github repository and origin

If you are trying to figure out GitHub and the concepts behind it, you can easily get started by referring to this tutorial.

Most open source projects live on GitHub. You can fork them into your own repository, modify them, and contribute the changes back to the origin repository. But while doing development, most people forget to sync with the latest commits from the upstream repository. Therefore, this blog post describes how to sync a forked repository with its upstream repository.

If you have just cloned the forked repository from origin, or if you haven't added the remote upstream repository yet, then you have to follow steps #1-#4.


If you haven't cloned the forked repository yet, start by cloning it to your local machine.

git clone https://github.com/<your-username>/<repository>.git


Check the currently configured remote repository.

git remote -v


Specify the upstream repository that will be synced with the forked repository.

git remote add upstream https://github.com/<original-owner>/<repository>.git


Verify the upstream repository and origin by checking the remote URLs.

git remote -v

Tasks #1 to #4 are one-time tasks. You only have to do them once, right after cloning a forked repository.

The rest of the tasks should be done whenever you are trying to push changes to the forked master branch. By making this a habit, you can avoid unnecessary conflicts which may arise when merging pull requests.


Fetch all the commits from the upstream repository.

git fetch upstream


Now check out the master branch and merge it with the upstream's master.

git checkout master

git merge upstream/master

If the upstream had changes, git will print a summary of the updates.

Now you have an up-to-date local repository.

Additionally, you can check the difference between the upstream and your local repository using the following command:

git diff master upstream/master > bps.diff
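If you want to see the whole workflow end to end without touching a real GitHub project, it can be rehearsed locally with two throwaway repositories (all paths, names, and commit messages below are placeholders for this demo only):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Stand-in for the upstream repository, with one commit.
# (-b needs git 2.28+; it pins the initial branch name to "master".)
git init -q -b master upstream-repo
git -C upstream-repo -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"

# "Fork" it by cloning, then register the upstream remote (steps #1-#4).
git clone -q upstream-repo fork-repo
cd fork-repo
git remote add upstream "$tmp/upstream-repo"
git remote -v

# Upstream moves ahead by one commit while we work.
git -C "$tmp/upstream-repo" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "upstream change"

# Sync the fork: fetch upstream and merge into master.
git fetch -q upstream
git checkout -q master
git merge -q upstream/master
git log --oneline
```

After the merge, the fork's master points at the same commit as upstream/master.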

Wednesday, August 27, 2014

Building a JAX-RS service in 5 minutes ...

This post's title may seem a bit crazy. But with the help of Grizzly, Jersey, and GlassFish I was able to implement it within 5 minutes.

So here is how I did it.

Step 1

Create an HTTP server using GrizzlyServerFactory.
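The embedded code snippet for this step is no longer visible, so here is a minimal sketch of what it might have looked like with the Jersey 1.x Grizzly container (the class name Main is my assumption; the port 9999 comes from the test URL below):

```java
import java.io.IOException;
import java.net.URI;

import org.glassfish.grizzly.http.server.HttpServer;

import com.sun.jersey.api.container.grizzly2.GrizzlyServerFactory;
import com.sun.jersey.api.core.PackagesResourceConfig;
import com.sun.jersey.api.core.ResourceConfig;

public class Main {
    public static void main(String[] args) throws IOException {
        // Scan the "domain" package for JAX-RS resource classes.
        ResourceConfig config = new PackagesResourceConfig("domain");

        // Start an embedded Grizzly HTTP server on port 9999.
        HttpServer server = GrizzlyServerFactory.createHttpServer(
                URI.create("http://localhost:9999/"), config);

        System.out.println("Server started. Press enter to stop.");
        System.in.read();
        server.stop();
    }
}
```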

Step 2

Now create a resource handling class inside the package "domain". As you might have noticed, we told the server to look up resource handling service classes inside the "domain" package.
You can test this example by visiting http://localhost:9999/users/all.
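The resource class itself was also embedded as a gist; a sketch of what it could look like (the class and method names and the returned string are my assumptions; only the /users/all path comes from the post):

```java
package domain;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Picked up automatically because it lives in the "domain" package.
@Path("/users")
public class UserResource {

    // Handles GET http://localhost:9999/users/all
    @GET
    @Path("/all")
    @Produces(MediaType.TEXT_PLAIN)
    public String allUsers() {
        return "alice, bob, carol";
    }
}
```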

You can find the source code here

Tuesday, August 26, 2014

Troubleshooting the apache2 installation on Ubuntu 13.04

As usual on Ubuntu, the installation itself went without any issues using the typical command.

apt-get install apache2

Next I tried to start the installed apache2. 

  apachectl start

But it responded with an error message, saying

AH00558: apache2: Could not reliably determine the server's fully qualified domain name. Set the 'ServerName' directive globally to suppress this message
(98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
(98)Address already in use: AH00072: make_sock: could not bind to address

By default, the apache2 server runs on port 80. I used the netstat command to find the process responsible for that port, intending to kill it. It kept giving me this result:

netstat -ltnp | grep :80 

tcp6 0 0 :::80 :::* LISTEN 15237/apache2
tcp6 0 0 :::8080 :::* LISTEN 10470/java
tcp6 0 0 :::* LISTEN 10470/java
tcp6 0 0 :::8009 :::* LISTEN 10470/java

Later I realized that killing the process may not always work, as the process using port 80 may get restarted and occupy the port again. So, if the port number doesn't matter, what can be done is to change the port Apache listens on.

Therefore, as a solution, I opened /etc/apache2/ports.conf and changed the Listen port to 8081.
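The relevant change in ports.conf is just the Listen directive (8081 is an arbitrary free port):

```apache
# /etc/apache2/ports.conf
Listen 8081
```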

Now, when I tried to start the server again, it complained:

AH00558: apache2: Could not reliably determine the server's fully qualified domain name. Set the 'ServerName' directive globally to suppress this message

So I added a new line, "ServerName localhost", to the file /etc/apache2/httpd.conf.
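The added line is a single directive:

```apache
# /etc/apache2/httpd.conf
ServerName localhost
```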

Now the server started without much trouble.


Implementing Store and Forward pattern Using WSO2 ESB/JMS MessageStores with fault handling

WSO2 ESB can be used to implement the Store and Forward messaging pattern using message stores and message processors.

In the store and forward messaging pattern, clients send messages to a WSO2 ESB proxy service. The proxy service stores the messages in a message store. Later, a message processor fetches those messages and sends them to the back end. The main objective of this sort of design is to make sure the client doesn't lose any messages in the event of a back-end service failure.

This entire operation can be depicted in this diagram.

Let me explain the operation in full detail.

  1. Initially, the client sends a message to the proxy service of WSO2 ESB.
  2. The proxy service stores the message in a persistent/in-memory message store. Before the message is inserted into storage, the ESB serializes the message using the store mediator.
  3. The message processor acts either as a sampling processor or a forwarding processor. The diagram depicts a message forwarding processor, which won't pop the message from the queue until the message is delivered successfully. The processor can pass the message straight to the endpoint, or perform some operations on the message before sending it to the back end.
  4. Eventually the message is sent to the back-end server.

In this blog post I will be using ActiveMQ (as the JMS store) and the Message Forwarding Processor (as the message processor).

Implementing Store and Forward messaging patterns using WSO2 ESB

Download and install the ActiveMQ from here.

Create a Proxy Service in WSO2 ESB. You can go through the ESB samples on how to create proxy services.


In this proxy service, I have defined only the essential components needed to explain the Store and Forward messaging pattern. Here the "inSequence" receives the incoming message and performs the following tasks:

  1. Logs the incoming message.
  2. Sends the message to the back-end service.
  3. Sends the HTTP status code 202 back to the client.
In addition, the proxy service has a faultSequence too. I will get back to it later.

Next, we have to define a message store and the corresponding properties for that store. Here we have selected an ActiveMQ-backed persistent message store.
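The original configuration snippet isn't shown here; a sketch of what a JMS-backed message store definition could look like (the store name, queue name, and broker URL are my placeholders, assuming an ActiveMQ broker on its default port):

```xml
<messageStore xmlns="http://ws.apache.org/ns/synapse"
              class="org.apache.synapse.message.store.impl.jms.JmsStore"
              name="JMSMS">
    <!-- JNDI settings pointing at the local ActiveMQ broker -->
    <parameter name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
    <parameter name="java.naming.provider.url">tcp://localhost:61616</parameter>
    <!-- Queue that backs this store -->
    <parameter name="store.jms.destination">JMSMS</parameter>
    <parameter name="store.jms.connection.factory">QueueConnectionFactory</parameter>
    <parameter name="store.jms.JMSSpecVersion">1.1</parameter>
</messageStore>
```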

Once we have sorted out the message store, we have to define a way to pop or peek messages from the queue. So next we define the message processor properties, such as the message processor class, the polling interval, etc.
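Likewise, a message forwarding processor definition might look like the following sketch (the processor and endpoint names are placeholders):

```xml
<messageProcessor xmlns="http://ws.apache.org/ns/synapse"
                  class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
                  name="ForwardProcessor"
                  messageStore="JMSMS"
                  targetEndpoint="BackEndEP">
    <!-- Poll the store every second -->
    <parameter name="interval">1000</parameter>
    <!-- Give up after four failed delivery attempts -->
    <parameter name="max.delivery.attempts">4</parameter>
</messageProcessor>
```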

Now we are almost done, except for one thing: we haven't mentioned anywhere to store the incoming message :) . So I have to add a small bit of configuration into the proxy's inSequence. We can add the message storing tag in between the FORCE_SC_ACCEPTED and OUT_ONLY properties.

We recommend this order for a purpose. Let's say the JMS message store has crashed for some reason. In that scenario, when the proxy service tries to write the message into the store, the call returns with an exception, thus triggering the fault sequence. There we can pack the error message into a payload (using the PayloadFactory mediator) and send it back to the client. Therefore, by preserving this order, we make sure that we send the 202 HTTP status to the client only if the storage was successful.
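Put together, an inSequence with this ordering could look roughly like the following sketch (the store name matches the placeholder used above; treat this as an illustration, not the post's exact configuration):

```xml
<inSequence xmlns="http://ws.apache.org/ns/synapse">
    <log level="full"/>
    <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
    <!-- Writes the message to the store; a failure here triggers the faultSequence -->
    <store messageStore="JMSMS"/>
    <property name="OUT_ONLY" value="true"/>
</inSequence>
```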

Now we can look into the complete ESB synapse configuration.

Tuesday, August 19, 2014

Cross domain jQuery ajax call with cookies

CORS (Cross Origin Resource Sharing) is a W3C spec which defines the way to communicate across domains from the browser.

Cross-domain communication occurs when a service from domain A tries to connect with a service from domain B to retrieve some data. But traditionally, browsers don't allow this sort of communication due to the Same Origin Policy.

By using CORS, cross-domain communication can be enabled from domain B's side by adding some additional header values, which tell the browser to let domain A access the services of domain B.

The following browsers support CORS:
  • Internet Explorer 8+
  • Firefox 3.5+
  • Safari 4+
  • Chrome

By default, standard CORS requests don't have cookies embedded in them. By setting the XMLHttpRequest's withCredentials attribute to true, we can enable cookie transport with the request. This is the only thing that has to be done on the client side.

On the other hand, we have to set some header values on the server side too. Those are:

  1. The Access-Control-Allow-Origin field indicates the origin that is allowed to make the request. The wildcard value doesn't work in all browsers, and it is not allowed at all when credentials are enabled.
  2. The Access-Control-Allow-Headers parameter indicates the request headers supported by the server. If the request uses a header that is not in this list, the browser won't display the content.
  3. The Access-Control-Allow-Methods parameter defines the comma-delimited list of HTTP methods supported by the server.
  4. Set the Access-Control-Allow-Credentials parameter to true. This header field indicates that cookies may be included in the request.

jQuery client code.
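The gist with the client code is no longer embedded; a minimal sketch of what it could look like (the URL and callbacks are placeholders for domain B's service):

```javascript
// Cross-domain request that carries cookies along.
$.ajax({
    url: "http://domain-b.example/api/data",   // placeholder service on domain B
    type: "GET",
    crossDomain: true,
    xhrFields: {
        // Ask the browser to include cookies with the request.
        withCredentials: true
    },
    success: function (data) {
        console.log("Received:", data);
    },
    error: function (xhr, status, err) {
        console.error("CORS request failed:", status, err);
    }
});
```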

Server code.
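The server-side gist is missing too. Whatever the server stack, the response headers need to look roughly like the following (the origin and header list are placeholders; note that a wildcard origin cannot be combined with credentials):

```http
Access-Control-Allow-Origin: http://domain-a.example
Access-Control-Allow-Credentials: true
Access-Control-Allow-Headers: Content-Type
Access-Control-Allow-Methods: GET, POST, OPTIONS
```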