Tuesday, December 30, 2014

How to get the API request originator's IP address in ESB


Scenario

Retrieve the IP address of the actual originator of the API request in the deployment scenario described below.



In the above deployment scenario, I had a requirement to get the IP address of the client from the ESB.

Solution

The 'X-Forwarded-For' header is a transport-level header. It is normally used by a load balancer to stamp the IP address of the client that is requesting a service from the back-end server.

This header can be used to append the client IP address in WSO2 API Manager. Later, by accessing this header, the relevant client IP address can be fetched on the ESB side. I included a filter configuration like the following in WSO2 API Manager to append the client IP address to the 'X-Forwarded-For' header.
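
A minimal sketch of such a filter, assuming the immediate caller's address is exposed through the axis2-scope REMOTE_ADDR property:

<!-- Append the caller's IP to X-Forwarded-For, or set it if absent -->
<filter xpath="boolean(get-property('transport', 'X-Forwarded-For'))">
    <then>
        <!-- Header already stamped upstream (e.g. by the load balancer): append -->
        <property name="X-Forwarded-For"
                  expression="concat(get-property('transport', 'X-Forwarded-For'), ',', get-property('axis2', 'REMOTE_ADDR'))"
                  scope="transport"/>
    </then>
    <else>
        <!-- No header yet: the immediate caller is the originating client -->
        <property name="X-Forwarded-For"
                  expression="get-property('axis2', 'REMOTE_ADDR')"
                  scope="transport"/>
    </else>
</filter>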




In API Manager, each API definition is stored in a separate Synapse configuration file. Therefore you would have to add the above configuration to each and every file. Since this is a cumbersome task, you can use a mediation extension instead. That way, the filter configuration is applied to every Synapse API definition.

Now, from the ESB side, you can obtain the client IP address from the 'X-Forwarded-For' header with a Synapse configuration like the following.
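
A minimal sketch that reads the header into a property and logs it:

<!-- Read the forwarded client IP and log it -->
<property name="CLIENT_IP" expression="get-property('transport', 'X-Forwarded-For')"/>
<log level="custom">
    <property name="Originating-Client-IP" expression="get-property('CLIENT_IP')"/>
</log>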





How to get a session cookie from a webapp which uses SAML SSO

Scenario


  1. WSO2 IS is configured for SSO and acts as the IDP.
  2. The web app is deployed in Tomcat 7.x.
  3. The web app holds the SAML Assertion Response from the IDP.
  4. Using the SAML Assertion Response from the web app, we should obtain the cookie needed to access admin services for tasks like updating passwords, retrieving tenants, etc.


You can learn more about enabling SSO in WSO2 IS 5.0 by going through this reference. In order to demo this scenario, I have modified the code provided in this article.
The source code for the above-mentioned article can be found here.

I have written a sample web app to update the password by invoking the admin service.
In this sample app, the cookie is obtained by logging in to the admin service "SAML2SSOAuthenticationService", passing the SAML response.

In the event of a successful login to the admin service, it returns a cookie string. This cookie can later be used to access the admin service method "changePasswordByUser".
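
A sketch of that login call (the stub and DTO class names below come from stubs generated against the IS 5.0 admin-service WSDLs, so they may differ slightly in your build):

import org.apache.axis2.transport.http.HTTPConstants;
import org.wso2.carbon.identity.authenticator.saml2.sso.stub.SAML2SSOAuthenticationServiceStub;
import org.wso2.carbon.identity.authenticator.saml2.sso.stub.types.AuthnReqDTO;

public class SSOCookieFetcher {

    // samlResponse: the base64-encoded SAML response received from the IDP
    public static String fetchCookie(String samlResponse) throws Exception {
        SAML2SSOAuthenticationServiceStub authStub = new SAML2SSOAuthenticationServiceStub(
                "https://localhost:9443/services/SAML2SSOAuthenticationService");
        authStub._getServiceClient().getOptions().setManageSession(true);

        AuthnReqDTO authnReqDTO = new AuthnReqDTO();
        authnReqDTO.setResponse(samlResponse);

        if (!authStub.login(authnReqDTO)) {
            throw new Exception("SSO login to the admin service failed");
        }

        // On a valid login the session cookie is available on the service context
        return (String) authStub._getServiceClient()
                .getLastOperationContext()
                .getServiceContext()
                .getProperty(HTTPConstants.COOKIE_STRING);
    }
}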


The flow is shown in the diagram below.


[1] SAML Request to the IDP.
[2] SAML Assertion Response to the web app.
[3] Log in to the admin services using the SAML Assertion Response.
[4] A cookie is returned on a valid login.


Prerequisite
  • WSO2 IS server 5.0 - PORT 9443 (PortOffset 0). You can download it from here.
  • TOMCAT 7 - PORT 8080 

Steps
  • Check out and build the code from here (in order to test this sample you have to build the POM file using "mvn clean install"). The WAR file "saml2.sso.demo.war" can be found in the target folder.
  • Configure WSO2 IS 5.0 for SSO using the following link. While configuring the SP, please make sure to use the following configuration values.
    1. Issuer - "saml2.sso.demo"
    2. Assertion Consumer URL - "http://localhost:8080/saml2.sso.demo/consumer"
    3. Check the options Enable Response Signing and Enable Assertion Signing in the SP configuration.

  • Navigate to repository/conf/security/authenticators.xml and update the "ServiceProviderID" value to the same value as the Issuer.
  • Now you can deploy the "saml2.sso.demo.war" in to tomcat's webapps folder.
  • Start both IS and Tomcat.
  • You can use http://localhost:8080/saml2.sso.demo/ to test the web app.
Source code can be found here.

Saturday, December 27, 2014

How to retrieve all API docs published on a WSO2 API Manager 1.7 environment


WSO2 API Manager provides the necessary platform for creating, publishing and managing all aspects of an API and its life cycle. API Manager uses the Swagger framework to provide interactive documentation support, helping users to clearly understand and experience the APIs.
WSO2 API Manager stores these API definitions in the registry.
       
These API definitions are compatible with Swagger version 1.2. To learn more about WSO2 API Manager, you can refer to the documentation.

You can view the API documentation definition by logging in to the API Store. Follow the steps below to do this.


  • Log in to the API Store ( https://[HOST_NAME]:[PORT]/store ).
  • It will display all the available APIs.
  • You can click on any API to visit it.
  • Navigate to "API Console".
  • Click on the "Download" link.
  • It will display the api-doc.json (this complies with Swagger 1.1).
  • In order to navigate to the API doc definition that complies with Swagger 1.2, replace the "api-doc.json" part of the URL (the final part) with "1.2/default".



In general, though, users might want to view the API docs through a private Swagger UI for various reasons.

I have written a sample code that downloads all the API documentation for both tenants and the super-tenant.

All the API definitions are stored in the registry. The only thing you have to do is log in to the API Manager first, and then log in to the Store, using the admin services provided by WSO2 Carbon.
Next, you follow a REST URL to access the API definitions stored in the registry.

The URL formats used to access the API definitions differ between tenants and super-tenants.


  1. Super-tenants as API providers:

     https://[HOST_NAME]:[PORT]/registry/resource/_system/governance/apimgt/applicationdata/api-docs/[NAME]-[VERSION]-[PROVIDER]/1.2/default

  2. Tenants as API providers:

     https://[HOST_NAME]:[PORT]/t/[TENANT_DOMAIN]/registry/resource/_system/governance/apimgt/applicationdata/api-docs/[NAME]-[VERSION]-[PROVIDER]/1.2/default

In abstract, what we are going to do via the program is:

[1] Log in to the API Manager Carbon console.
[2] Retrieve all the tenants.
[3] Log in to the API Store.
[4] Dynamically generate the URLs in the program.
[5] Retrieve them with a simple GET call, as sketched below.
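
A hedged sketch of step [5] for a single definition (pure JDK; the class name is illustrative, and sessionCookie is the cookie string returned by the admin-service login, as covered in the post on invoking admin services):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

import javax.net.ssl.HttpsURLConnection;

public class ApiDocFetcher {

    // Fetches one API definition from the registry using the session cookie
    public static String fetchApiDoc(String resourceUrl, String sessionCookie)
            throws Exception {
        HttpsURLConnection conn =
                (HttpsURLConnection) new URL(resourceUrl).openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Cookie", sessionCookie);

        StringBuilder apiDoc = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                apiDoc.append(line).append('\n');
            }
        }
        return apiDoc.toString();
    }
}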

Source code can be downloaded from here.

Saturday, December 20, 2014

How to invoke Admin Services on WSO2 Carbon Products.


This blog post explains how to invoke the admin services provided by WSO2 Carbon products. I have implemented a sample, tested with WSO2 Identity Server.

In this blog post I will briefly explain how to update a password via admin services.
I will use both the AuthenticationAdmin and UserAdmin services to update the password.

 - AuthenticationAdmin lets users log in to the system.
 - UserAdmin provides the API "changePasswordByUser" to update the logged-in user's password.

The WSDL files of the admin services can be viewed by following these steps.


  • Update the value of the "HideAdminServiceWSDLs" element to false in carbon.xml, located inside [CARBON_HOME]/repository/conf.
  • The WSDL file of a corresponding admin service can then be viewed by browsing to the URL:
    https://[HOST_NAME]:[PORT]/services/[ADMIN_SERVICE]?wsdl
Next, we have to generate the stubs. A stub wraps up the underlying remote operations that occur when invoking the admin services, and assists you in performing those operations on the admin services exposed via the corresponding WSDLs. Stubs can be generated by various means, but in this sample I have used the maven-antrun-plugin to generate them dynamically at compile time.

Once done with the stub generation, the only thing left is to invoke the relevant stubs to perform the necessary operations. The first thing you should do is log in using the AuthenticationAdminStub. During the login process, the stub returns a cookie. Using that cookie, you can proceed with your subsequent operations, as sketched below.
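
A sketch of the whole flow (the stub package names depend on how you generate the stubs, and the exact changePasswordByUser signature may vary across Carbon versions):

import org.apache.axis2.client.Options;
import org.apache.axis2.transport.http.HTTPConstants;
import org.wso2.carbon.authenticator.stub.AuthenticationAdminStub;
import org.wso2.carbon.user.mgt.stub.UserAdminStub;

public class PasswordUpdater {

    public static void main(String[] args) throws Exception {
        String services = "https://localhost:9443/services/";

        // 1. Log in through AuthenticationAdmin (the third argument is the client host)
        AuthenticationAdminStub authStub =
                new AuthenticationAdminStub(services + "AuthenticationAdmin");
        authStub._getServiceClient().getOptions().setManageSession(true);
        authStub.login("admin", "admin", "localhost");

        // 2. Grab the session cookie returned by the login call
        String cookie = (String) authStub._getServiceClient()
                .getLastOperationContext()
                .getServiceContext()
                .getProperty(HTTPConstants.COOKIE_STRING);

        // 3. Attach the cookie to the UserAdmin stub and change the password
        UserAdminStub userAdminStub = new UserAdminStub(services + "UserAdmin");
        Options options = userAdminStub._getServiceClient().getOptions();
        options.setManageSession(true);
        options.setProperty(HTTPConstants.COOKIE_STRING, cookie);

        // Assumed signature: (oldPassword, newPassword) for the logged-in user
        userAdminStub.changePasswordByUser("admin", "newPassword");
    }
}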

The complete code can be downloaded from here.

Monday, September 1, 2014

How to perform message transformation in WSO2 ESB


Message type transformations are basically done by message builders and message formatters.

Their operation can be depicted in the following diagram.



According to this diagram:

  • The message builder is chosen based on the content type of the incoming message. The chosen builder processes the incoming message's raw payload and converts it into SOAP.
  • The message formatter is chosen based on the content type of the outgoing message. It builds the output stream according to that content type.


The "messageType" property can be used to change the content type of the message.
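
For example, a one-line sketch that switches the message to JSON:

<property name="messageType" value="application/json" scope="axis2"/>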

Now let us look at an example scenario...

#1

I am using the Postman Chrome plug-in as the REST client.




The top part represents the JSON request and the bottom part represents the JSON response from the ESB.


#2

I am using a JAX-RS service as my back-end server. You can find the code here.


#3

You can find the Synapse configuration with the transformation details below.
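
A minimal sketch of such a configuration (the proxy name and back-end endpoint URL are illustrative):

<proxy xmlns="http://ws.apache.org/ns/synapse" name="TransformProxy"
       transports="http,https" startOnLoad="true">
    <target>
        <inSequence>
            <!-- The JSON request is converted to XML before reaching the back end -->
            <property name="messageType" value="application/xml" scope="axis2"/>
            <send>
                <endpoint>
                    <address uri="http://localhost:8080/jaxrs_sample/services/customers"/>
                </endpoint>
            </send>
        </inSequence>
        <outSequence>
            <!-- The response going back to the client is formatted as JSON -->
            <property name="messageType" value="application/json" scope="axis2"/>
            <send/>
        </outSequence>
    </target>
</proxy>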



You can further improve this Synapse configuration by using a switch mediator, where you can select different builders or formatters based on the content type of the incoming or outgoing messages.



How to overcome the "https not supported or disabled in libcurl" error when using cURL


If you see an error like the one below, simply follow these steps.

curl: (1) Protocol https not supported or disabled in libcurl

#1

Uninstall curl and its dependencies from your machine.

sudo apt-get remove --auto-remove curl


#2

Download the latest curl version from here and extract it.

#3

Navigate to the extracted CURL package and configure it with SSL option.

./configure --with-ssl

#4

Now build the source.

make

#5

Install curl.

make install
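
#6

Verify that HTTPS is now listed among the supported protocols.

curl --version

The "Protocols" line of the output should now include https.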

Sunday, August 31, 2014

How to sync a forked GitHub repository with its upstream



If you are trying to figure out GitHub and the concepts behind it, you can easily get started by referring to this tutorial.

Most of the open source projects live on GitHub. You can fork them into your own repository, modify them, and commit the changes back to the origin repository. But while doing development, most folks forget to sync with the latest commits from the upstream repository. Therefore, this blog post describes how to sync a forked repository with its upstream repository.


If you have just cloned the forked repository from origin, or if you haven't added the remote upstream repository yet, then you have to follow steps #1-#4.

#1

If you haven't cloned the forked repository yet, start by cloning it.

git clone https://github.com/firzhan/carbon-registry.git




#2

Check the currently configured remote repositories.

git remote -v




#3


Specify the upstream repository that will be synced with the forked repository.

git remote add upstream https://github.com/wso2/carbon-registry.git

#4

Verify the upstream repository and origin by checking the remote URLs.

git remote -v


Tasks #1 to #4 are one-time tasks; you only have to do them when you first clone a forked repository.

The rest of the tasks should be done whenever you are about to push changes to the forked master branch. By making this a habit, you can avoid unnecessary conflicts which may arise when merging pull requests.

#5

Fetch all the commits from the upstream repository.

git fetch upstream



#6

Now check out the master branch and merge it with the upstream's master.

git checkout master

git merge upstream/master


If the upstream had changes, git will print out a summary of the updates.

Now you have an up-to-date repository.
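
If you also want your fork on GitHub to reflect these updates, push the merged master back to origin:

git push origin master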


Additionally, you can check the difference between the upstream and the local repository using the following command.

git diff master upstream/master > bps.diff

Wednesday, August 27, 2014

Building a JAX-RS service in 5 minutes ...


This post may seem a bit crazy, but with the help of Grizzly, Jersey, and GlassFish I was able to implement it within 5 minutes.


So here is how I did it.

Step 1
========

Create an HTTP server by using GrizzlyServerFactory.
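
A minimal sketch, assuming the Jersey 1.x Grizzly 2 container; the package name "domain" matches Step 2, and the port matches the test URL below:

import java.io.IOException;
import java.net.URI;

import org.glassfish.grizzly.http.server.HttpServer;

import com.sun.jersey.api.container.grizzly2.GrizzlyServerFactory;
import com.sun.jersey.api.core.PackagesResourceConfig;
import com.sun.jersey.api.core.ResourceConfig;

public class Main {

    public static void main(String[] args) throws IOException {
        // Scan the "domain" package for JAX-RS annotated resource classes
        ResourceConfig resourceConfig = new PackagesResourceConfig("domain");

        // Create and start an embedded Grizzly HTTP server on port 9999
        URI baseUri = URI.create("http://localhost:9999/");
        HttpServer server = GrizzlyServerFactory.createHttpServer(baseUri, resourceConfig);

        System.out.println("Server started at " + baseUri + " - press Enter to stop");
        System.in.read();
        server.stop();
    }
}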



Step 2
========

Now create a resource-handling class inside the package "domain". You might have noticed that we configured the server to look up resource-handling service classes inside the "domain" package.
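
A hypothetical resource class consistent with the test URL below (the class name and payload are illustrative):

package domain;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/users")
public class UserResource {

    // Handles GET http://localhost:9999/users/all
    @GET
    @Path("/all")
    @Produces(MediaType.APPLICATION_JSON)
    public String getAllUsers() {
        return "[{\"name\": \"alice\"}, {\"name\": \"bob\"}]";
    }
}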
You can test this example by using the link http://localhost:9999/users/all.

You can find the source code here.

Tuesday, August 26, 2014

Troubleshooting the apache2 installation in Ubuntu 13.04



As usual, I installed apache2 in Ubuntu with the typical command, without any issues.

apt-get install apache2


Next I tried to start the installed apache2. 

  apachectl start

But it responded with an error message, saying

AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message
(98)Address already in use: AH00072: make_sock: could not bind to address [::]:80
(98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80


By default, the apache2 server runs on port 80. I used the netstat command to find the process holding that port, so that I could kill it. But it kept giving me this result:

netstat -ltnp | grep :80 

tcp6 0 0 :::80 :::* LISTEN 15237/apache2
tcp6 0 0 :::8080 :::* LISTEN 10470/java
tcp6 0 0 127.0.0.1:8005 :::* LISTEN 10470/java
tcp6 0 0 :::8009 :::* LISTEN 10470/java

Later I realized that killing the process may not work in all cases, as the process using port 80 may get restarted and keep the port occupied. So, if the port number doesn't matter, what can be done is to change the port for apache2.

Therefore, as a solution, I opened /etc/apache2/ports.conf and changed the Listen port to 8081.
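
The relevant line in ports.conf after the change looks like this:

Listen 8081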

Now I tried to start the server again, and it complained:

" AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message "

So I added a new line "ServerName localhost" to the file /etc/apache2/httpd.conf.


Now the server started without much trouble.


 




Implementing the Store and Forward pattern using WSO2 ESB/JMS message stores, with fault handling


WSO2 ESB can be used to implement the Store and Forward messaging pattern using message stores and message processors.

In the store and forward messaging pattern, clients send messages to a WSO2 ESB proxy service. The proxy service stores the messages in a message store. Later, a message processor fetches those messages and sends them to the back end. The main objective of this sort of design is to make sure the client doesn't lose any messages in the event of a back-end service failure.



This entire operation can be depicted in this diagram.


Let me explain the operation in full detail.

  1. Initially, the client sends a message to the proxy service of WSO2 ESB.
  2. The proxy service stores the message in a persistent or in-memory message store. Before the message is inserted into storage, the ESB serializes it using the store mediator.
  3. The message processor acts either as a sampling processor or a forwarding processor. The diagram depicts a message forwarding processor, which won't pop the message from the queue until the message is delivered successfully. The message processor can pass the message straight to the endpoint, or perform some operations on the message before sending it to the back end.
  4. Eventually the message is sent to the back-end server.

In this blog post I will be using ActiveMQ (as the JMS store) and the Message Forwarding Processor (as the message processor).

Implementing Store and Forward messaging patterns using WSO2 ESB

Download and install ActiveMQ from here.

Create a proxy service in WSO2 ESB. You can go through the ESB samples on how to create proxy services. A sketch of the proxy is shown below.
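
A minimal sketch of the proxy (the proxy name is illustrative; the store mediator is intentionally left out for now and added later in this post):

<proxy xmlns="http://ws.apache.org/ns/synapse" name="StoreAndForwardProxy"
       transports="http,https" startOnLoad="true">
    <target>
        <inSequence>
            <!-- Log the incoming message -->
            <log level="full"/>
            <!-- Reply to the client with HTTP 202 Accepted -->
            <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
            <!-- Mark the call towards the back end as one-way; delivery itself
                 happens through the message store and processor defined below -->
            <property name="OUT_ONLY" value="true"/>
        </inSequence>
        <faultSequence>
            <!-- Discussed below: report storage failures back to the client -->
            <log level="full"/>
        </faultSequence>
    </target>
</proxy>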




  


In this proxy service, I have defined only the essential components needed to explain the Store and Forward messaging pattern. Here, the "inSequence" receives the incoming message and performs the following tasks.

  1. Logs the incoming message.
  2. Sends the message to the back-end service.
  3. Sends back the HTTP status code 202 to the client.
In addition, the proxy service has a faultSequence too. I will get back to it later.

Next, we have to define a message store and the corresponding properties for that store. Here we have selected an ActiveMQ-backed persistent message store.
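
A hedged sketch of the JMS message store definition (the store name "JMSMS" and broker URL are illustrative; ActiveMQ's default JNDI connection factory is assumed):

<messageStore xmlns="http://ws.apache.org/ns/synapse" name="JMSMS"
              class="org.apache.synapse.message.store.impl.jms.JmsStore">
    <!-- JNDI settings pointing at the local ActiveMQ broker -->
    <parameter name="java.naming.factory.initial">org.apache.activemq.jndi.ActiveMQInitialContextFactory</parameter>
    <parameter name="java.naming.provider.url">tcp://localhost:61616</parameter>
    <!-- Queue that physically holds the stored messages -->
    <parameter name="store.jms.destination">JMSMS</parameter>
    <parameter name="store.jms.JMSSpecVersion">1.1</parameter>
</messageStore>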




Once we have sorted out the message store, we have to define a way to pop or peek messages from the queue. For that, we define the message processor properties, such as the message processing class, the polling interval, etc.
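
A hedged sketch of the forwarding processor definition (the processor name and target endpoint name are illustrative):

<messageProcessor xmlns="http://ws.apache.org/ns/synapse" name="ForwardingProcessor"
                  class="org.apache.synapse.message.processor.impl.forwarder.ScheduledMessageForwardingProcessor"
                  targetEndpoint="BackendEP" messageStore="JMSMS">
    <!-- Poll the store every second; give up after 4 failed delivery attempts -->
    <parameter name="interval">1000</parameter>
    <parameter name="max.delivery.attempts">4</parameter>
</messageProcessor>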


Now we are almost done, except for one thing: we haven't mentioned anywhere to store the incoming message :) . So I have to add a small bit of configuration to the proxy's inSequence. We can add the store mediator in between the FORCE_SC_ACCEPTED and OUT_ONLY properties, as shown below.
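
The resulting inSequence (the store name "JMSMS" matches the message store defined above):

<inSequence>
    <log level="full"/>
    <property name="FORCE_SC_ACCEPTED" value="true" scope="axis2"/>
    <!-- Persist the message; a failure here triggers the fault sequence -->
    <store messageStore="JMSMS"/>
    <property name="OUT_ONLY" value="true"/>
</inSequence>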




We are recommending this order for a purpose. Let's consider that the JMS message store has crashed for some reason. In that scenario, when the proxy service tries to write the message to the store, the write fails with an exception, thus triggering the fault sequence. There, we can pack the error message into a payload using the PayloadFactory mediator and send it back to the client. Therefore, by preserving this order, we make sure that we send the 202 HTTP status to the client only if the storage is successful.

Putting it all together, the complete ESB Synapse configuration consists of the proxy service, the message store, and the message processor shown above.


Tuesday, August 19, 2014

Cross-domain jQuery AJAX call with cookies



CORS (Cross-Origin Resource Sharing) is a W3C spec which defines the way to communicate across domains from the browser.


Cross-domain communication occurs when a service from domain A tries to connect to a service from domain B to retrieve some data. Traditionally, browsers don't allow this sort of communication due to the Same-Origin Policy.

By using CORS, cross-domain communication can be enabled from domain B's side by adding some additional header values, which let the browser allow domain A to access the services of domain B.

The following browsers support CORS:
  • Internet Explorer 8+
  • Firefox 3.5+
  • Safari 4+
  • Chrome

By default, standard CORS requests don't have cookies embedded in them. By setting the XMLHttpRequest's withCredentials attribute to true, we can enable cookie transport with the request. This is the only thing that has to be done from the client side.

On the other hand, we have to set some header values from the server side too. Those are:

  1. The Access-Control-Allow-Origin field indicates the allowed origin of the request. The wildcard value doesn't work on all browsers, and it is not allowed at all when credentials are used.
  2. The Access-Control-Allow-Headers param indicates the request headers supported by the server. If the request sends a header that is not among the listed values, the browser won't display the content.
  3. The Access-Control-Allow-Methods param defines the comma-delimited HTTP methods supported by the server.
  4. Set the Access-Control-Allow-Credentials parameter value to true. This header field indicates that cookies may be included in the request.


jQuery client code:
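
A minimal sketch of the client-side call (the URL is illustrative):

$.ajax({
    url: "https://domainB.example.com/service/data",
    type: "GET",
    xhrFields: {
        withCredentials: true   // send cookies with the cross-origin request
    },
    success: function (data) {
        console.log("Response from domain B:", data);
    },
    error: function (jqXHR, textStatus) {
        console.error("CORS request failed:", textStatus);
    }
});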

Server code:
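
A hedged sketch of the server side as a Java servlet on domain B (the class name, origin, and payload are illustrative):

import java.io.IOException;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CorsEnabledServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        // Echo a specific origin; the wildcard is rejected when credentials are used
        resp.setHeader("Access-Control-Allow-Origin", "https://domainA.example.com");
        resp.setHeader("Access-Control-Allow-Headers", "Content-Type");
        resp.setHeader("Access-Control-Allow-Methods", "GET, POST, OPTIONS");
        resp.setHeader("Access-Control-Allow-Credentials", "true");

        resp.setContentType("application/json");
        resp.getWriter().write("{\"message\": \"hello from domain B\"}");
    }
}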