Thursday, September 3, 2015

How to set up DB2 with WSO2 BPS



System Requirements


  • DB2 Express-C 10.5. This can be downloaded over here.
  • Ubuntu 13.10
  • WSO2 BPS 3.5. This can be downloaded over here.

DB2 can be installed by following the instructions given in this post. Unfortunately, since we are installing the free (unlicensed) version of DB2, we won't be able to create the sample database mentioned in that blog post using the db2fs command.

You can access the database by switching to the DB2 instance user.

                        su - db2inst1

Step 1 - Create the databases

To view the entries in the DB2 database directory, you can use the following command.

                      db2 list database directory


Now we can create the necessary databases and grant the privileges needed for BPS to connect to the DB2 server. You have to create four such databases: bps_db, bpmn_db, reg_db and user_db.
We are going to create non-restricted databases, so there won't be any permission issues.

                     db2 create database bps_db

                     db2 activate db bps_db 

Once the database is activated, we need to connect to it in order to perform operations on it.

                     db2 connect to bps_db
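The remaining three databases follow the same create/activate pattern, so the commands can be generated in a loop. A small sketch that just prints the db2 statements so you can review them first (drop the echo, or pipe the output to sh as db2inst1, to actually run them):

```shell
# Print the create/activate commands for all four BPS databases
# (bpmn_db, reg_db and user_db follow the same pattern as bps_db).
for db in bps_db bpmn_db reg_db user_db; do
  echo "db2 create database ${db}"
  echo "db2 activate db ${db}"
done
```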


Step 2 - Creating the temporary system table spaces

You have to create system temporary table spaces for each database. A system temporary table space stores internal temporary data required during SQL operations such as sorting, reorganizing tables, creating indexes, and joining tables.

You have to create system temporary table spaces with page sizes 4K, 8K, 16K and 32K. In addition, a buffer pool has to be created with the same page size for each table space. In summary, you have to create four buffer pools and four table spaces.

Creating a buffer pool with the page size 4K

db2 "CREATE BUFFERPOOL STB_4_POOL PAGESIZE 4K"

Creating a system temporary table space with the page size 4K

db2 "CREATE SYSTEM TEMPORARY TABLESPACE STB_4 PAGESIZE 4K BUFFERPOOL STB_4_POOL"
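The steps for the 8K, 16K and 32K sizes are identical apart from the size, so the whole set can be scripted. A sketch that emits the db2 statements for review (drop the echo to execute them):

```shell
# Emit the buffer pool and system temporary table space statements
# for each of the four required page sizes.
for size in 4 8 16 32; do
  echo "db2 \"CREATE BUFFERPOOL STB_${size}_POOL PAGESIZE ${size}K\""
  echo "db2 \"CREATE SYSTEM TEMPORARY TABLESPACE STB_${size} PAGESIZE ${size}K BUFFERPOOL STB_${size}_POOL\""
done
```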

More information regarding system table spaces and buffer pools can be found here.

Step 3 - Obtaining the port number used in DB2

By default DB2 listens on port 50000. To verify which port DB2 is actually using, we can execute the following command.

         db2 "get dbm cfg"|grep -i svce

This command will produce the following output.

         TCP/IP Service name                          (SVCENAME) = db2c_db2inst1

Now you have to look up the port number in /etc/services. You will find a record like this:

         db2c_db2inst1   50000/tcp

The above record defines the TCP port for the DB2 service, which is 50000 in this case.
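The /etc/services lookup can also be done in one step. A small sketch (the service name db2c_db2inst1 is the default for the db2inst1 instance; yours may differ):

```shell
# Look up the TCP port assigned to a service name in /etc/services
# (the services file path can be overridden, e.g. for testing).
service_port() {
  awk -v svc="$1" '$1 == svc && $2 ~ /\/tcp$/ { split($2, a, "/"); print a[1] }' "${2:-/etc/services}"
}

service_port db2c_db2inst1
```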


Step 4 - Configuring the datasources in WSO2 BPS

Business Process Server 3.5.0 has to be configured with datasource configuration files. These files can be found inside the repository/conf/datasources folder of the BPS distribution. Further configuration details can be found here.

I have copied the sample data-source configuration over here. 

       

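For reference, a minimal DB2 datasource entry in master-datasources.xml looks roughly like this for bps_db (the JNDI name, host, credentials and pool settings below are placeholder assumptions, not the original values):

```xml
<!-- Hypothetical datasource entry; adjust names, host and credentials. -->
<datasource>
    <name>WSO2_BPS_DB</name>
    <jndiConfig>
        <name>jdbc/BPSDB</name>
    </jndiConfig>
    <definition type="RDBMS">
        <configuration>
            <url>jdbc:db2://localhost:50000/bps_db</url>
            <username>db2inst1</username>
            <password>password</password>
            <driverClassName>com.ibm.db2.jcc.DB2Driver</driverClassName>
            <maxActive>50</maxActive>
            <maxWait>60000</maxWait>
            <testOnBorrow>true</testOnBorrow>
            <validationQuery>SELECT 1 FROM SYSIBM.SYSDUMMY1</validationQuery>
        </configuration>
    </definition>
</datasource>
```

The DB2 JDBC driver jar (db2jcc.jar) also has to be copied into the repository/components/lib folder so the server can load the driver class.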

Next we have to start the BPS server with the -Dsetup option. This will create the tables. Alternatively, the tables can be created by executing the DB2 scripts which can be found in the dbscripts folder.


Step 5 - Make sure the tables have necessary privileges

The tables should have the necessary insert, update, delete and select privileges. If we have created the databases in restricted mode, we have to run the following commands to grant permissions on the tables.

First we have to generate the grant statements dynamically. The following command generates the SQL and writes it into the bps.sql file.

db2 "select 'grant insert, update, delete on table ' || trim(tabschema) || '.' || trim(tabname) || ' to user db2inst1;' from syscat.tables where type = 'T'" > bps.sql

Now open bps.sql and remove the unwanted lines at the top and bottom of the file.
The file will contain SQL statements like this:
               grant insert, update, delete on table SYSIBM.SYSDBAUTH to user db2inst1;
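The header and footer that db2 adds don't start with "grant", so they can also be stripped mechanically instead of by hand. A dry run on sample output (the exact header format may vary between DB2 versions):

```shell
# db2's SELECT output wraps the statements in a header and footer
# that don't start with "grant"; filtering on that prefix leaves clean SQL.
printf '%s\n' \
  '1' \
  '----------------------------------------------------------------' \
  'grant insert, update, delete on table SYSIBM.SYSDBAUTH to user db2inst1;' \
  '' \
  '  1 record(s) selected.' \
  | grep '^grant'
```

Applied to the real file: grep '^grant' bps.sql > bps.tmp && mv bps.tmp bps.sql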

Next execute the file 

              db2 -tvsf bps.sql


Dropping the database in DB2

The following commands should be executed prior to dropping a database in DB2.

                      db2 disconnect all
                      db2 force applications all
                      db2 deactivate db  bps_db
                      db2 drop database BPS_DB


Note 

  • WSO2 BPS 3.5 DB2 scripts can be found inside the pack (BPS_HOME/dbscripts/bps/).

Wednesday, August 26, 2015

How to Fix the linking errors during installation of Oracle 11g in Ubuntu 13.10

I followed this blog post to install Oracle. However, it only covers installing Oracle on Ubuntu 12.04.

When I tried it on Ubuntu 13.10, I faced some linking errors, mostly of the form "Failed to link liborasdk.so.11.1" (or other library names). To overcome those errors, some of the installation files have to be modified after completing the installation of the additional packages.


Friday, July 10, 2015

How to implement a custom message processor for wso2 ESB


Storing and forwarding messages using WSO2 ESB is a widely adopted approach, known as the Message Store and Message Forward (MSMF) pattern.

The ESB has two components to achieve this:

Message Store

Stores the messages coming into the ESB in a JMS message queue.

Message Processor

This keeps polling the JMS queue at a predefined interval and sends the messages to the target endpoint.


Two types of message processors are supported out of the box:
 
  Scheduled Message Forwarding Processor

       This message processor processes messages from the queue and forwards them to a target endpoint. The implementation of this can be found in the following location.

  Sampling Processor

       This message processor processes messages from the queue and forwards them to a target sequence. The implementation of this can be found in the following location.

You can define the maximum retry count in the message processor. Once the retry count exceeds the limit, the message processor can be configured to do one of two things:

1. The message processor can deactivate itself.
2. Or it can drop that specific message from the message queue and continue processing the rest of the messages in the queue.

My requirement is a message processor that extends the second behavior described above.

Use case

I have set up two ActiveMQ message queues: stock_message_queue and failed_stock_message_queue.

After the configured number of retries to send a message to the back end has been exhausted, rather than dropping the message, I have to move it to failed_stock_message_queue. The initially received messages are placed in the stock_message_queue.

Therefore, as a solution we can implement a custom message processor using the MessageProcessor interface. Since the message processor goes through several life cycle stages, great care has to be taken in the implementation, especially under high load.

I will be extending the existing ScheduledMessageForwardingProcessor class. In the "checkAndDeactivateProcessor" method, I will add an additional check to invoke a sequence once the maximum attempt count is breached. The invoked sequence will move the message from stock_message_queue to failed_stock_message_queue.

These are the changed code snippets from the original implementation of the ScheduledMessageForwardingProcessor class, together with the Synapse configuration.








The complete Synapse code, custom code and CApp project can be found here. The queued messages can be checked by accessing the ActiveMQ web console.


  1. Build the custom message processor and place the jar into the repository/components/lib folder of the ESB.

  2. Restart the ESB server.

  3. Upload the CApp from the ESB management console.

Wednesday, July 8, 2015

How to overcome the update error in Ubuntu 13.10 version

Since Ubuntu 13.10 is an old version by now, whenever I try to perform an update I keep getting the following error.

"Err http://archive.ubuntu.com saucy/main amd64 packages 
 404 Not Found [ IP :91.189.91.23 80]"

This issue happens because support for this version of Ubuntu (in my case, saucy) has ended. Once support ends, the packages are moved to another archive server.

Therefore, to fix this issue, you can either upgrade the OS to the latest version, or update the package repository entries by replacing the archive.ubuntu.com and security.ubuntu.com domain names with old-releases.ubuntu.com. You can achieve the latter with the following command.

sudo sed -i -e 's/archive.ubuntu.com\|security.ubuntu.com/old-releases.ubuntu.com/g' /etc/apt/sources.list
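Before touching the real file, you can preview what the substitution does on a sample line (the \| alternation assumes GNU sed, which is the default on Ubuntu):

```shell
# Dry run: apply the same substitution to a sample sources.list line.
echo "deb http://archive.ubuntu.com/ubuntu saucy main" \
  | sed -e 's/archive.ubuntu.com\|security.ubuntu.com/old-releases.ubuntu.com/g'
# → deb http://old-releases.ubuntu.com/ubuntu saucy main
```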

Next, to make sure no entries were missed, we can check the files under /etc/apt/sources.list.d with a simple grep command.

grep -E 'archive.ubuntu.com|security.ubuntu.com' /etc/apt/sources.list.d/*

Next, run a simple sudo apt-get update command.

Next, while performing the update, you might face a signature verification error message.

"An error occurred during the signature verification"

In such scenarios, we can follow the old, well-known way of solving this issue.

sudo apt-get clean
cd /var/lib/apt
sudo mv lists lists.old
sudo mkdir -p lists/partial
sudo apt-get clean
sudo apt-get update


Wednesday, July 1, 2015

How to configure nginx to handle GET, POST and PUT on the same REST URI


The use case: there is a REST endpoint, and the incoming requests to it may use any HTTP method.

Now the requirement is to configure nginx to handle such scenarios.

Incoming URL: https://10.29.17.29:9443//api-gw/hello2/1.0/oooooogggggggggg


This URL might be requested with GET or POST.

We can define the following nginx configuration for this purpose.


upstream internallbgwhttps {
        server wso2.apimgw-cluster.com:8243;
}

server {

        listen 10.29.17.29:9443;
        server_name 10.29.17.29;

        ssl on;
        ssl_certificate     /usr/local/keys/self-ssl.crt;
        ssl_certificate_key /usr/local/keys/self-ssl.key;

        location ~ ^/api-gw/(.*)$ {

                access_log /usr/local/whp/nginx/conf.d/logs/external-api-all-gw-access.log main;

                # Re-route GET requests to the dedicated /gw-get/ location
                if ($request_method = GET) {
                        rewrite ^/api-gw/(.*)$ /gw-get/$1$is_args$args last;
                }

                proxy_set_header X-Forwarded-Host $host;
                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;

                proxy_pass https://internallbgwhttps/$1;
                proxy_redirect https://internallbgwhttps/(.*) https://10.29.17.29:9443/$1;
        }

        location ~ ^/gw-get/(.*)$ {

                access_log /usr/local/whp/nginx/conf.d/logs/external-api-get-gw-access.log main;

                proxy_set_header X-Forwarded-Host $host;
                proxy_set_header X-Forwarded-Server $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;

                proxy_pass https://internallbgwhttps/$1$is_args$args;
                proxy_redirect https://internallbgwhttps/(.*) https://10.29.17.29:9443/$1;
        }
}

Thursday, June 25, 2015

Some commonly used PostgreSQL commands



The login command to access a PostgreSQL database has the form psql -U <username> <database>. For example:

psql -U postgresadmin postgres


List all the databases on the server

\l

List all the tables in a schema

\dt


Executing a SQL file against a database

CREATE DATABASE userdb; 
psql -U postgresadmin -f /usr/local/script/userdb/postgresql.sql -d userdb;

Wednesday, June 24, 2015

How to handle LDAP certificate exceptions with WSO2 Carbon server

WSO2 Identity Server can be configured to use an external LDAP as the primary or a secondary user store. You can follow the documentation to learn more about configuring LDAP.


After configuring the LDAP, you may experience the following handshake error during server startup.

TID: [0] [IS] [2015-06-24 08:29:51,120] ERROR {org.wso2.carbon.user.core.ldap.LDAPConnectionContext} -  Error obtaining connection.Trying again to get connection...  {org.wso2.carbon.user.core.ldap.LDAPConnectionContext}
javax.naming.CommunicationException: ldap.wso2.org:3269 [Root exception is javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target]
        at com.sun.jndi.ldap.Connection.(Connection.java:226)
        at com.sun.jndi.ldap.LdapClient.(LdapClient.java:136)
        at com.sun.jndi.ldap.LdapClient.getInstance(LdapClient.java:1608)
        at com.sun.jndi.ldap.LdapCtx.connect(LdapCtx.java:2698)
        at com.sun.jndi.ldap.LdapCtx.(LdapCtx.java:316)
        at com.sun.jndi.ldap.LdapCtxFactory.getUsingURL(LdapCtxFactory.java:193)
        at com.sun.jndi.ldap.LdapCtxFactory.getUsingURLs(LdapCtxFactory.java:211)
        at com.sun.jndi.ldap.LdapCtxFactory.getLdapCtxInstance(LdapCtxFactory.java:154)
        at com.sun.jndi.ldap.LdapCtxFactory.getInitialContext(LdapCtxFactory.java:84)
        at javax.naming.spi.NamingManager.getInitialContext(NamingManager.java:684)
        at javax.naming.InitialContext.getDefaultInitCtx(InitialContext.java:307)
        at javax.naming.InitialContext.init(InitialContext.java:242)
        at javax.naming.InitialContext.(InitialContext.java:216)
        at javax.naming.directory.InitialDirContext.(InitialDirContext.java:101)
        at org.wso2.carbon.user.core.ldap.LDAPConnectionContext.getContext(LDAPConnectionContext.java:160)

To solve this issue, you have to import the public key of the LDAP server into the client-truststore.jks (CARBON_HOME/repository/resources/security).

You can obtain the LDAP server's public certificate by executing the following command.

echo -n | openssl s_client -connect ldap.wso2.org:3269 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > ldapserver.crt

Then you can import the .crt file into client-truststore.jks to be used by WSO2 Carbon products.

keytool -import -trustcacerts -alias ldapcert -file ldapserver.crt -keystore client-truststore.jks

You can validate the imported certificate by running the following command.

 keytool -list -keystore client-truststore.jks -alias ldapcert


Saturday, June 6, 2015

How to create web app specific custom log file in WSO2 carbon platform



Use case

We need a custom log file for the web applications deployed into WSO2 Application Server 5.2.1.


Prerequisites

Steps

You can get a basic idea of log4j 1.2 by going through the following link.

You can create the log file from the code like this.



As an alternative, you can prepare your own log4j.properties file for the web application and configure it to load the configurations from the file. I will be explaining that approach.



I am going to modify the web application which can be found in AS_HOME/samples/Jaxws-Jaxrs/jaxrs_basic.

I made the following modifications to the web app:
  1. Modified the pom.xml file to add the commons-logging and log4j dependencies, and changed the scope to provided for the cxf, ws.rs and commons client dependencies.
  2. Removed the "packagingExcludes" option to make sure that the jars related to log4j are packed into the WEB-INF/lib folder. We pack them into the lib folder so that the log4j.properties defined for the web application is loaded; otherwise logging falls back to the log4j.properties defined in the Carbon environment.
  3. Updated the "maven-antrun-plugin" to copy the "log4j.properties" file into target/classes in the web app war file.
You can view the pom file over here.


Once you build this maven file, you can deploy it into Application Server and check for the log file in the custom path that you defined in log4j.properties file.
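For reference, a minimal log4j.properties along these lines would route the web app's logs to a dedicated file (the file path, appender name, level and pattern below are placeholder assumptions, not the sample's actual values):

```properties
# Hypothetical log4j 1.2 configuration for the web app.
# The root logger writes INFO and above to a dedicated rolling file.
log4j.rootLogger=INFO, WEBAPP_LOG

log4j.appender.WEBAPP_LOG=org.apache.log4j.RollingFileAppender
log4j.appender.WEBAPP_LOG.File=/var/log/webapp/jaxrs_basic.log
log4j.appender.WEBAPP_LOG.MaxFileSize=10MB
log4j.appender.WEBAPP_LOG.MaxBackupIndex=5
log4j.appender.WEBAPP_LOG.layout=org.apache.log4j.PatternLayout
log4j.appender.WEBAPP_LOG.layout.ConversionPattern=%d{ISO8601} [%t] %-5p %c - %m%n
```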

You can check for logs by hitting the endpoint with the following URL.


The source code for this web app can be found here.







Monday, May 11, 2015

How to use the original incoming message to serve 2 different requests in WSO2 ESB



I am going to explain a typical scenario where you have to manipulate the incoming message to a WSO2 ESB proxy service.

The proxy service is going to receive the following message as a SOAP request.


This message should be sent to two endpoints:
- Student Records department - the entire request message should be sent to this endpoint.
- Finance department - only interested in the contents of the "Payment" element.

The proxy service is expected to perform the following operations.

First, extract the Payment part, create a payload from it, and send it to the finance endpoint.

Then, remove the Payment part by applying an XSLT transformation, and send the result to the Student Records department.




This proxy service performs the following operations:
  1. Uses the Enrich mediator to copy the complete envelope of the incoming SOAP message.
  2. The Property mediator extracts the PAYMENT element and its child elements.
  3. The extracted Payment element is used to create a new SOAP payload with the PayloadFactory mediator.
  4. If this operation is successful, we use the Enrich mediator to restore the original payload.
  5. On top of this enriched message, the XSLT mediator is used to remove the Payment element.
  6. This message can now be sent to the endpoint.



The XSLT script used to remove the Payment element can be found here. I have uploaded it to the registry, so the proxy can reference it by its key.
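As a rough Synapse sketch of those operations (the property names, endpoint keys and the FinanceRequest wrapper element are placeholders of my own, not the original configuration):

```xml
<inSequence>
   <!-- 1. Keep a copy of the original envelope -->
   <enrich>
      <source type="envelope" clone="true"/>
      <target type="property" property="ORIGINAL_PAYLOAD"/>
   </enrich>
   <!-- 2. Extract the Payment element as an OM node -->
   <property name="PAYMENT" expression="//Payment" type="OM"/>
   <!-- 3. Wrap the extracted element in a new payload for finance -->
   <payloadFactory media-type="xml">
      <format>
         <FinanceRequest xmlns="">$1</FinanceRequest>
      </format>
      <args>
         <arg evaluator="xml" expression="$ctx:PAYMENT"/>
      </args>
   </payloadFactory>
   <send>
      <endpoint key="FinanceEP"/>
   </send>
   <!-- 4. Restore the original envelope -->
   <enrich>
      <source type="property" property="ORIGINAL_PAYLOAD"/>
      <target type="envelope"/>
   </enrich>
   <!-- 5. Strip the Payment element, then send to student records -->
   <xslt key="removePaymentXslt"/>
   <send>
      <endpoint key="StudentRecordsEP"/>
   </send>
</inSequence>
```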







Saturday, May 9, 2015

How to enable nginx sticky module in Red-Hat version 6.5


Nginx seems to be the most effective software load balancer at the moment. It can be used to front several WSO2 products, alongside other load balancers.

When you are using a load balancer with WSO2 products, the main concern is preserving session affinity between the requests of a single user. In a clustered environment, sessions are not replicated throughout the cluster, except in some scenarios: state such as SSO and OAuth information is shared across the cluster through Hazelcast-based caching, so sticky sessions are not needed for those. But admin service invocations do depend on sticky sessions, because those sessions are not replicated in the cluster. Therefore, it is always recommended to use sticky sessions with WSO2 products.

The nginx sticky module doesn't come with the default nginx package. Therefore, you have to build nginx from source with the module included.



Step 1

Download nginx source from here.

Step 2

Download nginx sticky-module source from here.

Step 3

Stop the currently running nginx service and uninstall it from the system.

service nginx stop

yum remove nginx


If you had installed it from source, you have to remove the files manually. If you used the default installation locations during the installation, you can use this command.

sudo rm -rf /usr/local/nginx /usr/sbin/nginx /etc/nginx

Step 4

Install the dependencies with yum. If you are not allowed to use yum, you can also install them manually from source.

yum install gcc gcc-c++ pcre pcre-devel openssl openssl-devel zlib zlib-devel

Step 5

Now we are going to build the nginx source with the sticky module. To do that, extract the nginx source and the sticky module into your preferred locations, navigate to the nginx source's root directory and configure it.

./configure --user=nginx --group=nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --with-http_ssl_module --add-module=

If you had installed the zlib or pcre packages from source, you have to add their relevant paths to the ./configure command. Ex. --with-pcre=/home/ec2-user/pcre-8.32 --with-zlib=/home/ec2-user/zlib-1.2.8

Then build and install nginx.

make
sudo make install

Step 6

Copy the nginx startup script from here to the /etc/init.d/nginx file.

Change the permission level of that file.

chmod a+x /etc/init.d/nginx

Enable nginx to start at boot by running the following command.

chkconfig nginx on
Step 7
Now you can run nginx with sticky sessions enabled.
service nginx start