Creating a WSO2 Application Server cluster using AWS EC2 and Nginx as a load balancer


The WSO2 Application Server is used for hosting, deploying, and managing applications, and is built on the WSO2 Carbon platform. Detailed documentation of the Application Server is available on the WSO2 documentation site.

As is the case with other WSO2 products, multiple instances of the Application Server can be installed in a cluster. When clustered, the work is shared between the instances, although to clients it appears as though there is only a single instance of the Application Server (AS).

In this tutorial, we will be using AWS EC2 instances to create an AS cluster. The EC2 instances will be Ubuntu based. We’ll be creating a cluster of 3 AS nodes, using Nginx for load balancing and MySQL for storing the user management and registry data of the AS. More information on load balancing can be found in the Nginx documentation.

Creating the EC2 instances

Create 5 EC2 instances. For the purposes of this tutorial, you can opt to use free tier instances (t2.micro). However, since the recommended memory for the AS is 2GB, the t2.small type is preferable. You can follow the steps given below to create the required instances.

  • In ‘Choose AMI’, select the AMI (Amazon Machine Image) you want. In this example, we will be using the Ubuntu 14.04 LTS server.
  • In ‘Choose Instance Type’, select the type of the instance you want. It is preferable to select an instance with at least 2GB of memory (t2.small). If you are using t2.micro, it is better to set the memory limit to 512MB in the <PRODUCT-HOME>/bin/ file. You can set this by changing the “-Xmx” option to “-Xmx512m”.
  • In ‘Configure Instance’, select the number of instances you need. You can either create all 5 instances now, or create 4 instances and later make a copy of the configured Worker node, editing the relevant configuration on the copy.
  • In ‘Configure Security Group’, create a new security group for the instances. For this example, we’ll just enable all inbound traffic to the instances by selecting ‘All traffic’ for the type and ‘Anywhere’ for the source.
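As a concrete illustration of the -Xmx change mentioned above, the edit can be scripted with sed. The file name and variable below are stand-ins for demonstration only, not the actual AS startup script under <PRODUCT-HOME>/bin/:

```shell
# Illustrative only: a stand-in file mimicking a JVM options line in the
# AS startup script; the real script lives under <PRODUCT-HOME>/bin/.
printf '%s\n' 'JVM_OPTS="-Xms256m -Xmx1024m"' > /tmp/as-startup-demo.sh

# Lower the maximum heap to 512MB for a t2.micro instance.
sed -i 's/-Xmx1024m/-Xmx512m/' /tmp/as-startup-demo.sh

cat /tmp/as-startup-demo.sh
# prints: JVM_OPTS="-Xms256m -Xmx512m"
```

The same substitution, pointed at the real startup script, applies the memory limit described in the instance-type step.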

When you select ‘Review & Launch’ after configuring, you will be prompted to either create or select a key pair. You can either select an existing key pair you have or create a new pair. Whichever way you choose, make sure that you have access to that key pair since this will be used to connect to the instances you created, using SSH. In AWS EC2, there is no way to recover the key pair or download another copy; if you lose the key, you will lose access to your instances. Therefore it is important to keep the key file safe. 

Wait until AWS finishes setting up your new instances, then use a terminal to log in to them. Run the following command to access a particular instance. Assuming the key file is named ‘wso2as.pem’, replace <path-to-key-file> with the actual path to the key file and IP-of-instance with the public IP or public DNS name (provided in the instances dashboard of the AWS EC2 console).

ssh -i <path-to-key-file>/wso2as.pem ubuntu@IP-of-instance

You can assign names for the 5 instances through the EC2 dashboard. Name one instance Nginx and another MySQL.

The cluster we will be creating takes the form shown in the image below.

WSO2 Application Server cluster


This type of clustering is called Worker/Manager clustering. The AS instances in the Manager cluster are called Manager nodes and the instances in the Worker cluster are referred to as Worker nodes. The Worker nodes serve incoming requests, while the Manager nodes are used to deploy and configure artifacts on the AS. More information on clustering WSO2 products can be found in the WSO2 clustering documentation.

Installing & Configuring Nginx

To install Nginx on your EC2 instance, you can use the following commands. You only need to install Nginx in one of the EC2 instances.

sudo apt-get update
sudo apt-get install nginx

You can confirm that Nginx is properly installed by typing the public IP address of the instance into a browser or by using curl. Both should serve the default “Welcome to nginx!” page if Nginx is properly installed and running.

You can check the currently active listening TCP sockets using the following command. If Nginx is running, the output should contain an entry for it, indicating that it is listening for incoming requests on port 80. (Note: without root privileges, the command will not reveal process IDs or program names for processes outside the user’s scope.)

sudo netstat -ntlp

To configure Nginx as the load balancer for our cluster, you can follow the instructions and settings given in the WSO2 documentation on setting up Nginx.

Points to note when following the instructions given in the above link:

  • Note the “ip_hash” directive in the upstream group in step 3. This tells Nginx to use the ip-hash mechanism of load balancing. In this method, the IP address of the client is used as a hash key to determine which server should serve the request. The advantage of this method is that all requests from a particular client get served by the same server, which eliminates the need to persist users’ session data across multiple servers. If the server that was serving a client goes down, another server will take over serving its requests. More on load balancing with Nginx can be found in the Nginx documentation.
  • Note the “ssl_certificate” and “ssl_certificate_key” directives in the configurations shown in steps 3 and 4. For this, you need to give the path to the SSL certificate and key you create in the last section of the page, “Create SSL certificates”.
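Putting the two bullet points together, the relevant part of the Nginx configuration looks roughly like the following sketch. The upstream name, server names, IP addresses, and certificate paths are examples, and 9443 is the AS’s default HTTPS port; adjust all of them to match your setup:

```nginx
upstream as_workers {
    ip_hash;                       # route each client to the same backend
    server 10.0.0.11:9443;         # Worker node 1 (example private IP)
    server 10.0.0.12:9443;         # Worker node 2 (example private IP)
}

server {
    listen 443 ssl;
    server_name as.example.com;                      # example worker hostname
    ssl_certificate     /etc/nginx/ssl/server.crt;   # from "Create SSL certificates"
    ssl_certificate_key /etc/nginx/ssl/server.key;

    location / {
        proxy_pass https://as_workers;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```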

Setting up the Database

For the cluster to work, it needs to have a central database to store the user management details and registry data. We will use a MySQL server running on a separate EC2 instance for this purpose. Install MySQL server on the instance using the following command.

sudo apt-get install mysql-server

Use the configurations given in the WSO2 clustering documentation to set up the required databases.
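As a sketch of what that setup involves — the database names below are examples, not the names mandated by the documentation:

```sql
-- Example registry and user-management databases
CREATE DATABASE regdb;
CREATE DATABASE userdb;

-- The table schemas ship with the AS distribution; they can be loaded with, e.g.:
--   mysql -u root -p regdb  < <PRODUCT_HOME>/dbscripts/mysql.sql
--   mysql -u root -p userdb < <PRODUCT_HOME>/dbscripts/mysql.sql
```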

In this particular example, the MySQL server is running on a separate EC2 instance. By default, MySQL server only allows connections from localhost. If you wish to connect to the server remotely, you will have to grant the permissions explicitly. You can follow the instructions in the MySQL documentation to add remote users.

Note: In step 3, a property called skip-networking is mentioned. However, in newer MySQL versions there is no such property; instead, the bind-address property is set to 127.0.0.1 by default, and you will need to change (or comment out) that setting to allow remote connections.

You will have to grant remote access privileges to all the nodes in the cluster.
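For example, granting one node remote access might look like the following. The database names, user name, password, and IP address are all illustrative; repeat the grants for each node in the cluster:

```sql
-- Allow the AS node at 10.0.0.11 to reach both databases remotely
CREATE USER 'wso2user'@'10.0.0.11' IDENTIFIED BY 'wso2pass';
GRANT ALL PRIVILEGES ON regdb.*  TO 'wso2user'@'10.0.0.11';
GRANT ALL PRIVILEGES ON userdb.* TO 'wso2user'@'10.0.0.11';
FLUSH PRIVILEGES;
```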

Preparing the nodes

Download a copy of the AS from the WSO2 website.

To get a copy of the AS onto the EC2 instances, I used the secure copy (scp) command, since downloading the product requires filling out a form first and the wget command didn’t work in this case. You can copy any file from your computer to an EC2 instance using the following command:

scp -i <path-to-keyfile>/wso2as.pem <path-to-AS-distribution>/<AS-distribution-zip> ubuntu@<ip-address-of-the-instance>:

The above command will copy the AS distribution file to the home folder of the user account “ubuntu” at the specified IP address. If you wish to copy it to some other location, you can give the path after the colon.

Optionally, you can try building the AS from source instead.

Extract the package using the unzip command. You might have to install unzip on the instance first (sudo apt-get install unzip).

unzip <path-to-AS-distribution>/<AS-distribution-zip>

You also need Oracle JRE 1.7 to run the AS. You can try wget for this, but it didn’t quite work for me, so I used scp to copy the JDK distribution as well (I used the full JDK, but just the runtime would suffice). Extract it and set the variables in the .bashrc file.

You can edit the .bashrc file using the following command:

nano ~/.bashrc

Append the following to the .bashrc file.

export JAVA_HOME=<path-to-extracted-jre-or-jdk>/jdk1.7.0_79
export PATH=${PATH}:${JAVA_HOME}/bin

After editing the .bashrc, you can reload the configurations using the following command:

source ~/.bashrc

Configuring the Manager Node(s)

For configuring the Manager nodes, you can follow the instructions given in the WSO2 clustering documentation.

Step 4 in the Manager node configuration deals with the configuration of a Subversion repository. This is important for the functioning of the cluster because this is where all the web apps hosted on the Application Server are stored. When a web app is uploaded to the AS through the management console, the app is committed to the specified SVN repo. The Worker nodes check this repo periodically and pull in any changes made since the last pull.

You can either use an existing repository or set up a new repository on an EC2 instance. I used an existing SVN repository of mine.

In addition to the above, you need the SVNKit library to sync the SVN repo across the nodes. To add this, download the SVNKit jar and place it in <PRODUCT_HOME>/repository/components/dropins.

You also need to place its dependency library in <PRODUCT_HOME>/repository/components/lib.

Additional details on the SVN-based deployment synchronizer can be found in the WSO2 documentation.
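For reference, the deployment synchronizer is configured in <PRODUCT_HOME>/repository/conf/carbon.xml. A sketch of the relevant element follows; the repository URL and credentials are placeholders:

```xml
<DeploymentSynchronizer>
    <Enabled>true</Enabled>
    <AutoCommit>true</AutoCommit>   <!-- true on Manager nodes, false on Workers -->
    <AutoCheckout>true</AutoCheckout>
    <RepositoryType>svn</RepositoryType>
    <SvnUrl>https://svn.example.com/repos/as</SvnUrl>
    <SvnUser>svnuser</SvnUser>
    <SvnPassword>svnpassword</SvnPassword>
    <SvnUrlAppendTenantId>true</SvnUrlAppendTenantId>
</DeploymentSynchronizer>
```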

Step 5 is important since Nginx uses the host name in the request to determine which server to route it to. Typing in the plain IP will not work, since that would just serve the default Nginx page (unless you have already configured your site/app as the default).
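For instance, if the Manager and Worker host names configured in Nginx are mgt.as.example.com and as.example.com (example names), the /etc/hosts file on the machine you browse from would map both to the Nginx instance’s public IP:

```
# /etc/hosts (host names and IP are examples)
54.xxx.xxx.xxx   mgt.as.example.com
54.xxx.xxx.xxx   as.example.com
```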

Configuring the Worker Node(s)

In order to configure the Worker nodes, you can follow the instructions given in the WSO2 clustering documentation.

Note that in step 4, the AutoCommit property on the Worker nodes is set to false.

In step 6, when configuring the other Worker nodes, apart from the localMemberPort property, you also have to change the localMemberHost property to reflect the IP address of the particular Worker node you are configuring.
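Both properties live in the clustering section of <PRODUCT_HOME>/repository/conf/axis2/axis2.xml. On each Worker node the entries would look roughly like the following; the IP and port values are examples:

```xml
<!-- Use this node's own private IP and its assigned membership port -->
<parameter name="localMemberHost">10.0.0.11</parameter>
<parameter name="localMemberPort">4000</parameter>
```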

Testing the Cluster

For testing the cluster, you can follow the instructions given in the WSO2 clustering documentation.

But for accessing the server and applications, use the host names we defined earlier in our /etc/hosts file.

To access the management console, use the management host name defined in /etc/hosts: https://<management-hostname>/carbon

To access an app deployed on the AS: https://<worker-hostname>/<url-of-the-app>

Here, the requests made to Nginx are directed to the relevant servers: if the management host name is requested, the request is directed to a Manager node; if a URL under the worker host name is requested, the request is routed to one of the Worker nodes.

