How to set up VFS transport in WSO2 ESB with Samba

Environment: WSO2 ESB 4.8.1, Samba 4.1.11, Ubuntu 14.10

Install Samba:

apt-get update
apt-get install samba

Configure two Samba shares in /etc/samba/smb.conf, named SambaShareIn and SambaShareOut to match the VFS URIs used later:

[SambaShareIn]
  path = /tmp/samba/in
  available = yes
  valid users = deep
  read only = no
  browseable = yes
  public = yes
  writable = yes
  guest ok = no

[SambaShareOut]
  path = /tmp/samba/out
  available = yes
  valid users = deep
  read only = no
  browseable = yes
  public = yes
  writable = yes
  guest ok = no
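The share directories must exist and be writable by the Samba user. A minimal setup, assuming the paths above:

```shell
# Create the share directories and hand them over to user 'deep'
sudo mkdir -p /tmp/samba/in /tmp/samba/out
sudo chown -R deep:deep /tmp/samba

# Check smb.conf syntax, then reload the Samba daemon (Ubuntu 14.10)
testparm -s
sudo service smbd restart
```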

Set a Samba password for user deep:

smbpasswd -a deep
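Before wiring the shares into the ESB, you can verify they are reachable with smbclient:

```shell
# List the contents of the input share as user 'deep' (password is prompted)
smbclient //localhost/SambaShareIn -U deep -c 'ls'
```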

Enable the VFS transport (both the transport listener and the transport sender) in the ESB by uncommenting the following lines in repository/conf/axis2.xml:


<transportReceiver name="vfs" class="org.apache.synapse.transport.vfs.VFSTransportListener"/>

<transportSender name="vfs" class="org.apache.synapse.transport.vfs.VFSTransportSender"/>
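Note: for smb:// URIs, the Commons VFS layer relies on the jCIFS library. If it is not already on the ESB's classpath, the jCIFS JAR can be dropped into the lib directory (the version number below is just an example):

```shell
# Copy the jCIFS client library into the ESB's component lib directory
cp jcifs-1.3.17.jar wso2esb-4.8.1/repository/components/lib/
```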

Now you can create a VFS-enabled ESB proxy, based on the standard StockQuoteProxy sample:

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse" name="StockQuoteProxy" transports="vfs">
   <target>
      <inSequence>
         <header name="Action" value="urn:getQuote"/>
      </inSequence>
      <endpoint>
         <address uri="http://localhost:9000/services/SimpleStockQuoteService"/>
      </endpoint>
      <outSequence>
         <property name="transport.vfs.ReplyFileName"
                   expression="fn:concat(fn:substring-after(get-property('MessageID'), 'urn:uuid:'), '.xml')"
                   scope="transport"/>
         <property name="OUT_ONLY" value="true"/>
         <send>
            <endpoint>
               <address uri="vfs:smb://deep:deep@localhost/SambaShareOut/reply.xml"/>
            </endpoint>
         </send>
      </outSequence>
   </target>
   <parameter name="transport.PollInterval">5</parameter>
   <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
   <parameter name="transport.vfs.FileURI">vfs:smb://deep:deep@localhost/SambaShareIn</parameter>
   <parameter name="transport.vfs.MoveAfterProcess">vfs:smb://deep:deep@localhost/SambaShareOut</parameter>
   <parameter name="transport.vfs.MoveAfterFailure">vfs:smb://deep:deep@localhost/SambaShareOut</parameter>
   <parameter name="transport.vfs.FileNamePattern">.*\.xml</parameter>
   <parameter name="transport.vfs.ContentType">text/xml</parameter>
   <parameter name="transport.vfs.ActionAfterFailure">MOVE</parameter>
</proxy>

Now copy a SOAP message (test.xml) to “smb://deep:deep@localhost/SambaShareIn”. The ESB will poll for new files with the “.xml” extension and send them to the given service. The response will be copied to “smb://deep:deep@localhost/SambaShareOut”.


<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
    <soapenv:Body>
        <m0:getQuote xmlns:m0="http://services.samples">
            <m0:request>
                <m0:symbol>IBM</m0:symbol>
            </m0:request>
        </m0:getQuote>
    </soapenv:Body>
</soapenv:Envelope>
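The file can be dropped into the input share with smbclient, for example:

```shell
# Upload test.xml to the SambaShareIn share (user%password given on the command line)
smbclient //localhost/SambaShareIn -U deep%deep -c 'put test.xml'
```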

How to set a Tomcat Filter in WSO2 Servers

Create your filter JAR, place it on the server's classpath, and update the Carbon Tomcat web.xml so it picks up the new filter.


<!-- Filter implementation -->
<!-- Filter mapping -->
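A minimal sketch of the two web.xml entries; the filter name and class (org.example.AuditFilter) are hypothetical examples, and the url-pattern here applies the filter to every request:

```xml
<!-- Filter implementation: class name is a hypothetical example -->
<filter>
    <filter-name>AuditFilter</filter-name>
    <filter-class>org.example.AuditFilter</filter-class>
</filter>

<!-- Filter mapping: run the filter on all request paths -->
<filter-mapping>
    <filter-name>AuditFilter</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```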

WSO2Con 2013 London

WSO2Con London

Event picture


WSO2 Carbon 4 Released.


WSO2 Carbon

How to build WSO2 Carbon 4.0.0 from source

First, build the WSO2 Orbit bundles for Carbon 4.0.0.

Take a checkout from the WSO2 Carbon Orbit 4.0.0 tag:

svn co

Build using Maven 3:

mvn clean install

Now you can build the Carbon kernel.

Take a checkout from the Carbon Kernel 4.0.0 tag:

svn co

Now build using Maven 3:

mvn clean install

How to setup Apache Flume

Apache Flume is a distributed log collection system. Flume can sink collected logs to local file systems and to HDFS.

Flume has three components: the Master, the Collector, and the Agent. The Master coordinates the nodes in the log cluster. The Collector receives logs from agents and stores them; Flume can sink logs to different file systems, and users can develop their own sink plugins to support other log storage systems. The Agent extracts logs and pushes them to a Collector.

The following configuration makes Flume tail a log file and push the entries to a collector that writes them to local storage.

Start Master

./flume master

Start Collector

Sink to Local File system

./flume node -1 -n dump -c 'dump: collectorSource() | collectorSink("/tmp/flume/collected", "server");' -s

Sink to HDFS file system

./flume node -1 -n dump -c 'dump: collectorSource() | collectorSink("hdfs://node0:9000/flume/collected", "server");' -s

Start an agent that tails the given log file:

./flume node_nowatch -1 -s -n dump -c 'dump:tail("/home/hadoop/flume_log_gen_server/wso2as-4.5.0-SNAPSHOT/repository/logs/wso2carbon.log") | agentBESink("node0");'

Application Development with WSO2 Relational Storage Service ( WSO2 RSS )

WSO2 Relational Storage Service is a data storage service provided by the WSO2 StratosLive PaaS. WSO2 RSS supports MySQL and Amazon RDS as the back-end data store.

Creating databases with WSO2 RSS is simple: the StratosLive Data Server provides an RSS user interface for adding and managing databases.

Steps to create a database using WSO2 RSS:

1. Add a database.

2. Create a database user and add the user to a database privilege group.

3. Create tables / manage data using the WSO2 RSS DB console.

RSS-based data stores are accessible within the StratosLive PaaS.

Users can use standard Java data-access methods, such as JDBC, to access RSS data stores.
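As a sketch, an RSS-hosted MySQL database can be queried over plain JDBC; the host, database name, credentials, and table below are hypothetical placeholders, not values from the sample:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RssClient {

    // Build the JDBC URL for an RSS-hosted MySQL database (default port 3306).
    static String jdbcUrl(String host, String database) {
        return "jdbc:mysql://" + host + ":3306/" + database;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical RSS host, database, and credentials.
        String url = jdbcUrl("rss1.stratoslive.wso2.com", "wso2con_db");
        try (Connection con = DriverManager.getConnection(url, "deep_user", "secret");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT name FROM attendee")) {
            // Print one column from each row of the (hypothetical) table.
            while (rs.next()) {
                System.out.println(rs.getString("name"));
            }
        }
    }
}
```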

The WSO2ConRSS application is a webapp deployed in the StratosLive Application Server; it uses an RSS-based data store to retrieve its data. The source code for this sample is available in the OT svn.