How to set up VFS transport in WSO2 ESB with Samba

Environment: WSO2 ESB 4.8.1, Samba 4.1.11, Ubuntu 14.10

Install Samba:

apt-get update
apt-get install samba

Configure two Samba shares:
/etc/samba/smb.conf

[SambaShareIn]
  path = /tmp/samba/in
  available = yes
  valid users = deep
  read only = no
  browseable = yes
  public = yes
  writable = yes
  guest ok = no

[SambaShareOut]
  path = /tmp/samba/out
  available = yes
  valid users = deep
  read only = no
  browseable = yes
  public = yes
  writable = yes
  guest ok = no
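
The share paths above must exist before smbd will export them. A minimal sketch, assuming the local user deep already exists (the ownership step is skipped otherwise):

```shell
# Create the directories backing the two shares
mkdir -p /tmp/samba/in /tmp/samba/out
# Hand them to the Samba user (assumes a local user "deep")
if id deep >/dev/null 2>&1; then
  chown -R deep:deep /tmp/samba/in /tmp/samba/out
fi
# Restart Samba so the new shares are picked up:
#   service smbd restart
```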

Set a password for the user deep:

smbpasswd -a deep

Enable the VFS transport (transport sender and listener) in the ESB:

$ESB_HOME/repository/conf/axis2/axis2.xml

<transportReceiver name="vfs" class="org.apache.synapse.transport.vfs.VFSTransportListener"/>

<transportSender name="vfs" class="org.apache.synapse.transport.vfs.VFSTransportSender"/>

Now you can create a VFS-enabled ESB proxy:

<?xml version="1.0" encoding="UTF-8"?>
<proxy xmlns="http://ws.apache.org/ns/synapse"
       name="VFSSMB"
       transports="vfs"
       startOnLoad="true"
       trace="disable">
   <description/>
   <target>
      <endpoint>
         <address uri="http://localhost:9000/services/SimpleStockQuoteService"
                  format="soap12"/>
      </endpoint>
      <outSequence>
         <property name="transport.vfs.ReplyFileName"
                   expression="fn:concat(fn:substring-after(get-property('MessageID'), 'urn:uuid:'), '.xml')"
                   scope="transport"/>
         <property name="OUT_ONLY" value="true"/>
         <send>
            <endpoint>
               <address uri="vfs:smb://deep:deep@localhost/SambaShareOut/reply.xml"/>
            </endpoint>
         </send>
      </outSequence>
   </target>
   <parameter name="transport.PollInterval">5</parameter>
   <parameter name="transport.vfs.ActionAfterProcess">MOVE</parameter>
   <parameter name="transport.vfs.FileURI">vfs:smb://deep:deep@localhost/SambaShareIn</parameter>
   <parameter name="transport.vfs.MoveAfterProcess">vfs:smb://deep:deep@localhost/SambaShareOut</parameter>
   <parameter name="transport.vfs.MoveAfterFailure">vfs:smb://deep:deep@localhost/SambaShareOut</parameter>
   <parameter name="transport.vfs.FileNamePattern">.*\.xml</parameter>
   <parameter name="transport.vfs.ContentType">text/xml</parameter>
   <parameter name="transport.vfs.ActionAfterFailure">MOVE</parameter>
</proxy>

Now copy a SOAP message (test.xml) to the location “smb://deep:deep@localhost/SambaShareIn”. The ESB polls that location for new files with the extension “.xml” and sends them to the given service. The response is copied to the location “smb://deep:deep@localhost/SambaShareOut”.

test.xml

<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:wsa="http://www.w3.org/2005/08/addressing">
    <soapenv:Body>
        <m0:getQuote xmlns:m0="http://services.samples">
            <m0:request>
                <m0:symbol>IBM</m0:symbol>
            </m0:request>
        </m0:getQuote>
    </soapenv:Body>
</soapenv:Envelope>
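
One way to drop the message into the input share is smbclient. This is a sketch, assuming the smbclient package is installed and the password for deep was set to deep:

```shell
# Write the sample SOAP request to a local file
cat > test.xml <<'EOF'
<?xml version='1.0' encoding='UTF-8'?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
    <soapenv:Body>
        <m0:getQuote xmlns:m0="http://services.samples">
            <m0:request>
                <m0:symbol>IBM</m0:symbol>
            </m0:request>
        </m0:getQuote>
    </soapenv:Body>
</soapenv:Envelope>
EOF
# Upload it into the input share (skipped when smbclient is not installed)
if command -v smbclient >/dev/null 2>&1; then
  smbclient //localhost/SambaShareIn -U deep%deep -c "put test.xml" \
    || echo "upload failed - is smbd running and the password correct?"
fi
```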

How to set up a Tomcat Filter in WSO2 Servers

Create your filter JAR and update the Carbon Tomcat web.xml to pick up the new filter.

$CARBON_HOME/repository/conf/tomcat/carbon/WEB-INF/web.xml

<!-- Filter implementation -->
    <filter>
        <filter-name>SetCustomCookie</filter-name>
        <filter-class>com.piedpiper.CustomCookie</filter-class>
        <init-param>
            <param-name>mode</param-name>
            <param-value>DENY</param-value>
        </init-param>
    </filter>
<!-- Filter mapping -->
    <filter-mapping>
       <filter-name>SetCustomCookie</filter-name>
       <url-pattern>/*</url-pattern>
    </filter-mapping>

How to setup Apache Flume

Apache Flume is a distributed system for collecting and moving log data. Flume supports local file systems and HDFS as storage back-ends.

Flume has three components: Master, Collector, and Agent. The Master node coordinates the nodes in the log cluster. The Collector gathers the logs and handles storing them; Flume can sink logs to different file systems, and users can develop their own sink plugins to support other log storage systems. The Agent extracts the logs and pushes them to a Collector.

The following configuration lets Flume tail a log file and push the entries to a collector that writes them to local storage.

Start Master

./flume master
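
The node commands below assume the master is reachable. If the master runs on a separate host, point the nodes at it in conf/flume-site.xml — a sketch for Flume 0.9.x, where the host name node0 is only an example:

```xml
<configuration>
  <property>
    <name>flume.master.servers</name>
    <value>node0</value>
  </property>
</configuration>
```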

Start Collector

Sink to Local File system

./flume node -1 -n dump -c 'dump: collectorSource() | collectorSink("/tmp/flume/collected", "server");' -s

Sink to HDFS file system

./flume node -1 -n dump -c 'dump: collectorSource() | collectorSink("hdfs://node0:9000/flume/collected", "server");' -s

Start an agent that tails the given log file

./flume node_nowatch -1 -s -n dump -c 'dump:tail("/home/hadoop/flume_log_gen_server/wso2as-4.5.0-SNAPSHOT/repository/logs/wso2carbon.log") | agentBESink("node0");'

How to use the StratosLive Column (Family Data) Store Service

WSO2 CSS is a Column (Family Data) Store based on Apache Cassandra. WSO2 CSS can be deployed with any WSO2 Carbon based product, and it is available as a service in StratosLive, the PaaS offering of WSO2.

It is very easy to use CSS as a data store with widely available connectors such as the Java-based Hector and other Thrift-based connectors. StratosLive supports the Hector API to communicate with the Cassandra-based back-end CSS cluster. External applications can use the StratosLive PaaS column data store feature with any Cassandra connector.

StratosLive application developers have to use tenant credentials to authenticate the connection to the CSS data store. The tenant admin can create users and authorize them for data store access.

Check the full sample in OT SVN.

This sample creates a connection to StratosLive CSS as an external application. It writes random data to a StratosLive CSS keyspace, then reads it back and prints it to stdout.

Instructions to build and run the sample:

Take a copy of the source using svn.

Build the project with Maven

mvn clean install

Build the project with the dependency libraries bundled

mvn clean assembly:assembly -o

Execute the program

java -jar target/org.wso2.carbon.cassandra.examples-3.2.1-jar-with-dependencies.jar


Column (Family Data) Store Service in WSO2 StratosLive

StratosLive PaaS supports several internal data stores, such as the column (family data) store service and the relational data store service, as well as external data sources like Amazon DS and Amazon S3. Users can also reach external data sources via Web Services.

WSO2 introduced CSS in the StratosLive PaaS to support the web-scale data generated by users' deployed applications and by the PaaS itself.

WSO2 Stratos CSS is based on Apache Cassandra, modified to run on the WSO2 Carbon platform, which is an OSGi environment. Stratos CSS 1.0.0 is shipped with Stratos 1.5.1, and users can install it in WSO2 private cloud deployments. The CSS-related features can also be deployed with any standalone Carbon product with full functionality.

StratosLive has a separate CSS cluster deployed to store tenant keyspaces. The StratosLive Data Services Server (DSS) contains the user interfaces to manage keyspaces.

CSS is multi-tenanted, and it works with users in private Stratos deployments as well.

WSO2 CSS 1.0.0 features:

1. Manage (create / delete / modify) keyspaces

2. Share keyspaces among users

3. Create indexes

4. Monitor keyspaces

WSO2 CSS has an easy-to-use interface to manage keyspaces, and users can also manage external keyspaces with it; WSO2 CSS can serve as a general Cassandra management user interface.

  • List Keyspaces

  • List Keyspace information

  • Create a Keyspace for a tenant

  • Create Column Family

  • Create Column and Set Indexes

  • Share Keyspace

WSO2 Stratos PaaS column (family) data support will improve with CSS-based data services and CQL support in upcoming CSS releases.