When you upload a program to an Arduino UNO board, you may get an error saying
“Serial port ‘/dev/ttyACM0’ not found. Did you select the right one from the Tools > Serial Port menu?”
This error occurs when the user does not have write privileges for the device /dev/ttyACM0. You can fix this by granting your user the necessary rights, or simply by opening up all permissions on the device (chmod 777 /dev/ttyACM0 [this is a hack ;)]).
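A cleaner fix than chmod 777 is to add your user to the group that owns the serial device. On Debian/Ubuntu systems this group is usually dialout; check your own system before running the commands:

```shell
# Check which group owns the serial device (usually "dialout" on Ubuntu)
ls -l /dev/ttyACM0

# Add your user to that group; takes effect after you log out and back in
sudo usermod -a -G dialout $USER
```

Unlike the chmod hack, this change survives replugging the board and rebooting.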
- Download the WebSphere artifact
- Install WebSphere
- Create a profile
- Deploy the Axis2 WAR
- Alter the class loading in Axis2
- Alter the class loading in WebSphere
- Access the service via the web
Download SoapUI.
Change the following JVM options in soapui.sh to run on the new 64-bit Ubuntu version.
Remove and reload the thinkpad_acpi module with fan control enabled, then set the fan level to disengaged.
rmmod thinkpad_acpi
modprobe thinkpad_acpi fan_control=1
echo "level disengaged" > /proc/acpi/ibm/fan
Now you will hear the fan run at full throttle. You can check the system temperature and fan speed with the “sensors” tool, which comes with the lm-sensors package.
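The commands above assume lm-sensors is already installed. On Debian/Ubuntu the package can be installed and checked as follows (the sensors-detect step is a standard extra, not from the text above):

```shell
# Install the lm-sensors package and detect available sensor chips
sudo apt-get install lm-sensors
sudo sensors-detect        # answer the prompts; the defaults are usually fine

# Show current temperatures and fan speeds
sensors

# The fan state can also be read directly from the thinkpad_acpi interface
cat /proc/acpi/ibm/fan
```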
Due to problems with class loader isolation, the Maven Ant Tasks cannot be run by the Maven AntRun Plugin with Maven 3.0. See MANTRUN-123 for details.
A temporary workaround seems to be adding the reverseLoader="true" attribute to the typedef:
<typedef resource="org/apache/maven/artifact/ant/antlib.xml" uri="urn:maven-artifact-ant" reverseLoader="true"/>
Flume has three components: the Master, the Collector, and the Agent. The master node coordinates the nodes in the log cluster. The collector receives logs from agents and stores them; Flume can sink logs to different file systems, and users can develop their own sink plugins to support other log storage systems. The agent extracts logs and pushes them to a collector.
The following configuration makes Flume tail a log file and push the output to a collector that writes the logs to local storage.
Sink to the local file system
./flume node -1 -n dump -c "dump: collectorSource() | collectorSink(\"/tmp/flume/collected\", \"server\");" -s
Sink to HDFS file system
./flume node -1 -n dump -c "dump: collectorSource() | collectorSink(\"hdfs://node0:9000/flume/collected\", \"server\");" -s
Start the agent with a tail on the given log file
./flume node_nowatch -1 -s -n dump -c 'dump:tail("/home/hadoop/flume_log_gen_server/wso2as-4.5.0-SNAPSHOT/repository/logs/wso2carbon.log") | agentBESink("node0");'
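With the collector and agent running, you can verify that events are flowing end to end by appending to the tailed file and checking the collector's sink directory. This is a rough sanity check, not Flume's own test procedure; the paths are the ones used in the commands above:

```shell
# Append a test line to the log file the agent is tailing
echo "flume test event $(date)" >> /home/hadoop/flume_log_gen_server/wso2as-4.5.0-SNAPSHOT/repository/logs/wso2carbon.log

# After the collector flushes, the event should appear in the sink directory
ls -l /tmp/flume/collected
grep -r "flume test event" /tmp/flume/collected
```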
Download Hadoop 1.0.0 and set it up as a multi-node cluster.
Download Apache Pig 0.9.1 and extract it.
Export HADOOP_HOME, pointing to the directory where you installed Apache Hadoop.
Start Apache Pig in MapReduce mode.
You will get the Grunt prompt.
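The steps above can be sketched as follows; the install paths are assumptions, so adjust them to wherever you extracted Hadoop and Pig:

```shell
# Assumed install locations (change to match your setup)
export HADOOP_HOME=/opt/hadoop-1.0.0
export PIG_HOME=/opt/pig-0.9.1
export PATH=$PATH:$PIG_HOME/bin

# Start Pig in MapReduce mode, which connects to the Hadoop cluster
pig -x mapreduce
```

If the cluster is reachable, Pig drops you into the grunt> shell, from which you can run Pig Latin statements against HDFS.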