Friday, April 29, 2016

Sublime Text syntax highlighting for Java, Hive, and other formats, similar to Notepad++

This post shows how to get syntax highlighting for Hive and Pig in Sublime Text, similar to what Notepad++ offers.

Here are the steps if you are interested (you can skip 1 – 3 if by chance you already have the Package Control plugin installed):
1. Download Sublime Text (if you are not an administrator, download the portable version).
2. Go to https://packagecontrol.io/installation
3. Follow the Manual installation steps (the automated script was timing out for me; I assume a proxy filter was not letting it through).
4. Afterwards you might be prompted a couple of times to restart Sublime Text as various missing dependencies get installed.
5. Once that's done, go to Preferences > Package Control.
6. Select Install Package (6th item from the top, or use type-ahead).
7. You might then see a delay of a few seconds and a "package repository updating" message in the status bar.
8. A dialog box will appear with the packages available for installation; search for Apache Pig and Apache Hive and install them.

Scala IDE installation with Maven

1. Download Scala IDE from http://scala-ide.org/
2. Install Scala IDE by running the downloaded executable.
3. While opening Scala IDE you may run into JVM issues; to fix this, add JAVA_HOME and PATH to your environment variables.
4. To integrate Scala with Maven, follow the steps below:
        Open Eclipse --> Help --> Install New Software --> Add Repository
                              Maven for Scala - http://alchim31.free.fr/m2e-scala/update-site
     Once the categories are selected, click Next --> Next --> Finish.
     Restart Eclipse.
     Create a new Maven project by selecting File -> New -> Project -> Maven Project.
   Sometimes the Scala archetype is not visible there. To solve this, add the remote catalog to the Maven archetypes manually:
    Window -> Preferences -> Maven -> Archetypes -> Add Remote Catalog
Then you can find the scala-archetype, select it, and create the scala-maven project.
Run the Scala project and enjoy :-) (a minimal test class is sketched below)
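
As a quick sanity check that the new scala-maven project compiles and runs, here is a minimal sketch (the object name HelloMaven is just a placeholder, not something the archetype generates):

    // HelloMaven.scala - a placeholder main class to verify the project setup.
    object HelloMaven {
      def main(args: Array[String]): Unit = {
        // If this prints, the Scala compiler and run configuration work.
        println("scala-maven project is working")
      }
    }

Right-click the file and choose Run As > Scala Application to execute it.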

For your reference:
eclipse.ini configuration file (note that -vm must appear before -vmargs, since everything after -vmargs is passed straight to the JVM):
-startup
plugins/org.eclipse.equinox.launcher_1.3.0.v20140415-2008.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.win32.win32.x86_64_1.1.200.v20150204-1316
-vm
C:\Program Files (x86)\Java\jdk1.7.0_45\bin\javaw.exe
-vmargs
-Xmx2G
-Xms200m
-XX:MaxPermSize=384m

A second example eclipse.ini (this one from an Android ADT bundle):
-startup
plugins/org.eclipse.equinox.launcher_1.3.0.v20120522-1813.jar
--launcher.library
plugins/org.eclipse.equinox.launcher.win32.win32.x86_1.1.200.v20120522-1813
-product
com.android.ide.eclipse.adt.package.product
--launcher.XXMaxPermSize
256M
-showsplash
com.android.ide.eclipse.adt.package.product
--launcher.XXMaxPermSize
256m
--launcher.defaultAction
openFile
-vm
C:\Program Files\Java\jdk1.7.0_25\bin\javaw.exe
-vmargs
-Dosgi.requiredJavaVersion=1.6
-Xms40m
-Xmx768m
-Declipse.buildId=v21.0.0-531062

Apache Spark installation on Windows

Download Scala IDE to run Scala applications: http://scala-ide.org/

Now we'll see the installation steps:
  • Install Java 7 or later. Set the JAVA_HOME and PATH environment variables.
  • Download Scala 2.10 and install it. Set SCALA_HOME and add %SCALA_HOME%\bin to the PATH environment variable. To test whether Scala is installed, run scala -version from a command prompt (a quick script check is sketched after this list).
  • Next comes Spark itself. Spark can be installed in two ways:
    •  Building Spark using SBT
    •  Using a prebuilt Spark package
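
To verify the Scala installation, a one-line script can also be run through the scala launcher (the file name hello.scala is just an example):

    // hello.scala - run from a command prompt with: scala hello.scala
    // Prints the Scala version the launcher picked up from SCALA_HOME.
    println(s"Scala ${scala.util.Properties.versionString} is installed and working")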

 Building Spark with SBT:
  • Download SBT and install it. Set SBT_HOME and add it to the PATH environment variable.
  • Download the source code from the Spark website for the Hadoop version you need.
  • Run the sbt assembly command to build the Spark package.
  • You also need to set the Hadoop version while building, for example:
         sbt -Pyarn -Phadoop-2.3 assembly

Using Spark Prebuilt Package:
  • Choose a Spark prebuilt package for Hadoop, i.e. Prebuilt for Hadoop 2.3/2.4 or later. Download and extract it to any drive, e.g. D:\spark-1.2.1-bin-hadoop2.3.
  • Set SPARK_HOME and add %SPARK_HOME%\bin to PATH in the environment variables.
  • Run spark-shell on the command line.
  • You'll get an error for winutils.exe:
      Though we aren't using Hadoop with Spark, the configuration still checks for the HADOOP_HOME variable. To overcome this error, download winutils.exe and place it in any location (e.g. D:\winutils\bin\winutils.exe).
  • Set HADOOP_HOME = D:\winutils in the environment variables.
  • Now re-run the command spark-shell and you'll see the Scala shell.
  • Sometimes the \tmp\hive folder lacks read and write permissions, and because of this you will be unable to create a SQLContext. To fix this issue, run the following from the command line inside the winutils folder:
          bin\winutils.exe chmod 777 \tmp\hive
               On an office workstation you may not have permission to change this, but if your installation points to the D: drive you can grant access to \tmp\hive there.
  • For Spark UI : open http://localhost:4040/ in browser
  • To test that the setup succeeded, you can run a small example in the shell (see the sketch after this list).
  • It will execute the program and return the result.
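
A minimal word-count sketch to paste into spark-shell. The README path assumes the extract location used above, sc is the shell's built-in SparkContext, and sqlContext is created automatically by the shell in Spark 1.3 and later:

    // Count word occurrences in the bundled README (path is an assumption).
    val lines = sc.textFile("D:/spark-1.2.1-bin-hadoop2.3/README.md")
    val counts = lines.flatMap(_.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)
    counts.take(10).foreach(println)

    // Quick check that SQLContext works after the \tmp\hive permission fix.
    val df = sqlContext.createDataFrame(Seq((1, "one"), (2, "two")))
    df.show()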
Enjoy the installation process :-)