Hanumant’s Java Workshop

Turbo Charged Java Development!

Configuring DataSource, EntityManager, TransactionManager, DAO, and Service beans in a Spring Config File!

If you are learning JPA, don’t forget to check out this Sample Application

The title of this post looks like an SEO magnet, but that is truly what this post is about. I encounter this kind of configuration requirement often, but not often enough to remember all the pieces by heart. So I decided to document the complete chain of beans required, end to end, for an application that is not deployed in an EJB container.


<?xml version="1.0" encoding="windows-1252"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:tx="http://www.springframework.org/schema/tx"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx.xsd">

    <bean class="org.springframework.orm.jpa.support.PersistenceAnnotationBeanPostProcessor"/>
    <tx:annotation-driven/>

    <!-- This is a service layer bean that uses a DAO -->
    <bean id="WebETSManager" class="com.enthuware.webets.impl.SimpleWebETSManager" init-method="init">
        <constructor-arg index="0">
            <value>some value</value>
        </constructor-arg>
        <property name="webETSDAO" ref="WebETSDAO"/>
    </bean>

    <bean id="WebETSDAO" class="com.enthuware.webets.impl.SpringJPAWebETSDAO">
        <property name="entityManagerFactory" ref="myEmf"/>
    </bean>

    <bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
        <property name="entityManagerFactory" ref="myEmf"/>
        <property name="dataSource" ref="dataSource"/>
    </bean>

    <bean id="myEmf" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
        <property name="dataSource" ref="dataSource"/>
        <property name="jpaVendorAdapter">
            <bean class="org.springframework.orm.jpa.vendor.HibernateJpaVendorAdapter">
                <property name="showSql" value="false"/>
                <property name="generateDdl" value="true"/>
                <property name="databasePlatform" value="org.hibernate.dialect.MySQLDialect"/>
            </bean>
        </property>
        <property name="persistenceUnitName" value="webetsPU"/>
        <property name="persistenceUnitManager">
            <bean class="org.springframework.orm.jpa.persistenceunit.DefaultPersistenceUnitManager">
                <property name="defaultDataSource" ref="dataSource"/>
            </bean>
        </property>
        <property name="loadTimeWeaver">
            <bean class="org.springframework.instrument.classloading.InstrumentationLoadTimeWeaver"/>
        </property>
    </bean>

    <bean id="dataSource" class="org.apache.tomcat.jdbc.pool.DataSource">
        <property name="driverClassName" value="com.mysql.jdbc.Driver"/>
        <property name="url" value="jdbc:mysql://localhost:3306/mydb"/>
        <property name="username" value="userid"/>
        <property name="password" value="pwd"/>
        <property name="initialSize" value="2"/>
        <property name="maxActive" value="5"/>
        <property name="testWhileIdle" value="true"/>
        <property name="validationQuery" value="SELECT 1"/>
        <property name="validationInterval" value="120000"/>
        <property name="timeBetweenEvictionRunsMillis" value="120000"/>
    </bean>

</beans>
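Since the LocalContainerEntityManagerFactoryBean above names a persistence unit, a META-INF/persistence.xml must exist on the classpath for the DefaultPersistenceUnitManager to read. A minimal sketch (the unit name matches the config above; everything else is an assumption):

```xml
<persistence xmlns="http://java.sun.com/xml/ns/persistence" version="1.0">
    <persistence-unit name="webetsPU" transaction-type="RESOURCE_LOCAL">
        <!-- entity classes can be listed here, or discovered by classpath scanning -->
    </persistence-unit>
</persistence>
```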


The beans above are self-explanatory.

February 22, 2011 | Java

Debugging GWT project using Netbeans and Maven without the damn Plugins

My favourite Java editor is NetBeans and I love Maven. So when, some time ago, I was developing a GWT application, I was disappointed to learn that:

  1. The GWT directory structure was not the same as the standard Maven directory structure.
  2. Google hasn't provided any plugin for NetBeans.
  3. Third-party NetBeans plugins that claimed to get you started with GWT quickly were hopeless.

I tried GWT4NB and GWT Maven Plugin. After playing with these plugins for a couple of days, I just couldn’t get them to work the way I wanted. I wanted –

  1. Latest GWT version (2.1.0.M1),
  2. Netbeans v6.8 IDE,
  3. Maven 2.0 Directory Structure,
  4. and the ability to debug the GWT app in NetBeans (along with breakpoints and all).

Finally, I gave up on the plugins and decided to set up the whole project myself to satisfy these requirements. The following documents what needs to be done. This approach allows you to work with new releases as well.


Prerequisites –

  1. Basic knowledge of developing a regular JEE Web Application. You should know which file goes where.
  2. Basic knowledge of Maven –
    1. You should be able to create a simple web app project in the Maven directory structure.
    2. You should be able to add a jar to your local Maven repository.
    3. Maven ant plugin – This plugin allows you to run any Ant task while building a project. Maven will download it automatically, so you don't have to do anything here. Just be aware of it.
  3. GWT – You should know how GWT works. This article is not about how to develop GWT apps; it is about how to debug a GWT app in NetBeans.
  4. NetBeans – NetBeans is capable of opening a Maven project without any modifications.

Setting up the infrastructure –

  1. Download and install GWT wherever you like on your machine. You need it to get the various jar files and the gwtc compiler. I have it in c:/gwt-2.1.0.M1
  2. Create your Maven Web App project (empty shell, with a Hello World index.jsp to begin with)  wherever you like. Since a Maven Web App has a standard directory structure, Netbeans has nothing to do with it. We will add GWT related things to appropriate directories in this web app later.
  3. In your Web app’s POM, you will add the following dependencies.         
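The dependency block itself did not survive in this post; based on the jars referenced in the next paragraph, it presumably looked something like this (the groupId, version, and scope shown are my assumptions):

```xml
<dependency>
    <groupId>com.google.gwt</groupId>
    <artifactId>gwt-user</artifactId>
    <version>2.1.0.M1</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>com.google.gwt</groupId>
    <artifactId>gwt-dev</artifactId>
    <version>2.1.0.M1</version>
    <scope>provided</scope>
</dependency>
```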

    So add these two jar files (they are in the GWT installation directory) to your Maven repository. You can do this either by deploying them with an mvn command or by putting them directly in the appropriate directory in the Maven repo. This is standard Maven stuff; nothing to do with GWT.

Your Java code can be categorized into three kinds –

  1. server-side code – e.g. business logic, DAOs, etc.
  2. code that will be converted into JavaScript and used only on the client side – e.g. GUI screens.
  3. code that is used by both the server side and the client side – e.g. data transfer objects and service interfaces. These classes will be used by the server side, and they will also be converted into JavaScript and used by the client-side code.

In a web app, the server-side code has to be in WEB-INF/classes, and the JavaScript files generated from the Java code must be in the document root of the web app.

As you know, GWT development requires the following two changes in your regular Web App development process –

  1. Before you can build your WAR file, you need to first compile your Java code of kinds 2 and 3 to JavaScript. This compilation is done by the GWT compiler, which is implemented by the com.google.gwt.dev.Compiler class.
  2. Unlike a regular web application, "debugging" a GWT application actually involves "running" GWT's custom web server in debug mode. This custom web server is implemented by the com.google.gwt.dev.DevMode class, and this is exactly what is invoked when you run "ant devmode". Note: Debugging the GWT web app directly, like a regular web app, will only allow you to debug the server-side code, because the IDE has no knowledge of the Java code that was used to generate the JavaScript. Obviously, this is not what you want. You want to be able to debug the client-side Java code.

So basically, if you are able to do the above two changes in your build process, you are home free.  The following is how you can do it:

Add two new profiles in the profiles section of the project's POM – one that adds the GWT compile step to the build, and another that runs the DevMode class for debugging.

Note: Do not let the size of the following profiles scare you. They are very straightforward.


<profiles>
    <profile>
        <!-- your objective in this profile is to attach an ant task to the compile phase. -->
        <id>build-with-gwtc</id> <!-- This is the name of your profile. Can be anything. -->
        <build>
            <plugins>
                <plugin> <!-- configure the antrun plugin -->
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-antrun-plugin</artifactId>
                    <executions>
                        <execution>
                            <phase>compile</phase> <!-- you want to run the task in the compile phase -->
                            <goals>
                                <goal>run</goal>
                            </goals>
                            <configuration>
                                <tasks> <!-- this is the ant task that you want to run. The task is to run the GWT compiler, which is a java class. -->
                                    <java failonerror="true" fork="true" classname="com.google.gwt.dev.Compiler">
                                        <classpath> <!-- configure the classpath for the GWT compiler. It needs these jars. -->
                                            <pathelement location="${gwt.sdk}/gwt-user.jar"/> <!-- gwt.sdk is a variable defined in the properties section below -->
                                            <pathelement location="${gwt.sdk}/gwt-dev.jar"/>
                                            <pathelement location="${gwt.sdk}/gwt-servlet.jar"/>

                                            <!-- This is where all your Java code exists. Remember, GWTC needs raw Java code and not class files. So it needs to know where to find the Java code. -->
                                            <pathelement location="src/main/java"/>

                                            <!-- all your resource files, such as the Spring configuration file, exist here. This location is specified here because it also contains the MyGWTMain.gwt.xml file. -->
                                            <pathelement location="src/main/resources"/>
                                        </classpath>

                                        <jvmarg value="-Xmx256M"/> <!-- the GWT compiler needs some extra memory -->

                                        <!-- These arguments are straight from the Google-provided build.xml file. -->
                                        <!-- -war target/${webappctxname} tells GWTC that all the web app code is in the target/${webappctxname} directory. This is the same place where Maven puts the web app after building it. GWTC is the last step of the build process. -->
                                        <arg line="-gen gen -war target/${webappctxname} -style PRETTY"/>

                                        <!-- Notice that you don't specify all the Java files that you want GWTC to compile here. You specify only the entry point. It figures out the rest using the MyGWTMain.gwt.xml file. This is a standard GWT configuration file and it should be in the src/main/resources/com/mycompany directory. -->
                                        <arg value="com.mycompany.MyGWTMain"/>
                                    </java>
                                </tasks>
                            </configuration>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
    </profile>

    <profile>
        <!-- your objective in this profile is to run GWT's custom web server in debug mode. Most of the stuff is the same as above. -->
        <id>debug-gwt</id>
        <build>
            <plugins>
                <plugin>
                    <groupId>org.apache.maven.plugins</groupId>
                    <artifactId>maven-antrun-plugin</artifactId>
                    <executions>
                        <execution>
                            <phase>compile</phase> <!-- you can probably change it to a different phase. -->
                            <goals>
                                <goal>run</goal>
                            </goals>
                            <configuration>
                                <tasks>
                                    <!-- you are going to run the com.google.gwt.dev.DevMode class. So below, you are just specifying its classpath and arguments. -->
                                    <java failonerror="true" fork="true" classname="com.google.gwt.dev.DevMode">
                                        <classpath>
                                            <pathelement location="${project.basedir}/target/${webappctxname}/WEB-INF/classes"/>
                                            <pathelement location="${gwt.sdk}/gwt-user.jar"/>
                                            <fileset dir="${gwt.sdk}" includes="gwt-dev*.jar"/>
                                            <fileset dir="${project.basedir}/target/${webappctxname}/WEB-INF/lib" includes="**/*.jar"/>

                                            <!-- the following is required because that's where DevMode will find the Java code for the GWT JavaScript. -->
                                            <pathelement location="${project.basedir}/src/main/java"/>
                                            <pathelement location="${project.basedir}/src/main/resources"/>
                                        </classpath>

                                        <jvmarg value="-Xmx256M"/>
                                        <jvmarg value="-Xdebug"/>

                                        <!-- This will cause the process to open up port 5555 for the debugger. -->
                                        <jvmarg value="-Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=5555"/>

                                        <jvmarg value="-Xnoagent"/>
                                        <jvmarg value="-Djava.compiler=NONE"/>

                                        <!-- The following arguments are for the GWT DevMode web server. -->
                                        <arg value="-startupUrl"/>
                                        <arg value="MyGWTMain.html"/>
                                        <arg value="-war"/>
                                        <arg value="target/${webappctxname}"/>
                                        <arg line="${gwt.args}"/>
                                        <arg value="com.mycompany.MyGWTMain"/>
                                    </java>
                                </tasks>
                            </configuration>
                        </execution>
                    </executions>
                </plugin>
            </plugins>
        </build>
    </profile>
</profiles>

<!-- The following are added by netbeans. -->
How to use the above configuration

  1. To build the project – In your NetBeans project explorer, right-click on your project. You should see two new profiles – build-with-gwtc and debug-gwt. Select "build-with-gwtc". Now right-click on the project again and select "Clean and Build". This will first compile (using javac) all your Java code and put the class files and resource files in the appropriate directories under "target", and then it will execute the ant task, which calls the GWTC compiler. It will generate the JavaScript for all the permutations. Finally, it will build the WAR file.
  2. To debug the project – In your NetBeans project explorer, right-click on your project and select the profile "debug-gwt". Now right-click again and select "Build". Do not select Run or Debug. Remember, we have modified the build step to execute the ant java process in debug mode. Once you click Build, it will execute the DevMode class in a separate process and you should see the message "Listening for transport dt_socket at address: 5555". You can now go to the Debug menu and select Attach Debugger to attach to this process.

This may look like a lot of steps, but it is actually quite simple and transparent. This approach allowed me to learn exactly how GWT works.


September 22, 2010 | Java

Continuous Integration using Perforce and Hudson

A little background

The basic premise of Continuous Integration is simple. You want to know, as soon as possible, whether the code that you just checked into source control is breaking anything or not. So, as soon as you check in your code, the build server gets all your latest code and all other code that depends on your code, and builds all the affected projects. A compilation error or a failure of any of the test cases should raise an alarm (i.e. send an email to the group).

We are using Perforce as the source control and Hudson as our build server. All of our projects are Mavenized. We have set up a trigger on our depot in Perforce such that whenever anything is committed to the depot, it makes an HTTP request with parameters such as the changelist number, clientspec, and userid.

When I joined this project, I inherited a small but very smartly written servlet that handled the HTTP requests made by Perforce upon any commit. This servlet would:

  1. get the changelist number (from the HTTP parameters)
  2. get the list of files committed under that changelist by executing the perforce client (p4.exe): p4.exe -p p4server:port -u p4userid -P p4passwd fstat @1234,@1234
  3. Identify the projects affected by these files, and
  4. Kick off Hudson build projects to do the build(s).

The catch was that this small piece of code was written in Groovy (apparently, it was small because it was written in Groovy), and nobody in our team knew Groovy. All of our projects are in Java, and it doesn't make sense to add a completely new programming language to the mix just to save about 50 lines of code when there are thousands of lines of Java code lying around. So we wanted to convert this Groovy servlet into Java.

This post is about the problem that I faced in step 2 above (retrieving file names committed under a given change list id).

The problem

Even after setting the userid and password flags (-u and -P) while calling p4.exe, and even after setting the P4USER and P4PASSWD environment variables, Perforce would not return any data. It would just send back an error message saying: "P4PASSWD is wrong or is unset."

As it turns out, for some reason, you first need to execute the p4 login command before calling the p4 fstat command. And the p4 client does not accept the password provided in the -P flag or the P4PASSWD environment variable for login. When you call p4 login, it asks for the password and waits on stdin for the user to enter it. Obviously, this is no good for continuous integration; this process must execute without any manual intervention.

The solution

The following is the code snippet that did the trick for me.

//the following code is executed once to log in
StringBuilder sb = new StringBuilder();
String loginCmd = "p4 -u " + this.userid + " -P " + this.passwd + " -p " + this.host + " login";

Runtime runtime = Runtime.getRuntime();
Process p = runtime.exec(loginCmd);

//p4 login prompts for the password on stdin, so supply it there
OutputStream os = p.getOutputStream();
os.write((this.passwd + "\n").getBytes());
os.close();

int returncode = p.waitFor();
getOutput(sb, p.getInputStream());

if (returncode != 0) {
    throw new Exception("Problem in login. Return code: " + returncode + " Output:" + sb);
}


//In another method that gets the fstat output for a given changelist
StringBuilder sb = new StringBuilder();
Runtime runtime = Runtime.getRuntime();
String fstatCmd = "p4 fstat @" + changeListId + ",@" + changeListId;
Process p = runtime.exec(fstatCmd);
getOutput(sb, p.getInputStream());
int returncode = p.waitFor();
if (returncode != 0) {
    throw new Exception("Problem in running fstat. Return code: " + returncode + " Output:" + sb);
}
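The snippet above relies on a getOutput(...) helper that is not shown. A minimal sketch of what such a helper might look like (the name and signature are taken from the calls above; the implementation itself is my assumption): it simply drains the process's stdout into the StringBuilder.

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;

public class ProcOutput {

    // Hypothetical implementation of the getOutput(...) helper used above:
    // reads everything from the given stream and appends it to the StringBuilder.
    static void getOutput(StringBuilder sb, InputStream in) throws IOException {
        BufferedReader reader = new BufferedReader(new InputStreamReader(in));
        String line;
        while ((line = reader.readLine()) != null) {
            sb.append(line).append('\n');
        }
    }

    public static void main(String[] args) throws IOException {
        // Quick check with a canned stream instead of a real process
        StringBuilder sb = new StringBuilder();
        getOutput(sb, new ByteArrayInputStream("line1\nline2\n".getBytes()));
        System.out.print(sb);
    }
}
```

One caveat worth noting: calling waitFor() before reading stdout (as the login snippet does) can deadlock if the process produces a lot of output, because the OS pipe buffer fills up. Reading the output before waiting, as the fstat snippet does, is the safer order.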



June 19, 2010 | Java

Why Spring?

Much has been written about the benefits of the Spring Framework ever since it was released. That the Spring Framework is lightweight, non-invasive, feature-rich, etc. is quite well known. But then, there are many things that are lightweight, non-invasive, and feature-rich. What's so special about Spring? The DI/IoC? Here is my take on it…

In my experience, nearly all non-trivial, professionally driven projects start out with some fairly common goals, namely: minimize hardcoding, minimize dependency on specific implementation classes, be extensible, and be easily customizable. These goals are neither unique nor special. They have been achieved before and are being achieved by many dev teams even now. Their ubiquity, after all, is what has driven the invention of the various design patterns. Every development team has an architect/designer who has his own view about how to achieve all these goals, and assuming that he is well read (w.r.t. design patterns), he rolls up some boilerplate code and his own set of conventions reflecting his understanding of the design patterns, which the rest of the team follows. There is nothing fundamentally wrong with this approach. In fact, this is pretty much how applications have been developed for some time now. Long before Spring, Hibernate, or even Struts, I wrote my own framework for developing a discussion forum, resource management, and web-based exam tool on www.jdiscuss.com.

So what’s the problem?

The problem is that a wheel is reinvented every time a new framework is written. Just imagine what would happen if every car company had its own way of maneuvering their cars. Oh, I can drive a Ford, but I guess I need a month-long training to drive a Chevy!
This is exactly what happens when new people join the team, or worse, when the architect/designer leaves the team! I myself find it quite hard and boring to go through a framework codebase (who writes documentation?) and understand the whole customized design philosophy. It is a pain to add or update any feature, not because the framework is poorly written but simply because there is a lot of inertia in learning the nuances of a particular custom framework. For complex applications, there is a huge learning curve, and people are not really motivated to learn because this skill is not transferable to another job instantly.

This is where we need Spring (and Hibernate/Struts). Spring is basically an implementation, not a mere "specification" or "guideline" but a concrete implementation that can be used right away, of industry-approved and time-tested good design practices. Not only does it give us a standard way of doing standard things, but also a standard way of doing non-standard things. It is a lot easier to swap resources between Spring-based projects than between custom-framework-based projects. Spring is now so widely used across the industry that a new developer is also quite motivated to learn the system, because his resume gets instant recognition 🙂

What about “every application is different”?

Yes, I admit that every application is different and has different needs. However, in my experience, a lot of the things that we do are the same. We do the same things (singletons, facades, data access, etc.) for every application, and Spring provides all the boilerplate code to do that. Furthermore, Spring has now evolved (as of version 3.0) so much that even for a weird requirement, I am sure there is something available in Spring that does it. Just check out the reference manual before coding it yourself.

In short, you need Spring. It really is a panacea!

October 3, 2009 | Java

Creating an RPM for a Java Application

If you are preparing for Oracle Java Certification, don’t forget to check out Mock Exams and Questions from Enthuware. They are the best!

Ok, so why would I want to create an RPM for a Java application anyway?

Well, there can be several reasons. I wanted to deploy my application on several CentOS boxes. All these boxes are hooked up to a central repository server. To deploy any application to these appliances, the RPM for that application needs to be added on this repo server. The boxes are synched with the repo server automatically. So basically, if I add an RPM on this repo server, the boxes can easily grab it just like any other application. No user intervention is required in the whole process.

The ground work

There are several online guides and tutorials that describe how to build an RPM. The ones that I read were –

  1. https://pmc.ucsc.edu/~dmk/notes/RPMs/Creating_RPMs.html
  2. http://docs.fedoraproject.org/drafts/rpm-guide-en/ch-creating-rpms.html
  3. http://genetikayos.com/code/repos/rpm-tutorial/trunk/rpm-tutorial.html
  4. http://www.ibm.com/developerworks/library/l-rpm1/

While all of the above gave me the basic concepts and the general idea of how to build an RPM package, I found several key pieces of information missing, which caused confusion. Also, none of them explained it from a Java developer's perspective. My goal in this blog is to document my findings, the information that would have saved me a week of effort had it been given in the above-referred articles, for myself [it's amazing how soon you forget things 🙂], and for any other poor soul who is banging his head trying to build an RPM for a Java application.

I advise you to go through all of the above articles before reading further.

The Standard Steps

All of the following steps are to be done on a Linux box, the one where you want to build the RPM.

Step 1: Install the package rpm-build

yum install rpm-build

or if you are not logged in as root but your account is in the sudo list –

sudo yum install rpm-build

Notice that the package name is rpm-build but the command that you use for building is rpmbuild (not rpm-build) 🙄

Step 2: The .rpmmacros file

You need to create this file containing the following two lines in your home directory.

%_topdir       /home/hdeshmukh/rpm
%_tmppath      /home/hdeshmukh/rpm/tmp

This file basically tells rpmbuild to use your personal account space for building the rpm instead of using shared space.

Step 3: Create the directory structure

rpmbuild requires a specific set of directories under the %_topdir directory. The following command creates them –

$mkdir ~/rpm ~/rpm/BUILD ~/rpm/RPMS ~/rpm/RPMS/noarch ~/rpm/SOURCES ~/rpm/SPECS ~/rpm/SRPMS ~/rpm/tmp

Key Information – rpmbuild builds rpms for various CPU architectures, such as i386 and i686. For a Java application, you don't care about that, so you just need to create one directory named ~/rpm/RPMS/noarch instead of multiple directories such as ~/rpm/RPMS/i386 (one for each architecture).
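The mkdir command above can also be written with the -p flag, a minor variant worth knowing: -p creates parent directories as needed, so the command is idempotent and does not depend on ~/rpm being created first.

```shell
# Create the whole rpmbuild tree in one command; -p creates parents as needed.
# Only RPMS/noarch is created (pure-Java package), not RPMS/i386 etc.
mkdir -p ~/rpm/BUILD ~/rpm/RPMS/noarch ~/rpm/SOURCES ~/rpm/SPECS ~/rpm/SRPMS ~/rpm/tmp
```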

Key information for bundling a Java Application

Impedance Mismatch

As you know (I am assuming that you have gone through the above-mentioned URLs), rpmbuild actually 1. "builds" the sources and 2. "installs" the output of the build process. Now, in the Linux world, this basically means compiling the sources using "make". The place where I was stuck and frustrated was the liberal reference to "make" in the above-mentioned articles. I know it is a build tool but, honestly speaking, I have never used make and I do not care about it. I don't know what exactly it does, what cfg file in what path it needs, what it creates, and where it puts what it creates. The next stumbling block for me was the "install" process. All the articles refer to "make install". Again, I have no clue what/how/where it installs. Maybe there is a config file somewhere that it reads, but I don't know.

Build Process in the Java World: In the Java world, for "sources" you have:

  1. a set of .java and .properties files (organized in some application specific directory structure)
  2. a set of third party jars
  3. an ant build.xml file that describes the build process

The build process uses ant (instead of make) i.e. ant <targetname> and it uses the build.xml to generate the final output, usually, in the form of a jar file.

Install Process in the Java World: For a simple application, you can just drop your final jar anywhere you like and run java -jar <jarname>. For a complex application, such as an enterprise application (a war or an ear), you might have to drop the war or ear into the appropriate directory of the application server. In some cases, you might even want to explode the jar into an appropriate directory on the machine and then execute a command like java -classpath ./lib/a.jar:./lib/b.jar -Dx=1 -Dy=2 com.mycomp.myapp.AppStarter to run your application.

With the above discussion as reference, I will now introduce the heart of the build process: the spec file. :drumroll: The lines in bold font are the code that goes in the spec file, and the lines in regular font are my annotations.

The .spec file

Summary: Lease Alert Monitor
Name: LeaseAlert 
Should not have any space.
Version: 1
Release: 1  
Name, version, and release become part of the name of the output rpm file. In this case it will be LeaseAlert-1-1.noarch.rpm.
License: Restricted
Group: Applications/System
BuildRoot: %{_builddir}/%{name}-root 
BuildRoot is the directory where rpmbuild will "install" the output of the "build" process (whatever that process is). %{_builddir} points to %_topdir/BUILD/, so our BuildRoot will be ~/rpm/BUILD/LeaseAlert-root (%_topdir is defined in .rpmmacros to be ~/rpm).
URL: http://mycompany.net/
Vendor: Mycompany
Packager: Hanumant Deshmukh
Prefix: /usr/local 
In the "install" process (specified in the spec file below), you have to specify the exact directory path where you want to install (i.e. copy the files, basically) on the machine where the application is being installed. You may believe that the application will be installed in, say, the /usr/local/javaapps/leasealert directory, but at install time the machine may not have /usr/local, or the user may not want to install it there. Maybe the user wants to install it at /home/hdeshmukh/javaapps/leasealert. The Prefix value specifies what part of your install directory is changeable by the user. So when a user installs your RPM, he can specify the prefix, and the application will be installed under <prefix>/javaapps/leasealert instead of /usr/local/javaapps/leasealert.
BuildArchitectures: noarch
For java apps, you don’t care about the CPU architecture

%description
Lease Alert Monitor

%prep For Java apps, there is nothing to prepare. However, in some cases you might want to pull the sources from a source code control system. The commands to pull the sources (and all the files that are required for building) and put them in the SOURCES directory (explained in the build section below) should go here.

%build
pwd This just shows where rpmbuild is executing from.
cd %{_sourcedir} When rpmbuild reaches the build section, it is in the BuildRoot directory (specified above at the beginning of the spec file). But our sources are in the SOURCES directory. %{_sourcedir} is a standard variable available in rpmbuild, and in our case it points to ~/rpm/SOURCES. The goal here is to cd to the directory from where ant can find the build.xml file and execute the build target.
ant tar My build.xml is at the top of the SOURCES directory, and the name of my target is tar. The output of this ant command is a tar file named leasealert.tar in the ~/rpm/BUILD directory. Please see the description of my ant file below to learn the contents and structure of this tar file.

%install At this time, your current directory is BuildRoot. This is where the application will be installed on YOUR machine (i.e. the machine on which you are creating the rpm) and NOT on the end user's machine. When the rpm is installed on the end user's machine, the directory path up to BuildRoot will be removed. So if you install your app (on your rpm build machine) in <BuildRoot>/usr/local/javaapps/leasealert, then when the user installs your rpm, the application will be installed in /usr/local/javaapps/leasealert. Of course, the user may specify the prefix as /home/hdeshmukh, in which case the app will be installed in /home/hdeshmukh/javaapps/leasealert.
rm -rf $RPM_BUILD_ROOT This is just to make sure that the BuildRoot is empty. $RPM_BUILD_ROOT points to ~/rpm/BUILD/LeaseAlert-root in my case. Note that this command will NOT be executed on the end user's machine.
mkdir -p $RPM_BUILD_ROOT/usr/local/edns/standalonejava/leasealert The path after $RPM_BUILD_ROOT is where you want (subject to the prefix change) the application to be installed on the end user's machine. In my case, it is /usr/local/edns/standalonejava/leasealert. This directory will be created on the end user's machine if it does not already exist, not because of this mkdir command here, but because the rpm file will contain the application under the /usr/local/edns/standalonejava/leasealert directory and will explode its contents into this directory on the end user's machine. In the case of a web application, you should give the path to your application server's document root directory.
cd $RPM_BUILD_ROOT/usr/local/edns/standalonejava/leasealert
tar -xf $RPM_BUILD_ROOT/../leasealert.tar
My install process just requires me to explode the tar file into the $RPM_BUILD_ROOT/usr/local/edns/standalonejava/leasealert directory. In the case of a web app, you may want to explode it into the application server's document root directory, so you have to create that directory (in the previous step) before exploding the tar.

%clean
rm -rf $RPM_BUILD_ROOT This is executed after the rpm file is already built, so there is no need to keep this stuff anymore.

%files Here, you have to list ALL the files that you want to copy to the end user's machine. The paths to the files are as per the deployment structure used under BuildRoot, so there is no need to specify $RPM_BUILD_ROOT/usr/local…
%attr(755,root,root) /usr/local/edns/standalonejava/leasealert/run.sh
This is the shell script file that contains the java command to execute my main class along with some properties, so I want to make it executable.

%changelog
* Tue Oct 20 2008 Hanumant
- Created initial spec file

I hope the contents of the spec file are clear. Now, run $rpmbuild -ba ~/rpm/SPECS/leasealert.spec to build the rpm. LeaseAlert-1-1.noarch.rpm should be created in the ~/rpm/RPMS/noarch directory.

The build.xml file

The deployment structure of my standalone java application is as follows –

<deploydirectory>/leasealert.jar  <-- contains application class files
<deploydirectory>/run.sh  <-- shell script to run the application
<deploydirectory>/lib/mail.jar  <-- third party jar files

In case you are wondering about the contents of run.sh, it is quite simple:

java -classpath .:leasealert.jar:./lib/mail.jar com.mycompany.MyClass

As you have probably already noticed in the spec file, I want the deploy directory to be /usr/local/edns/standalonejava/leasealert

Now, the goal of my build.xml is to compile the sources and bundle all the stuff (jar file, lib, props, and run.sh) into a single tar file named leasealert.tar. The directory structure contained within the tar file should be such that, when exploded during the “install” process of rpmbuild, it should reflect the deployment structure. The following is how I achieved it –

<project name="leasealert" basedir="/home/hdeshmukh/rpm/" default="main">

    <property name="rpmroot.dir" value="/home/hdeshmukh/rpm"/>
    <!-- All my third party jars are here. -->
    <property name="lib.dir"     value="${rpmroot.dir}/SOURCES/lib"/>
    <!-- All .java files and property files are here. -->
    <property name="src.dir"     value="${rpmroot.dir}/SOURCES/src"/>
    <!-- This is where the final leasealert.tar file is generated and put by ant. -->
    <property name="build.dir"   value="${rpmroot.dir}/BUILD"/>
    <!-- This is where the .class files are generated by ant. -->
    <property name="classes.dir" value="${build.dir}/classes"/>
    <!-- This is where the leasealert.jar file is generated and put by ant. -->
    <property name="jar.dir"     value="${build.dir}/jar"/>

    <property name="main-class"  value="com.mycompany.myapp.MyClass"/>

    <!-- Standard ant stuff -->
    <path id="classpath">
        <fileset dir="${lib.dir}" includes="**/*.jar"/>
    </path>

    <target name="clean">
        <delete dir="${build.dir}"/>
    </target>

    <target name="compile">
        <mkdir dir="${classes.dir}"/>
        <javac srcdir="${src.dir}" destdir="${classes.dir}" classpathref="classpath" source="1.6" target="1.6"/>
    </target>

    <target name="jar" depends="compile">
        <mkdir dir="${jar.dir}"/>
        <jar destfile="${jar.dir}/${ant.project.name}.jar" basedir="${classes.dir}">
            <manifest>
                <attribute name="Main-Class" value="${main-class}"/>
            </manifest>
        </jar>
    </target>

    <!-- This generates the final leasealert.tar -->
    <target name="tar" depends="jar">
        <copy file="${src.dir}/run.sh" tofile="${jar.dir}/run.sh"/>
        <copy todir="${jar.dir}/">
            <fileset dir="${src.dir}">
                <include name="**/*.properties"/>
            </fileset>
        </copy>
        <mkdir dir="${jar.dir}/lib"/>
        <copy todir="${jar.dir}/lib">
            <fileset dir="${lib.dir}">
                <include name="**/*.*"/>
            </fileset>
        </copy>
        <tar destfile="${build.dir}/leasealert.tar" basedir="${jar.dir}"/>
    </target>

    <target name="clean-build" depends="clean,jar"/>

    <target name="main" depends="clean,tar"/>

</project>


I am assuming that you already have ant installed on your build machine. If not, you can build your stuff elsewhere and just copy the output into the BUILD directory of rpm. You can install ant using the command: sudo yum install ant

Closing Remarks

The approach that I have chosen here is to use the build mechanism of rpmbuild to kick off the ant build process. It seems ant 1.7 has an rpmbuild task that allows you to kick off the rpmbuild process from an ant build. Don’t get too excited though, because you need the same spec file in this approach as well 😀 I have ant 1.6, which does not have the rpmbuild task, and I was too exhausted to upgrade it and generate an rpm using this approach. So I leave the details of how to do that as an exercise for the readers 🙂

As always, comments welcome!

OCAJP Mock Exams and Questions

October 22, 2008 Posted by | Java | 6 Comments

Firing Up LAMP Update 2

It has been a long time since I investigated this. Here is a quick recap of what happened while working on this earlier: I tried installing Quercus 3.1.3 on Resin 3.0.24 and the sample programs worked fine. Then I tried installing phpBB3 (which was at the RC6 stage) and ran into a bug in Quercus which stalled the installation. I googled the issue and found that other people had encountered it too. Folks at Caucho mentioned on their forum that they were working on the issue. Since all of the pieces involved were at a beta/non-production stage, I wasn’t too interested in pursuing this further.

Well, the situation has now changed. phpBB3 has been released as a production version and the Caucho folks have fixed the issue in Resin 3.1.4. So I decided to give it a try again, and the following are my findings/observations.

1. Installing Quercus

Quercus is implemented as a servlet and is bundled in quercus.jar. It also depends on resin-util.jar and script10.jar. The important thing is that all these jar files are already bundled with Resin and are present in the <resin>/lib directory. So there is no explicit “installation” of Quercus as such; it is already there. For other app servers, these files should be added to their lib folder.

Since Quercus is exposed as a servlet, any webapp that wishes to serve php pages must configure QuercusServlet to service .php requests. This is done by putting the following entries in the webapp’s web.xml file:

    <servlet>
        <servlet-name>Quercus Servlet</servlet-name>
        <servlet-class>com.caucho.quercus.servlet.QuercusServlet</servlet-class>

        <!-- Tells Quercus to use the following JDBC database and to ignore the
             arguments of mysql_connect(). -->
        <init-param>
            <param-name>database</param-name>
            <param-value>jdbc/phpbb3</param-value>
        </init-param>
    </servlet>

    <servlet-mapping>
        <servlet-name>Quercus Servlet</servlet-name>
        <url-pattern>*.php</url-pattern>
    </servlet-mapping>
As you can see, I have also configured a database connection that Quercus will use. The free version of Quercus cannot use arbitrary database connections from a php script. Regardless of the connection parameters specified in the php code, the connection specified by this entry is used. In this case, I have specified a JDBC connection in the conf/resin.conf file and mapped it to jdbc/phpbb3.

    <database>
        <jndi-name>jdbc/phpbb3</jndi-name>
        <driver type="com.mysql.jdbc.Driver">
            <url>...</url>
            <user>...</user>
            <password>...</password>
        </driver>
    </database>

This completes the configuration required to use Quercus.

2. Installing phpBB3

I just exploded the phpBB3 distribution into the resin/webapps/phpbb3 folder and created a web.xml file (web.xml for phpbb3) containing the entries made in step 1 in its WEB-INF folder. That’s it.

I started up the resin server and accessed http://localhost:8080/phpbb3. I got the phpbb3 installation screen, followed the prompts, and everything went smoothly. No issues. After installation, I was able to create and access forums, topics, and posts. I must say that at this point I haven’t checked out all the functionality of phpBB3.

So overall, everything seems to be working fine. I will now play with this setup and try to integrate it with some JEE application.

January 31, 2008 Posted by | Java | Leave a comment

Firing Up LAMP

I am currently facing a problem with an enterprise website. This website contains some webapps that are based on the JEE stack. Some of these webapps are custom developed and some are opensource apps. Some of the webapps are also hooked up to backend legacy systems through an ESB. So far so good!

Now, I want to leverage another opensource webapp, and this webapp is based on LAMP. While LAM is ok, it is the P that I have a problem with. Getting this webapp up and running would have been no issue had it been the only webapp I was interested in. But I need to integrate it with some backend components that are written in Java. Yes, PHP does have some modules that can be used to do so, but I feel that such an integration is not seamless. Also, I am not using Apache; my application server itself is my webserver, so I would need to figure out how to plug a PHP engine into that. Then there is a personal reason as well … I like to develop in Java and would like to keep the PHP stuff to a minimum. I would have coded up the PHP app in JEE, but it feels like such a waste of time reinventing the wheel.

So basically, I was looking for something that would allow me to easily integrate PHP with my JEE environment, and apparently I have hit the jackpot!!! Folks from Caucho, who are well known for their high performance servlet engine called Resin, have developed a cool technology called Quercus that implements a PHP engine in pure Java. Here is why I am drooling…

1. Non-intrusive – It is just a war file that you can install in any servlet container. So I can keep my existing set up as it is. No messy mod settings.

2. Fast – PHP files are compiled to Java bytecode (just like JSP files are) and, as per their benchmark results, it runs up to 6x faster than the Apache-PHP combination.

3. Seamless Integration with Java – Take a look at this :

	$my_bean = jndi("java:comp/env/ejb/my-session-bean"); 

You can get hold of any of your existing Java components and use them right from PHP. Not that you would want to do this on a regular basis, but you can if you need to as a tactical solution!

4. Breaks the barrier – Most importantly, I think it breaks the barrier between the Java and PHP worlds and lets their waters mix. For example, I can hook up an opensource CRM system based on PHP with an existing Java based OMS. It is well known that PHP is excellent for quick prototyping of webpages, and with Quercus I can take advantage of that while at the same time using JEE for developing complex enterprise applications.

 Well, to me, it does sound really good on paper and in the next couple of weeks I am going to try this thing out and see if it really delivers what it promises. So here is what I am going to do …

1. Set up Quercus – first on Resin and then on Tomcat.

2. Make phpBB3 work on Quercus.

3. Hook up certain phpBB functionality with a JEE based webapp and with some session beans.

Stay tuned …

October 15, 2007 Posted by | Java | Leave a comment

Terracotta and GridGain comparison…

One of my objectives with this exercise was to understand which of these tools we should use in which situations. It looks like we can draw some inferences with the help of this application.

Initially, before implementing this app on GridGain, I was not too sure how to implement it so that we could have some similarity with the implementation on Terracotta. In Terracotta, we shared a Jobs class instance across JVMs, and we started multiple Producers and Consumers that would look at the same Jobs instance and add/remove a Job to/from that instance. In effect, we were able to take advantage of multiple nodes by starting up either a Consumer or a Producer, as required, and achieved better performance. In other words, “sharing” enabled us to take advantage of multiple machines. So I was hung up on finding out how to share things on GridGain.
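To make the “sharing” model concrete, here is a minimal plain-Java sketch of the kind of Jobs structure described above. All names here are illustrative; in the actual Terracotta setup the Jobs instance would be declared as a shared root so Producers and Consumers in different JVMs see the same queue, whereas this sketch runs everything in one JVM.

```java
import java.util.LinkedList;
import java.util.Queue;

public class JobsSketch {

    // The shared job queue: with Terracotta, a single Jobs instance would be
    // configured as a shared root so Producers and Consumers in different
    // JVMs all see the same queue. Here everything runs in one JVM.
    static class Jobs {
        private final Queue<String> queue = new LinkedList<>();

        synchronized void add(String job) {
            queue.add(job);
            notifyAll(); // wake up any Consumer blocked in remove()
        }

        synchronized String remove() throws InterruptedException {
            while (queue.isEmpty()) wait(); // block until a Producer adds a job
            return queue.poll();
        }
    }

    public static void main(String[] args) throws Exception {
        Jobs jobs = new Jobs();
        // One Producer thread adds jobs; the main thread acts as the Consumer.
        new Thread(() -> {
            for (int i = 0; i < 3; i++) jobs.add("job-" + i);
        }).start();
        for (int i = 0; i < 3; i++) System.out.println(jobs.remove());
    }
}
```

Starting more Producer or Consumer threads against the same Jobs instance is then just a matter of constructing more of them, which is exactly what made scaling out so natural on Terracotta.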

After discussing this with Dmitriy, I learned that it would not be correct to look at GridGain from a “sharing” perspective. We should look at it from a “task” perspective… what task can be made a unit of work and executed on other machines. In this application, Job.run() is such a task. In Terracotta, we isolated Producers and Consumers, while in GridGain, we isolated the Job.run() method.

However, one drawback of this application is that Job.run() is a completely independent task and does not depend on anything, so no sharing or coordination between two JVMs is required. In the Terracotta solution we were able to see how such coordination can be done among threads running on multiple machines, but our application doesn’t touch this aspect on GridGain. I will try to modify it so that we can see how coordination can be achieved on GridGain. Any suggestions would be welcome!

Another important aspect of GridGain that we haven’t touched upon in this application is how to split a task, execute the parts on multiple nodes, bring back the results, join them, and return the final output. I think this kind of situation will take care of our sharing scenario as well.


October 1, 2007 Posted by | Java | 1 Comment

Off my mark with GridGain…

Dmitriy from GridGain was kind enough to point out that for a simple application like this, not much code needs to be written or modified. Using the suggestions he gives in his comment on my previous post, I was able to run the application.

All I did was the following –

1. Made Job class implement Serializable.

2. Used the @Gridify annotation on the Job.run() method. (I think I should have named it execute instead of run to avoid unnecessary confusion with Thread.run().)

3. In Main, inserted GridFactory.start(), Thread.sleep(), and GridFactory.end().

public Main() {
    try {
        GridFactory.start();

        new Producer(jobs).start();
        new Consumer(jobs).start();

        // not sure how many Consumers I should create
        // new Consumer(jobs).start();

        Thread.sleep(60000); // let the grid process the jobs
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        GridFactory.end();
    }
}

4. Added libraries (gridgain jar, other supporting jars, aspectjweaver jar) to the project.

5. Added -DGRIDGAIN_HOME and the javaagent to the VM parameters. BTW, for some reason, GridGain refused to start (gridgain.bat from the cmd line) when GRIDGAIN_HOME was set to “C:\Program Files\gridgain-1.5.1\bin”. But when I changed the backslashes to forward slashes, it worked! This is on WinXP.

So after these steps, I was able to see that Job.run() was being shipped off to different nodes. At this time, I have a few questions –

1. What happens when a Consumer picks up a Job from Jobs and calls job.run()? Since the run() method is gridified, when is that Consumer ready to pick up the next job? Immediately after the run() method is shipped off to another node for execution, or after the run() method has finished executing on the other node? What I am expecting is that since the execution of run() is shipped off to another node, the consumer should pick up the next available job and ship it off to another node. Is that a valid expectation? Is this happening in this sample application?

2. This question depends on the answer to the first one. How many Consumers should I start from Main? Starting a consumer is done through code, while the number of grid nodes can be changed (by killing or starting new nodes) at any time. So how do I make sure all the nodes of the grid are utilized? If a Consumer becomes ready to pick up a new Job as soon as it sends one off to another node, then I just need one Consumer. If Consumer.run() on the Main node waits until Job.run() finishes execution on the remote node, then I need to start as many Consumers as I have grid nodes.
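The first of these two possibilities can at least be simulated locally. Here is a sketch, under the assumption that the gridified call returns as soon as the job is handed off; a plain thread pool stands in for the remote grid nodes, and all class and method names are illustrative, not GridGain API:

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class AsyncDispatchSketch {

    // A single consumer loop dispatches 'count' jobs and returns how many
    // actually completed.
    static int runJobs(int count) throws Exception {
        BlockingQueue<Runnable> jobs = new LinkedBlockingQueue<>();
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < count; i++) jobs.add(done::incrementAndGet);

        // A thread pool stands in for the remote grid nodes.
        ExecutorService nodes = Executors.newFixedThreadPool(2);
        // The single consumer: take a job, ship it off, immediately loop
        // for the next one without waiting for the result.
        while (!jobs.isEmpty()) nodes.submit(jobs.take());
        nodes.shutdown();
        nodes.awaitTermination(10, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runJobs(4) + " jobs done"); // prints "4 jobs done"
    }
}
```

If dispatch really is asynchronous like this, one Consumer keeps all the nodes busy; if the gridified call blocks until the remote node finishes, you would need one Consumer per node, which is exactly the question above.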

Maybe Dmitriy can throw some light on these questions 🙂

October 1, 2007 Posted by | Java | 2 Comments

Working with GridGain

From what I understand about GridGain, basically, you have to identify a task (called a GridTask) that can be split up, and the splits (called GridJobs) can then be thrown onto multiple machines (called GridNodes). The original task (the one that we split up) can wait for the results of all the GridJobs, and once they are ready it can combine them and return the final output. This requires a fair amount of code change (as compared to Terracotta) if you already have something running that you want to run on multiple machines. Of course, in Terracotta you would probably spend that time on configuration instead of Java code changes. So, I believe, in terms of effort, there isn’t much difference.
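The split/execute/join flow described above can be illustrated in plain Java, with a thread pool standing in for the grid nodes. This is only a local sketch of the pattern, not GridGain code; the class and method names are made up for illustration, and it assumes an input with an even number of elements:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class SplitJoinSketch {

    // Splits the input into per-chunk jobs, "executes" them on a pool of
    // worker threads (standing in for grid nodes), then joins the partial
    // results into the final output.
    static int sumOnGrid(int[] data) throws Exception {
        ExecutorService grid = Executors.newFixedThreadPool(4);

        // Split: one job per pair of elements (assumes even-length input).
        List<Callable<Integer>> jobs = new ArrayList<>();
        for (int i = 0; i < data.length; i += 2) {
            final int a = data[i], b = data[i + 1];
            jobs.add(() -> a + b); // each job sums its own chunk
        }

        // Execute and join: wait for every partial result, then combine.
        int total = 0;
        for (Future<Integer> f : grid.invokeAll(jobs)) total += f.get();
        grid.shutdown();
        return total;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sumOnGrid(new int[]{1, 2, 3, 4, 5, 6, 7, 8})); // prints 36
    }
}
```

The split step and the final combine here correspond to what, as I understand it, the GridTask is responsible for, with each Callable playing the role of a GridJob.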

Based on this understanding, I am still trying to figure out how to make use of GridGain in our Producer-Consumer scenario. It was fairly intuitive to do on Terracotta, but I am not yet sure how to proceed with GridGain. Maybe this scenario is more suitable for Terracotta.

Let’s see…

September 27, 2007 Posted by | Java | 1 Comment