hadoop-common-user mailing list archives

From "Tsz Wo (Nicholas), Sze" <s29752-hadoopu...@yahoo.com>
Subject Re: Developing, Testing, Distributing
Date Fri, 08 Apr 2011 18:23:47 GMT
(Resent with -hadoopuser.  Apologize if you receive multiple copies.)

________________________________
From: "Tsz Wo (Nicholas), Sze" <s29752-hadoopgeneral@yahoo.com>
To: common-user@hadoop.apache.org
Sent: Fri, April 8, 2011 11:08:22 AM
Subject: Re: Developing, Testing, Distributing


First of all, I am a Hadoop contributor and I am familiar with the Hadoop code 
base/build mechanism.  Here is what I do:


Q1: What IDE are you using?
Eclipse.

Q2: What plugins for the IDE are you using?
No plugins.

Q3: How do you test your code, which unit test libraries are you using, and how do 
you run your automatic tests after you have finished development?
I use JUnit.  The tests are executed using ant, the same way we do in Hadoop 
development.
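The JUnit-plus-ant setup above can be sketched as follows. This is a minimal standalone example: the class name and the tokenize helper are illustrative, not from the thread, and plain assertions are used so it compiles without the JUnit jar. A real Hadoop test would be a JUnit class under ./src/test/mapred using @Test/assertEquals and run via ant.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative unit under test: the kind of pure helper logic a map
// function might delegate to, so it can be tested without a cluster.
public class TestTokenizer {

    // Hypothetical helper: split a line into lowercase words.
    static List<String> tokenize(String line) {
        return Arrays.asList(line.toLowerCase().split("\\s+"));
    }

    // JUnit-style test method; with JUnit on the classpath this would
    // carry an @Test annotation and use assertEquals instead.
    static void testTokenize() {
        List<String> words = tokenize("Hello Hadoop World");
        if (!words.equals(Arrays.asList("hello", "hadoop", "world"))) {
            throw new AssertionError("unexpected tokens: " + words);
        }
    }

    public static void main(String[] args) {
        testTokenize();
        System.out.println("all tests passed");
    }
}
```

Keeping the logic in small pure helpers like this is what makes the JUnit part cheap; the ant build then runs the whole suite the same way Hadoop's own tests run.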

Q4: Do you have test/qa/staging environments besides dev and production? How do 
you keep them similar to production?
We, Yahoo!, have test clusters whose settings are similar to the production 
clusters.

Q5: Code reuse - how do you build components that can be used in other jobs? Do 
you build generic map or reduce classes?
I do have my own framework for running generic computations or generic jobs.
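A reusable component along these lines might look like the sketch below. The interface and class names are hypothetical; the thread does not show the actual framework. The idea is a generic computation parameterized over input/output types, so one driver can run many different jobs. Plain Java generics are used here so the sketch compiles without the Hadoop jars; a real version would plug the computation into a generic Mapper/Reducer subclass.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical generic "computation" interface: one record in, zero or
// more records out -- the shape of a map function, minus the Hadoop types.
interface Computation<IN, OUT> {
    List<OUT> apply(IN record);
}

// A reusable driver: runs any Computation over any input list.
// In a real job this role would be played by a generic Mapper subclass
// that instantiates the concrete Computation named in the job config.
public class GenericRunner {
    static <IN, OUT> List<OUT> run(Computation<IN, OUT> c, List<IN> input) {
        List<OUT> out = new ArrayList<OUT>();
        for (IN record : input) {
            out.addAll(c.apply(record));
        }
        return out;
    }

    public static void main(String[] args) {
        // Example computation: split lines into words.
        Computation<String, String> splitter = new Computation<String, String>() {
            public List<String> apply(String line) {
                List<String> words = new ArrayList<String>();
                for (String w : line.split("\\s+")) {
                    words.add(w);
                }
                return words;
            }
        };
        List<String> input = new ArrayList<String>();
        input.add("reuse generic components");
        System.out.println(run(splitter, input));
    }
}
```

New jobs then only supply a Computation implementation; the driver, job setup, and I/O handling stay shared.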

Some more details:
1) svn checkout MapReduce trunk (or common/branches/branch-0.20 for 0.20)
2) compile everything using ant
3) setup eclipse
4) remove existing files under ./src/examples 
5) develop my code under ./src/examples
6) add unit tests under ./src/test/mapred

I find this very convenient since (i) the build scripts can compile the 
examples code, run the unit tests, create jars, etc., and (ii) Hadoop 
contributors maintain them.

Hope it helps.
Nicholas Sze
