hadoop-common-dev mailing list archives

From "zhiyong zhang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-5023) Add Tomcat support to hdfsproxy
Date Tue, 24 Feb 2009 18:33:02 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-5023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12676357#action_12676357 ]

zhiyong zhang commented on HADOOP-5023:

    *  On my machine, running ant -Dtestcase=FOO test
         1. Still runs the unit test
I didn't know the tests needed to be isolatable by test case. I've just added that functionality.

         2. Fails (times out)
Did you run "ant tar" first in the $HADOOP_ROOT dir? hadoop-core-*.jar is needed for the war
file. We could have used a static lib dir to hold a pre-compiled hadoop jar, but that would
introduce some redundancy overall. I am not sure which way we should go in this case: either
keep a static hadoop jar to make the proxy code more independent at the cost of some redundancy,
or depend on the hadoop build and keep a cleaner structure. I followed the latter in the
patch. I can change it if necessary.

    * If the "UnitTest" parameter is in the URI query:

      +    boolean unitTest = false;
      +    if (rqst.getParameter("UnitTest") != null) unitTest = true;

What does this do?

These two lines test whether the UnitTest parameter is in the query; they differentiate
normal requests from unit-test requests. It is generally not a good idea to have unit-test
code mingled with the source code, but in our case, since the Cactus in-container unit test
framework is still in its early stages, it cannot set request attributes in the webRequest.
Although it has a ServletRequestWrapper, attributes set through it are not visible to the
request the source code sees. To run the unit tests successfully, we have to do some extra
work in the source code to cooperate with the test code.
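As a minimal sketch of the check quoted above (not the actual hdfsproxy code), the helper below stands in for `rqst.getParameter("UnitTest") != null`, where `getParameter` returns null when the parameter is absent from the query string:

```java
// Hypothetical illustration of the UnitTest-parameter branch.
public class UnitTestFlag {

  // value models HttpServletRequest.getParameter("UnitTest"):
  // null when the parameter is absent, a (possibly empty) string otherwise
  static boolean isUnitTest(String value) {
    return value != null;
  }

  public static void main(String[] args) {
    System.out.println(isUnitTest("true")); // request issued by a Cactus test
    System.out.println(isUnitTest(null));   // normal client request
  }
}
```

A Cactus test would then append `?UnitTest=true` to its request URI to take the test-only path, while ordinary clients never set the parameter.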

    * The content of hdfsproxy/conf/{user-permissions,user-certs} appears to be for testing.
If so, it belongs in hdfsproxy/src/test
These two files are also needed for normal functionality. Users need to edit these two files
for their own purposes anyway, so I don't think leaving them blank versus shipping some
default testing values makes much difference.

    * This should be removed from tomcat-web.xml:

      +   <!--<context-param>
      +      <param-name>hdfsproxy.dfs.namenode.address</param-name>
      +      <param-value>ucdev19.inktomisearch.com:54321</param-value>
      +      <description> name node address </description>
      +    </context-param>-->

Is this value duplicated in the hdfsproxy configuration and the tomcat configuration?

Yes, as you can see, this property is commented out with <!-- -->. I need to delete it.

    * These dependencies look suspect (ivy.xml). Are they necessary?

      +   <dependency org="httpunit" name="httpunit" rev="1.6" conf="common->master"/>
      +   <dependency org="htmlunit" name="htmlunit" rev="1.10" conf="common->master"/>
      +   <dependency org="aspectj" name="aspectjrt" rev="1.5.3" conf="common->master"/>

aspectj is needed for Cactus.
I tried httpunit for some tests but didn't use it in the end. I forgot to remove those two
lines from the ivy file.

    * I'm not sure ProxyUtils::sendCommand should be a public method. Given that it will overwrite
several system properties and no tool has a demonstrable need for it yet, it would be better
if it were package-private for now.

I think it is a good idea to make it package-private, since unit tests in the same package can still call it.
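A quick illustration of the visibility change (a hypothetical stand-in, not the real ProxyUtil code): a package-private static method is invisible outside its package but remains callable from tests compiled into the same package, which is why the change doesn't hurt testability.

```java
// Package-private class and method: hidden from external tools,
// visible to unit tests placed in the same package.
class ProxyUtilSketch {

  // Stand-in for sendCommand(); per the review, the real method also
  // overwrites several system properties before issuing the request.
  static String sendCommand(String path) {
    return "GET " + path;
  }
}

public class ProxyUtilSketchDemo {
  public static void main(String[] args) {
    // Same package as ProxyUtilSketch, so this call compiles,
    // just as an in-package unit test would.
    System.out.println(ProxyUtilSketch.sendCommand("/data/foo"));
  }
}
```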

Hi, could you run "ant tar" first and then try the tests again to see whether they pass? Or
would you prefer that I add a static hadoop jar?

Thank you.

> Add Tomcat support to hdfsproxy
> -------------------------------
>                 Key: HADOOP-5023
>                 URL: https://issues.apache.org/jira/browse/HADOOP-5023
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: contrib/hdfsproxy
>            Reporter: Kan Zhang
>            Assignee: zhiyong zhang
>         Attachments: HADOOP-5023.patch, HADOOP-5023.patch, HADOOP-5023.patch
> We plan to add Tomcat support to hdfsproxy since Tomcat has good production support at

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
