hadoop-hdfs-issues mailing list archives

From "zhiyong zhang (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-481) Bug Fixes
Date Mon, 24 Aug 2009 17:49:59 GMT

    [ https://issues.apache.org/jira/browse/HDFS-481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12746977#action_12746977 ]

zhiyong zhang commented on HDFS-481:
------------------------------------

Hi Chris, thanks for the comment.
Right, that piece of code is there to prevent the forwarding servlet from forwarding requests
to itself. That happens when the forwarding servlet cannot find a matching servlet context
to forward the request to.

For instance, if the user request path is /hadoop20 and hadoop20.war does not exist under
the same webapps/ folder as ROOT.war, the forwarding servlet (via ServletContext.getContext())
matches the longest deployed context path. Since it cannot find /hadoop20, it matches the
parent path /, which maps to ROOT.war, the forwarding servlet itself. The request then loops
forever, and eventually a java.lang.StackOverflowError is thrown.

I've thought of using curContext.getServletContextName().equals(dstContext.getServletContextName())
to tell the two apart, but that would break the unit tests. The Cactus unit test framework
cannot do cross-context forwarding at this stage, so all forwarding happens within the same
context; ServletContext.getContext() then always returns the same context, the check would
reject every forward, and the unit test would get stuck there.
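For reference, the rejected check would slot into the doGet() of the sketch above roughly
like this (a fragment, reusing the curContext and dstContext variables from that sketch):

    // Rejected alternative: compare the current and destination context names.
    // In the Cactus tests all forwarding stays inside one context, so this
    // condition is always true and the test can never forward anywhere.
    if (curContext.getServletContextName()
            .equals(dstContext.getServletContextName())) {
      resp.sendError(HttpServletResponse.SC_NOT_FOUND, "target webapp not found");
      return;
    }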

I couldn't think of any other way to work around this at this stage. Do you have any better
ideas?

Thanks.

> Bug Fixes
> ---------
>
>                 Key: HDFS-481
>                 URL: https://issues.apache.org/jira/browse/HDFS-481
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: contrib/hdfsproxy
>    Affects Versions: 0.21.0
>            Reporter: zhiyong zhang
>            Assignee: zhiyong zhang
>         Attachments: HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch
>
>
> 1. hadoop-version is not recognized if the ant command is run from src/contrib/ or from src/contrib/hdfsproxy.
> If ant is run from $HADOOP_HDFS_HOME, hadoop-version is passed to contrib's build through
subant, but if it is run from src/contrib or src/contrib/hdfsproxy, hadoop-version is not
recognized.
> 2. ssl.client.do.not.authenticate.server can currently only be set through hdfs's configuration
files; this setting needs to move to ssl-client.xml.
> 3. Solve some race conditions in LdapIpDirFilter.java (userId, groupName, and paths need
to become locals in doFilter() instead of class members); a sketch follows after this quoted
description.
> 4. Addressed the following StackOverflowError:
> ERROR [org.apache.catalina.core.ContainerBase.[Catalina].[localhost].[/].[proxyForward]] Servlet.service() for servlet proxyForward threw exception
> java.lang.StackOverflowError
>         at org.apache.catalina.core.ApplicationHttpRequest.getAttribute(ApplicationHttpRequest.java:229)
> This happens when the target war (/target.war) does not exist: the forwarding war forwards
to its parent context path /, which is the forwarding war itself, causing an infinite loop.
Added "HDFS Proxy Forward".equals(dstContext.getServletContextName()) to the if logic to
break the loop.
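Regarding item 3 above, a minimal sketch of the shape of that fix (the class name, helper
methods, and attribute key are hypothetical, not the actual LdapIpDirFilter code): per-request
values become locals inside doFilter() instead of shared filter fields.

    import java.io.IOException;

    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;

    public class RaceFreeFilterSketch implements Filter {
      // BAD (before): one filter instance serves all requests, so fields like
      // these are shared mutable state and race under concurrent requests.
      // private String userId;
      // private String groupName;

      @Override
      public void init(FilterConfig conf) { }

      @Override
      public void doFilter(ServletRequest req, ServletResponse resp,
          FilterChain chain) throws IOException, ServletException {
        // GOOD (after): resolve per-request data into locals, so each thread
        // works on its own copies.
        String userId = lookupUserId(req);       // hypothetical helper
        String groupName = lookupGroup(userId);  // hypothetical helper
        req.setAttribute("authorized.userid", userId);      // illustrative key
        req.setAttribute("authorized.groupname", groupName); // illustrative key
        chain.doFilter(req, resp);
      }

      @Override
      public void destroy() { }

      // Stand-in lookups so the sketch compiles; the real filter queries LDAP.
      private String lookupUserId(ServletRequest req) { return "user"; }
      private String lookupGroup(String userId) { return "group"; }
    }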

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

