hadoop-hdfs-issues mailing list archives

From "Chris Douglas (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-481) Bug Fixes
Date Mon, 24 Aug 2009 05:16:59 GMT

    [ https://issues.apache.org/jira/browse/HDFS-481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12746733#action_12746733 ]

Chris Douglas commented on HDFS-481:
------------------------------------

Thanks for fixing the lib packaging. There was just one pair of changes that I wanted to ask
about:
{noformat}
-    <display-name>HDFS Proxy</display-name>
+    <display-name>HDFS Proxy Forward</display-name>
{noformat}
{noformat}
-    if (dstContext == null) {
-      LOG.info("Context non-exist or restricted from access: " + version);
+    // avoid infinite forwarding.
+    if (dstContext == null
+        || "HDFS Proxy Forward".equals(dstContext.getServletContextName())) {
+      LOG.error("Context (" + version
+          + ".war) non-exist or restricted from access");
{noformat}
This is to prevent the forwarding servlet from passing requests to itself? When does this
occur? Is there another way to detect/prevent it, other than (what looks like) taking a configurable
string and hard-coding a check for it?
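One alternative the question hints at is to compare the resolved context against the forwarder itself rather than against a configurable display name. The sketch below is a hypothetical, self-contained model of that idea, not hdfsproxy's code: the class name, the context map, and the root-fallback rule are all illustrative stand-ins for Tomcat's cross-context `ServletContext.getContext()` behavior.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical model of the cross-context lookup behind the loop: when the
// requested version context is not deployed, the lookup falls back to the
// forwarding webapp's own context, so a blind forward would recurse forever.
// Names and the fallback rule here are illustrative, not hdfsproxy's code.
public class ForwardGuard {

    // Simulated deployed contexts, keyed by context path.
    static final Map<String, String> contexts = new HashMap<>();

    // Mimics ServletContext.getContext(): an unknown path resolves to the
    // root context, i.e. the forwarding webapp itself.
    static String lookup(String path) {
        return contexts.getOrDefault(path, contexts.get("/"));
    }

    // Safe to forward only when the target resolves to a context that is
    // not the forwarder itself -- an identity-style check rather than a
    // comparison against a configurable display name.
    static boolean safeToForward(String selfContext, String targetPath) {
        String dst = lookup(targetPath);
        return dst != null && !dst.equals(selfContext);
    }

    public static void main(String[] args) {
        contexts.put("/", "proxy-forward");   // the forwarding war at /
        contexts.put("/v0.20", "hdfs-0.20");  // a deployed target war

        System.out.println(safeToForward("proxy-forward", "/v0.20"));   // true
        System.out.println(safeToForward("proxy-forward", "/missing")); // false
    }
}
```

The false case models the bug: the missing path resolves back to the forwarder, so forwarding must be refused instead of attempted.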

The rest of the changes look reasonable.

> Bug Fixes
> ---------
>
>                 Key: HDFS-481
>                 URL: https://issues.apache.org/jira/browse/HDFS-481
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: contrib/hdfsproxy
>    Affects Versions: 0.21.0
>            Reporter: zhiyong zhang
>            Assignee: zhiyong zhang
>         Attachments: HDFS-481.patch, HDFS-481.patch, HDFS-481.patch, HDFS-481.patch
>
>
> 1. hadoop-version is not recognized if the ant command is run from src/contrib/ or from
> src/contrib/hdfsproxy.
>
> If ant is run from $HADOOP_HDFS_HOME, hadoop-version is passed to contrib's build through
> subant. But if it is run from src/contrib or src/contrib/hdfsproxy, hadoop-version is not
> recognized.
> 2. The ssl.client.do.not.authenticate.server setting can currently only be set through
> HDFS's configuration files; it needs to be moved to ssl-client.xml.
> 3. Fixed some race conditions in LdapIpDirFilter.java (userId, groupName, and paths need
> to be moved into doFilter() instead of being class members).
> 4. Addressed the following StackOverflowError. 
> ERROR [org.apache.catalina.core.ContainerBase.[Catalina].[localh
> ost].[/].[proxyForward]] Servlet.service() for servlet proxyForward threw exception
> java.lang.StackOverflowError
>         at org.apache.catalina.core.ApplicationHttpRequest.getAttribute(ApplicationHttpR
> equest.java:229)
>      This occurs when the target war (/target.war) does not exist: the forwarding war
> forwards to its parent context path /, which is the forwarding war itself, causing an
> infinite loop. Added "HDFS Proxy Forward".equals(dstContext.getServletContextName()) to
> the if logic to break the loop.
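For item 2 above, the move to ssl-client.xml would presumably follow the standard Hadoop configuration file format. This is a hypothetical fragment: the property name comes from the issue text, but the value and description are illustrative only.

```xml
<!-- Hypothetical ssl-client.xml fragment; the property name comes from the
     issue above, the value and description are illustrative only. -->
<configuration>
  <property>
    <name>ssl.client.do.not.authenticate.server</name>
    <value>true</value>
    <description>If true, the client skips verification of the server's
    SSL certificate (illustrative description).</description>
  </property>
</configuration>
```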

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

