hadoop-common-issues mailing list archives

From "Philip Zeyliger (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-6974) Configurable header buffer size for Hadoop HTTP server
Date Mon, 27 Sep 2010 05:48:32 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-6974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12915131#action_12915131 ]

Philip Zeyliger commented on HADOOP-6974:

I'm +1 on the idea.  I've absolutely run into this limit before, when running web apps on the
same host.

"dfs.http.header.buffer.size" seems like the wrong name for this parameter, since HttpServer
is also used by other places.  Perhaps "core.http.header.buffer.size"?

I would be in favor of making the limit larger by default.

Typically, I believe, additions to config variables include a change to core-default.xml to
document the variable.  It would be appropriate to see that as part of this patch, too.
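
A core-default.xml entry along these lines is what's being asked for; this is only a sketch, and the property name and default value here follow the naming discussion above rather than anything committed:

```xml
<property>
  <name>core.http.header.buffer.size</name>
  <value>65536</value>
  <description>Size in bytes of the HTTP header buffer used by the
  embedded Jetty server. Increase this if requests carry large
  headers (for example, big domain cookies) and the server starts
  returning 413 errors.</description>
</property>
```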

-- Philip

> Configurable header buffer size for Hadoop HTTP server
> ------------------------------------------------------
>                 Key: HADOOP-6974
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6974
>             Project: Hadoop Common
>          Issue Type: Improvement
>            Reporter: Paul Butler
>         Attachments: hadoop-6974.patch
> This patch adds a configurable parameter dfs.http.header.buffer.size to Hadoop which
> allows the buffer size to be configured from the XML configuration.
> This fixes an issue that came up in an environment where the Hadoop servers share a domain
> with other web applications that use domain cookies. The large cookies overwhelmed Jetty's
> header buffer, causing it to return a 413 error.
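
The failure mode described above can be reproduced against a Jetty-backed Hadoop web UI by sending an oversized Cookie header. A sketch, where the NameNode address is a placeholder and the 4 KB figure assumes Jetty's default header buffer of that era:

```shell
# Build a Cookie header well past a 4 KB header buffer
BIG_COOKIE="session=$(head -c 8192 /dev/zero | tr '\0' 'x')"
echo "cookie header length: ${#BIG_COOKIE}"

# Hypothetical request against a NameNode web UI; with the default
# buffer, Jetty answers 413 instead of serving the page:
#   curl -s -o /dev/null -w '%{http_code}\n' \
#        -H "Cookie: $BIG_COOKIE" http://namenode:50070/
```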

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
