hadoop-common-issues mailing list archives

From "Suresh Srinivas (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-1849) IPC server max queue size should be configurable
Date Wed, 24 Feb 2010 19:17:28 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-1849?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12837958#action_12837958 ]

Suresh Srinivas commented on HADOOP-1849:
-----------------------------------------

I think we should document the new parameter for the following reasons:
# The number of handlers is currently documented. The queue size per handler is closely related
to it and should be documented as well. These numbers need tweaking based on the size of
the cluster and the type of load. For example, a cluster with a shorter heartbeat period requires
a bigger queue for the same number of handlers. A cluster could also tolerate longer latency
instead of having to increase the number of handlers. (See the configuration sketch after this list.)
# The current approach of increasing the number of handlers has a drawback: each handler keeps
its own response buffer, so adding handlers can consume a significantly larger share of the heap.
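
A sketch of how the two settings could be documented side by side in hadoop-site.xml (the
key names and defaults below are illustrative assumptions, not necessarily what the patch uses):

    <!-- Illustrative hadoop-site.xml entries (hypothetical keys) -->
    <property>
      <name>ipc.server.handler.count</name>
      <value>10</value>
      <description>Number of RPC handler threads.</description>
    </property>
    <property>
      <name>ipc.server.handler.queue.size</name>
      <value>500</value>
      <description>Maximum number of calls queued per handler; the total
      call queue capacity is the handler count times this value.</description>
    </property>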

That said, to me the more important thing is to make this parameter configurable; whether it
is documented or not is secondary.

The queue size should remain dependent on the handler count. As the queue size increases, the
system can, up to a point (depending on time spent holding locks, the cost of each request,
etc.), benefit from more threads. Keeping the two coupled, as is done today, conveys this
relationship more clearly.
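
To make that coupling explicit, here is a minimal sketch (the configuration keys and the
simplified queue element type are assumptions for illustration, not the actual patch):

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import org.apache.hadoop.conf.Configuration;

    public class QueueSizing {
        public static void main(String[] args) {
            // Hypothetical keys, not the actual patch: keep the queue
            // capacity tied to the handler count, but make the per-handler
            // factor configurable instead of the hard-coded 100.
            Configuration conf = new Configuration();
            int handlers = conf.getInt("ipc.server.handler.count", 10);
            int perHandler = conf.getInt("ipc.server.handler.queue.size", 500);
            int maxQueueSize = handlers * perHandler;
            // Bounded queue of pending calls (element type simplified here;
            // the real server queues its internal Call objects).
            BlockingQueue<Object> callQueue =
                new LinkedBlockingQueue<Object>(maxQueueSize);
            System.out.println("call queue capacity = "
                + callQueue.remainingCapacity());
        }
    }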


> IPC server max queue size should be configurable
> ------------------------------------------------
>
>                 Key: HADOOP-1849
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1849
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: ipc
>            Reporter: Raghu Angadi
>            Assignee: Konstantin Shvachko
>         Attachments: handlerQueueSizeConfig.patch, handlerQueueSizeConfig.patch
>
>
> Currently the max queue size for the IPC server is set to (100 * handlers). Usually when RPC
failures are observed (e.g. HADOOP-1763), we increase the number of handlers and the problem goes
away. I think a big part of such a fix is the increase in max queue size. I think we should make
maxQsize per handler configurable (with a bigger default than 100). There are other improvements
as well (HADOOP-1841).
> The Server keeps reading RPC requests from clients. When the number of in-flight RPCs is larger
than maxQsize, the earliest RPCs are deleted. This is the main feedback the Server has for the
client. I have often heard from users that Hadoop doesn't handle bursty traffic.
> Say the handler count is 10 (the default) and the Server can handle 1000 RPCs a sec (quite
conservative/low for a typical server); that implies an RPC can wait only 1 sec before it is
dropped. If there are 3000 clients and all of them send RPCs around the same time (not very rare,
with heartbeats etc.), 2000 will be dropped. Instead of dropping the earliest RPCs, if the server
delayed reading new RPCs, the feedback to clients would be much smoother. I will file another
jira regarding queue management.
> For this jira I propose to make the queue size per handler configurable, with a larger default
(maybe 500).
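
To make the arithmetic in the description concrete, a quick back-of-the-envelope sketch using
the illustrative numbers above (10 handlers, the current 100-per-handler limit, 1000 RPCs/sec,
a 3000-client burst):

    public class QueueMath {
        public static void main(String[] args) {
            int handlers = 10;                  // default handler count
            int maxQueueSize = 100 * handlers;  // current hard-coded limit: 1000 calls
            int rpcsPerSec = 1000;              // assumed server throughput
            double maxWaitSec = (double) maxQueueSize / rpcsPerSec;  // 1.0 sec
            int burstClients = 3000;            // simultaneous RPCs in a burst
            int dropped = Math.max(0, burstClients - maxQueueSize);  // 2000 dropped
            System.out.printf("max wait %.1f sec, %d of %d RPCs dropped%n",
                maxWaitSec, dropped, burstClients);
        }
    }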

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

