hadoop-common-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Updated] (HADOOP-9676) make maximum RPC buffer size configurable
Date Mon, 01 Jul 2013 19:50:22 GMT

     [ https://issues.apache.org/jira/browse/HADOOP-9676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Colin Patrick McCabe updated HADOOP-9676:
-----------------------------------------

     Target Version/s: 2.1.0-beta
    Affects Version/s:     (was: 2.2.0)
                       2.1.0-beta
    
> make maximum RPC buffer size configurable
> -----------------------------------------
>
>                 Key: HADOOP-9676
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9676
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.1.0-beta
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HADOOP-9676.001.patch, HADOOP-9676.003.patch
>
>
> Currently the RPC server just allocates however much memory the client asks for, without
> validation.  It would be nice to make the maximum RPC buffer size configurable.  This would
> prevent a rogue client from bringing down the NameNode (or other Hadoop daemon) with a few
> requests for 2 GB buffers.  It would also make it easier to debug issues with super-large
> RPCs or malformed headers, since OOMs can be difficult for developers to reproduce.
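
The check the issue asks for can be sketched as follows: before allocating a buffer for an incoming RPC, compare the client-declared length against a configured cap and reject the request instead of allocating.  This is a minimal illustration, not the committed patch; the class name, method name, and the 64 MB default are hypothetical placeholders for whatever configuration key the final change introduces.

```java
import java.io.IOException;

// Hypothetical sketch of a length check applied to the client-declared
// RPC data length before any buffer is allocated.  Names and the default
// cap are illustrative, not taken from the actual HADOOP-9676 patch.
public class RpcLengthCheck {
    // Illustrative default cap: 64 MB.
    static final int DEFAULT_MAX_DATA_LENGTH = 64 * 1024 * 1024;

    /**
     * Rejects a request whose declared length is negative (malformed
     * header) or larger than the configured maximum, so the server never
     * allocates a buffer of that size.
     */
    static void checkDataLength(int dataLength, int maxDataLength)
            throws IOException {
        if (dataLength < 0 || dataLength > maxDataLength) {
            throw new IOException("Requested data length " + dataLength
                + " is out of the allowed range [0, " + maxDataLength + "]");
        }
        // Only after passing the check would the server allocate:
        // byte[] buf = new byte[dataLength];
    }
}
```

With such a check in place, a rogue client declaring a 2 GB payload gets an immediate IOException (and a loggable message) rather than driving the daemon into an OOM.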

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
