hadoop-common-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Created] (HADOOP-8361) avoid out-of-memory problems when deserializing strings
Date Fri, 04 May 2012 21:18:48 GMT
Colin Patrick McCabe created HADOOP-8361:
--------------------------------------------

             Summary: avoid out-of-memory problems when deserializing strings
                 Key: HADOOP-8361
                 URL: https://issues.apache.org/jira/browse/HADOOP-8361
             Project: Hadoop Common
          Issue Type: Bug
            Reporter: Colin Patrick McCabe
            Assignee: Colin Patrick McCabe
            Priority: Minor


In HDFS, we want to be able to read the edit log without crashing on an out-of-memory condition.  Unfortunately, we currently cannot do this, because there are no limits on the length of certain data types we pull from the edit log.  In particular, we often read strings without setting any upper limit on the length we're prepared to accept.
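
The failure mode is easy to see in code.  As a rough sketch (this is illustrative, not the actual Hadoop serialization code), a bounded reader would validate the length prefix against a caller-supplied maximum before allocating anything:

    import java.io.DataInput;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;

    public final class BoundedStringReader {
      /**
       * Read a length-prefixed UTF-8 string, refusing lengths above maxBytes.
       * A corrupt or malicious edit log can encode an absurd length; checking
       * it before allocating avoids the OutOfMemoryError that an unchecked
       * "new byte[len]" would trigger.
       */
      public static String readString(DataInput in, int maxBytes) throws IOException {
        int len = in.readInt();
        if (len < 0 || len > maxBytes) {
          throw new IOException("string length " + len
              + " is outside the valid range [0, " + maxBytes + "]");
        }
        byte[] buf = new byte[len];
        in.readFully(buf);
        return new String(buf, StandardCharsets.UTF_8);
      }
    }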

It's not that limits don't exist: HDFS, for example, caps the maximum path length at 8000 UCS-2 characters, and Linux limits the maximum user name length to either 64 or 128 bytes, depending on the version you're running.  It's just that we're not exposing these limits to the deserialization functions that need to be aware of them.
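
One way to wire this up, sketched below with hypothetical constant and method names (these are not existing Hadoop identifiers), is to define each domain limit once and route every edit-log read through the bounded reader:

    import java.io.DataInput;
    import java.io.IOException;

    public final class EditLogStringLimits {
      /** HDFS caps paths at 8000 UCS-2 characters; UTF-8 needs up to 3 bytes per char. */
      public static final int MAX_PATH_BYTES = 8000 * 3;
      /** Linux user names are at most 64 or 128 bytes, depending on the version. */
      public static final int MAX_USER_NAME_BYTES = 128;

      /** Read a path from the edit log, enforcing the HDFS path-length limit. */
      public static String readPath(DataInput in) throws IOException {
        return BoundedStringReader.readString(in, MAX_PATH_BYTES);
      }

      /** Read a user name, enforcing the OS-level limit. */
      public static String readUserName(DataInput in) throws IOException {
        return BoundedStringReader.readString(in, MAX_USER_NAME_BYTES);
      }
    }

With limits enforced this close to the wire format, a truncated or corrupted record fails fast with an IOException instead of taking the whole process down.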

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
