Colin Patrick McCabe created HDFS-3440:
------------------------------------------
Summary: should more effectively limit stream memory consumption when reading corrupt edit logs
Key: HDFS-3440
URL: https://issues.apache.org/jira/browse/HDFS-3440
Project: Hadoop HDFS
Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
Attachments: number1.001.patch
Currently, we call in.mark(100MB) before reading an opcode out of the edit log. However, when reading bogus data (for example, a corrupt length field), this can cause us to buffer all 100 MB, which is not what we want. It can also easily make some corrupt edit log files unreadable.
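For illustration, the current pattern is roughly the following (a paraphrased sketch, not the actual FSEditLogOp.Reader code; decodeOp is a stand-in for the real opcode decoding):

{code}
import java.io.BufferedInputStream;
import java.io.IOException;

class OpReaderSketch {
  Object readOp(BufferedInputStream in) throws IOException {
    // Permit reset() after up to 100 MB. A bogus length field in a
    // corrupt log can force the stream to actually buffer all of it.
    in.mark(100 * 1024 * 1024);
    try {
      return decodeOp(in);
    } catch (IOException e) {
      in.reset();  // rewind for recovery; the buffered bytes stay resident
      throw e;
    }
  }

  // Stand-in for the real opcode decoder.
  Object decodeOp(BufferedInputStream in) throws IOException {
    throw new IOException("stand-in");
  }
}
{code}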
We should have a stream limiter interface that throws a clean IOException when we hit this situation, rather than consuming a huge amount of memory.
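Something along these lines, for example (just a sketch; the interface and method names here are illustrative, not a final API):

{code}
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

/** Caps how many bytes may be read until the limit is cleared. */
interface StreamLimiter {
  void setLimit(long limit);
  void clearLimit();
}

class LimitedInputStream extends FilterInputStream implements StreamLimiter {
  private long limit = Long.MAX_VALUE;

  LimitedInputStream(InputStream in) {
    super(in);
  }

  @Override
  public void setLimit(long limit) {
    this.limit = limit;
  }

  @Override
  public void clearLimit() {
    this.limit = Long.MAX_VALUE;
  }

  @Override
  public int read() throws IOException {
    if (limit <= 0) {
      throw new IOException("Tried to read more than the stream limit");
    }
    int b = super.read();
    if (b != -1) {
      limit--;
    }
    return b;
  }

  @Override
  public int read(byte[] b, int off, int len) throws IOException {
    if (limit <= 0) {
      throw new IOException("Tried to read more than the stream limit");
    }
    int n = super.read(b, off, (int) Math.min(len, limit));
    if (n > 0) {
      limit -= n;
    }
    return n;
  }
}
{code}

The opcode reader would then call setLimit(maxOpSize) before decoding each op and clearLimit() afterwards, so a corrupt length field fails fast with a clean IOException instead of silently buffering up to 100 MB.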