hadoop-hdfs-issues mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-3440) should more effectively limit stream memory consumption when reading corrupt edit logs
Date Fri, 18 May 2012 05:04:12 GMT

    [ https://issues.apache.org/jira/browse/HDFS-3440?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13278578#comment-13278578 ]

Hadoop QA commented on HDFS-3440:
---------------------------------

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12527995/HDFS-3440.002.patch
  against trunk revision .

    +1 @author.  The patch does not contain any @author tags.

    +1 tests included.  The patch appears to include 2 new or modified test files.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    -1 javadoc.  The javadoc tool appears to have generated 2 warning messages.

    +1 eclipse:eclipse.  The patch built with eclipse:eclipse.

    +1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit
warnings.

    +1 core tests.  The patch passed unit tests in hadoop-hdfs-project/hadoop-hdfs hadoop-hdfs-project/hadoop-hdfs/src/contrib/bkjournal.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/2469//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/2469//console

This message is automatically generated.
                
> should more effectively limit stream memory consumption when reading corrupt edit logs
> --------------------------------------------------------------------------------------
>
>                 Key: HDFS-3440
>                 URL: https://issues.apache.org/jira/browse/HDFS-3440
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>            Priority: Minor
>         Attachments: HDFS-3440.001.patch, HDFS-3440.002.patch
>
>
> Currently, we do in.mark(100MB) before reading an opcode out of the edit log.  However,
> this could result in us using all of those 100 MB when reading bogus data, which is not
> what we want.  It could also easily make some corrupt edit log files unreadable.
> We should have a stream limiter interface that raises a clean IOException when we're in
> this situation, rather than consuming huge amounts of memory.
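
To illustrate the idea, here is a minimal sketch of such a limiter as a
FilterInputStream wrapper. The names (LimitedInputStream, setLimit) and the
structure are illustrative assumptions, not the actual API added by the
HDFS-3440 patch:

    import java.io.FilterInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    // Hypothetical sketch: enforce a per-opcode byte budget so a bogus
    // length field fails fast instead of forcing mark() to buffer 100 MB.
    class LimitedInputStream extends FilterInputStream {
      private long remaining = Long.MAX_VALUE; // bytes left in current budget

      LimitedInputStream(InputStream in) {
        super(in);
      }

      // Reset the budget before decoding each opcode.
      void setLimit(long limit) {
        remaining = limit;
      }

      @Override
      public int read() throws IOException {
        if (remaining <= 0) {
          throw new IOException("Read limit exceeded; edit log is likely corrupt");
        }
        int b = in.read();
        if (b >= 0) {
          remaining--;
        }
        return b;
      }

      @Override
      public int read(byte[] buf, int off, int len) throws IOException {
        if (remaining <= 0) {
          throw new IOException("Read limit exceeded; edit log is likely corrupt");
        }
        int n = in.read(buf, off, (int) Math.min((long) len, remaining));
        if (n > 0) {
          remaining -= n;
        }
        return n;
      }
    }

In this sketch, the edit log loader would call setLimit with the maximum
legal opcode size before reading each opcode; corrupt data then produces a
clean IOException rather than unbounded buffering.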

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
