hadoop-hdfs-issues mailing list archives

From "Haohui Mai (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-5995) TestFSEditLogLoader#testValidateEditLogWithCorruptBody gets OutOfMemoryError and dumps heap.
Date Fri, 21 Feb 2014 23:03:29 GMT

    [ https://issues.apache.org/jira/browse/HDFS-5995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13908963#comment-13908963 ]

Haohui Mai commented on HDFS-5995:
----------------------------------

Currently, almost all ops assume that the data is not corrupted. In practice, it is also
difficult for the code to tell whether the data is corrupted.

The unit test seems to break this assumption. In my opinion, for now it might be better to
limit the corruption to the op code only, or, to make it even simpler, to create an edit
log that contains invalid ops.
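Restricting the corruption to the op code could look roughly like the sketch below. This is a hypothetical illustration, not the actual TestFSEditLogLoader code: the file contents, the offset, and the value 0x7A (assumed here to be an invalid op code) are all made up for the example.

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

public class CorruptOpCode {
    // Overwrite a single byte at the given offset, leaving the rest of the
    // record intact. 0x7A is a stand-in for a value outside the valid op
    // code range.
    static void corruptByteAt(Path editLog, long offset) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(editLog.toFile(), "rw")) {
            raf.seek(offset);
            raf.writeByte(0x7A);
        }
    }

    public static void main(String[] args) throws IOException {
        // A stand-in "edit log" with known bytes.
        Path log = Files.createTempFile("edits", ".tmp");
        Files.write(log, new byte[] {1, 2, 3, 4, 5});

        corruptByteAt(log, 2); // corrupt only the byte at offset 2

        byte[] data = Files.readAllBytes(log);
        // Only the targeted byte changed; the surrounding bytes are intact.
        if (data[0] != 1 || data[1] != 2 || data[3] != 4 || data[4] != 5) {
            throw new AssertionError("bytes outside the target offset changed");
        }
        if (data[2] != (byte) 0x7A) {
            throw new AssertionError("target byte was not corrupted");
        }
        System.out.println("corrupted byte = " + (data[2] & 0xFF));
        Files.delete(log);
    }
}
```

Because only one known byte is touched, the loader's failure mode is predictable (an unrecognized op code) rather than whatever happens to follow from arbitrary corruption, such as a bogus length field triggering a huge allocation.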

> TestFSEditLogLoader#testValidateEditLogWithCorruptBody gets OutOfMemoryError and dumps heap.
> --------------------------------------------------------------------------------------------
>
>                 Key: HDFS-5995
>                 URL: https://issues.apache.org/jira/browse/HDFS-5995
>             Project: Hadoop HDFS
>          Issue Type: Test
>          Components: namenode, test
>    Affects Versions: 3.0.0
>            Reporter: Chris Nauroth
>            Assignee: Chris Nauroth
>            Priority: Minor
>         Attachments: HDFS-5995.1.patch
>
>
> {{TestFSEditLogLoader#testValidateEditLogWithCorruptBody}} is experiencing {{OutOfMemoryError}}
> and dumping heap since the merge of HDFS-4685.  This doesn't actually cause the test to fail,
> because it's a failure test that corrupts an edit log intentionally.  Still, this might cause
> confusion if someone reviews the build logs and thinks this is a more serious problem.



--
This message was sent by Atlassian JIRA
(v6.1.5#6160)
