hadoop-hdfs-issues mailing list archives

From "Colin Patrick McCabe (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-8965) Harden edit log reading code against out of memory errors
Date Mon, 31 Aug 2015 18:39:46 GMT

    [ https://issues.apache.org/jira/browse/HDFS-8965?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14723839#comment-14723839 ]

Colin Patrick McCabe commented on HDFS-8965:
--------------------------------------------

bq. Currently the Jenkins report will not show this findbugs warning; once this patch is committed, we can see it. Can you take care of it? I had seen something similar in HDFS-8969...

It looks like the findbugs warning was filed against Hadoop Common rather than Hadoop HDFS, which was a bit confusing.  I was able to track down the warning:

bq. org.apache.hadoop.hdfs.server.namenode.FSEditLogOp$LengthPrefixedReader.decodeOp() ignores result of java.io.DataInputStream.skip(long)

I have replaced the call to {{skip}} with a call to {{IOUtils#skipFully}}.
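
For illustration, a minimal sketch of the difference (the class and method names below are placeholders, not the actual {{LengthPrefixedReader#decodeOp}} code):

{code:java}
import java.io.DataInputStream;
import java.io.IOException;

import org.apache.hadoop.io.IOUtils;

class SkipFullyExample {
  // Hypothetical helper showing the before/after of the findbugs fix.
  static void skipBytes(DataInputStream in, long bytesToSkip)
      throws IOException {
    // Before: DataInputStream#skip may legally skip fewer bytes than
    // requested; discarding its return value is what findbugs flags.
    //   in.skip(bytesToSkip);

    // After: IOUtils#skipFully keeps skipping until the requested count
    // has been consumed, and throws EOFException if the stream ends first.
    IOUtils.skipFully(in, bytesToSkip);
  }
}
{code}

Besides silencing findbugs, this turns a silent short skip into a hard error, which is what we want when reading framed edit log data.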

> Harden edit log reading code against out of memory errors
> ---------------------------------------------------------
>
>                 Key: HDFS-8965
>                 URL: https://issues.apache.org/jira/browse/HDFS-8965
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>    Affects Versions: 2.0.0-alpha
>            Reporter: Colin Patrick McCabe
>            Assignee: Colin Patrick McCabe
>         Attachments: HDFS-8965.001.patch, HDFS-8965.002.patch, HDFS-8965.003.patch, HDFS-8965.004.patch, HDFS-8965.005.patch
>
>
> We should harden the edit log reading code against out of memory errors.  Now that each
> op has a length prefix and a checksum, we can validate the checksum before trying to load
> the Op data.  This should avoid out of memory errors when trying to load garbage data as
> Op data.
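
A hedged sketch of that hardening idea (the frame layout, the CRC32 trailer, and the {{MAX_OP_SIZE}} bound below are illustrative assumptions, not the exact FSEditLogOp format): bound the length prefix and verify the checksum before handing the bytes to the op deserializer.

{code:java}
import java.io.DataInputStream;
import java.io.IOException;
import java.util.zip.CRC32;

class OpFrameReader {
  // Illustrative cap: a garbage length prefix larger than this is
  // rejected up front instead of driving a huge buffer allocation.
  private static final int MAX_OP_SIZE = 50 * 1024 * 1024;

  /** Returns the op payload only after its checksum has been verified. */
  static byte[] readValidatedOp(DataInputStream in) throws IOException {
    int len = in.readInt();                    // length prefix
    if (len < 0 || len > MAX_OP_SIZE) {
      throw new IOException("Corrupt edit log: bad op length " + len);
    }
    byte[] data = new byte[len];
    in.readFully(data);                        // bounded read
    long stored = in.readInt() & 0xffffffffL;  // checksum trailer (assumed CRC32)
    CRC32 crc = new CRC32();
    crc.update(data, 0, len);
    if (crc.getValue() != stored) {
      throw new IOException("Corrupt edit log: checksum mismatch");
    }
    return data;  // only now is it safe to deserialize the op
  }
}
{code}

The key point is that parsing is never driven by unvalidated input, and the only allocation before validation is capped by {{MAX_OP_SIZE}}.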



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
