hadoop-common-dev mailing list archives

From "Hadoop QA (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HADOOP-3737) CompressedWritable throws OutOfMemoryError
Date Sun, 13 Jul 2008 00:29:31 GMT

    [ https://issues.apache.org/jira/browse/HADOOP-3737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12613140#action_12613140

Hadoop QA commented on HADOOP-3737:

-1 overall.  Here are the results of testing the latest attachment 
  against trunk revision 676069.

    +1 @author.  The patch does not contain any @author tags.

    -1 tests included.  The patch doesn't appear to include any new or modified tests.
                        Please justify why no tests are needed for this patch.

    +1 javadoc.  The javadoc tool did not generate any warning messages.

    +1 javac.  The applied patch does not increase the total number of javac compiler warnings.

    +1 findbugs.  The patch does not introduce any new Findbugs warnings.

    +1 release audit.  The applied patch does not increase the total number of release audit warnings.

    +1 core tests.  The patch passed core unit tests.

    +1 contrib tests.  The patch passed contrib unit tests.

Test results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2849/testReport/
Findbugs warnings: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2849/artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Checkstyle results: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2849/artifact/trunk/build/test/checkstyle-errors.html
Console output: http://hudson.zones.apache.org/hudson/job/Hadoop-Patch/2849/console

This message is automatically generated.

> CompressedWritable throws OutOfMemoryError
> ------------------------------------------
>                 Key: HADOOP-3737
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3737
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: io
>    Affects Versions: 0.17.0
>            Reporter: Grant Glouser
>         Attachments: HADOOP-3737.patch
> We were seeing OutOfMemoryErrors with stack traces like the following (Hadoop 0.17.0):
> {noformat}
> java.lang.OutOfMemoryError
>         at java.util.zip.Deflater.init(Native Method)
>         at java.util.zip.Deflater.<init>(Deflater.java:123)
>         at java.util.zip.Deflater.<init>(Deflater.java:132)
>         at org.apache.hadoop.io.CompressedWritable.write(CompressedWritable.java:71)
>         at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:90)
>         at org.apache.hadoop.io.serializer.WritableSerialization$WritableSerializer.serialize(WritableSerialization.java:77)
>         at org.apache.hadoop.io.SequenceFile$Writer.append(SequenceFile.java:1016)
>         [...]
> {noformat}
> A Google search found the following long-standing Java bug, in which use of java.util.zip.Deflater causes an OutOfMemoryError:
> [http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=4797189]
> CompressedWritable instantiates a Deflater, but does not call {{deflater.end()}}.  It should do that in order to release the Deflater's resources immediately, instead of waiting for the object to be finalized.
> We applied this change locally and saw much more stable memory usage in our app.
> This may also affect the SequenceFile compression types, because org.apache.hadoop.io.compress.zlib.BuiltInZlib{Deflater,Inflater} extend java.util.zip.{Deflater,Inflater}.  org.apache.hadoop.io.compress.Compressor defines an end() method, but I do not see that this method is ever called.
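
The fix the reporter describes — calling end() eagerly instead of relying on finalization — can be sketched as follows. Note this is an illustrative sketch, not the actual HADOOP-3737 patch; the DeflaterEndDemo class and its compress helper are hypothetical names.

```java
import java.io.ByteArrayOutputStream;
import java.util.zip.Deflater;

public class DeflaterEndDemo {

    // Compress a byte array with java.util.zip.Deflater, releasing the
    // Deflater's native zlib buffers immediately via end() in a finally
    // block, rather than waiting for the object to be finalized.
    static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater();
        try {
            deflater.setInput(input);
            deflater.finish();
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[64];
            while (!deflater.finished()) {
                int n = deflater.deflate(buf);
                out.write(buf, 0, n);
            }
            return out.toByteArray();
        } finally {
            deflater.end();  // frees native memory now, not at finalization
        }
    }

    public static void main(String[] args) throws Exception {
        byte[] compressed = compress("hello hello hello".getBytes("UTF-8"));
        System.out.println(compressed.length > 0);
    }
}
```

Without the finally/end() pair, each call leaks native zlib memory until the garbage collector happens to finalize the Deflater, which is exactly the failure mode in the Sun bug linked above.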

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
