hadoop-hdfs-issues mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-297) Implement a pure Java CRC32 calculator
Date Wed, 08 Jul 2009 18:29:15 GMT

    https://issues.apache.org/jira/browse/HDFS-297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12728838#action_12728838

dhruba borthakur commented on HDFS-297:

> During testing I ran on many more iterations than just the 24. W

Thanks for the info.

> I think we now have the fastest crc32 in the west

Way to go!

> Implement a pure Java CRC32 calculator
> --------------------------------------
>                 Key: HDFS-297
>                 URL: https://issues.apache.org/jira/browse/HDFS-297
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Owen O'Malley
>            Assignee: Todd Lipcon
>         Attachments: crc32-results.txt, hadoop-5598-evil.txt, hadoop-5598-hybrid.txt,
hadoop-5598.txt, hadoop-5598.txt, hdfs-297.txt, PureJavaCrc32.java, PureJavaCrc32.java, PureJavaCrc32.java,
TestCrc32Performance.java, TestCrc32Performance.java, TestCrc32Performance.java, TestPureJavaCrc32.java
> We've seen a reducer writing 200MB to HDFS with replication = 1 spending a long time
> in CRC calculation. In particular, it was spending 5 seconds in CRC calculation out of a
> total of 6 seconds for the write. I suspect that it is the Java/JNI boundary crossing that
> is causing us grief.
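
For readers following along: the idea behind a pure Java CRC32 is a table-driven computation that never crosses the JNI boundary. The sketch below is a minimal single-table version of that technique using the standard IEEE 802.3 reflected polynomial (0xEDB88320), matching `java.util.zip.CRC32` output; it is illustrative only, and the `SimpleCrc32` class name is invented here. The actual `PureJavaCrc32.java` attached to this issue uses further optimizations (e.g. processing multiple bytes per step) to reach the reported speed.

```java
// Minimal table-driven CRC-32 sketch (IEEE polynomial, reflected form).
// Illustrative only; Hadoop's PureJavaCrc32 is more heavily optimized.
public class SimpleCrc32 {

    // Precomputed lookup table: CRC of every possible byte value.
    private static final int[] TABLE = new int[256];
    static {
        for (int n = 0; n < 256; n++) {
            int c = n;
            for (int k = 0; k < 8; k++) {
                // Shift one bit at a time, XORing in the reflected polynomial
                // whenever the low bit is set.
                c = (c & 1) != 0 ? 0xEDB88320 ^ (c >>> 1) : c >>> 1;
            }
            TABLE[n] = c;
        }
    }

    // Running CRC register, kept in its inverted internal form.
    private int crc = 0xFFFFFFFF;

    public void update(byte[] b, int off, int len) {
        int c = crc;
        for (int i = off; i < off + len; i++) {
            // One table lookup per input byte; no native call per buffer.
            c = TABLE[(c ^ b[i]) & 0xFF] ^ (c >>> 8);
        }
        crc = c;
    }

    public long getValue() {
        // Final inversion, widened to an unsigned 32-bit value in a long,
        // mirroring java.util.zip.Checksum#getValue.
        return (~crc) & 0xFFFFFFFFL;
    }
}
```

Because it implements the same algorithm, its output can be cross-checked against `java.util.zip.CRC32` on any buffer, which is essentially what `TestPureJavaCrc32.java` does for the real implementation.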

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
