hadoop-hdfs-issues mailing list archives

From "dhruba borthakur (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HDFS-297) Implement a pure Java CRC32 calculator
Date Wed, 08 Jul 2009 06:40:14 GMT

    [ https://issues.apache.org/jira/browse/HDFS-297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12728534#action_12728534 ]

dhruba borthakur commented on HDFS-297:
---------------------------------------

This looks good. Does it make sense to run the unit test for a longer period of time (more
than the current 24 iterations), just to get more coverage?

Is this new code written from scratch, or has it been taken from some Apache project? (Just
making sure that there aren't any GPL/LGPL issues.)
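
For illustration, a longer randomized cross-check against the JDK's built-in CRC32 might
look like the sketch below. This is only a sketch, not the attached test: the iteration
count and buffer sizes are arbitrary choices, and it assumes PureJavaCrc32 (from the
attached patch) implements java.util.zip.Checksum.

    import java.util.Random;
    import java.util.zip.CRC32;
    import java.util.zip.Checksum;

    public class LongerCrc32CrossCheck {
      // Sketch only: iteration count and maximum buffer size are arbitrary.
      private static final int ITERATIONS = 10000;
      private static final int MAX_LEN = 1 << 16;

      public static void main(String[] args) {
        Random random = new Random(1L);       // fixed seed so failures are reproducible
        Checksum theirs = new CRC32();        // the JDK's native (JNI-backed) CRC32
        Checksum ours = new PureJavaCrc32();  // assumed to implement Checksum, per the patch

        for (int i = 0; i < ITERATIONS; i++) {
          byte[] buf = new byte[random.nextInt(MAX_LEN) + 1];
          random.nextBytes(buf);

          theirs.reset();
          theirs.update(buf, 0, buf.length);
          ours.reset();
          ours.update(buf, 0, buf.length);

          if (theirs.getValue() != ours.getValue()) {
            throw new AssertionError("mismatch at iteration " + i);
          }
        }
        System.out.println("all " + ITERATIONS + " iterations matched");
      }
    }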

> Implement a pure Java CRC32 calculator
> --------------------------------------
>
>                 Key: HDFS-297
>                 URL: https://issues.apache.org/jira/browse/HDFS-297
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Owen O'Malley
>            Assignee: Todd Lipcon
>         Attachments: crc32-results.txt, hadoop-5598-evil.txt, hadoop-5598-hybrid.txt,
> hadoop-5598.txt, hadoop-5598.txt, hdfs-297.txt, PureJavaCrc32.java, PureJavaCrc32.java,
> PureJavaCrc32.java, TestCrc32Performance.java, TestCrc32Performance.java,
> TestCrc32Performance.java, TestPureJavaCrc32.java
>
>
> We've seen a reducer writing 200MB to HDFS with replication = 1 spending a long time
> in CRC calculation. In particular, it was spending 5 seconds in CRC calculation out of a
> total of 6 seconds for the write. I suspect that it is the Java/JNI boundary that is
> causing us grief.
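
For context, the pure-Java approach under discussion is the standard table-driven CRC,
which stays entirely inside the JVM and so avoids a JNI transition on every update call.
Below is a minimal sketch of a single-table, reflected CRC-32 (polynomial 0xEDB88320);
the attached PureJavaCrc32.java is the actual, more heavily optimized implementation.

    // Minimal single-table, reflected CRC-32 (polynomial 0xEDB88320). This only
    // illustrates the table-driven technique; it is not the attached class.
    public class SimpleTableCrc32 {
      private static final int[] TABLE = new int[256];
      static {
        for (int i = 0; i < 256; i++) {
          int c = i;
          for (int k = 0; k < 8; k++) {
            c = (c >>> 1) ^ (((c & 1) != 0) ? 0xEDB88320 : 0);
          }
          TABLE[i] = c;
        }
      }

      private int crc = 0xFFFFFFFF;  // CRC register initialized to all ones (standard CRC-32 init)

      public void update(byte[] b, int off, int len) {
        int c = crc;
        for (int i = off; i < off + len; i++) {
          // Fold one input byte into the register with a single table lookup.
          c = (c >>> 8) ^ TABLE[(c ^ b[i]) & 0xFF];
        }
        crc = c;
      }

      public long getValue() {
        return (~crc) & 0xFFFFFFFFL;  // final inversion, reported as an unsigned value
      }

      public void reset() {
        crc = 0xFFFFFFFF;
      }
    }

A single lookup table keeps the sketch short; faster implementations often use larger or
multiple tables so each loop iteration processes more than one byte.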

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

