hadoop-hdfs-issues mailing list archives

From "Hemanth Makkapati (Commented) (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HDFS-503) Implement erasure coding as a layer on HDFS
Date Thu, 24 Nov 2011 00:54:41 GMT

    [ https://issues.apache.org/jira/browse/HDFS-503?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13156434#comment-13156434 ]

Hemanth Makkapati commented on HDFS-503:

I am a beginner with Hadoop and have only recently started delving into the code.
While trying to get RAID up and running, I observed the following exception in the log:

ERROR org.apache.hadoop.raid.RaidNode: java.lang.NullPointerException
        at org.apache.hadoop.raid.RaidNode.tmpHarPathForCode(RaidNode.java:1491)
        at org.apache.hadoop.raid.RaidNode.doHar(RaidNode.java:1217)
        at org.apache.hadoop.raid.RaidNode.access$300(RaidNode.java:73)
        at org.apache.hadoop.raid.RaidNode$HarMonitor.run(RaidNode.java:1371)
        at java.lang.Thread.run(Thread.java:636)

The reason seems to be the absence of an 'erasurecode' tag in the raid configuration file,
which, in my case, is very similar to the sample configuration file provided. Once the tag
is introduced, with a value of either XOR or RS, I no longer see the exception. Also, the
README file doesn't mention anything about such a tag.
Please confirm whether my observation is correct.
I thought of posting it here for the benefit of others.
BTW, I checked out the code from trunk.
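
For anyone hitting the same NPE, this is roughly the policy stanza I ended up with. The element names, nesting, and tag casing here follow my local copy of the sample raid.xml and are best-effort, so treat this as illustrative rather than authoritative:

```xml
<!-- Illustrative raid.xml policy stanza (names/nesting per my local sample
     file; may differ on other branches). The key addition is the
     'erasurecode' tag, whose value can be xor or rs. -->
<configuration>
  <srcPath prefix="hdfs://localhost:9000/user/test/raidtest">
    <policy name="RaidTest1">
      <erasurecode>xor</erasurecode>
      <property>
        <name>targetReplication</name>
        <value>1</value>
      </property>
      <property>
        <name>metaReplication</name>
        <value>1</value>
      </property>
    </policy>
  </srcPath>
</configuration>
```

With this tag present, RaidNode.tmpHarPathForCode no longer dereferences a null codec and the HarMonitor thread runs without the NPE above.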

Thank you.

> Implement erasure coding as a layer on HDFS
> -------------------------------------------
>                 Key: HDFS-503
>                 URL: https://issues.apache.org/jira/browse/HDFS-503
>             Project: Hadoop HDFS
>          Issue Type: New Feature
>          Components: contrib/raid
>            Reporter: dhruba borthakur
>            Assignee: dhruba borthakur
>             Fix For: 0.21.0
>         Attachments: raid1.txt, raid2.txt
> The goal of this JIRA is to discuss how the cost of raw storage for a HDFS file system
> can be reduced. Keeping three copies of the same data is very costly, especially when the
> size of storage is huge. One idea is to reduce the replication factor and do erasure coding
> of a set of blocks so that the overall probability of failure of a block remains the same as before.
> Many forms of error-correcting codes are available; see http://en.wikipedia.org/wiki/Erasure_code.
> Also, recent research from CMU has described DiskReduce: https://opencirrus.org/system/files/Gibson-OpenCirrus-June9-09.ppt.
> My opinion is to discuss implementation strategies that are not part of base HDFS, but
> are a layer on top of HDFS.

This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira

