hbase-dev mailing list archives

From "stack (JIRA)" <j...@apache.org>
Subject [jira] Assigned: (HBASE-2234) Roll Hlog if any datanode in the write pipeline dies
Date Fri, 05 Mar 2010 06:31:28 GMT

     [ https://issues.apache.org/jira/browse/HBASE-2234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack reassigned HBASE-2234:
----------------------------

    Assignee: Nicolas Spiegelberg  (was: stack)

Assigning Nicolas since he's doing the work (Made N a contributor).

@Nicolas, a few comments and solicitation of opinion.

+ We need to update the hadoop we bundle.  We'll want to ship with hadoop 0.20.2.  It has,
at a minimum, the hdfs-127 fix.  We should probably apply hdfs-826 to the hadoop we ship too,
since it's a client-side-only change.  If we included hdfs-200, the test you've included would
actually get exercised, so we should apply that too?

+ In fact, it looks like this test fails if 826 and 200 are not in place, is that right? 
You probably don't want that.  I'd say skip the test if they're not in place rather than
fail.
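The skip-rather-than-fail idea could look something like the sketch below: probe by reflection for the method a patch adds (HDFS-200 adds a sync-to-filesystem call), and bail out of the test early if it's absent.  This is a minimal sketch, not the actual test code; the class probed here is a JDK class so the example is self-contained.

```java
import java.lang.reflect.Method;

public class SyncSupportCheck {
    /** Returns true if clazz exposes a public method with the given name (any signature). */
    static boolean hasMethod(Class<?> clazz, String name) {
        for (Method m : clazz.getMethods()) {
            if (m.getName().equals(name)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // In the real test this would probe the writer class for the method
        // HDFS-200 adds; here we probe StringBuilder so the sketch runs anywhere.
        boolean supported = hasMethod(StringBuilder.class, "reverse");
        if (!supported) {
            System.out.println("sync support missing; skipping test");
            return; // skip, don't fail
        }
        System.out.println("sync support present; running test");
    }
}
```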

+ Your test is great.

+ FYI, we try not to reference log4j -- i.e. the logger implementation -- explicitly, but
I think in this case you have no choice, going by the commons dictum that logger configuration
is outside its scope (I was reading under "Configuring the Underlying Logging System" in
http://commons.apache.org/logging/apidocs/org/apache/commons/logging/package-summary.html).
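Concretely, since commons-logging leaves configuration out of scope, the test has to speak to the underlying implementation directly -- e.g. a log4j.properties fragment like the one below (the category name here is illustrative, not necessarily the one the patch touches):

```properties
# Illustrative log4j.properties fragment; category name is an assumption.
log4j.logger.org.apache.hadoop.hbase.regionserver=DEBUG
```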

+ I like the comments you've added to HLog.java

+ The log message says hadoop-4379 if hdfs-200 is found... maybe add a mention of hdfs-200
or change the message to reference it.

Patch looks good otherwise.

> Roll Hlog if any datanode in the write pipeline dies
> ----------------------------------------------------
>
>                 Key: HBASE-2234
>                 URL: https://issues.apache.org/jira/browse/HBASE-2234
>             Project: Hadoop HBase
>          Issue Type: Improvement
>          Components: regionserver
>            Reporter: dhruba borthakur
>            Assignee: Nicolas Spiegelberg
>             Fix For: 0.20.4, 0.21.0
>
>         Attachments: HBASE-2234-20.4-1.patch, HBASE-2234-20.4.patch
>
>
> HDFS does not replicate the last block of a file that is being written to. This means
> that if datanodes in the write pipeline die, the data blocks in the transaction log would
> be experiencing reduced redundancy. It would be good if the region server could detect
> datanode-death in the write pipeline while writing to the transaction log and, if this
> happens, close the current log and open a new one. This depends on HDFS-826.
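The detect-and-roll idea described above can be sketched as follows.  HDFS-826 exposes the live replica count of the block currently being written; after each sync the region server can compare it against the expected replication and request a log roll when a pipeline datanode has died.  All names here are illustrative, not the actual HBase API:

```java
// Minimal sketch of roll-on-reduced-replication; names are illustrative.
interface WriteProbe {
    int currentReplicas(); // live datanodes in the write pipeline (per HDFS-826)
}

class LogRoller {
    private final int expectedReplicas;
    private boolean rollRequested = false;

    LogRoller(int expectedReplicas) {
        this.expectedReplicas = expectedReplicas;
    }

    /** Called after each sync of the transaction log. */
    void checkPipeline(WriteProbe probe) {
        if (probe.currentReplicas() < expectedReplicas) {
            // A datanode in the pipeline died: close this log and open a new
            // one so subsequent edits land on a fully replicated block.
            rollRequested = true;
        }
    }

    boolean rollRequested() {
        return rollRequested;
    }
}
```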

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

