hbase-issues mailing list archives

From "Nicolas Spiegelberg (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HBASE-2234) Roll Hlog if any datanode in the write pipeline dies
Date Fri, 05 Mar 2010 20:06:27 GMT

    [ https://issues.apache.org/jira/browse/HBASE-2234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12841995#action_12841995 ]

Nicolas Spiegelberg commented on HBASE-2234:

Will correct the comment/messaging issues.  The actual patch should work fine without append support;
however, I would lean towards applying HDFS-200 just to exercise this test.  In the unit test,
we really just use append support for syncFs(), since it's a deterministic way to initialize
the pipeline for reading [I'll clarify this in code comments].  I've experimented with modifying
"io.file.buffer.size", but testing it that way would take a lot more introspection and other
nastiness.  My biggest worry about 'skipping the test' is that it's easy to accidentally
replace the default with an un-patched JAR and not know about it, since HBase has hundreds
of tests.  We've been regularly applying new HDFS client-side patches over here for append
support, and I messed up once or twice when I was task-switching between this issue and other
work.
> Roll Hlog if any datanode in the write pipeline dies
> ----------------------------------------------------
>                 Key: HBASE-2234
>                 URL: https://issues.apache.org/jira/browse/HBASE-2234
>             Project: Hadoop HBase
>          Issue Type: Improvement
>          Components: regionserver
>            Reporter: dhruba borthakur
>            Assignee: Nicolas Spiegelberg
>            Priority: Blocker
>             Fix For: 0.20.4, 0.21.0
>         Attachments: HBASE-2234-20.4-1.patch, HBASE-2234-20.4.patch
> HDFS does not replicate the last block of a file that is being written to. This means
> that if datanodes in the write pipeline die, the data blocks in the transaction log would
> experience reduced redundancy. It would be good if the region server could detect datanode
> death in the write pipeline while writing to the transaction log and, if this happens, close
> the current log and open a new one. This depends on HDFS-826.
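The policy described above amounts to comparing the number of datanodes still alive in the write pipeline against the configured replication factor, and rolling the log when the pipeline shrinks. A minimal sketch of that decision follows; the class and method names here are hypothetical illustrations, not the actual HBASE-2234 patch or HBase API:

```java
// Hypothetical sketch of the roll-on-datanode-death policy discussed above.
// Names are illustrative only; the real patch hooks into the HDFS client's
// pipeline error handling rather than polling replica counts.
public class LogRollSketch {

    /**
     * Decide whether the write-ahead log should be rolled.
     *
     * @param configuredReplicas replication factor requested for the log file
     * @param liveReplicas       datanodes still alive in the write pipeline
     * @return true if the pipeline has shrunk and the log should be
     *         closed and a new one opened
     */
    public static boolean shouldRollLog(int configuredReplicas, int liveReplicas) {
        // A shrinking pipeline means the last (unreplicated) block of the
        // transaction log is running with reduced redundancy.
        return liveReplicas < configuredReplicas;
    }

    public static void main(String[] args) {
        // Pipeline started with 3 replicas; one datanode died -> roll.
        System.out.println(shouldRollLog(3, 2)); // true
        // Healthy pipeline: no roll needed.
        System.out.println(shouldRollLog(3, 3)); // false
    }
}
```

Rolling the log closes the under-replicated file, so its last block is finalized and re-replicated normally, and subsequent edits go to a fresh file with a full pipeline.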

This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.
