hbase-issues mailing list archives

From "Jean-Daniel Cryans (JIRA)" <j...@apache.org>
Subject [jira] Commented: (HBASE-2234) Roll Hlog if any datanode in the write pipeline dies
Date Thu, 25 Mar 2010 22:13:27 GMT

    [ https://issues.apache.org/jira/browse/HBASE-2234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12849926#action_12849926
] 

Jean-Daniel Cryans commented on HBASE-2234:
-------------------------------------------

I saw a minor issue with this patch: if HBase doesn't have hdfs-site.xml on its classpath, or isn't
configured with the same dfs.replication as the cluster, this call will not return the right value:

{code}
+                numCurrentReplicas < fs.getDefaultReplication()) {  
{code}

For example, I was testing HBASE-2337 on a single node and didn't put hadoop's conf dir on
HBase's classpath, but did configure rep=1. As a result, I ended up rolling the logs for every edit:

{code}
WARN org.apache.hadoop.hbase.regionserver.HLog: HDFS pipeline error detected. Found 1 replicas
but expecting 3 replicas.  Requesting close of hlog.
WARN org.apache.hadoop.hbase.regionserver.HLog: HDFS pipeline error detected. Found 1 replicas
but expecting 3 replicas.  Requesting close of hlog.
WARN org.apache.hadoop.hbase.regionserver.HLog: HDFS pipeline error detected. Found 1 replicas
but expecting 3 replicas.  Requesting close of hlog.
...
{code}

I can see hordes of new users hitting the same issue. Is there a better way to ask HDFS
about the actual replication setting and cache it?
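One possible direction (a sketch, not the actual patch): instead of comparing against the client-side fs.getDefaultReplication(), which only reflects whatever dfs.replication HBase happens to have on its own classpath, the log writer could cache the replication that was actually applied to the log file when it was created (e.g. via fs.getFileStatus(logPath).getReplication()) and compare the pipeline's replica count against that. The class and method names below are hypothetical, just to illustrate the comparison:

```java
// Hypothetical sketch of the roll decision: compare the replica count
// reported by the write pipeline against the replication the log file
// was actually created with, not the client-side default.
public class LogRollCheck {

    // In real code this would be filled in right after opening the log,
    // e.g. from fs.getFileStatus(logPath).getReplication().
    private final int initialReplication;

    public LogRollCheck(int initialReplication) {
        this.initialReplication = initialReplication;
    }

    // Roll only when the pipeline has lost replicas relative to what
    // the file was created with.
    public boolean shouldRoll(int numCurrentReplicas) {
        return numCurrentReplicas < initialReplication;
    }
}
```

With this, a single-node setup with rep=1 would see shouldRoll(1) return false instead of rolling on every edit, while a rep=3 file that drops to 1 replica would still trigger a roll.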

> Roll Hlog if any datanode in the write pipeline dies
> ----------------------------------------------------
>
>                 Key: HBASE-2234
>                 URL: https://issues.apache.org/jira/browse/HBASE-2234
>             Project: Hadoop HBase
>          Issue Type: Improvement
>          Components: regionserver
>            Reporter: dhruba borthakur
>            Assignee: Nicolas Spiegelberg
>            Priority: Blocker
>             Fix For: 0.20.4
>
>         Attachments: HBASE-2234-20.4-1.patch, HBASE-2234-20.4.patch
>
>
> HDFS does not replicate the last block of a file that is being written to. This means
> that if datanodes in the write pipeline die, the data blocks in the transaction log
> experience reduced redundancy. It would be good if the region server could detect datanode-death
> in the write pipeline while writing to the transaction log and, if this happens, close the
> current log and open a new one. This depends on HDFS-826.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

