hbase-user mailing list archives

From Stack <st...@duboce.net>
Subject Re: datanode failure during write? /HBase consistency guarantee
Date Mon, 11 Jul 2011 15:58:35 GMT
On Sun, Jul 10, 2011 at 1:09 AM, Yang <teddyyyy123@gmail.com> wrote:
> if I write a row key/column into an HBase region server, I see that
> HRegion.put() calls HLog.append(), which ultimately calls
> DFSOutputStream.sync().
> let's say at this moment, 1 out of the 3 replica datanodes goes down. I
> guess the sync() would throw an IOException, and the put() would fail?
>

I think the put will complete (I'd have to read src).

See below for more.
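For what it's worth, here is roughly what the caller side looks like
(0.90-style client API; the table and column names below are just made
up for illustration).  If the WAL append/sync did fail hard, the caller
would eventually see an IOException out of HTable.put(); otherwise the
put just completes:

    import java.io.IOException;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.util.Bytes;

    public class PutExample {
      public static void main(String[] args) throws IOException {
        Configuration conf = HBaseConfiguration.create();
        HTable table = new HTable(conf, "mytable");  // example table name
        Put p = new Put(Bytes.toBytes("row1"));
        p.add(Bytes.toBytes("cf"), Bytes.toBytes("col"), Bytes.toBytes("value"));
        try {
          // Server side this is HRegion.put() -> HLog.append() -> sync().
          table.put(p);
        } catch (IOException e) {
          // Only if the WAL write failed hard (after client retries) does
          // the put get rejected and the exception land here.
          System.err.println("put failed: " + e);
        } finally {
          table.close();
        }
      }
    }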


> but if I continue to call put(), would HDFS find out that the replica is
> down, and try to relocate the HLog onto another set of replicas?

Yes. We'll notice that we are below the configured replica count and
roll the WAL.  Getting a new WAL file ensures that the next write goes
out with the configured number of replicas.
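The check itself is simple; roughly the idea is the sketch below (class
and method names here are made up for illustration, not the actual
HLog/LogRoller API):

    // Simplified sketch of the low-replication check that triggers a roll.
    class WalReplicationCheck {
      private final int configuredReplication;  // e.g. dfs.replication = 3

      WalReplicationCheck(int configuredReplication) {
        this.configuredReplication = configuredReplication;
      }

      /**
       * @param currentPipelineReplicas how many datanodes are still in the
       *        write pipeline of the current WAL file
       * @return true if the WAL should be rolled onto a fresh file
       */
      boolean shouldRoll(int currentPipelineReplicas) {
        return currentPipelineReplicas < configuredReplication;
      }
    }

With dfs.replication=3 and one datanode dropping out of the pipeline,
shouldRoll(2) comes back true, the old WAL is closed out, and the new
WAL file gets a fresh pipeline with the full three replicas.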

>  if so, what
> happens to the old set? during the interim, is it possible that the 2
> replicas that were successfully written could be visible to clients? (for
> example, the current region server dies and a new one comes up and picks up
> the HLog from 1 of the 2 replicas)
>

For the previous WAL, now down a replica because a datanode went away,
the namenode will work in the background to get the data
re-replicated.  So yes, after that WAL is closed, its data would be
available to clients backed by only two replicas until the NN catches
it back up.
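
If you want to watch the NN catch up, something like the following
lists how many datanodes currently hold each block of an old WAL file
(pass in the path of one of your rolled WALs); the host count climbs
back to three once re-replication is done:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.BlockLocation;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WalBlockReplicas {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path wal = new Path(args[0]);  // path to an old (closed) WAL file
        FileStatus st = fs.getFileStatus(wal);
        BlockLocation[] blocks = fs.getFileBlockLocations(st, 0, st.getLen());
        for (BlockLocation b : blocks) {
          // getHosts() lists the datanodes holding this block right now;
          // it will show 2 until the NN finishes re-replicating.
          System.out.println("offset=" + b.getOffset()
              + " hosts=" + b.getHosts().length);
        }
        fs.close();
      }
    }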

St.Ack
