hadoop-common-user mailing list archives

From "Sarvesh Singh" <serve...@gmail.com>
Subject Re: Data Node failover
Date Wed, 10 Jan 2007 12:58:38 GMT
Thanks for replying!
I also tried with a replication count of 2; it still threw an exception and
failed.
I will post the exception soon.
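
For reference, setting the replication count from the client side looks
roughly like this (a minimal sketch, assuming the standard FileSystem API;
the class name, path, and data are illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ReplicationTest {
      public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Ask the DFS to keep 2 copies of every block this client writes.
        conf.setInt("dfs.replication", 2);
        FileSystem fs = FileSystem.get(conf);
        // Write a small test file (path and contents are made up).
        FSDataOutputStream out = fs.create(new Path("/test/data.txt"));
        out.write("some test data".getBytes());
        out.close();
        fs.close();
      }
    }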
Thanks
Sarvesh

On 1/10/07, Gautam Kowshik <gautamk@yahoo-inc.com> wrote:
>
> We have replication in place to account for cases when a datanode is not
> reachable. The namenode (master) starts replicating the files that were on
> that datanode. You can also tell the DFS to maintain more copies than
> usual of certain important files by setting a replication factor. Read more
> about it here:
> http://lucene.apache.org/hadoop/hdfs_design.html#Data+Replication
>
> What kind of data, and how much, are you putting on your 3-node DFS? The
> namenode can replicate 50 blocks per second on average, so I don't think
> time is the problem. It could be that the DFS is not able to maintain
> enough replicas. Could you mention what exception you got?
> -Gautam.
>
> Sarvesh Singh wrote:
> > Hi,
> >
> > I have a Hadoop cluster of 3 instances. When I kill the datanode process
> > on one of the slave machines, failover does not seem to work. Another
> > slave machine copies the DFS blocks for 7-10 minutes, but the client
> > program bombs with an exception after that. Do we have datanode failover
> > implemented in Hadoop?
> >
> > Thanks
> > Sarvesh
> >
>
>
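
Following up on the per-file replication factor Gautam mentions above: a
minimal sketch of raising it for one important file (assuming the
FileSystem.setReplication call is available in your version; the path is
illustrative):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class RaiseReplication {
      public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Request 3 copies of this file's blocks instead of the default.
        boolean ok = fs.setReplication(new Path("/important/data.txt"), (short) 3);
        System.out.println(ok ? "replication change accepted" : "request failed");
        fs.close();
      }
    }

The namenode then schedules the extra copies in the background; running
bin/hadoop fsck <path> (if your version includes it) will show whether any
blocks are still under-replicated.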
