hadoop-hdfs-user mailing list archives

From Wellington Chevreuil <wellington.chevre...@gmail.com>
Subject Re: One datanode is down then write/read starts failing
Date Mon, 28 Jul 2014 16:01:30 GMT
Can you make sure you still have enough HDFS space once you kill this DN? If not, HDFS will
automatically enter safe mode when it detects there is no HDFS space available. The error
messages in the logs should have some hints on this.
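
If it helps, these are the standard dfsadmin commands for checking both conditions
(assuming a Hadoop 2.x style install with the hdfs command on your PATH):

    # Show remaining capacity plus the number of live and dead datanodes
    hdfs dfsadmin -report

    # Check whether the namenode is currently in safe mode
    hdfs dfsadmin -safemode get

    # Once you've confirmed there is enough space, safe mode can be left manually
    hdfs dfsadmin -safemode leave

While -safemode get reports ON, the namenode will reject all writes.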


On 28 Jul 2014, at 16:56, Satyam Singh <satyam.singh@ericsson.com> wrote:

> Hello,
> I have a hadoop cluster setup of one namenode and two datanodes.
> I continuously write/read/delete through HDFS on the namenode through the hadoop client.
> When I kill one of the datanodes, the other is still working, but all write requests start
failing.
> I want to overcome this scenario, because with live traffic any datanode might go down;
how do we handle those cases?
> Has anybody faced this issue, or am I doing something wrong in my setup?
> Thanks in advance.
> Warm Regards,
> Satyam
