hadoop-common-user mailing list archives

From hadoop hive <hadooph...@gmail.com>
Subject Re: One datanode is down then write/read starts failing
Date Mon, 28 Jul 2014 17:22:20 GMT
If you have 2 DNs live initially and replication set to 2, that is
perfectly fine, but once you kill one DN there is no place to put the
second replica, for new files as well as for existing ones. That is
what is causing the block writes to fail.
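
You can confirm and work around this from the shell. A rough sketch,
assuming a Hadoop 2.x-style CLI (the paths are just examples):

  hdfs dfsadmin -report        # check how many datanodes are live
  hdfs fsck / -files -blocks   # affected files show up as under-replicated
  hadoop fs -setrep -w 1 /     # lower the factor on existing files

Note that -setrep only fixes files that already exist; new files still
pick up the client's dfs.replication default, so that needs lowering
too. A short client-side Java sketch of the same idea follows the
quoted thread below.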
On Jul 28, 2014 10:15 PM, "Satyam Singh" <satyam.singh@ericsson.com> wrote:

> @vikas I initially set it to 2, but after that I took one DN down. So
> you are saying that from the start I have to set the replication factor
> to 1, even though I have 2 DNs active initially? If so, what is the
> reason?
>
> On 07/28/2014 10:02 PM, Vikas Srivastava wrote:
>
>> What replication factor have you set for the cluster?
>>
>> It should be 1 in your case.
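>>
>> For example (a minimal hdfs-site.xml snippet; dfs.replication is a
>> client-side default applied to newly created files):
>>
>>   <property>
>>     <name>dfs.replication</name>
>>     <value>1</value>
>>   </property>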
>>
>> On Jul 28, 2014 9:26 PM, Satyam Singh <satyam.singh@ericsson.com> wrote:
>>
>>> Hello,
>>>
>>>
>>> I have a Hadoop cluster setup with one namenode and two datanodes,
>>> and I continuously write/read/delete files in HDFS through a Hadoop
>>> client talking to the namenode.
>>>
>>> Then I kill one of the datanodes. One is still alive, but from that
>>> point on every write request fails.
>>>
>>> I want to overcome this scenario: under live traffic any datanode
>>> might go down, so how do we handle those cases?
>>>
>>> Has anybody faced this issue, or am I doing something wrong in my
>>> setup?
>>>
>>> Thanks in advance.
>>>
>>>
>>> Warm Regards,
>>> Satyam
>>>
>>
>
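
Below is a minimal client-side sketch in Java of the per-file
workaround mentioned above. It assumes a Hadoop 2.x client with the
cluster's core-site.xml/hdfs-site.xml on the classpath; the path and
payload are made up for illustration:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataOutputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class SingleReplicaWrite {
      public static void main(String[] args) throws Exception {
          Configuration conf = new Configuration();
          // dfs.replication is a client-side default for new files,
          // so one live datanode is enough to complete the pipeline.
          conf.set("dfs.replication", "1");
          FileSystem fs = FileSystem.get(conf);

          // Hypothetical path, just for the example.
          Path p = new Path("/tmp/example.txt");
          try (FSDataOutputStream out = fs.create(p)) {
              out.writeBytes("hello hdfs\n");
          }

          // Files written earlier keep their old factor;
          // lower it explicitly per file if needed.
          fs.setReplication(p, (short) 1);
      }
  }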
