hadoop-hdfs-dev mailing list archives

From Uma Maheswara Rao G <mahesw...@huawei.com>
Subject RE: another HDFS configuration for scribeH
Date Fri, 02 Dec 2011 05:27:00 GMT
Hi Eric,
Please check my recent experience with the data loss issue.

Performance degradation is 9% with 10 client threads on an 8-node cluster, 24 TB per machine.
It is always safe to set this flag if your machine will get a good amount of load.

From: Eric Hwang [ehwang@fb.com]
Sent: Friday, December 02, 2011 9:56 AM
To: Hairong Kuang
Cc: Zheng Shao; hdfs-dev@hadoop.apache.org
Subject: Re: another HDFS configuration for scribeH

Hi Hairong,

What is the risk for this change? How much more testing do you think will be needed?


From: Hairong Kuang <hairong@fb.com<mailto:hairong@fb.com>>
Date: Thu, 1 Dec 2011 20:22:10 -0800
To: Internal Use <ehwang@fb.com<mailto:ehwang@fb.com>>
Cc: Zheng Shao <zshao@fb.com<mailto:zshao@fb.com>>, "hdfs-dev@hadoop.apache.org<mailto:hdfs-dev@hadoop.apache.org>"
Subject: another HDFS configuration for scribeH

Hi Eric,

I was debugging a bizarre data corruption case in the silver cluster today and realized that
there is a very important configuration that the scribeH cluster should set. Could you please
set dfs.datanode.synconclose to true in ScribeH for next week's push? This will guarantee
that block data gets persisted to disk on close, thus preventing data loss when datanodes get
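For reference, `dfs.datanode.synconclose` is a boolean datanode-side setting. A minimal sketch of how it could be enabled, assuming the standard Hadoop `hdfs-site.xml` layout (the property name is as given above; everything else here is the generic configuration file format):

```xml
<!-- hdfs-site.xml on each datanode -->
<!-- When true, the datanode syncs block data to disk when a block is -->
<!-- closed, so finished blocks survive a node crash or power loss. -->
<property>
  <name>dfs.datanode.synconclose</name>
  <value>true</value>
</property>
```

Datanode configuration changes like this generally require restarting the datanode process to take effect.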

