hadoop-common-dev mailing list archives

From Raghu Angadi <rang...@yahoo-inc.com>
Subject Re: everything becomes very slow when the number of writes is larger than the size of the cluster using *TestDFSIO* benchmark?
Date Wed, 14 May 2008 20:11:15 GMT
We don't expect such a drastic slowdown, if the bandwidth you report is
the overall throughput.

What is the exact command line you ran? Also, how is the bandwidth you
report measured?
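
For reference, a typical write run looks roughly like this (the jar name
and paths vary by release; the numbers below just mirror the parameters
described in your mail, with dfs.replication=1 coming from the config):

  bin/hadoop jar hadoop-*-test.jar TestDFSIO -write -nrFiles 4 \
      -fileSize 2048 -bufferSize 65536

and the corresponding read pass:

  bin/hadoop jar hadoop-*-test.jar TestDFSIO -read -nrFiles 4 \
      -fileSize 2048 -bufferSize 65536

Also, if I remember correctly, TestDFSIO reports "Throughput" as the total
bytes divided by the total task time, and "Average IO rate" as the mean of
the per-file rates, so it helps to know which of the two you are quoting.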

Raghu.

Samuel Guo wrote:
> Hi all,
> 
> I ran the *TestDFSIO* benchmark on a simple cluster of 2 nodes.
> The file size is the same in all cases: 2GB.
> The number of files tried is 1, 2, 4, and 8 (write only).
> The bufferSize is 65536 bytes.
> The file replication is 1.
> 
> the results as below:
> 
> files                        1       2       4       8
> 
> write -- Throughput (MB/s)   52.89   52.31   23.06   22.25
>       -- Avg IO rate (MB/s)  54.18   53.23   24.03   22.77
> 
> read  -- Throughput (MB/s)   79.17   60.77   20.15
>       -- Avg IO rate (MB/s)  79.18   61.33   22.01
> 
> 
> It doesn't seem good. When the number of writes is larger than the size
> of the cluster, everything becomes worse.
> 
> Can anyone explain why everything gets very slow when the number of
> writes is close to or larger than the size of the cluster?
> 
> Is there something wrong with my test or the cluster settings?
> 
> Hoping for your reply.
> 
> regards,
> 
> Samuel

