hadoop-common-user mailing list archives

From Dibyendu Karmakar <dibyendu.d...@gmail.com>
Subject UNDERSTANDING HADOOP PERFORMANCE
Date Thu, 11 Apr 2013 10:19:12 GMT
Hi everyone,
I am testing Hadoop performance. I have come across the following parameters (a sample hdfs-site.xml sketch follows the list):
1. dfs.replication
2. dfs.block.size
3. dfs.heartbeat.interval   (default: 3)
4. dfs.blockreport.intervalMsec   (default: 3600000)
5. dfs.namenode.handler.count   (default: 10)
6. dfs.datanode.handler.count   (default: 3)
7. dfs.replication.interval   (default: 3)
8. dfs.namenode.decommission.interval   (default: 300)
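
For context, this is how I am setting some of these properties in
conf/hdfs-site.xml (a minimal sketch; the values shown are just the
defaults quoted above, not tuning recommendations):

<configuration>
  <!-- number of copies HDFS keeps of each block -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- block size in bytes; 67108864 = 64 MB -->
  <property>
    <name>dfs.block.size</name>
    <value>67108864</value>
  </property>
  <!-- RPC server threads on the namenode -->
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>10</value>
  </property>
  <!-- server threads on each datanode -->
  <property>
    <name>dfs.datanode.handler.count</name>
    <value>3</value>
  </property>
</configuration>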

I have successfully tested parameters 1 and 2, but the rest of the
parameters, starting from dfs.heartbeat.interval, are confusing me a lot.

If I increase those parameters, will Hadoop perform better (considering
read and write operations separately)? Or do I have to decrease them to
make Hadoop perform better?

Can anyone please help? If possible, please explain
dfs.namenode.handler.count and dfs.datanode.handler.count, i.e. what do
these two parameters do?

Thank you
-- 
Dibyendu Karmakar,
< dibyendu.dets@gmail.com >
