hadoop-common-user mailing list archives

From madhu phatak <phatak....@gmail.com>
Subject Re: Help with adjusting Hadoop configuration files
Date Tue, 21 Jun 2011 09:32:59 GMT
Cluster utilization depends on the number of jobs and on the number of mappers
and reducers they run. The configuration files only help you set up the cluster
by specifying this information. You can also set details such as block size and
replication in the configuration files, which may help with job management. You
can read about all the available configuration properties here:
http://hadoop.apache.org/common/docs/current/cluster_setup.html
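
For example, on larger nodes you would typically raise the per-node task slot
counts and the child JVM heap, and you can set block size and replication as
mentioned above. The property names below exist in 0.20.x, but the values are
illustrative assumptions for a 7.5 GB machine, not tested recommendations:

```xml
<!-- mapred-site.xml: sketch for a large (7.5 GB RAM) tasktracker node.
     Slot counts and heap size are assumptions; tune for your own jobs. -->
<configuration>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <value>4</value>   <!-- map slots per node; the default is 2 -->
  </property>
  <property>
    <name>mapred.tasktracker.reduce.tasks.maximum</name>
    <value>2</value>   <!-- reduce slots per node -->
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx1024m</value>   <!-- heap per task JVM; default is -Xmx200m -->
  </property>
</configuration>

<!-- hdfs-site.xml: block size and replication, as mentioned above. -->
<configuration>
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>   <!-- 128 MB blocks instead of the 64 MB default -->
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>   <!-- you only have 2 datanodes in cluster 2 -->
  </property>
</configuration>
```

After changing these files you have to restart the daemons for the new values
to take effect.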

On Tue, Jun 21, 2011 at 2:13 PM, Avi Vaknin <avivaknin13@gmail.com> wrote:

> Hi Everyone,
> We are a start-up company has been using the Hadoop Cluster platform
> (version 0.20.2) on Amazon EC2 environment.
> We tried to setup a cluster using two different forms:
> Cluster 1: includes 1 master (namenode) + 5 datanodes - all of the machines
> are small EC2 instances (1.6 GB RAM)
> Cluster 2: includes 1 master (namenode) + 2 datanodes - the master is a
> small EC2 instance and the other two datanodes are large EC2 instances (7.5
> GB RAM)
> We tried to make changes to the configuration files (the core-site,
> hdfs-site
> and mapred-site xml files) and we expected to see a significant improvement
> in the performance of cluster 2;
> unfortunately this has yet to happen.
>
> Are there any special parameters on the configuration files that we need to
> change in order to adjust the Hadoop to a large hardware environment ?
> Are there any best practice you recommend?
>
> Thanks in advance.
>
> Avi
>
>
>
>
