hadoop-common-user mailing list archives

From Boyu Zhang <boyuzhan...@gmail.com>
Subject Re: nodes with different memory sizes
Date Fri, 08 Oct 2010 18:41:43 GMT
Hi Pablo,

Thank you for the reply. Actually, I forgot to mention that I am using HOD to
provision Hadoop and HDFS on the cluster. There is only one configuration
file when I allocate the cluster, and every time the cluster comes up, the
set of nodes it uses is different and is decided by Torque. Any idea how
HOD can be configured to handle that? Thank you very much!
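
For reference, the only place I have found to pass Hadoop parameters through
HOD is the mapred section of my hodrc, roughly like the snippet below (I am
going from the HOD docs here, so the server-params option and the -Xmx value
sized for the 16 GB nodes are just my guess at how it would look). Since HOD
generates a single hadoop-site.xml from this for whichever nodes Torque hands
out, the value seems to apply to every node in the same way:

    [gridservice-mapred]
    server-params = mapred.child.java.opts=-Xmx14g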

Boyu

On Fri, Oct 8, 2010 at 12:27 PM, Pablo Cingolani <pcingola@tacner.com> wrote:

> I think you can change that in your "conf/mapred-site.xml", since it's a
> site-specific config file (see:
> http://hadoop.apache.org/common/docs/current/cluster_setup.html).
>
> e.g.:
>    <property> <name>mapred.child.java.opts</name><value>-Xmx8G</value>
> </property>
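>
> If the heap has to stay the same everywhere, another knob you might look at
> (just a sketch, I have not tried mixing values across nodes myself) is the
> per-tasktracker slot count, which each node reads from its own local file,
> so the bigger machines could get more slots:
>
>    <property> <name>mapred.tasktracker.map.tasks.maximum</name><value>4</value>
> </property>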
>
> I hope this helps
> Yours
>     Pablo Cingolani
>
>
>
> On Fri, Oct 8, 2010 at 12:17 PM, Boyu Zhang <boyuzhang35@gmail.com> wrote:
> > Dear All,
> >
> > I am trying to run a memory-hungry program on a cluster with 6 nodes;
> > 2 of them have 32 GB of memory and the other 4 have 16 GB. I am
> > wondering whether there is a way of configuring the cluster so that
> > processes running on the big nodes get more memory while processes
> > running on the smaller nodes use less.
> >
> > I have been trying to find parameters I could use in the Hadoop
> > configuration, but it seems the configuration has to be the same on all
> > the nodes. If that is the case, the best I can do is size the Java
> > processes for the smaller memory. Any help is appreciated, thanks!
> >
> > Boyu
> >
>
