hadoop-common-dev mailing list archives

From Amandeep Khurana <ama...@gmail.com>
Subject Re: Adding Elasticity to Hadoop MapReduce
Date Wed, 14 Sep 2011 21:18:55 GMT
Hi Bharath,

Amazon EMR has two kinds of nodes: Task and Core. Core nodes run both HDFS and
MapReduce, while task nodes run only MapReduce. In a running cluster you can only
add core nodes, but you can both add and remove task nodes. In other words, you
can't reduce the size of HDFS; you can only increase it.
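
For illustration, with the elastic-mapreduce command-line client of that era,
resizing the task group of a running job flow looks roughly like this (the job
flow ID is a placeholder and the exact flag names may vary by client version,
so treat this as a sketch rather than a reference):

    # grow the task instance group of job flow j-XXXXXXXXXXXX to 10 nodes
    elastic-mapreduce --jobflow j-XXXXXXXXXXXX --modify-instance-group task --instance-count 10

    # shrink it back to 2; there is no equivalent shrink for the core group
    elastic-mapreduce --jobflow j-XXXXXXXXXXXX --modify-instance-group task --instance-count 2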

There is nothing stopping you from doing this on other Hadoop clusters. You can
configure new nodes to point at the masters (for HDFS and MapReduce) and they
will join the cluster. To remove nodes from the cluster, you can decommission
them. Is there a specific use case you are trying to solve that the existing
mechanisms don't cover?
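
As a rough sketch against a Hadoop 0.20/1.x-style cluster (hostnames and paths
here are illustrative, not from any particular setup), adding a slave node
amounts to giving it the cluster configuration and starting the daemons, and
removing one gracefully goes through the exclude files plus a refresh:

    # On the new node: core-site.xml points fs.default.name at the namenode and
    # mapred-site.xml points mapred.job.tracker at the jobtracker, then:
    bin/hadoop-daemon.sh start datanode
    bin/hadoop-daemon.sh start tasktracker

    # To decommission: add the node's hostname to the files referenced by
    # dfs.hosts.exclude and mapred.hosts.exclude on the masters, then run:
    bin/hadoop dfsadmin -refreshNodes
    bin/hadoop mradmin -refreshNodes

HDFS re-replicates the blocks held by a decommissioning datanode before marking
it decommissioned, so the data stays fully replicated throughout.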

-ak

On Wed, Sep 14, 2011 at 1:27 PM, Bharath Ravi <bharathravi1@gmail.com> wrote:

> Hi all,
>
> I'm a newcomer to Hadoop development, and I'm planning to work on an idea
> that I wanted to run by the dev community.
>
> My apologies if this is not the right place to post this.
>
> Amazon has an "Elastic MapReduce" Service (
> http://aws.amazon.com/elasticmapreduce/) that runs on Hadoop.
> The service allows dynamic/runtime changes in resource allocation: more
> specifically, varying the number of
> compute nodes that a job is running on.
>
> I was wondering if such a facility could be added to the publicly available
> Hadoop MapReduce.
>
> Does this idea make sense, and has any previous work been done on it?
> I'd appreciate it if someone could point me in the right direction to find out more!
>
> Thanks a lot in advance!
> --
> Bharath Ravi
>
