hadoop-common-user mailing list archives

From Alexander Alten-Lorenz <wget.n...@gmail.com>
Subject Re: Multiple separate Hadoop clusters on same physical machines
Date Mon, 02 Feb 2015 07:20:54 GMT
I see no way that federation would help you run separate clusters on the _same_ machines.
On top of that, federation isn't production ready: the NameNode can have massive GC issues
on heavily loaded systems, which will be the case here.
To run multiple, possibly single-node, clusters, the best approach is a cloud-based solution,
e.g. OpenStack with Docker containers. A Mesos-driven setup can also help here; there
are some good tutorials available.
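
For the Docker route, a minimal sketch of what "one isolated single-node cluster per developer" could look like on a single host. This is an illustration only: the image name `hadoop-single-node` is a placeholder for whatever single-node Hadoop image you build or pull, and the host-side port numbers are arbitrary non-conflicting choices.

```yaml
# docker-compose.yml — two independent single-node Hadoop "clusters" on one host.
# Each container gets the full default Hadoop port layout internally; only the
# host-side mappings differ, so the clusters never collide.
cluster-a:
  image: hadoop-single-node   # placeholder image name
  hostname: cluster-a
  ports:
    - "9000:9000"             # HDFS NameNode RPC
    - "50070:50070"           # NameNode web UI
cluster-b:
  image: hadoop-single-node
  hostname: cluster-b
  ports:
    - "9001:9000"             # remapped on the host so it doesn't clash with cluster-a
    - "50071:50070"
```

Each developer then talks to "their" cluster purely via the host port they were assigned, while the in-container configuration stays identical for everyone.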


> On 26 Jan 2015, at 10:34, Azuryy Yu <azuryyyu@gmail.com> wrote:
> Hi,
> I think the best way is to deploy HDFS federation with Hadoop 2.x.
> On Mon, Jan 26, 2015 at 5:18 PM, Harun Reşit Zafer <harun.zafer@tubitak.gov.tr> wrote:
> Hi everyone,
> We have set up and been playing with Hadoop 1.2.x and its friends (HBase, Pig, Hive, etc.)
> on 7 physical servers. We want to test Hadoop (maybe different versions) and its ecosystem on
> physical machines (virtualization is not an option) from different perspectives.
> As a bunch of developers we would like to work in parallel, and we want every team member
> to play with his/her own cluster. However, we have a limited number of servers (strong machines).
> So the question is: by changing port numbers, environment variables and other configuration
> parameters, is it possible to set up several independent clusters on the same physical machines?
> Are there any constraints? What are the possible difficulties we will face?
> Thanks in advance
> -- 
> Harun Reşit Zafer
> Cloud Computing and Big Data Analysis Systems Department
> T +90 262 675 3268
> W http://www.hrzafer.com
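
For reference, the port-remapping approach from the question is workable on Hadoop 1.x: a second, independent cluster on the same hosts mostly comes down to giving it its own on-disk directories and daemon ports. A minimal sketch follows; the specific port numbers and paths are illustrative choices, not requirements.

```xml
<!-- core-site.xml for the SECOND cluster: own NameNode port and tmp dir -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9001</value>   <!-- first cluster uses e.g. 9000 -->
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/cluster2/tmp</value>   <!-- keep on-disk state separate -->
  </property>
</configuration>

<!-- hdfs-site.xml for the SECOND cluster: shift each default daemon port -->
<configuration>
  <property><name>dfs.http.address</name><value>0.0.0.0:50071</value></property>
  <property><name>dfs.datanode.address</name><value>0.0.0.0:50011</value></property>
  <property><name>dfs.datanode.http.address</name><value>0.0.0.0:50076</value></property>
  <property><name>dfs.datanode.ipc.address</name><value>0.0.0.0:50021</value></property>
  <property><name>dfs.name.dir</name><value>/data/cluster2/name</value></property>
  <property><name>dfs.data.dir</name><value>/data/cluster2/data</value></property>
</configuration>
```

Each cluster also needs its own HADOOP_CONF_DIR, HADOOP_LOG_DIR and HADOOP_PID_DIR in hadoop-env.sh, otherwise the start/stop scripts of one cluster will find (and kill) the daemons of the other.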
