hadoop-common-user mailing list archives

From Namikaze Minato <lloydsen...@gmail.com>
Subject Re: Provisioning a physical host into an existing HDFS/Yarn cluster
Date Wed, 27 Jan 2016 00:17:28 GMT
Well, since worker nodes usually use all of their RAM and CPU, and on
top of that generate an awful lot of I/O, you ***DEFINITELY*** shouldn't
build a cluster out of VMs.
Or you could go all the way and size every VM to handle 100% load, but
that would be pointless work: you would end up with the same processing
power the physical hosts give you without virtual machines.

Or maybe I'm missing a critical point here?
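For context, the alternative being argued for here is to add the physical host directly as a worker instead of carving it into VMs. A minimal sketch for a Hadoop 2.x cluster (current at the time of this thread), assuming the host already has the same Hadoop build installed at $HADOOP_HOME with the cluster's core-site.xml, hdfs-site.xml, and yarn-site.xml copied over; the hostname is a placeholder:

```shell
# On the new physical host: start the HDFS and YARN worker daemons
# so it registers with the NameNode and ResourceManager.
$HADOOP_HOME/sbin/hadoop-daemon.sh start datanode
$HADOOP_HOME/sbin/yarn-daemon.sh start nodemanager

# On the master: record the new worker (hostname is hypothetical) in the
# slaves file so the cluster-wide start/stop scripts manage it too.
echo "new-worker.example.com" >> $HADOOP_HOME/etc/hadoop/slaves
```

The daemons pick up their resource limits (yarn.nodemanager.resource.memory-mb, etc.) from the copied configuration, which is where the "100% load" sizing question above would be settled for a bare-metal node.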


On 26 January 2016 at 08:03, Anfernee Xu <anfernee.xu@gmail.com> wrote:
> Hi,
> I recently got a powerful physical host. Usually I provision such a host
> with VMs and add them to my existing HDFS/YARN cluster (300+ VMs), but
> now I'm exploring a Docker-based approach, so I want to know if there
> are any best practices I can follow down that path.
> --
> --Anfernee

To unsubscribe, e-mail: user-unsubscribe@hadoop.apache.org
For additional commands, e-mail: user-help@hadoop.apache.org
