incubator-cassandra-user mailing list archives

From Nate McCall <zznat...@gmail.com>
Subject Re: Minimum CPU and RAM for Cassandra and Hadoop Cluster
Date Mon, 15 Jul 2013 19:57:11 GMT
This is really dependent on the workload. Cassandra does well with 8GB
of RAM for the JVM, but you can do 4GB for moderate loads.
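
As a concrete illustration (a sketch only - the values are
hypothetical and should be sized to your VMs), the heap is pinned in
conf/cassandra-env.sh:

    MAX_HEAP_SIZE="4G"
    # rule of thumb from the file's own comments: ~100MB per CPU core
    HEAP_NEWSIZE="400M"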

JVM requirements for Hadoop jobs and available slots are wholly
dependent on what you are doing (and again whether you are just
integration testing).

You can get away with (potentially much) lower memory requirements for
both if you are just testing integration between the two.

That said, the biggest issue will be IO contention between the
(potentially wildly) different access patterns. (This is exactly why
DataStax Enterprise segments workloads via snitching - you may want to
consider that approach depending on what you are doing, budget, etc.)
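
You can approximate that separation in open-source Cassandra by
putting the realtime and analytics nodes in separate logical
datacenters. A minimal sketch, assuming PropertyFileSnitch and made-up
node IPs, in conf/cassandra-topology.properties:

    # Cassandra-only (realtime) nodes
    10.0.0.1=DC_CASSANDRA:RAC1
    10.0.0.2=DC_CASSANDRA:RAC1
    # Hadoop (analytics) nodes
    10.0.0.3=DC_ANALYTICS:RAC1
    10.0.0.4=DC_ANALYTICS:RAC1

You would then use NetworkTopologyStrategy on your keyspaces to
control how replicas are placed across the two DCs.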

If this is just for testing, some WAG numbers for a starting point
would be to slice off 5 images and give Cassandra half the RAM of the
image and Hadoop about 1/4. Get a bunch of monitoring set up for the
VMs and the Cassandra instances and adjust accordingly depending on
what you see during your test runs.
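
To make that concrete with purely hypothetical numbers: on a 64GB
box, 5 VMs at ~12GB each leaves headroom for the hypervisor. Per VM
that works out to roughly MAX_HEAP_SIZE="6G" for Cassandra and ~3GB
for Hadoop, which on a 1.x-era Hadoop cluster you would cap via the
task JVMs in mapred-site.xml:

    <property>
      <name>mapred.child.java.opts</name>
      <!-- 4 task slots x 768MB is roughly the 3GB Hadoop share -->
      <value>-Xmx768m</value>
    </property>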

On Fri, Jul 12, 2013 at 7:16 PM, Martin Arrowsmith
<arrowsmith.martin@gmail.com> wrote:
> Dear Cassandra experts,
>
> I have an HP Proliant ML350 G8 server, and I want to put virtual
> servers on it. I would like to put the maximum number of nodes
> for a Cassandra + Hadoop cluster. I was wondering - what is the
> minimum RAM and memory per node that I need to have Cassandra + Hadoop
> before the performance loss outweighs the benefit of the extra nodes?
>
> Also, what is the suggested typical number of CPU cores per node? Would
> it make sense to have 1 core per node? Less than that?
>
> Any insight is appreciated! Thanks very much for your time!
>
> Martin
