cassandra-user mailing list archives

From Nate McCall <>
Subject Re: Virtual IP / hardware load balancing for cassandra nodes
Date Mon, 20 Dec 2010 19:46:01 GMT
Current versions of Hector will mark a host as temporarily down if a
connection to it times out, and retry it N seconds later (the default
is 10). If you have only one host in your list, this will be an issue,
so it would be a good idea to use more than one VIP.
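A minimal sketch of that configuration against the Hector 0.7-era API
(the hostnames and cluster name here are hypothetical, and method names
may differ slightly between Hector versions):

```java
// Sketch: configure Hector with multiple VIPs and the downed-host retry,
// so one timed-out VIP doesn't leave the client with no hosts to try.
import me.prettyprint.cassandra.service.CassandraHostConfigurator;
import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.factory.HFactory;

public class HectorVipExample {
    public static void main(String[] args) {
        // More than one VIP in the host list, per the advice above.
        CassandraHostConfigurator hosts = new CassandraHostConfigurator(
                "vip1.example.com:9160,vip2.example.com:9160");
        hosts.setRetryDownedHosts(true);
        // The default retry interval is 10 seconds; made explicit here.
        hosts.setRetryDownedHostsDelayInSeconds(10);
        Cluster cluster = HFactory.getOrCreateCluster("MyCluster", hosts);
    }
}
```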

Managing configurations on the app server does not have to be
difficult. I have previously used Puppet to keep an
/etc/hector/ file up to date with my list of servers,
using the Spring Framework's excellent configuration-parameter
replacement plumbing to pull it in at startup. You can also add/remove
hosts via JMX at that point.
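As a sketch of that wiring (not Nate's actual setup; the file path,
property name, and bean name below are all hypothetical), Puppet keeps
a properties file current and Spring's property placeholder substitutes
it into the host configurator at startup:

```
# Puppet-managed properties file (hypothetical name/location)
# /etc/hector/hector.properties
cassandra.hosts=node1:9160,node2:9160,node3:9160
```

```xml
<!-- Spring XML: pull the Puppet-managed file in at startup -->
<context:property-placeholder location="file:/etc/hector/hector.properties"/>

<bean id="hostConfigurator"
      class="me.prettyprint.cassandra.service.CassandraHostConfigurator">
  <constructor-arg value="${cassandra.hosts}"/>
</bean>
```

Redeploying a changed host list then only requires Puppet to push the
file and the app to restart (or a JMX call for live changes).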

If there are any Hector-specific things that come up, feel free to
send us an email on

On Mon, Dec 20, 2010 at 11:28 AM, Jonathan Colby
<> wrote:
> Thanks guys.
> On Dec 20, 2010, at 5:44 PM, Dave Viner wrote:
> You can put a Cassandra cluster behind a load balancer.  One thing to be
> cautious of is the health check.  Just because a node is listening on port
> 9160 doesn't mean it's healthy enough to serve requests: listening is
> necessary, but not sufficient.
> The real test is the JMX values.
> Dave Viner
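A minimal health-check sketch along those lines (hypothetical: it
assumes nodetool from the Cassandra distribution is available on the
probe host, and that the node's JMX endpoint is up):

```shell
#!/bin/sh
# Health probe sketch: a listening Thrift port is necessary but not
# sufficient, so also ask the node over JMX whether it is serving.
HOST=${1:-localhost}

# 1. Is the Thrift port even reachable?
nc -z "$HOST" 9160 || exit 1

# 2. JMX-level check: nodetool info fails if the node is not serving.
nodetool -h "$HOST" info > /dev/null 2>&1 || exit 1

exit 0
```

A Netscaler custom monitor could run a script like this (or query the
JMX values directly) instead of a bare TCP check on 9160.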
> On Mon, Dec 20, 2010 at 6:25 AM, Jonathan Colby <>
> wrote:
>> I was unable to find an example or documentation for my question.  I'd
>> like to know the best way to group a cluster of Cassandra nodes behind a
>> virtual IP.
>> For example, can cassandra nodes be placed behind a Citrix Netscaler
>> hardware load balancer?
>> I can't imagine it being a problem, but in doing so would you break any
>> cassandra functionality?
>> The goal is to have the application talk to a single virtual IP and be
>> directed to a random node in the cluster.
>> I heard a little about adding the node addresses to Hector's
>> load-balancing mechanism, but this doesn't seem too robust or easy to
>> maintain.
>> Thanks in advance.
