incubator-cassandra-user mailing list archives

From Dave Viner <davevi...@pobox.com>
Subject Re: Cassandra & HAProxy
Date Mon, 30 Aug 2010 17:02:36 GMT
Hi Edward,

By "down hard", I assume you mean that the machine is no longer responding
on the cassandra thrift port.  That makes sense (and in fact is what I'm
doing currently).  But it seems like the real improvement would be a
monitor that goes beyond the simple "machine not reachable" case and
covers the more common scenarios that temporarily hurt service time but
aren't so drastic as to cause a machine outage.
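[A monitor along the lines Dave describes could be sketched, very roughly, as a latency-aware TCP probe. This is a hypothetical illustration, not something from the thread: the port (9160 was Cassandra's default Thrift port at the time) and the thresholds are assumptions to be tuned.]

```python
import socket
import time

# Hypothetical values -- 9160 was Cassandra's default Thrift port in this
# era; the thresholds are illustrative, not recommendations.
THRIFT_PORT = 9160
CONNECT_TIMEOUT_S = 2.0   # beyond this, treat the node as down
SLOW_THRESHOLD_S = 0.5    # beyond this, treat the node as degraded

def check_node(host, port=THRIFT_PORT):
    """Classify a node as 'up', 'slow', or 'down' by TCP connect latency."""
    start = time.monotonic()
    try:
        # A successful connect only proves the port is open; connect
        # latency is used as a crude proxy for a node that is alive
        # but struggling.
        with socket.create_connection((host, port), timeout=CONNECT_TIMEOUT_S):
            elapsed = time.monotonic() - start
    except OSError:
        return "down"
    return "slow" if elapsed > SLOW_THRESHOLD_S else "up"
```

[A "slow" result could be fed back to the proxy or client to take the node out of rotation temporarily, which is the back-off behavior a plain TCP check can't provide.]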

Dave Viner


On Mon, Aug 30, 2010 at 9:52 AM, Edward Capriolo <edlinuxguru@gmail.com> wrote:

> On Mon, Aug 30, 2010 at 12:40 PM, Dave Viner <daveviner@pobox.com> wrote:
> > FWIW - we've been using HAProxy in front of a cassandra cluster in
> > production and haven't run into any problems yet.  It sounds like our
> > cluster is tiny in comparison to Anthony M's cluster.  But I just wanted
> > to mention that others out there are doing the same.
> >
> > One thing in this thread that I thought was interesting is Ben's initial
> > comment "the presence of the proxy precludes clients properly backing off
> > from nodes returning errors."  I think it would be very cool if someone
> > implemented a mechanism for haproxy to detect the error nodes and then
> > drop those nodes from the rotation.  I'd be happy to help with this, as
> > I know how it works with haproxy and standard web servers or other tcp
> > servers.  But I'm not sure how to make it work with Cassandra, since, as
> > Ben points out, it can return valid tcp responses (that say
> > "error-condition") on the standard port.
> >
> > Dave Viner
> >
> > On Sun, Aug 29, 2010 at 4:48 PM, Anthony Molinaro
> > <anthonym@alumni.caltech.edu> wrote:
> >>
> >> On Sun, Aug 29, 2010 at 12:20:10PM -0700, Benjamin Black wrote:
> >> > On Sun, Aug 29, 2010 at 11:04 AM, Anthony Molinaro
> >> > <anthonym@alumni.caltech.edu> wrote:
> >> > >
> >> > >
> >> > > I don't know, it seems to tax our setup of 39 extra large ec2
> >> > > nodes; it's also closer to 24000 reqs/sec at peak since there are
> >> > > different tables (2 tables for each read and 2 for each write)
> >> > >
> >> >
> >> > Could you clarify what you mean here?  On the face of it, this
> >> > performance seems really poor given the number and size of nodes.
> >>
> >> As you say, I would expect to achieve much better performance given
> >> the node size, but if you go back and look through some of the issues
> >> we've seen over time, you'll find we've been hit with nodes being too
> >> small, having too few nodes to deal with request volume, having OOMs,
> >> having bad sstables, having the ring appear different to different
> >> nodes, and several other problems.
> >>
> >> Many of the i/o problems presented themselves as MessageDeserializer
> >> pool backups (although we stopped having these since Jonathan was by
> >> and suggested a row cache of about 1Gb, thanks Riptano!).  We currently
> >> have mystery OOMs which are probably caused by GC storms during
> >> compactions (although usually the nodes restart and compact fine, so
> >> who knows).  I also regularly watch nodes go away for 30 seconds or so
> >> (logs show a node goes dead, then comes back to life a few seconds
> >> later).
> >>
> >> I've sort of given up worrying about these, as we are in the process
> >> of moving this cluster to our own machines in a colo, so I figure I
> >> should wait until they are moved and see how the new machines do before
> >> I worry more about performance.
> >>
> >> -Anthony
> >>
> >> --
> >> ------------------------------------------------------------------------
> >> Anthony Molinaro                           <anthonym@alumni.caltech.edu>
>
> Any proxy with a TCP health check should be able to determine if the
> Cassandra service is down hard. The problem for tools that are not
> cassandra protocol aware is detecting slowness or other anomalies like
> TimedOut exceptions.
>
> If you are seeing GC storms during compactions, you might have rows that
> are too big. When the compaction hits these rows, memory spikes. I
> lowered the compaction priority (and added more nodes), which has helped
> compaction back off, leaving some IO for requests.
>
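[The TCP-level health check Edward mentions might look roughly like the following HAProxy listen section. This is a hedged sketch, not from the thread: the server addresses are invented, 9160 is assumed as the Thrift port, and the check intervals are illustrative.]

```
listen cassandra
    bind *:9160
    mode tcp
    balance roundrobin
    # TCP-level checks only: these catch a node that is "down hard"
    # (Thrift port not answering), but NOT a node that still accepts
    # connections while returning TimedOut or other error responses.
    server cass1 10.0.0.1:9160 check inter 2000 fall 3 rise 2
    server cass2 10.0.0.2:9160 check inter 2000 fall 3 rise 2
```

[With `fall 3` / `rise 2`, a node is pulled after three consecutive failed connect checks and re-added after two successes, which is exactly the "machine not reachable" case; the slowness and error-response cases discussed above would still need a protocol-aware external check.]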
