cassandra-user mailing list archives

From Jonathan Lacefield <jlacefi...@datastax.com>
Subject Re: horizontal query scaling issues follow on
Date Mon, 21 Jul 2014 12:41:15 GMT
Hello,

  Here is the documentation for cfhistograms, which is in microseconds.
http://www.datastax.com/documentation/cassandra/2.0/cassandra/tools/toolsCFhisto.html
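
  As a rough illustration of how to read those (offset, count) rows, here is a
small sketch that approximates a latency percentile from cfhistograms-style
buckets. The function name and the sample histogram data are hypothetical,
and the offsets are assumed to be in microseconds, as the latency output is:

```python
# Illustrative only: approximate a latency percentile from nodetool
# cfhistograms-style (offset, count) rows. Latency offsets are in
# microseconds; the sample data below is made up.

def percentile_ms(buckets, pct):
    """buckets: list of (offset_us, count) pairs; pct: 0-100.
    Returns the offset (converted to ms) of the bucket where the
    cumulative request count crosses the requested percentile."""
    total = sum(count for _, count in buckets)
    threshold = total * pct / 100.0
    running = 0
    for offset_us, count in buckets:
        running += count
        if running >= threshold:
            return offset_us / 1000.0  # microseconds -> milliseconds
    return buckets[-1][0] / 1000.0

# Hypothetical histogram rows: (offset in us, number of reads)
sample = [(103, 5000), (124, 12000), (149, 8000), (179, 2000), (215, 500)]
print(percentile_ms(sample, 95))  # 0.179 ms for this made-up data
```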

  Reasonable timeout values are somewhat subjective, but you have set your
timeout limits to 4 minutes (240,000 ms), which is excessive.

  The default timeout values should be appropriate for a well-sized,
properly operating cluster.  Increasing timeouts to achieve stability is
not a recommended practice.
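
  For reference, the stock cassandra.yaml in 2.0 ships with timeouts on
roughly this order (worth verifying against your own file, as these may
differ between releases):

```yaml
# Approximate Cassandra 2.0 defaults (confirm against your cassandra.yaml)
read_request_timeout_in_ms: 5000
range_request_timeout_in_ms: 10000
write_request_timeout_in_ms: 2000
request_timeout_in_ms: 10000
```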

  Your VMs are undersized; it is therefore recommended that you reduce
your workload or add nodes until stability is achieved.

  The goal of your exercise is to "prove out" linear scalability, correct?
   Then it is recommended to find the load your small nodes/cluster can
handle without increasing timeout values, i.e. the load at which your
cluster remains stable.  Once you have found the "sweet spot" for load on
your cluster, increase load by X% while increasing cluster size by X%.  Do
this for a few iterations so you can see that the processing capability of
your cluster increases proportionally, and linearly, with the amount of
load you are putting on it.  Note that with small VMs you will not get
production-like performance from individual nodes.
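
  The iteration loop above can be sketched as a simple check. The function
name, tolerance, and throughput numbers below are hypothetical; actually
generating the load and provisioning the cluster are out of scope here:

```python
# Sketch of the scale-out check described above: grow load and cluster
# size by the same factor each iteration, then verify that measured
# throughput grows roughly in proportion. Numbers are hypothetical.

def scales_linearly(baseline_ops, measured_ops, growth_factor, tolerance=0.15):
    """True if measured throughput is within `tolerance` (fractional)
    of the throughput predicted by perfectly linear scaling."""
    expected = baseline_ops * growth_factor
    return abs(measured_ops - expected) / expected <= tolerance

# e.g. doubling both cluster size and client load (3 nodes -> 6 nodes):
print(scales_linearly(10000, 19000, 2.0))  # within 15% of 20000 ops/s
print(scales_linearly(10000, 12000, 2.0))  # far below linear
```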

  Also, what type of storage do you have under the VMs?  Leveraging shared
storage is not recommended; it will, more than likely, prevent you from
achieving linear scalability, because the hardware is not scaling linearly
through the full stack.


  Hope this helps.

Jonathan


On Sun, Jul 20, 2014 at 9:12 PM, Diane Griffith <dfgriffith@gmail.com>
wrote:

> I am running tests again across different number of client threads and
> number of nodes but this time I tweaked some of the timeouts configured for
> the nodes in the cluster.  I was able to get better performance on the
> nodes at 10 client threads by upping 4 timeout values in cassandra.yaml to
> 240000:
>
>
>    - read_request_timeout_in_ms
>    - range_request_timeout_in_ms
>    - write_request_timeout_in_ms
>    - request_timeout_in_ms
>
>
> I did this because of my interpretation of the cfhistograms output on one
> of the nodes.
>
> So 3 questions that come to mind:
>
>
>    1. Did I interpret the histogram information correctly in cassandra
>    2.0.6 nodetool output?  That is, in the two-column read latency output,
>    the left column is the offset (time in milliseconds) and the right
>    column is the number of requests that fell into that bucket range?
>    2. Was it reasonable for me to boost those 4 timeouts and just those?
>    3. What are reasonable timeout values for smaller vm sizes (i.e. 8GB
>    RAM, 4 CPUs)?
>
> If anyone has any insight, it would be appreciated.
>
> Thanks,
> Diane
>
>
> On Fri, Jul 18, 2014 at 2:23 PM, Tyler Hobbs <tyler@datastax.com> wrote:
>
>>
>> On Fri, Jul 18, 2014 at 8:01 AM, Diane Griffith <dfgriffith@gmail.com>
>> wrote:
>>
>>>
>>> Partition Size (bytes)
>>> 1109 bytes: 18000000
>>>
>>> Cell Count per Partition
>>> 8 cells: 18000000
>>>
>>> meaning I can't glean anything about how it partitioned or if it broke a
>>> key across partitions from this right?  Does it mean for 18000000 (the
>>> number of unique keys) that each has 8 cells?
>>>
>>
>> Yes, your interpretation is correct.  Each of your 18000000 partitions
>> has 8 cells (taking up 1109 bytes).
>>
>>
>> --
>> Tyler Hobbs
>> DataStax <http://datastax.com/>
>>
>
>


-- 
Jonathan Lacefield
Solutions Architect, DataStax
(404) 822 3487
<http://www.linkedin.com/in/jlacefield>

<http://www.datastax.com/cassandrasummit14>
