cassandra-user mailing list archives

From DuyHai Doan <doanduy...@gmail.com>
Subject Re: Hot, large row
Date Fri, 25 Jul 2014 13:39:54 GMT
Hello Keith


   1. Periodically seeing one node stuck in CMS GC causing high read
   latency.  Seems to recover on its own after an hour or so

How many nodes do you have? And roughly how many distinct user_id values are
there?

Looking at your JVM settings, it seems that you have the GC log enabled. It
is worth having a look into it. Also grep for the pattern "GC for" in the
Cassandra system.log file.
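For example, a quick sketch of that grep (the log path and the sample lines below are illustrative only; point the grep at your node's actual system.log):

```shell
# Sketch: count GC pauses reported by Cassandra's GCInspector.
# The path and sample log lines here are made up for illustration;
# in practice you would grep /var/log/cassandra/system.log (or wherever
# your install writes it).
cat > /tmp/system.log.sample <<'EOF'
 INFO [ScheduledTasks:1] 2014-07-25 13:00:00,123 GCInspector.java (line 116) GC for ConcurrentMarkSweep: 4521 ms for 2 collections, 3456789012 used; max is 8589934592
 INFO [ScheduledTasks:1] 2014-07-25 13:05:10,456 GCInspector.java (line 116) GC for ParNew: 213 ms for 1 collections, 1234567890 used; max is 8589934592
EOF

# All GC pauses logged by GCInspector:
grep "GC for" /tmp/system.log.sample

# Just the CMS (old-gen) collections, which are the suspect here:
grep -c "GC for ConcurrentMarkSweep" /tmp/system.log.sample
```

A burst of long "GC for ConcurrentMarkSweep" pauses clustered in time on one node would line up with the hour-long latency spikes you describe.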

The symptom you mention looks like there are activity bursts on one
particular node. The rows are not that wide, since the largest has only 61k
cells and C* can deal with rows larger than that. It all depends now on
your data access pattern.

Also, Jack Krupansky's question is interesting. Even though you limit a
request to 5000, if each cell is a big blob or block of text, it may add
up to a lot in the JVM heap ...

Did you try doing a select without a limit and using the paging feature of
the Java driver? Or lower the limit in the select to 500, as Duncan said,
and paginate manually.

Hope that helps

Duy Hai



On Fri, Jul 25, 2014 at 3:10 PM, Duncan Sands <duncan.sands@gmail.com>
wrote:

> Hi Keith,
>
>
> On 25/07/14 14:43, Keith Wright wrote:
>
>> Answers to your questions below, but in the end I believe the root issue
>> here is that LCS is clearly not compacting away as it should, resulting
>> in reads across many SSTables, which as you noted is “fishy”. I’m
>> considering filing a JIRA for this, sound reasonable?
>>
>> We are running OOTB JVM tuning (see below) and using the DataStax client.
>> When we read from the table in question, we put a limit of 5000 to help
>> reduce the read volume, but yes the standard scenario is: “select * from
>> global_user_event_skus_v2 where user_id = ? limit 5000”
>>
>
> does reducing the limit, e.g. to 500, help?  I've had similar-sounding
> problems when many clients were doing wide row reads in parallel, with the
> reads returning thousands of rows.
>
> Ciao, Duncan.
>
