cassandra-user mailing list archives

From Roni Balthazar <ronibaltha...@gmail.com>
Subject Re: Possible problem with disk latency
Date Wed, 25 Feb 2015 18:50:19 GMT
Hi Piotr,

Are your repairs finishing without errors?

Regards,

Roni Balthazar

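A quick way to answer this is to count repair failures in the Cassandra system log. A minimal sketch, assuming the default log location /var/log/cassandra/system.log; the exact wording of repair messages varies by Cassandra version, so the sample lines below are illustrative only:

```shell
# Count lines reporting a failed repair session. On a live node you would
# run the same grep against /var/log/cassandra/system.log instead of the
# illustrative here-doc sample used here.
grep -cE 'Repair session .* (failed|error)' <<'EOF'
INFO  [AntiEntropyStage:1] Repair session 9a1b completed successfully
ERROR [AntiEntropyStage:1] Repair session 7f3c failed with error
EOF
```

A non-zero count means at least one repair session did not finish cleanly and the errors should be read in full before tuning compaction.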
On 25 February 2015 at 15:43, Ja Sam <ptrstpppp@gmail.com> wrote:
> Hi, Roni,
> They aren't exactly balanced, but as I wrote before they are in the range
> 2500-6000.
> If you need exact data I will check them tomorrow morning. But all nodes
> in AGRAF have had a small increase in pending compactions during the last
> week, which is the "wrong direction".
>
> I will check the compaction throughput in the morning, but my feeling
> about this parameter is that it doesn't change anything.
>
> Regards
> Piotr
>
>
>
>
> On Wed, Feb 25, 2015 at 7:34 PM, Roni Balthazar <ronibalthazar@gmail.com>
> wrote:
>>
>> Hi Piotr,
>>
>> What about the nodes on AGRAF? Are the pending tasks balanced across
>> the nodes of that DC as well?
>> You can check the pending compactions on each node.
>>
>> Also try running "nodetool getcompactionthroughput" on all nodes and
>> check whether the compaction throughput is set to 999.
>>
>> Cheers,
>>
>> Roni Balthazar
>>
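This check can be scripted across the cluster. A minimal sketch, assuming passwordless SSH to each node (the hostnames and the simulated nodetool output below are illustrative; recent versions print a line like "Current compaction throughput: N MB/s"):

```shell
# Real collection would be something like:
#   for h in node1 node2 node3; do
#       ssh "$h" nodetool getcompactionthroughput
#   done
# Here the output of one node is simulated so the parsing step is visible.
out="Current compaction throughput: 999 MB/s"
# Second-to-last field is the MB/s figure.
mbps=$(printf '%s\n' "$out" | awk '{print $(NF-1)}')
echo "$mbps"
```

Comparing the printed value across nodes quickly shows whether one of them is throttled differently from the rest.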
>> On 25 February 2015 at 14:47, Ja Sam <ptrstpppp@gmail.com> wrote:
>> > Hi Roni,
>> >
>> > It is not balanced. As I wrote last week, I have problems only in the
>> > DC we write to (on the screenshot it is named AGRAF:
>> > https://drive.google.com/file/d/0B4N_AbBPGGwLR21CZk9OV1kxVDA/view). The
>> > problem is on ALL nodes in this DC.
>> > In the second DC (ZETO) only one node has more than 30 SSTables, and
>> > its pending compactions are decreasing to zero.
>> >
>> > In AGRAF the minimum pending compaction count is 2500 and the maximum
>> > is 6000 (the average on the OpsCenter screenshot is less than 5000).
>> >
>> >
>> > Regards
>> > Piotrek.
>> >
>> > p.s. I don't know why my mail client displays my name as Ja Sam
>> > instead of Piotr Stapp, but this doesn't change anything :)
>> >
>> >
>> > On Wed, Feb 25, 2015 at 5:45 PM, Roni Balthazar
>> > <ronibalthazar@gmail.com>
>> > wrote:
>> >>
>> >> Hi Ja,
>> >>
>> >> How are the pending compactions distributed across the nodes?
>> >> Run "nodetool compactionstats" on all of your nodes and check whether
>> >> the pending tasks are balanced or concentrated in only a few nodes.
>> >> You can also check whether the SSTable count is balanced by running
>> >> "nodetool cfstats" on your nodes.
>> >>
>> >> Cheers,
>> >>
>> >> Roni Balthazar
>> >>
>> >>
>> >>
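The balance check can be sketched as a small pipeline: collect the "pending tasks" number per node, then sort so any concentration stands out. The hostnames and counts below are simulated stand-ins for what `nodetool compactionstats` would report on each node:

```shell
# Real collection would be roughly:
#   ssh "$h" nodetool compactionstats | awk '/pending tasks/ {print $NF}'
# Simulated "hostname pending-count" pairs, sorted descending by count.
printf '%s\n' \
  'node1 5200' \
  'node2 2600' \
  'node3 5900' |
sort -rn -k2 | head -n 1   # node with the most pending compactions
```

If the top few nodes dwarf the rest, the backlog is concentrated; if the numbers are similar, the whole DC is uniformly behind.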
>> >> On 25 February 2015 at 13:29, Ja Sam <ptrstpppp@gmail.com> wrote:
>> >> > I do NOT have SSDs. I have normal HDDs grouped as JBOD.
>> >> > My CF uses SizeTieredCompactionStrategy.
>> >> > I am using LOCAL_QUORUM for reads and writes. To be precise, I have
>> >> > a lot of writes and almost 0 reads.
>> >> > I changed "cold_reads_to_omit" to 0.0 as someone suggested, and I
>> >> > set the compaction throughput to 999.
>> >> >
>> >> > So if my disks are idle, my CPU is below 40%, and I have some free
>> >> > RAM - why is the SSTable count growing? How can I speed up
>> >> > compactions?
>> >> >
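When disks and CPU are mostly idle but the backlog still grows, compaction concurrency is a common knob to examine alongside throughput. A hedged sketch of the relevant cassandra.yaml settings; the values are illustrative, not a recommendation for this cluster:

```yaml
# cassandra.yaml -- illustrative values only
concurrent_compactors: 8              # default is derived from core/disk count
compaction_throughput_mb_per_sec: 0   # 0 disables compaction throttling entirely
```

Throughput changes can also be applied live with `nodetool setcompactionthroughput`, but concurrent_compactors requires a restart.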
>> >> > On Wed, Feb 25, 2015 at 5:16 PM, Nate McCall <nate@thelastpickle.com>
>> >> > wrote:
>> >> >>
>> >> >>
>> >> >>>
>> >> >>> If you could be so kind as to validate the above and give me an
>> >> >>> answer: are my disks a real problem or not? And give me a tip on
>> >> >>> what I should do with the above cluster? Maybe I have a
>> >> >>> misconfiguration?
>> >> >>>
>> >> >>>
>> >> >>
>> >> >> Your disks are effectively idle. What consistency level are you
>> >> >> using for reads and writes?
>> >> >>
>> >> >> Actually, 'await' is sort of weirdly high for idle SSDs. Check your
>> >> >> interrupt mappings (cat /proc/interrupts) and make sure the
>> >> >> interrupts are not being stacked on a single CPU.
>> >> >>
>> >> >>
>> >> >
>> >
>> >
>
>
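The /proc/interrupts check suggested above can be scripted: sum the per-CPU columns on the IRQ row for the disk device and compare them. The sample row below is illustrative (real rows vary by kernel and controller); on a live box you would pipe `cat /proc/interrupts` in instead of the here-doc:

```shell
# Sum the per-CPU interrupt counts on the row mentioning the disk device.
# If one CPU column holds nearly all of the total, interrupts are stacked
# on a single core and should be spread (irqbalance or smp_affinity).
awk '/sda/ {for (i = 2; i <= 5; i++) s += $i; print s}' <<'EOF'
           CPU0       CPU1       CPU2       CPU3
 45:    1000000        120        130        110   IR-PCI-MSI  ahci  sda
EOF
```

Here almost the entire total comes from CPU0, which is exactly the stacked-interrupt pattern the check is meant to catch.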
