From: Terje Marthinussen <tmarthinussen@gmail.com>
To: user@cassandra.apache.org
Date: Wed, 4 May 2011 06:34:48 +0900
Subject: Re: MemtablePostFlusher with high number of pending calls?

Hm... peculiar.

Post flush is not involved in compactions, right?

May 2nd
01:06 - Out of disk
01:51 - Starts a mix of major and minor compactions on different column families

It then starts a few extra minor compactions over the course of the day, but given that there are more than 1000 sstables and we are only talking about 3 minor compactions started, that does not seem normal to me.

May 3rd
1 minor compaction started.

When I checked today, there was a bunch of tmp files on the disk with last modified times from 01:something on May 2nd, and 200GB of empty disk...

Definitely no compaction going on.

Guess I will add some debug logging and see if I get lucky and run out of disk again.
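If I understand your explanation below correctly, the growing pending count can be reproduced with a toy model like the one here. This is purely an illustration, not Cassandra's code: the executor, latches and class names are made up. It just shows how one stuck task (a post-flush task whose flush never signals completion) keeps a single-threaded executor's queue growing while later, successful flushes pile up behind it:

import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

// Toy model only (not Cassandra code): each flush gets a latch that is counted
// down when the flush succeeds, and a single-threaded "post flush" executor
// waits on that latch before doing its work.
public class PostFlushBacklog {
    public static void main(String[] args) throws Exception {
        ThreadPoolExecutor postFlusher = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>());

        // The first flush "fails" (say, out of disk), so its latch never opens
        // and the post-flush task at the head of the queue blocks forever.
        final CountDownLatch failedFlush = new CountDownLatch(1);
        postFlusher.submit(new Runnable() {
            public void run() { awaitQuietly(failedFlush); }
        });

        // Later flushes complete fine, but their post-flush tasks can only
        // queue up behind the stuck one, so the pending count keeps growing.
        for (int i = 0; i < 190; i++) {
            final CountDownLatch okFlush = new CountDownLatch(1);
            okFlush.countDown();
            postFlusher.submit(new Runnable() {
                public void run() { awaitQuietly(okFlush); }
            });
        }

        TimeUnit.SECONDS.sleep(1);
        System.out.println("active:  " + postFlusher.getActiveCount()); // 1
        System.out.println("pending: " + postFlusher.getQueue().size()); // ~190
        postFlusher.shutdownNow();
    }

    static void awaitQuietly(CountDownLatch flushDone) {
        try {
            flushDone.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

With the first task stuck, the executor reports 1 active and ~190 pending, which is the same shape as the MemtablePostFlusher line in the tpstats output quoted further down.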
Terje

On Wed, May 4, 2011 at 5:06 AM, Jonathan Ellis <jbellis@gmail.com> wrote:
> Compaction does, but flush didn't until
> https://issues.apache.org/jira/browse/CASSANDRA-2404
>
> On Tue, May 3, 2011 at 2:26 PM, Terje Marthinussen
> <tmarthinussen@gmail.com> wrote:
> > Yes, I realize that.
> > I am a bit curious why it ran out of disk, or rather, why I have 200GB of empty
> > disk now, but unfortunately it seems like we may not have had monitoring
> > enabled on this node to tell me what happened in terms of disk usage.
> > I also thought that compaction was supposed to resume (try again with less
> > data) if it fails?
> > Terje
> >
> > On Wed, May 4, 2011 at 3:50 AM, Jonathan Ellis <jbellis@gmail.com> wrote:
> >>
> >> The post flusher is responsible for updating the commitlog header after a
> >> flush; each task waits for a specific flush to complete, then does its
> >> thing.
> >>
> >> So when you had a flush catastrophically fail, its corresponding
> >> post-flush task will be stuck.
> >>
> >> On Tue, May 3, 2011 at 1:20 PM, Terje Marthinussen
> >> <tmarthinussen@gmail.com> wrote:
> >> > Just some very tiny amount of writes in the background here (some hints
> >> > spooled up on another node slowly coming in).
> >> > No new data.
> >> >
> >> > I thought there were no exceptions, but I did not look far enough back in
> >> > the log at first.
> >> > Going back a bit further now, however, I see that about 50 hours ago:
> >> >
> >> > ERROR [CompactionExecutor:387] 2011-05-02 01:16:01,027 AbstractCassandraDaemon.java (line 112) Fatal exception in thread Thread[CompactionExecutor:387,1,main]
> >> > java.io.IOException: No space left on device
> >> >         at java.io.RandomAccessFile.writeBytes(Native Method)
> >> >         at java.io.RandomAccessFile.write(RandomAccessFile.java:466)
> >> >         at org.apache.cassandra.io.util.BufferedRandomAccessFile.flush(BufferedRandomAccessFile.java:160)
> >> >         at org.apache.cassandra.io.util.BufferedRandomAccessFile.reBuffer(BufferedRandomAccessFile.java:225)
> >> >         at org.apache.cassandra.io.util.BufferedRandomAccessFile.writeAtMost(BufferedRandomAccessFile.java:356)
> >> >         at org.apache.cassandra.io.util.BufferedRandomAccessFile.write(BufferedRandomAccessFile.java:335)
> >> >         at org.apache.cassandra.io.PrecompactedRow.write(PrecompactedRow.java:102)
> >> >         at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:130)
> >> >         at org.apache.cassandra.db.CompactionManager.doCompaction(CompactionManager.java:566)
> >> >         at org.apache.cassandra.db.CompactionManager$1.call(CompactionManager.java:146)
> >> >         at org.apache.cassandra.db.CompactionManager$1.call(CompactionManager.java:112)
> >> >         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
> >> >         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
> >> >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> >> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >> >         at java.lang.Thread.run(Thread.java:662)
> >> >
> >> > [followed by a few more of those...]
> >> > and then a bunch of these:
> >> >
> >> > ERROR [FlushWriter:123] 2011-05-02 01:21:12,690 AbstractCassandraDaemon.java (line 112) Fatal exception in thread Thread[FlushWriter:123,5,main]
> >> > java.lang.RuntimeException: java.lang.RuntimeException: Insufficient disk space to flush 40009184 bytes
> >> >         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
> >> >         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
> >> >         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
> >> >         at java.lang.Thread.run(Thread.java:662)
> >> > Caused by: java.lang.RuntimeException: Insufficient disk space to flush 40009184 bytes
> >> >         at org.apache.cassandra.db.ColumnFamilyStore.getFlushPath(ColumnFamilyStore.java:597)
> >> >         at org.apache.cassandra.db.ColumnFamilyStore.createFlushWriter(ColumnFamilyStore.java:2100)
> >> >         at org.apache.cassandra.db.Memtable.writeSortedContents(Memtable.java:239)
> >> >         at org.apache.cassandra.db.Memtable.access$400(Memtable.java:50)
> >> >         at org.apache.cassandra.db.Memtable$3.runMayThrow(Memtable.java:263)
> >> >         at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
> >> >         ... 3 more
> >> >
> >> > It seems like compactions stopped after this (a bunch of tmp tables are still there
> >> > from when those errors were generated), and I can only suspect the post
> >> > flusher may have stopped at the same time.
> >> > There is 890GB of disk for data, sstables are currently using 604GB
> >> > (139GB of which is old tmp tables from when it ran out of disk), and "ring" tells me
> >> > the load on the node is 313GB.
> >> > Terje
> >> >
> >> >
> >> > On Wed, May 4, 2011 at 3:02 AM, Jonathan Ellis <jbellis@gmail.com>
> >> > wrote:
> >> >>
> >> >> ... and are there any exceptions in the log?
> >> >>
> >> >> On Tue, May 3, 2011 at 1:01 PM, Jonathan Ellis <jbellis@gmail.com>
> >> >> wrote:
> >> >> > Does it resolve down to 0 eventually if you stop doing writes?
> >> >> >
> >> >> > On Tue, May 3, 2011 at 12:56 PM, Terje Marthinussen
> >> >> > <tmarthinussen@gmail.com> wrote:
> >> >> >> Cassandra 0.8 beta trunk from about 1 week ago:
> >> >> >>
> >> >> >> Pool Name                    Active   Pending      Completed
> >> >> >> ReadStage                         0         0              5
> >> >> >> RequestResponseStage              0         0          87129
> >> >> >> MutationStage                     0         0         187298
> >> >> >> ReadRepairStage                   0         0              0
> >> >> >> ReplicateOnWriteStage             0         0              0
> >> >> >> GossipStage                       0         0        1353524
> >> >> >> AntiEntropyStage                  0         0              0
> >> >> >> MigrationStage                    0         0             10
> >> >> >> MemtablePostFlusher               1       190            108
> >> >> >> StreamStage                       0         0              0
> >> >> >> FlushWriter                       0         0            302
> >> >> >> FILEUTILS-DELETE-POOL             0         0             26
> >> >> >> MiscStage                         0         0              0
> >> >> >> FlushSorter                       0         0              0
> >> >> >> InternalResponseStage             0         0              0
> >> >> >> HintedHandoff                     1         4              7
> >> >> >>
> >> >> >> Anyone with nice theories about the pending value on the memtable
> >> >> >> post flusher?
> >> >> >> Regards,
> >> >> >> Terje
> >> >> >
> >> >> > --
> >> >> > Jonathan Ellis
> >> >> > Project Chair, Apache Cassandra
> >> >> > co-founder of DataStax, the source for professional Cassandra support
> >> >> > http://www.datastax.com
> >> >>
> >> >> --
> >> >> Jonathan Ellis
> >> >> Project Chair, Apache Cassandra
> >> >> co-founder of DataStax, the source for professional Cassandra support
> >> >> http://www.datastax.com
> >> >
> >>
> >> --
> >> Jonathan Ellis
> >> Project Chair, Apache Cassandra
> >> co-founder of DataStax, the source for professional Cassandra support
> >> http://www.datastax.com
> >
>
> --
> Jonathan Ellis
> Project Chair, Apache Cassandra
> co-founder of DataStax, the source for professional Cassandra support
> http://www.datastax.com
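P.S. For anyone else reading along: the "Insufficient disk space to flush 40009184 bytes" error in the FlushWriter trace above comes from a free-space check done before the flush picks a directory to write to. The snippet here is only a simplified stand-in for that kind of check, not the actual ColumnFamilyStore.getFlushPath code, and the data directory path in it is made up:

import java.io.File;

// Simplified illustration only: pick the first data directory that has enough
// usable space for the estimated flush size, otherwise refuse to flush.
public class FlushSpaceCheck {

    static File pickFlushDirectory(File[] dataDirectories, long estimatedBytes) {
        for (File dir : dataDirectories) {
            // getUsableSpace() is the JDK's view of writable free space on the volume
            if (dir.getUsableSpace() > estimatedBytes) {
                return dir;
            }
        }
        throw new RuntimeException("Insufficient disk space to flush " + estimatedBytes + " bytes");
    }

    public static void main(String[] args) {
        // Hypothetical data directory, just for the example
        File[] dirs = { new File("/var/lib/cassandra/data") };
        System.out.println(pickFlushDirectory(dirs, 40009184L));
    }
}

If every directory fails the check, the flush gives up with a RuntimeException of exactly this shape, which matches what FlushWriter:123 logged as a fatal exception above.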