incubator-cassandra-user mailing list archives

From aaron morton <aa...@thelastpickle.com>
Subject Re: insufficient space to compact even the two smallest files, aborting
Date Thu, 23 Jun 2011 19:28:48 GMT
Missed that in the history, cheers. 
A
-----------------
Aaron Morton
Freelance Cassandra Developer
@aaronmorton
http://www.thelastpickle.com

On 23 Jun 2011, at 20:26, Sylvain Lebresne wrote:

> As Jonathan said earlier, you are hitting
> https://issues.apache.org/jira/browse/CASSANDRA-2765
> 
> This will be fixed in 0.8.1, which is currently under a vote and should be
> released soon (let's say beginning of next week, maybe sooner).
> 
> --
> Sylvain
> 
> 2011/6/23 Héctor Izquierdo Seliva <izquierdo@strands.com>:
>> Hi Aaron. Reverted back to 4-32. Did the flush but it did not trigger
>> any minor compaction. Ran compact by hand, and it picked only two
>> sstables.
>> 
>> Here's the ls before:
>> 
>> http://pastebin.com/xDtvVZvA
>> 
>> And this is the ls after:
>> 
>> http://pastebin.com/DcpbGvK6
>> 
>> Any suggestions?
>> 
>> 
>> 
>> On Thu, 23-06-2011 at 10:55 +1200, aaron morton wrote:
>>> Setting them to 2 and 2 means compaction can only ever compact 2 files at a time, so it will be worse off.
>>> 
>>> Let's try the following:
>>> 
>>> - restore the compaction settings to the default 4 and 32
>>> - run `ls -lah` in the data dir and grab the output
>>> - run `nodetool flush`; this will trigger a minor compaction once the memtables have been flushed
>>> - check the logs for messages from 'CompactionManager'
>>> - when done, grab the output from `ls -lah` again.
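[Editor's note: the before/after comparison suggested above can be automated. This is a hypothetical helper, not part of the thread; the filename pattern (`Standard1-g-1-Data.db`) and the parsing of the `ls -lah` output are assumptions. It counts live `-Data.db` files per column family, so a drop in the count after `nodetool flush` indicates a minor compaction ran.]

```python
# Hypothetical helper: count sstable -Data.db files per column family
# from captured `ls -lah` output. Filename layout assumed to be
# "<CF>-<version>-<generation>-Data.db", e.g. "Standard1-g-12-Data.db".
from collections import Counter

def data_files_per_cf(ls_lines):
    counts = Counter()
    for line in ls_lines:
        fields = line.split()
        if not fields:
            continue
        name = fields[-1]                 # filename is the last ls field
        if name.endswith("-Data.db"):
            cf = name.split("-")[0]       # column family name prefix
            counts[cf] += 1
    return counts
```

Running this on the "before" and "after" captures and diffing the two Counters shows at a glance whether any CF lost sstables to compaction.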
>>> 
>>> Hope that helps.
>>> 
>>> 
>>> -----------------
>>> Aaron Morton
>>> Freelance Cassandra Developer
>>> @aaronmorton
>>> http://www.thelastpickle.com
>>> 
>>> On 23 Jun 2011, at 02:04, Héctor Izquierdo Seliva wrote:
>>> 
>>>> Hi All. I set the compaction threshold to minimum 2, maximum 2 and tried
>>>> to run compact, but it's not doing anything. There are over 69 sstables
>>>> now, read performance is horrible, and it's taking an insane amount of
>>>> space. Maybe I don't quite get how the new per-bucket stuff works, but I
>>>> think this is not normal behaviour.
>>>> 
>>>> On Mon, 13-06-2011 at 10:32 -0500, Jonathan Ellis wrote:
>>>>> As Terje already said in this thread, the threshold is per bucket
>>>>> (group of similarly sized sstables) not per CF.
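[Editor's note: the per-bucket grouping Jonathan describes can be sketched roughly as follows. This is a simplified illustration of size-tiered bucketing, not Cassandra's actual code; the 0.5/1.5 similarity window and the function names are assumptions for the sketch.]

```python
# Simplified sketch of size-tiered bucketing: sstables are grouped into
# buckets of similarly sized files, and the min/max compaction thresholds
# apply per bucket, not per column family. Window values are illustrative.

def bucket_sstables(sizes, bucket_low=0.5, bucket_high=1.5):
    """Group sstable sizes into buckets of 'similarly sized' files."""
    buckets = []  # each entry: [running_average, [sizes...]]
    for size in sorted(sizes):
        for bucket in buckets:
            avg = bucket[0]
            if bucket_low * avg <= size <= bucket_high * avg:
                bucket[1].append(size)
                bucket[0] = sum(bucket[1]) / len(bucket[1])
                break
        else:
            buckets.append([size, [size]])
    return [b[1] for b in buckets]

def compactable(buckets, min_threshold=4, max_threshold=32):
    """Only buckets with at least min_threshold members get compacted."""
    return [b[:max_threshold] for b in buckets if len(b) >= min_threshold]
```

This is why a node can sit on 69 sstables without a minor compaction firing: if the files are spread across many size tiers, no individual bucket may reach min_threshold, even though the total count looks high.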
>>>>> 
>>>>> 2011/6/13 Héctor Izquierdo Seliva <izquierdo@strands.com>:
>>>>>> I was already way over the minimum. There were 12 sstables. Also, is
>>>>>> there any reason why scrub got stuck? I did not see anything in the
>>>>>> logs. Via JMX I saw that the scrubbed bytes were equal to one of the
>>>>>> sstables' size, and it stuck there for a couple of hours.
>>>>>> 
>>>>>>> On Mon, 13-06-2011 at 22:55 +0900, Terje Marthinussen wrote:
>>>>>>> That most likely happened just because after scrub you had new files
>>>>>>> and got over the "4" file minimum limit.
>>>>>>> 
>>>>>>> https://issues.apache.org/jira/browse/CASSANDRA-2697
>>>>>>> is the bug report.
>>>>>>> 

