incubator-cassandra-user mailing list archives

From aaron morton <aa...@thelastpickle.com>
Subject Re: Attempting to avoid fatal flush due to disk space
Date Wed, 04 Apr 2012 18:57:17 GMT
Is cleanupDirectoriesFailover able to delete the files?

When you get the error, is the disk actually full?
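One quick way to answer that from inside the test itself — a minimal sketch, assuming the data path shown in the log below (adjust it if your test layout differs):

```java
import java.io.File;

public class DiskSpaceCheck {
    // Returns the usable bytes the JVM sees on the partition holding `path`
    // (0 if the path does not name an existing partition).
    static long usableBytes(String path) {
        return new File(path).getUsableSpace();
    }

    public static void main(String[] args) {
        // Path taken from the flush log below; adjust for your layout.
        System.out.println("usable bytes: "
                + usableBytes("target/test/var/lib/cassandra/data"));
    }
}
```

If that number is healthy when the "Insufficient disk space" error fires, the problem is not free space but what Cassandra thinks it can write to.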

Can you narrow this down to "I cannot delete the sstables"? (And what platform are you on?)
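To narrow it down, a recursive sweep that reports which files refuse to be deleted would show whether this is a free-space problem or a file-locking one (on Windows, files the JVM still has memory-mapped cannot be deleted until the mappings are released). A rough diagnostic sketch, not a drop-in replacement for your cleanupDirectoriesFailover():

```java
import java.io.File;

public class DeleteSweep {
    // Recursively deletes the contents of `dir` (and `dir`'s subdirectories
    // themselves), printing every path that refuses to go.
    // Returns the number of failures.
    static int deleteReportingFailures(File dir) {
        int failures = 0;
        File[] children = dir.listFiles();
        if (children == null) {
            return 0; // not a directory, or it could not be listed
        }
        for (File f : children) {
            if (f.isDirectory()) {
                failures += deleteReportingFailures(f); // empty it first
            }
            if (!f.delete()) {
                System.err.println("could not delete: " + f.getPath());
                failures++;
            }
        }
        return failures;
    }
}
```

If sstables show up as undeletable while the server threads are still winding down, the fix is in shutdown ordering, not disk space.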


Cheers
 
-----------------
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com

On 5/04/2012, at 1:57 AM, Lewis John Mcgibbney wrote:

> Hi,
> 
> When writing some test code (to test our Gora-Cassandra module) for Apache Gora, I'm experiencing a major problem when trying to tearDown, flush and basically clean everything up. The tests consist of running our test suite against an embedded Cassandra instance, using the Gora API to do all sorts of tasks which may highlight where there are bugs in the code.
> 
> Now, regardless of whether the tests are failing or not, I get the log and subsequent stack trace below, which quite clearly indicates that there is insufficient disk to flush. However, as I don't have the luxury of making changes at runtime, I'm slightly stumped as to how to solve it, since all I'm working with is the Cassandra server which is initiated in setUpClass.
> 
> My yaml is ripped directly from the Cassandra 1.0.2 distribution, so there is nothing out of place in there. However, maybe there are some settings which I have not configured?
> 
> Please indicate if I need to provide some more information to paint a clearer picture of the situation.
> 
> When calling tearDownClass I have
> 
>         if (cassandraThread != null) {
>             cassandraDaemon.stop();
>             cassandraDaemon.destroy();
>             cassandraThread.interrupt();
>             cassandraThread = null;
>         }
>         cleanupDirectoriesFailover(); // this recursively deletes the directories Cassandra works with whilst the server is running
>     }
> 
> Thank you very much in advance for any help coming this way.
> 
> Lewis
> 
> 12/04/04 13:24:11 INFO gora.GoraTestDriver: tearing down test
> 12/04/04 13:24:11 INFO migration.Migration: Applying migration 15541bf0-7e51-11e1-0000-242d50cf1fff Drop keyspace: Employee
> 12/04/04 13:24:11 INFO db.ColumnFamilyStore: Enqueuing flush of Memtable-Migrations@9190262(6586/82325 serialized/live bytes, 1 ops)
> 12/04/04 13:24:11 INFO db.Memtable: Writing Memtable-Migrations@9190262(6586/82325 serialized/live bytes, 1 ops)
> 12/04/04 13:24:11 INFO db.ColumnFamilyStore: Enqueuing flush of Memtable-Schema@9695615(2796/34950 serialized/live bytes, 2 ops)
> 12/04/04 13:24:11 INFO db.Memtable: Completed flushing target/test/var/lib/cassandra/data/system/Migrations-h-2764-Data.db (6650 bytes)
> 12/04/04 13:24:11 INFO db.Memtable: Writing Memtable-Schema@9695615(2796/34950 serialized/live bytes, 2 ops)
> 12/04/04 13:24:11 INFO db.Memtable: Completed flushing target/test/var/lib/cassandra/data/system/Schema-h-2764-Data.db (2946 bytes)
> 12/04/04 13:24:12 INFO compaction.CompactionTask: Compacted to [target/test/var/lib/cassandra/data/system/Migrations-h-2765-Data.db,].  12,979,714 to 12,979,522 (~99% of original) bytes for 1 keys at 12.341213MB/s.  Time: 1,003ms.
> 12/04/04 13:24:12 INFO compaction.CompactionTask: Compacting [SSTableReader(path='target/test/var/lib/cassandra/data/system/Schema-h-2764-Data.db'), SSTableReader(path='target/test/var/lib/cassandra/data/system/Schema-h-2760-Data.db'), SSTableReader(path='target/test/var/lib/cassandra/data/system/Schema-h-2761-Data.db'), SSTableReader(path='target/test/var/lib/cassandra/data/system/Schema-h-2763-Data.db')]
> 12/04/04 13:24:12 INFO store.DataStoreTestBase: tearing down class
> 12/04/04 13:24:12 INFO service.AbstractCassandraDaemon: Cassandra shutting down...
> 12/04/04 13:24:12 INFO thrift.CassandraDaemon: Stop listening to thrift clients
> 12/04/04 13:24:12 INFO compaction.CompactionTask: Compacted to [target/test/var/lib/cassandra/data/system/Schema-h-2765-Data.db,].  5,735,956 to 5,735,629 (~99% of original) bytes for 1,681 keys at 9.011404MB/s.  Time: 607ms.
> Tests run: 51, Failures: 1, Errors: 50, Skipped: 0, Time elapsed: 913.842 sec <<< FAILURE!
> 12/04/04 13:25:12 INFO net.MessagingService: Shutting down MessageService...
> 12/04/04 13:25:12 INFO net.MessagingService: MessagingService shutting down server thread.
> 12/04/04 13:25:12 INFO net.MessagingService: Waiting for in-progress requests to complete
> 12/04/04 13:25:12 INFO db.ColumnFamilyStore: Enqueuing flush of Memtable-Versions@6081747(83/103 serialized/live bytes, 3 ops)
> 12/04/04 13:25:12 INFO db.Memtable: Writing Memtable-Versions@6081747(83/103 serialized/live bytes, 3 ops)
> 12/04/04 13:25:12 ERROR service.AbstractCassandraDaemon: Fatal exception in thread Thread[FlushWriter:2,5,main]
> java.lang.RuntimeException: java.lang.RuntimeException: Insufficient disk space to flush 133 bytes
>     at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>     at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.RuntimeException: Insufficient disk space to flush 133 bytes
>     at org.apache.cassandra.db.ColumnFamilyStore.getFlushPath(ColumnFamilyStore.java:621)
>     at org.apache.cassandra.db.ColumnFamilyStore.createFlushWriter(ColumnFamilyStore.java:1866)
>     at org.apache.cassandra.db.Memtable.writeSortedContents(Memtable.java:248)
>     at org.apache.cassandra.db.Memtable.access$400(Memtable.java:47)
>     at org.apache.cassandra.db.Memtable$4.runMayThrow(Memtable.java:289)
>     at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>     ... 3 more
> 12/04/04 13:25:12 ERROR service.AbstractCassandraDaemon: Fatal exception in thread Thread[FlushWriter:2,5,main]
> java.lang.RuntimeException: java.lang.RuntimeException: Insufficient disk space to flush 133 bytes
>     at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:34)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>     at java.lang.Thread.run(Thread.java:662)
> Caused by: java.lang.RuntimeException: Insufficient disk space to flush 133 bytes
>     at org.apache.cassandra.db.ColumnFamilyStore.getFlushPath(ColumnFamilyStore.java:621)
>     at org.apache.cassandra.db.ColumnFamilyStore.createFlushWriter(ColumnFamilyStore.java:1866)
>     at org.apache.cassandra.db.Memtable.writeSortedContents(Memtable.java:248)
>     at org.apache.cassandra.db.Memtable.access$400(Memtable.java:47)
>     at org.apache.cassandra.db.Memtable$4.runMayThrow(Memtable.java:289)
>     at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30)
>     ... 3 more
> 
> -- 
> Lewis 
> 

