cassandra-user mailing list archives

From Jonathan Ellis <jbel...@gmail.com>
Subject Re: StackOverflowError on high load
Date Wed, 17 Feb 2010 13:52:58 GMT
You temporarily need up to 2x your currently used disk space to perform
compactions.  "Disk too full" is almost certainly the actual problem.

Created https://issues.apache.org/jira/browse/CASSANDRA-804 to fix this.
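
For anyone wondering why a full disk surfaces as a StackOverflowError rather
than a plain IOException: the repeated doFileCompaction frames in the trace
quoted below suggest the compaction path retries itself recursively when it
can't find a data directory with enough usable space.  A minimal sketch of
that failure mode (simplified and hypothetical, not the actual Cassandra
source; the directory list and method shapes are just placeholders):

    import java.io.File;

    public class CompactionSpaceCheckSketch {
        // Hypothetical data directory list; in 0.5 this would come from
        // the DataFileDirectories configured in storage-conf.xml.
        static final String[] DATA_DIRS = { "/var/lib/cassandra/data" };

        // Roughly the role of getDataFileLocationForTable: return a
        // directory with more than expectedSize usable bytes, or null
        // if no directory qualifies.
        static String dataFileLocationFor(long expectedSize) {
            for (String dir : DATA_DIRS) {
                if (new File(dir).getUsableSpace() > expectedSize)
                    return dir;
            }
            return null;
        }

        // The problematic pattern: if no directory has room, retry by
        // calling yourself again.  With a (nearly) full disk the recursion
        // never terminates and the thread dies with a StackOverflowError,
        // like the HINTED-HANDOFF thread in the trace below.
        static void doFileCompaction(long expectedCompactedSize) {
            String location = dataFileLocationFor(expectedCompactedSize);
            if (location == null) {
                doFileCompaction(expectedCompactedSize); // unbounded recursion
                return;
            }
            // ... compact SSTables into `location` ...
        }

        public static void main(String[] args) {
            // Ask for more space than any disk offers to force the "no room" path.
            doFileCompaction(Long.MAX_VALUE);
        }
    }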

On Wed, Feb 17, 2010 at 5:59 AM, Ran Tavory <rantav@gmail.com> wrote:
> No, that's not it; the disk isn't full.
> After restarting the server I can write again. Still, this error is
> troubling...
>
> On Wed, Feb 17, 2010 at 12:24 PM, ruslan usifov <ruslan.usifov@gmail.com>
> wrote:
>>
>> I think you don't have enough room for your data. Run df -h to see whether
>> one of your disks is full.
>>
>> 2010/2/17 Ran Tavory <rantav@gmail.com>
>>>
>>> I'm running some high-load writes on a pair of Cassandra hosts using an
>>> OrderPreservingPartitioner and ran into the following error, after which one
>>> of the hosts killed itself.
>>> Has anyone seen it and can advise?
>>> (Cassandra v0.5.0)
>>> ERROR [HINTED-HANDOFF-POOL:1] 2010-02-17 04:50:09,602
>>> CassandraDaemon.java (line 71) Fatal exception in thread
>>> Thread[HINTED-HANDOFF-POOL:1,5,main]
>>> java.lang.StackOverflowError
>>>         at sun.nio.cs.UTF_8$Encoder.encodeArrayLoop(UTF_8.java:341)
>>>         at sun.nio.cs.UTF_8$Encoder.encodeLoop(UTF_8.java:447)
>>>         at
>>> java.nio.charset.CharsetEncoder.encode(CharsetEncoder.java:544)
>>>         at
>>> java.lang.StringCoding$StringEncoder.encode(StringCoding.java:240)
>>>         at java.lang.StringCoding.encode(StringCoding.java:272)
>>>         at java.lang.String.getBytes(String.java:947)
>>>         at java.io.UnixFileSystem.getSpace(Native Method)
>>>         at java.io.File.getUsableSpace(File.java:1660)
>>>         at
>>> org.apache.cassandra.config.DatabaseDescriptor.getDataFileLocationForTable(DatabaseDescriptor.java:891)
>>>         at
>>> org.apache.cassandra.db.ColumnFamilyStore.doFileCompaction(ColumnFamilyStore.java:876)
>>>         at
>>> org.apache.cassandra.db.ColumnFamilyStore.doFileCompaction(ColumnFamilyStore.java:884)
>>>         at
>>> org.apache.cassandra.db.ColumnFamilyStore.doFileCompaction(ColumnFamilyStore.java:884)
>>>         at
>>> org.apache.cassandra.db.ColumnFamilyStore.doFileCompaction(ColumnFamilyStore.java:884)
>>>         at
>>> org.apache.cassandra.db.ColumnFamilyStore.doFileCompaction(ColumnFamilyStore.java:884)
>>>         at
>>> org.apache.cassandra.db.ColumnFamilyStore.doFileCompaction(ColumnFamilyStore.java:884)
>>>         at
>>> org.apache.cassandra.db.ColumnFamilyStore.doFileCompaction(ColumnFamilyStore.java:884)
>>>         at
>>> org.apache.cassandra.db.ColumnFamilyStore.doFileCompaction(ColumnFamilyStore.java:884)
>>>         ...
>>>         at
>>> org.apache.cassandra.db.ColumnFamilyStore.doFileCompaction(ColumnFamilyStore.java:884)
>>>         at
>>> org.apache.cassandra.db.ColumnFamilyStore.doFileCompaction(ColumnFamilyStore.java:884)
>>>         at
>>> org.apache.cassandra.db.ColumnFamilyStore.doFileCompaction(ColumnFamilyStore.java:884)
>>>  INFO [ROW-MUTATION-STAGE:28] 2010-02-17 04:50:53,230
>>> ColumnFamilyStore.java (line 393) DocumentMapping has reached its threshold;
>>> switching in a fresh Memtable
>>>  INFO [ROW-MUTATION-STAGE:28] 2010-02-17 04:50:53,230
>>> ColumnFamilyStore.java (line 1035) Enqueuing flush of
>>> Memtable(DocumentMapping)@122980220
>>>  INFO [FLUSH-SORTER-POOL:1] 2010-02-17 04:50:53,230 Memtable.java (line
>>> 183) Sorting Memtable(DocumentMapping)@122980220
>>>  INFO [FLUSH-WRITER-POOL:1] 2010-02-17 04:50:53,386 Memtable.java (line
>>> 192) Writing Memtable(DocumentMapping)@122980220
>>> ERROR [FLUSH-WRITER-POOL:1] 2010-02-17 04:50:54,010
>>> DebuggableThreadPoolExecutor.java (line 162) Error in executor futuretask
>>> java.util.concurrent.ExecutionException: java.lang.RuntimeException:
>>> java.io.IOException: No space left on device
>>>         at
>>> java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>>>         at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>>>         at
>>> org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:154)
>>>         at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:888)
>>>         at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>>>         at java.lang.Thread.run(Thread.java:619)
>>> Caused by: java.lang.RuntimeException: java.io.IOException: No space left
>>> on device
>>>         at
>>> org.apache.cassandra.db.ColumnFamilyStore$3$1.run(ColumnFamilyStore.java:1060)
>>>         at
>>> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
>>>         at
>>> java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>>>         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>>>         at
>>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>>>         ... 2 more
>>> Caused by: java.io.IOException: No space left on device
>>>         at java.io.FileOutputStream.write(Native Method)
>>>         at java.io.DataOutputStream.writeInt(DataOutputStream.java:180)
>>>         at
>>> org.apache.cassandra.utils.BloomFilterSerializer.serialize(BloomFilter.java:158)
>>>         at
>>> org.apache.cassandra.utils.BloomFilterSerializer.serialize(BloomFilter.java:153)
>>>         at
>>> org.apache.cassandra.io.SSTableWriter.closeAndOpenReader(SSTableWriter.java:123)
>>>         at
>>> org.apache.cassandra.db.Memtable.writeSortedContents(Memtable.java:207)
>>>         at
>>> org.apache.cassandra.db.ColumnFamilyStore$3$1.run(ColumnFamilyStore.java:1056)
>>>         ... 6 more
>>
>
>
