cassandra-user mailing list archives

From Jonathan Ellis <jbel...@gmail.com>
Subject Re: Help with MapReduce
Date Tue, 20 Apr 2010 05:58:13 GMT
http://wiki.apache.org/cassandra/FAQ#slows_down_after_lotso_inserts
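The FAQ entry above covers the slowdown side; for the OOMs in the quoted trace, the usual levers in this era were lowering the memtable flush thresholds and shrinking the per-CF caches so less unflushed data sits on the heap at once. A minimal sketch, assuming a 0.6-style storage-conf.xml — the element names and the keyspace/CF names here are illustrative and should be checked against the conf file shipped with your version:

```xml
<!-- Illustrative 0.6-era storage-conf.xml fragment; verify element names
     against the sample conf for your exact version. Values are examples,
     not recommendations. -->

<!-- Flush memtables sooner so less unflushed data accumulates on the heap. -->
<MemtableThroughputInMB>32</MemtableThroughputInMB>
<MemtableOperationsInMillions>0.1</MemtableOperationsInMillions>
<MemtableFlushAfterMinutes>60</MemtableFlushAfterMinutes>

<!-- Disable per-CF caches while bulk loading (hypothetical keyspace/CF names). -->
<Keyspace Name="MyKeyspace">
  <ColumnFamily Name="MyCF" KeysCached="0" RowsCached="0"/>
</Keyspace>
```

With only 1G of heap, fewer and smaller memtables plus disabled caches trade throughput for stability, which matches the "even if it means sacrificing some performance" constraint in the question.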

On Tue, Apr 20, 2010 at 12:48 AM, Joost Ouwerkerk <joost@openplaces.org> wrote:
> OK. This should be OK for now, although not optimal for some jobs.
>
> The next issue is node stability during the insert job. The stack trace below
> occurred on several nodes while inserting 10 million rows. We're running on
> 4G machines, 1G of which is allocated to Cassandra. What's the best config
> to prevent OOMs (even if it means sacrificing some performance)?
>
> ERROR [COMPACTION-POOL:1] 2010-04-20 01:39:15,853 DebuggableThreadPoolExecutor.java (line 94) Error in executor futuretask
> java.util.concurrent.ExecutionException: java.lang.OutOfMemoryError: Java heap space
>         at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222)
>         at java.util.concurrent.FutureTask.get(FutureTask.java:83)
>         at org.apache.cassandra.concurrent.DebuggableThreadPoolExecutor.afterExecute(DebuggableThreadPoolExecutor.java:86)
>         at org.apache.cassandra.db.CompactionManager$CompactionExecutor.afterExecute(CompactionManager.java:582)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:888)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>         at java.lang.Thread.run(Thread.java:619)
> Caused by: java.lang.OutOfMemoryError: Java heap space
>         at java.util.Arrays.copyOf(Arrays.java:2786)
>         at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:94)
>         at java.io.DataOutputStream.write(DataOutputStream.java:90)
>         at java.io.FilterOutputStream.write(FilterOutputStream.java:80)
>         at org.apache.cassandra.db.ColumnSerializer.writeName(ColumnSerializer.java:39)
>         at org.apache.cassandra.db.SuperColumnSerializer.serialize(SuperColumn.java:301)
>         at org.apache.cassandra.db.SuperColumnSerializer.serialize(SuperColumn.java:284)
>         at org.apache.cassandra.db.ColumnFamilySerializer.serializeForSSTable(ColumnFamilySerializer.java:87)
>         at org.apache.cassandra.db.ColumnFamilySerializer.serializeWithIndexes(ColumnFamilySerializer.java:99)
>         at org.apache.cassandra.io.CompactionIterator.getReduced(CompactionIterator.java:131)
>         at org.apache.cassandra.io.CompactionIterator.getReduced(CompactionIterator.java:41)
>         at org.apache.cassandra.utils.ReducingIterator.computeNext(ReducingIterator.java:73)
>         at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:135)
>         at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:130)
>         at org.apache.commons.collections.iterators.FilterIterator.setNextObject(FilterIterator.java:183)
>         at org.apache.commons.collections.iterators.FilterIterator.hasNext(FilterIterator.java:94)
>         at org.apache.cassandra.db.CompactionManager.doCompaction(CompactionManager.java:284)
>         at org.apache.cassandra.db.CompactionManager$1.call(CompactionManager.java:102)
>         at org.apache.cassandra.db.CompactionManager$1.call(CompactionManager.java:83)
>         at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
>         at java.util.concurrent.FutureTask.run(FutureTask.java:138)
>         at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>         ... 2 more
>
>
> On Mon, Apr 19, 2010 at 10:34 PM, Jonathan Ellis <jbellis@gmail.com> wrote:
>>
>> Oh, from Hadoop.  Yes, you are indeed limited to entire columns or
>> supercolumns at a time there.
>
