incubator-cassandra-user mailing list archives

From: Manu Zhang <owenzhang1...@gmail.com>
Subject: Re:
Date: Thu, 20 Sep 2012 05:41:15 GMT
The problem seems to have gone away after changing Murmur3Partitioner back
to RandomPartitioner.
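
For the record, the writer in the bulk loading example takes the partitioner as a
constructor argument, so my guess (only a guess; the argument order below follows
the 1.1-era blog example and trunk may differ, and the class name, output path and
comparator are made up for illustration) is that the SSTables have to be written
with the same partitioner the cluster is configured with. Roughly:

    import java.io.File;
    import org.apache.cassandra.db.marshal.AsciiType;
    import org.apache.cassandra.dht.RandomPartitioner;
    import org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter;

    public class LineitemWriter
    {
        public static void main(String[] args) throws Exception
        {
            // Writer set up as in the bulk loading blog example: 64 MB buffer, and
            // the partitioner passed here has to match the one in cassandra.yaml
            // (RandomPartitioner now; it was Murmur3Partitioner when compaction blew up).
            SSTableSimpleUnsortedWriter writer = new SSTableSimpleUnsortedWriter(
                    new File("/tmp/tpch/lineitem"), // output directory (made-up path)
                    new RandomPartitioner(),        // must match the cluster's partitioner
                    "tpch",                         // keyspace, from the data path in the trace
                    "lineitem",                     // column family
                    AsciiType.instance,             // comparator, copied from the blog example
                    null,                           // no subcomparator
                    64);                            // buffer size in MB, as in the example
            // ... newRow()/addColumn() calls go here, then:
            writer.close();
        }
    }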

On Thu, Sep 20, 2012 at 11:14 AM, Manu Zhang <owenzhang1990@gmail.com> wrote:

> Yeah, BulkLoader. You did help me to elaborate my question. Thanks!
>
>
> On Thu, Sep 20, 2012 at 10:58 AM, Michael Kjellman <mkjellman@barracuda.com> wrote:
>
>> I assumed you were talking about BulkLoader. I haven't played with trunk
>> yet so I'm afraid I won't be much help here...
>>
>> On Sep 19, 2012, at 7:56 PM, "Manu Zhang" <owenzhang1990@gmail.com> wrote:
>>
>> cassandra-trunk (so it's 1.2); no Hadoop; the bulk load example is here:
>> http://www.datastax.com/dev/blog/bulk-loading#comment-127019; buffer
>> size is 64 MB as in the example; I'm dealing with about 1 GB of data.
>> Job config, you mean?
>>
>> On Thu, Sep 20, 2012 at 10:32 AM, Michael Kjellman <mkjellman@barracuda.com> wrote:
>> A few questions: What version of 1.1 are you running? What version of
>> Hadoop?
>>
>> What is your job config? What is the buffer size you've chosen? How much
>> data are you dealing with?
>>
>> On Sep 19, 2012, at 7:23 PM, "Manu Zhang" <owenzhang1990@gmail.com> wrote:
>>
>> > I've been bulk loading data into Cassandra and seen the following
>> exception:
>> >
>> > ERROR 10:10:31,032 Exception in thread Thread[CompactionExecutor:5,1,main]
>> > java.lang.RuntimeException: Last written key DecoratedKey(-442063125946754, 313130303136373a31) >= current key DecoratedKey(-465541023623745, 313036393331333a33) writing into /home/manuzhang/cassandra/data/tpch/lineitem/tpch-lineitem-tmp-ia-56-Data.db
>> >       at org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:131)
>> >       at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:152)
>> >       at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:169)
>> >       at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
>> >       at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
>> >       at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:69)
>> >       at org.apache.cassandra.db.compaction.CompactionManager$1.run(CompactionManager.java:152)
>> >       at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
>> >       at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
>> >       at java.util.concurrent.FutureTask.run(FutureTask.java:166)
>> >       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>> >       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>> >       at java.lang.Thread.run(Thread.java:722)
>> >
>> > The Cassandra instance that is running and the one I am loading data into are the same.
>> >
>> > What's the cause?
>>
>
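
One more note in case someone else hits the same RuntimeException: as far as I can
tell (my own reading, not a quote of the trunk source), SSTableWriter.beforeAppend
only checks that keys arrive in the partitioner's token order, and the same row key
decorates to a different token under RandomPartitioner and Murmur3Partitioner, so
SSTables written under one partitioner look out of order to a node running the other.
A quick way to see the difference (the row key and class name below are just examples):

    import java.nio.ByteBuffer;
    import org.apache.cassandra.dht.Murmur3Partitioner;
    import org.apache.cassandra.dht.RandomPartitioner;
    import org.apache.cassandra.utils.ByteBufferUtil;

    public class PartitionerTokens
    {
        public static void main(String[] args)
        {
            // The same row key gets a different token (and so a different sort
            // position) under each partitioner; DecoratedKey.toString() prints
            // the same "DecoratedKey(token, hexkey)" form as the error message.
            ByteBuffer key = ByteBufferUtil.bytes("somekey");
            System.out.println(new RandomPartitioner().decorateKey(key));
            System.out.println(new Murmur3Partitioner().decorateKey(key));
        }
    }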
