1000 and 10000 records take too little time to really benchmark anything. You can spend 2 seconds just on things like TCP window sizes adjusting to the level where you get full throughput.
The difference between 100k and 500k is less than 10%. That could be anything: filesystem caches, memtable sizes (the default memtable settings flush a memtable when it reaches 300k entries)... difficult to say.
You should benchmark something larger than that. You need to at least trigger some SSTable compactions and proper Java GC work if you really want to know what your performance is.
Are you inserting with batch_mutate? If so, the difference could be packet size, if not the number of threads sending data to the Cassandra nodes.
I have 4 nodes in my cluster, and run a benchmark on node A in Java.
P.S. Replication = 3
On Thu, Sep 2, 2010 at 2:49 PM, vineet daniel <email@example.com> wrote:
Are you inserting using PHP, Perl, Python, Java, or something else? Is Cassandra installed locally or on a networked system, and is it a single system or a cluster of nodes? I know I've asked you many questions, but the answers will help immensely in assessing the results.
Anyway, congrats on getting better results :-) .
Let your email find you....
On Thu, Sep 2, 2010 at 11:39 AM, ChingShen <firstname.lastname@example.org> wrote:
I ran a benchmark with my own code and found that the 100000-insert run performs better than the others. Why?
Can anyone explain it?
Partitioner = OPP
CL = ONE
1,000 inserts:   total 201 ms     per insert 0.201 ms     throughput 4975.1245 ops/sec
10,000 inserts:  total 1950 ms    per insert 0.195 ms     throughput 5128.205 ops/sec
100,000 inserts: total 15576 ms   per insert 0.15576 ms   throughput 6420.134 ops/sec
500,000 inserts: total 82177 ms   per insert 0.164354 ms  throughput 6084.4272 ops/sec
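For reference, the per-insert latency and throughput figures above are simple derivations from the total elapsed time and the record count. A minimal Java sketch of that arithmetic, using the 100,000-insert run's numbers (the record count is inferred from the totals, not stated in the original output):

```java
public class ThroughputCalc {
    public static void main(String[] args) {
        long inserts = 100_000;   // assumed record count for this run
        double totalMs = 15576;   // total elapsed time reported for the run

        // per-insert latency: total time divided by number of operations
        double perOpMs = totalMs / inserts;               // 0.15576 ms

        // throughput: operations divided by elapsed time in seconds
        double opsPerSec = inserts / (totalMs / 1000.0);  // ~6420.134 ops/sec

        System.out.printf("insert per:%.5f ms%n", perOpMs);
        System.out.printf("insert thput:%.3f ops/sec%n", opsPerSec);
    }
}
```

Note that these are averages over the whole run, so short runs fold one-time costs (connection setup, TCP window ramp-up, JIT warm-up) into every operation, which is one reason the smaller runs look slower.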