cassandra-commits mailing list archives

From "Avinash Lakshman (JIRA)" <j...@apache.org>
Subject [jira] Commented: (CASSANDRA-9) Cassandra silently loses data when a single row gets large (under "heavy load")
Date Wed, 25 Mar 2009 02:45:50 GMT

    [ https://issues.apache.org/jira/browse/CASSANDRA-9?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12688980#action_12688980 ]

Avinash Lakshman commented on CASSANDRA-9:
------------------------------------------

This is actually a non-issue. In the worst case the getter returns NULL because it read an
empty memtable (the memtable may have just been cleared by a flush). That is fine: the
subsequent disk read will be served from the buffer cache, so the result is still correct
and no harm is done.
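In read-path terms, the scenario being described is roughly the following (a minimal sketch;
resolve, memtable_ and readFromSSTables are illustrative names, not the actual trunk API):

    ColumnFamily resolve(String key, String cfName) throws IOException
    {
        // Probe the current memtable first. If a flush swapped the memtable
        // out and cleared it just before this read, the probe returns null --
        // the worst case mentioned above.
        ColumnFamily cf = memtable_.get(key, cfName);
        if (cf != null)
            return cf;
        // Fall back to the on-disk SSTables. Data flushed moments ago is
        // still resident in the OS buffer cache, so the read is cheap and
        // the result is still correct.
        return readFromSSTables(key, cfName);
    }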

> Cassandra silently loses data when a single row gets large (under "heavy load")
> -------------------------------------------------------------------------------
>
>                 Key: CASSANDRA-9
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-9
>             Project: Cassandra
>          Issue Type: Bug
>         Environment: code in trunk, linux-2.6.27-gentoo-r1, java version "1.7.0-nio2", 4GB, Intel Core 2 Duo
>            Reporter: Neophytos Demetriou
>         Attachments: executor.patch, shutdown-before-flush-against-trunk.patch, shutdown-before-flush-v2.patch, shutdown-before-flush-v3-trunk.patch, shutdown-before-flush.patch
>
>
> When you insert a large number of columns into a single row, Cassandra silently loses some
> or all of these inserts while flushing the memtable to disk (potentially leaving you with
> zero-sized data files). This happens when the memtable threshold is violated, i.e. when
> currentSize_ >= threshold_ (MemtableSizeInMB) OR currentObjectCount_ >= thresholdCount_
> (MemtableObjectCountInMillions). This was a problem both with the old code on
> code.google.com and with the code that has the jdk7 dependencies. No OutOfMemory errors
> are thrown, and there is nothing relevant in the logs. It is not clear why this happens
> under heavy load (when no throttle is used), as it works fine when you pace requests.
> I have confirmed this with another member of the community.
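> For reference, here is a minimal sketch of the flush trigger described above (the field
> names follow db/Memtable.java in trunk; the exact method shape is an assumption):
>     // Sketch: a flush is triggered once either limit is crossed.
>     boolean isThresholdViolated()
>     {
>         return currentSize_ >= threshold_               // MemtableSizeInMB
>             || currentObjectCount_ >= thresholdCount_;  // MemtableObjectCountInMillions
>     }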
> In storage-conf.xml:
>    <HashingStrategy>RANDOM</HashingStrategy>
>    <MemtableSizeInMB>32</MemtableSizeInMB>
>    <MemtableObjectCountInMillions>1</MemtableObjectCountInMillions>
>    <Tables>
>       <Table Name="MyTable">
>           <ColumnFamily ColumnType="Super" ColumnSort="Name" Name="MySuper"></ColumnFamily>
>       </Table>
>    </Tables>
> You can also test it with different values for thresholdCount_ in db/Memtable.java, for example:
>     private int thresholdCount_ = 512*1024;
> Here is a small program that will help you reproduce this (hopefully):
>     // Uses Cassandra's internal API (org.apache.cassandra.db.Table,
>     // org.apache.cassandra.db.RowMutation) and java.util.Random.
>     private static void doWrite() throws Throwable
>     {
>         int numRequests = 0;
>         int numRequestsPerSecond = 3;  // only used by the pacing sleep below
>         Table table = Table.open("MyTable");
>         Random random = new Random();
>         byte[] bytes = new byte[8];
>         String key = "MyKey";          // every insert targets this single row
>         int totalUsed = 0;
>         int total = 0;
>         for (int i = 0; i < 1500; ++i) {
>             RowMutation rm = new RowMutation("MyTable", key);
>             random.nextBytes(bytes);
>             // Tracks which of the 500K possible super columns this mutation
>             // has already touched (Java zero-initializes the array).
>             int[] used = new int[500*1024];
>             // Add up to 16K columns, each under a randomly chosen super column.
>             int n = random.nextInt(16*1024);
>             for (int k = 0; k < n; ++k) {
>                 int j = random.nextInt(500*1024);
>                 if (used[j] == 0) {
>                     used[j] = 1;
>                     ++totalUsed;
>                     //int w = random.nextInt(4);
>                     int w = 0;  // w is the timestamp passed to add()
>                     rm.add("MySuper:SuperColumn-" + j + ":Column-" + i, bytes, w);
>                 }
>             }
>             rm.apply();
>             total += n;
>             System.out.println("n=" + n + " total=" + total + " totalUsed=" + totalUsed);
>             // Uncomment to pace requests -- the data loss does not occur then.
>             //Thread.sleep(1000*numRequests/numRequestsPerSecond);
>             numRequests++;
>         }
>         System.out.println("Write done");
>     }
> PS. Please note that (a) I'm no Java guru and (b) I initially tried this with a C++ Thrift
> client. The outcome is always the same: zero-sized data files under heavy load, while it
> works fine when you pace requests.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.

