incubator-cassandra-user mailing list archives

From Jagan Ranganathan <ja...@zohocorp.com>
Subject Re: Queuing System
Date Sun, 23 Feb 2014 02:27:24 GMT
Thanks Duy Hai for sharing the details. I have a doubt: if for some reason there is a network
partition, or more than two nodes serving the same partition/load fail, and you end up writing
hinted hand-offs,

is there a possibility of data loss? If yes, how do we avoid that?


Regards,
Jagan

---- On Sat, 22 Feb 2014 22:48:19 +0530 DuyHai Doan <doanduyhai@gmail.com> wrote ----


        Jagan
 

Some time ago I dealt with a similar queuing design for one customer.
 

 If you never delete messages from the queue, then it is possible to use wide rows with bucketing
and monotonically increasing column names to store messages.
 

CREATE TABLE read_only_queue (
    bucket_number int,
    insertion_time timeuuid,
    message text,
    PRIMARY KEY(bucket_number, insertion_time)
);
 

  Let's say that you allow only 100 000 messages per partition (physical row) to avoid overly
wide rows; then inserting into and reading from the table read_only_queue is easy:
 

  For the message producer:
 

    1) Start at bucket_number = 1
    2) Insert messages with column name = a generated timeUUID, with micro-second precision if the insertion rate is high (a CQL sketch follows below)
    3) If the message count reaches 100 000, increment bucket_number by one and go back to 2)
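  As an illustration, a minimal CQL sketch of step 2). The bucket number and message payload are placeholder values; now() generates the timeuuid on the server, and a client-side generated timeuuid can be used instead if finer precision is needed:

    INSERT INTO read_only_queue (bucket_number, insertion_time, message)
    VALUES (1, now(), 'message payload');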
 

 For the message reader:

    1) Start at bucket_number = 1
    2) Read messages by slices of N, saving the insertion_time of the last read message
    3) Use the saved insertion_time as the start column of the next slice query (a CQL sketch follows below)
    4) If the read message count reaches 100 000, increment bucket_number and go back to 2). Keep the insertion_time, do not reset it, since its value is increasing monotonically
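  A minimal CQL sketch of the slice query in step 3), assuming N = 100 and using a placeholder timeuuid for the last consumed insertion_time:

    SELECT insertion_time, message
    FROM read_only_queue
    WHERE bucket_number = 1
      AND insertion_time > 7f1d5e30-9c2a-11e3-a5e2-0800200c9a66
    LIMIT 100;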
 

 For multiple and concurrent producers & consumers, there is a trick. Let's assume you
have P concurrent producers and C concurrent consumers.

   Assign a numerical ID to each producer and consumer: first producer ID = 1 ... last producer
ID = P. Same for consumers.

   

   - re-use the above algorithm
   - each producer/consumer starts at bucket_number = its ID
   - at the end of a row:
        - next bucket_number = current bucket_number + P for producers
        - next bucket_number = current bucket_number + C for consumers
   (a concrete example follows below)
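  For example (the IDs and payload here are just illustrative): with P = 3 producers, the producer with ID = 2 starts at bucket_number = 2 and, each time its current bucket reaches 100 000 messages, jumps to bucket_number + 3, i.e. buckets 2, 5, 8, 11, ... Its second bucket would then be written as:

    INSERT INTO read_only_queue (bucket_number, insertion_time, message)
    VALUES (5, now(), 'message from producer 2');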
  
 

 The last thing to take care of is compaction configuration to reduce the number of SSTables
on disk.
 

 If you manage to avoid accumulation effects, e.g. the reading rate is faster than the writing
rate, messages are likely to be consumed while they are still in memory (in the memtable) on the
server side. In this particular case, you can optimize further by deactivating compaction for the
table (an example follows below).
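  For instance, one way to deactivate compaction on the table. Note that the 'enabled' sub-option only exists in recent Cassandra versions; on older versions, nodetool disableautocompaction achieves the same effect:

    ALTER TABLE read_only_queue
    WITH compaction = {'class': 'SizeTieredCompactionStrategy', 'enabled': 'false'};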
 

 Regards
 

  Duy Hai
 On Sat, Feb 22, 2014 at 5:56 PM, Jagan Ranganathan <jagan@zohocorp.com> wrote:
   Hi, 

 Thanks for the pointer. 
  

 Following are some options given there:
   - If you know where your live data begins, hint Cassandra with a start column, to reduce
     the scan times and the amount of tombstones to collect.
   - A broker will usually have some notion of what's next in the sequence and thus be able
     to do much more targeted queries, down to a single record if the storage strategy were to
     choose monotonic sequence numbers.

  What we need to do is apply some intelligence when using the system and avoid tombstones:
either use the pointed column name, or use a proper start column if a slice query is used
(a hypothetical sketch follows below).
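  A hypothetical sketch of such a start-column slice query (the table and column names are made up for illustration): by bounding the slice with the id of the last consumed entry, Cassandra does not have to scan the tombstones of already-deleted messages.

    SELECT message_id, body
    FROM message_queue
    WHERE queue_name = 'jobs'
      AND message_id > 42
    LIMIT 100;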
   

  Is that right, or am I missing something here?
   

  Regards,
  Jagan
   
---- On Sat, 22 Feb 2014 20:55:39 +0530 DuyHai Doan <doanduyhai@gmail.com> wrote ----

   
    Jagan 
   

   Queue-like data structures are known to be one of the worst anti-patterns for Cassandra:
 http://www.datastax.com/dev/blog/cassandra-anti-patterns-queues-and-queue-like-datasets
  


  
 
 On Sat, Feb 22, 2014 at 4:03 PM, Jagan Ranganathan <jagan@zohocorp.com> wrote:
   Hi, 

  I need to decouple some of the work being processed from the user thread to provide a better
user experience. For that I need a queuing system with the following requirements:
   - High Availability
   - No Data Loss
   - Better Performance

 Following are some libraries that were considered, along with the limitations I see:
   - Redis - data loss
   - ZooKeeper - not advised for a queue system
   - TokyoCabinet/SQLite/LevelDB - of these, LevelDB seems to perform best; with the replication
     requirement, I would probably have to look at Apache ActiveMQ + LevelDB

 After checking the third option above, I wonder whether Cassandra with Leveled Compaction would
offer a similar system. Do you see any issues with such a usage, or are there better solutions
available?
  

 It will be great to get insights on this.
 

 Regards,
 Jagan