You may want to look at:
https://github.com/Netflix/astyanax/wiki/Chunked-Object-Store
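The idea behind that chunked object store can be sketched in plain Java: split a large blob into fixed-size chunks, store each chunk under its own key, and reassemble on read. The class and chunk size below are illustrative only (an in-memory list stands in for Cassandra rows; this is not the Astyanax API itself):

```java
import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the chunking idea: a large blob is split into
// fixed-size chunks (each of which would be stored as its own column/row
// in Cassandra) and reassembled in order on read.
public class ChunkedBlobSketch {
    static final int CHUNK_SIZE = 64 * 1024; // 64 KB per chunk (tunable)

    // Split a blob into chunks no larger than CHUNK_SIZE.
    static List<byte[]> split(byte[] blob) {
        List<byte[]> chunks = new ArrayList<>();
        for (int off = 0; off < blob.length; off += CHUNK_SIZE) {
            int len = Math.min(CHUNK_SIZE, blob.length - off);
            byte[] chunk = new byte[len];
            System.arraycopy(blob, off, chunk, 0, len);
            chunks.add(chunk);
        }
        return chunks;
    }

    // Reassemble the original blob from its ordered chunks.
    static byte[] join(List<byte[]> chunks) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        for (byte[] chunk : chunks) {
            out.write(chunk, 0, chunk.length);
        }
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] blob = new byte[200 * 1024 + 17]; // a ~200 KB "file"
        for (int i = 0; i < blob.length; i++) blob[i] = (byte) i;
        List<byte[]> chunks = split(blob);
        System.out.println("chunks: " + chunks.size()); // prints "chunks: 4"
        System.out.println("roundtrip ok: "
            + java.util.Arrays.equals(blob, join(chunks))); // prints "roundtrip ok: true"
    }
}
```

Because each chunk fits comfortably in memory, no single read or write ever has to buffer the whole file, which is what makes this pattern workable over a non-streaming API.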

-brian

---

Brian O'Neill

Chief Technology Officer


Health Market Science

The Science of Better Results

2700 Horizon Drive  King of Prussia, PA  19406

M: 215.588.6024 @boneill42    

healthmarketscience.com


From: prem yadav <ipremyadav@gmail.com>
Reply-To: <user@cassandra.apache.org>
Date: Tuesday, March 18, 2014 at 1:41 PM
To: <user@cassandra.apache.org>
Subject: Cassandra blob storage

Hi,
I have been spending some time looking into whether large files (>100 MB) can be stored in Cassandra. Per the Cassandra FAQ:

"Currently Cassandra isn't optimized specifically for large file or BLOB storage. However, files of around 64Mb and smaller can be easily stored in the database without splitting them into smaller chunks. This is primarily due to the fact that Cassandra's public API is based on Thrift, which offers no streaming abilities; any value written or fetched has to fit in to memory."

Does the above statement still hold? Thrift supports framed data transport; does that change the above statement? If not, why does Cassandra not adopt Thrift's framed data transfer support?

Thanks