cassandra-user mailing list archives

From: Robert Jackson <robe...@promedicalinc.com>
Subject: Re: Is Cassandra suitable for this use case?
Date: Thu, 25 Aug 2011 20:08:04 GMT
As far as I know, the CassandraFS Google Code project has nothing to do with the current
implementation in Brisk (though I can't say that with certainty).

Some additional information about CFS in Brisk can be found in the presentations from Cassandra
SF 2011 [1]. There is a nice presentation titled "Introduction to Brisk" that covers some
of the HDFS compatibility details.

[1] http://www.datastax.com/events/cassandrasf2011/presentations 

Robert Jackson 

----- Original Message -----

From: "Ruby Stevenson" <ruby185@gmail.com> 
To: user@cassandra.apache.org 
Sent: Thursday, August 25, 2011 2:50:19 PM 
Subject: Re: Is Cassandra suitable for this use case? 

hi Robert - 

This is quite interesting. CassandraFS on Google Code seems
inactive now; I don't see any releases out of that project.

Do you know if Brisk is considered stable at all or still very experimental? 

thanks 

Ruby 


On Thu, Aug 25, 2011 at 12:44 PM, Robert Jackson 
<robertj@promedicalinc.com> wrote: 
> I believe this is conceptually similar to what Brisk is doing with
> CassandraFS (an HDFS-compatible file system on top of Cassandra).
> 
> Robert Jackson 
> 
> [1] - https://github.com/riptano/brisk 
> ________________________________ 
> From: "Sasha Dolgy" <sdolgy@gmail.com> 
> To: user@cassandra.apache.org 
> Sent: Thursday, August 25, 2011 12:36:21 PM 
> Subject: Re: Is Cassandra suitable for this use case? 
> 
> You can chunk the files into pieces and store the pieces in Cassandra...
> Munge all the pieces back together when delivering the file back to the client...
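
For reference, a minimal sketch of that chunk-and-reassemble approach. It assumes
today's CQL-based DataStax Python driver (the Thrift-era clients of 2011 would look
different), and the keyspace, table schema, and 1 MB chunk size are illustrative only:

    # Assumed table:
    #   CREATE TABLE files.chunks (
    #       file_id text,
    #       chunk_index int,
    #       data blob,
    #       PRIMARY KEY (file_id, chunk_index));
    from cassandra.cluster import Cluster

    CHUNK_SIZE = 1024 * 1024  # 1 MB pieces; tune to taste

    cluster = Cluster(['127.0.0.1'])
    session = cluster.connect('files')  # hypothetical keyspace

    def store_file(file_id, path):
        # Split the file into fixed-size pieces, one Cassandra row each.
        with open(path, 'rb') as f:
            index = 0
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    break
                session.execute(
                    "INSERT INTO chunks (file_id, chunk_index, data) "
                    "VALUES (%s, %s, %s)",
                    (file_id, index, chunk))
                index += 1

    def read_file(file_id):
        # Clustering on chunk_index returns the pieces in write order,
        # so a simple join reassembles the original bytes.
        rows = session.execute(
            "SELECT data FROM chunks WHERE file_id = %s", (file_id,))
        return b''.join(row.data for row in rows)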
> 
> On Aug 25, 2011 6:33 PM, "Ruby Stevenson" <ruby185@gmail.com> wrote: 
>> hi Evgeny 
>> 
>> I appreciate the input. The concern with HDFS is that it has its own
>> share of problems: its NameNode, which is essentially a metadata
>> server, loads all file metadata into memory (roughly 300 MB per
>> million files, so a namespace of 100 million files needs on the order
>> of 30 GB of heap), and its failure handling is far less attractive... on
>> top of configuring and maintaining two separate components and two APIs
>> for handling data. I am still holding out hope that there might be
>> some better way to go about it.
>> 
>> Best Regards, 
>> 
>> Ruby 
>> 
>> On Thu, Aug 25, 2011 at 11:10 AM, Evgeniy Ryabitskiy 
>> <evgeniy.ryabitskiy@wikimart.ru> wrote: 
>>> Hi, 
>>> 
>>> If you want to store files with partitioning/replication, you could use a
>>> Distributed File System (DFS),
>>> Like http://hadoop.apache.org/hdfs/ 
>>> or any other: 
>>> http://en.wikipedia.org/wiki/Distributed_file_system 
>>> 
>>> You could still use Cassandra to store the metadata and each file's path in the DFS.
>>> 
>>> So: Cassandra + HDFS would be my solution. 
>>> 
>>> Evgeny. 
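
A minimal sketch of that split (metadata rows in Cassandra, bytes in the DFS), under
the same driver assumption as above plus the 'hdfs' WebHDFS client package; the table
layout, paths, and NameNode URL are illustrative only:

    # Assumed table:
    #   CREATE TABLE files.metadata (
    #       file_id text PRIMARY KEY,
    #       hdfs_path text,
    #       size bigint,
    #       owner text);
    from cassandra.cluster import Cluster
    from hdfs import InsecureClient  # WebHDFS client: pip install hdfs

    session = Cluster(['127.0.0.1']).connect('files')
    dfs = InsecureClient('http://namenode:50070', user='hadoop')  # URL is illustrative

    def store_file(file_id, owner, data):
        path = '/files/%s' % file_id
        dfs.write(path, data=data)  # the bytes live in HDFS
        session.execute(            # the metadata lives in Cassandra
            "INSERT INTO metadata (file_id, hdfs_path, size, owner) "
            "VALUES (%s, %s, %s, %s)",
            (file_id, path, len(data), owner))

    def read_file(file_id):
        row = session.execute(
            "SELECT hdfs_path FROM metadata WHERE file_id = %s",
            (file_id,)).one()
        with dfs.read(row.hdfs_path) as reader:
            return reader.read()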
>>> 
>>> 
> 
> 

