Date: Thu, 25 Aug 2011 14:50:19 -0400
Subject: Re: Is Cassandra suitable for this use case?
From: Ruby Stevenson
To: user@cassandra.apache.org

hi Robert -

This is quite interesting. The CassandraFS project on Google Code seems
inactive now; I don't see any release out of it. Do you know whether Brisk
is considered stable, or is it still very experimental?

thanks

Ruby

On Thu, Aug 25, 2011 at 12:44 PM, Robert Jackson wrote:
> I believe this is conceptually similar to what Brisk is doing under
> CassandraFS (an HDFS-compliant file system on top of Cassandra).
>
> Robert Jackson
>
> [1] - https://github.com/riptano/brisk
> ________________________________
> From: "Sasha Dolgy"
> To: user@cassandra.apache.org
> Sent: Thursday, August 25, 2011 12:36:21 PM
> Subject: Re: Is Cassandra suitable for this use case?
>
> You can chunk the files into pieces and store the pieces in Cassandra,
> then munge all the pieces back together when delivering the file back
> to the client.
>
> On Aug 25, 2011 6:33 PM, "Ruby Stevenson" wrote:
>> hi Evgeny
>>
>> I appreciate the input. The concern with HDFS is that it has its own
>> share of problems: its name node, which is essentially a metadata
>> server, loads all file information into memory (roughly 300 MB per
>> million files), and its failure handling is far less attractive ... on
>> top of configuring and maintaining two separate components and two
>> APIs for handling data. I am still holding out hope that there might
>> be some better way of going about it.
>>
>> Best Regards,
>>
>> Ruby
>>
>> On Thu, Aug 25, 2011 at 11:10 AM, Evgeniy Ryabitskiy wrote:
>>> Hi,
>>>
>>> If you want to store files with partitioning/replication, you could
>>> use a Distributed File System (DFS).
>>> Like http://hadoop.apache.org/hdfs/
>>> or any other:
>>> http://en.wikipedia.org/wiki/Distributed_file_system
>>>
>>> Still, you could use Cassandra to store any metadata, including the
>>> file path in the DFS.
>>>
>>> So: Cassandra + HDFS would be my solution.
>>>
>>> Evgeny.
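[List-archive note: Sasha's chunk-and-reassemble suggestion can be sketched
roughly as below. This is a minimal illustration only; a plain Python dict
stands in for a Cassandra column family, and the key layout, chunk size, and
function names are all hypothetical, not an actual Cassandra or Brisk API.]

```python
CHUNK_SIZE = 64 * 1024  # 64 KiB pieces; sized to stay well below column-value limits

def chunk_file(data, chunk_size=CHUNK_SIZE):
    """Split raw bytes into an ordered list of fixed-size pieces."""
    return [data[off:off + chunk_size] for off in range(0, len(data), chunk_size)]

def store_file(store, file_id, data, chunk_size=CHUNK_SIZE):
    """Store each piece under a (file_id, index) key.

    In Cassandra this would map naturally to one row per file with
    one column per chunk; here `store` is just a dict."""
    pieces = chunk_file(data, chunk_size)
    for index, piece in enumerate(pieces):
        store[(file_id, index)] = piece
    store[(file_id, 'count')] = len(pieces)  # so the reader knows how many to fetch

def fetch_file(store, file_id):
    """Munge the pieces back together in index order for delivery."""
    count = store[(file_id, 'count')]
    return b''.join(store[(file_id, i)] for i in range(count))
```

Reassembly only needs the chunk count and the ordered indexes, so reads can
also be parallelized per chunk if the client supports it.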
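[List-archive note: Evgeny's hybrid suggestion (a small metadata record in
Cassandra pointing at the file's HDFS path) could look something like the
sketch below. The record fields and function names are hypothetical, and a
dict again stands in for the Cassandra side; the file bytes themselves would
live in the DFS.]

```python
import time

def register_file(metadata_store, file_id, dfs_path, size_bytes):
    """Record where a file lives in the DFS plus any app-level metadata.

    Only this small record sits in Cassandra, so Cassandra never holds
    the file bytes and the HDFS name node holds no app metadata."""
    metadata_store[file_id] = {
        'dfs_path': dfs_path,        # e.g. an hdfs:// URL chosen by the app
        'size_bytes': size_bytes,
        'created_at': time.time(),
    }

def locate_file(metadata_store, file_id):
    """Look up the DFS path; the actual bytes are then read from HDFS."""
    return metadata_store[file_id]['dfs_path']
```

This keeps the two systems loosely coupled: Cassandra answers "where is the
file and what do we know about it", and the DFS answers "give me the bytes".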