incubator-couchdb-user mailing list archives

From Fredrik Widlund <fredrik.widl...@qbrick.com>
Subject RE: CouchDB and Hadoop
Date Fri, 16 Apr 2010 13:29:25 GMT


Are the files reopened for each write, etc.? If locking works, GlusterFS for example could be
a nice solution for the replication: each write would be written atomically to all instances,
and reads would stay local (using AFR with preferred servers).
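
Something along these lines, as a client-side volfile (an untested sketch from memory;
volfile syntax differs between GlusterFS versions, and the host and brick names below
are just placeholders):

volume brick-local
  type protocol/client
  option transport-type tcp
  option remote-host node1.example.com
  option remote-subvolume brick
end-volume

volume brick-remote
  type protocol/client
  option transport-type tcp
  option remote-host node2.example.com
  option remote-subvolume brick
end-volume

# AFR: writes go to both bricks, reads are served from the preferred (local) one
volume replicate
  type cluster/replicate
  option read-subvolume brick-local
  subvolumes brick-local brick-remote
end-volume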

Kind regards,
Fredrik Widlund


-----Original Message-----
From: Suhail Ahmed [mailto:suhailski@gmail.com]
Sent: 16 April 2010 10:13
To: user@couchdb.apache.org
Subject: Re: CouchDB and Hadoop

Sure, it can be done, but for me the whole Java-to-Erlang layer would be a
mess since the two are so different. The better way to go about this would be
to implement a distributed file system like Hadoop underneath Couch for the
same effect.
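
In principle that could be as little as mounting HDFS through FUSE and pointing
CouchDB's data directories at the mount. Roughly (untested; the FUSE wrapper name
and flags vary between Hadoop packagings, and the host and paths below are made up):

# mount HDFS through a FUSE wrapper
hadoop-fuse-dfs dfs://namenode.example.com:8020 /mnt/hdfs

# then in CouchDB's local.ini, point the data and index directories at the mount
[couchdb]
database_dir = /mnt/hdfs/couchdb
view_index_dir = /mnt/hdfs/couchdb

How CouchDB's append-only writes and fsync calls behave on such a mount is another
question, of course.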

On Fri, Apr 16, 2010 at 1:16 AM, Steve-Mustafa Ismail Mustafa <
m.i.mustafa@gmail.com> wrote:

> I swear, I spent over an hour going through the mailing list trying to find
> an answer.
>
> I know that CouchDB is a document-oriented DB and I know that Hadoop is a
> distributed file system, and that both implement Map/Reduce. But is it possible
> to have them stacked, with Hadoop being the FS in use and CouchDB being the DB?
> That way, wouldn't you get the distributed/clustered FS abilities of Hadoop in
> addition to the powerful retrieval abilities of CouchDB?
>
> If it's not possible, and I suspect that is the case, _why_? Don't they operate
> on two separate levels? Wouldn't CouchDB sort of replace HBase?
>
> Thanks in advance for any and all replies
>

