couchdb-user mailing list archives

From: Matt Halstead <m...@elyt.com>
Subject: questions about the Couchbase fork
Date: Mon, 06 Feb 2012 23:45:59 GMT

Hi,

I recently asked a question on the Couchbase Google group about moving
from Couchbase Single Server to Couchbase 2.0. I sent it on Feb 2nd and
haven't had any response from that list. I was hoping this list was
more active and that someone here knew the answers. Ultimately we are
working through whether to follow Couchbase or CouchDB (most likely
BigCouch). The following are my questions.

----------snip-------------
Some immediate thoughts and questions after moving from Couchbase
Single Server to Couchbase 2.0.

Inevitably I felt like you had thrown some of my toys away. Namely:
MVCC, the CouchDB CRUD APIs, CouchApp, the assurance that data sitting
in memory had actually been persisted, and any sense that this was
still a database and not just an uber-cache.

I looked through this mailing list and found some information on the
re-interpretation of versioning as CAS. Can I have a code example (any
API would do) that demonstrates what happens now if we want
optimistic-style locking? I realize this might be trivial for memcached
users; I'm just pointing out the transition for CouchDB people.
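
To make the question concrete, here is roughly the shape I'd expect the
answer to take. The gets()/cas() calls below are stand-ins for whatever
the real client exposes (I haven't verified any particular client API);
the point is the read-modify-retry loop that replaces CouchDB's
_rev / 409 Conflict cycle.

    # Hypothetical client: gets() returns (value, cas_token) and cas()
    # succeeds only if the token still matches the server's version.
    def add_tag(client, key, tag, retries=5):
        for _ in range(retries):
            doc, cas_token = client.gets(key)    # read value + CAS token
            doc["tags"].append(tag)              # modify locally
            if client.cas(key, doc, cas_token):  # write only if unchanged
                return doc                       # optimistic write won
            # someone else updated the key in between; re-read and retry
        raise RuntimeError("gave up after %d CAS conflicts on %r"
                           % (retries, key))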

The SYNC instruction was also discussed on the mailing list as offering
synchronous/blocking writes that guarantee either persistence to disk
or replication to another node. But the 2.0 manual has a release note
saying that the SYNC protocol has been removed. So is this still
possible? I understand the performance implications, but this is a
pretty important distinction between a cache and a database that offers
data integrity.
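
For reference, the persistence-to-disk half of this is something I can
already ask plain CouchDB for by overriding the commit policy on a
single write (the database and document names below are made up); the
replicate-to-another-node half is the part I no longer know how to ask
Couchbase 2.0 for.

    import json, requests

    # Ask CouchDB to fsync before acknowledging this write, instead of
    # relying on the periodic delayed commit.
    resp = requests.put(
        "http://localhost:5984/orders/order-001",
        data=json.dumps({"type": "order", "total": 42}),
        headers={"Content-Type": "application/json",
                 "X-Couch-Full-Commit": "true"},
    )
    print(resp.status_code, resp.json())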

With Couchbase there is a lot of emphasis put on working sets and RAM.
My first impression was 'oh, I have hundreds of terabytes of data that
is seldom accessed, but when I do want to access it, I want it fast'.
But then I got to thinking: if you build your map/reduce/rereduce jobs
well, the search data you use most often for locating resources will be
in the working set, and you can generally 'prime' it into memory. Which
leaves the inevitable question: I then want to access the full objects
that those map/reduce result sets reference. These are unlikely to be
in memory, but I would still like them fast. A database usually makes
this acceptably fast, rotating spindles included; will Couchbase try to
ensure this is acceptably fast too?
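
In CouchDB terms (the database, design document and view names here are
made up), the access pattern I'm describing looks like this: the view
index is the small, hot working set, and the documents it points at are
the large, cold data I still want back quickly.

    import requests

    db = "http://localhost:5984/assets"

    # 1. Hit the (hopefully memory-resident) index to locate resources.
    rows = requests.get(
        db + "/_design/search/_view/by_tag",
        params={"key": '"sensor-readings"', "limit": 50},
    ).json()["rows"]

    # 2. Fetch the full objects the index references; these almost
    #    certainly come off disk, and this is the step I want to stay
    #    acceptably fast.
    for row in rows:
        doc = requests.get(db + "/" + row["id"]).json()
        print(doc["_id"])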

CouchApp. While I wasn't really that heavily into the wild corners of
CouchApp, I did appreciate being able to build map/reduce jobs on the
filesystem and sync them to design documents. Is there likely to be
support for something similar in Couchbase?
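
Even something as small as the sketch below would cover most of what I
used couchapp for. The layout (views/<name>/map.js plus an optional
reduce.js) and all the names are just my illustration, not a proposal
for how Couchbase should do it.

    import json, os, requests

    db = "http://localhost:5984/assets"

    # Collect views/<name>/map.js (and reduce.js if present) from disk.
    views = {}
    for name in os.listdir("views"):
        view = {"map": open(os.path.join("views", name, "map.js")).read()}
        reduce_path = os.path.join("views", name, "reduce.js")
        if os.path.exists(reduce_path):
            view["reduce"] = open(reduce_path).read()
        views[name] = view

    ddoc_url = db + "/_design/search"
    ddoc = {"language": "javascript", "views": views}

    # Carry over _rev if the design doc already exists, so the PUT
    # isn't rejected as a conflict.
    existing = requests.get(ddoc_url)
    if existing.status_code == 200:
        ddoc["_rev"] = existing.json()["_rev"]

    print(requests.put(ddoc_url, data=json.dumps(ddoc)).status_code)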

----------------snip-----------------


cheers
Matt
