couchdb-user mailing list archives

From Robert Newson <>
Subject Re: Validate uniqueness field
Date Sun, 28 Mar 2010 12:40:38 GMT
"I am wondering why not introduce locking in couchdb"

It's because locking doesn't scale. The locking strategy you outlined
works fine when your database runs on one machine, but fails when it
runs on two or more. A distributed lock, while possible, would require
all machines to participate, which requires them all to be up; and, of
course, ten machines are then no faster than one. Most distributed
locking protocols are blocking (like the usual 2PC protocol); the
non-blocking ones either carry more overhead (3PC) or are more complex
(Paxos).

CouchDB doesn't let you do on one machine what won't work when you
have ten machines. It quite deliberately refuses to let you do
something that won't scale.
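The scalable alternative hinted at downthread ("multiple docs with _id as unique key") relies on the one uniqueness guarantee CouchDB does give: document `_id`s are unique, and creating a document whose `_id` already exists fails with a 409 Conflict. Here is a minimal sketch of that pattern; a plain dict stands in for the database, and the names `create_doc` and `reserve_username` are illustrative, not CouchDB API:

```python
# Sketch of the _id-as-unique-key pattern. The dict stands in for a
# CouchDB database; a real client would PUT the document and treat an
# HTTP 409 response the way Conflict is treated here.

class Conflict(Exception):
    """Raised when a document with that _id already exists (HTTP 409)."""

db = {}

def create_doc(doc_id, body):
    # CouchDB rejects a create when the _id is already taken, which is
    # exactly the uniqueness check we want -- no locks involved.
    if doc_id in db:
        raise Conflict(doc_id)
    db[doc_id] = body

def reserve_username(name, user_doc):
    # Reserve the name by creating a doc whose _id *is* the name.
    create_doc("username:" + name, user_doc)

reserve_username("alice", {"email": "alice@example.com"})
try:
    reserve_username("alice", {"email": "imposter@example.com"})
except Conflict:
    print("taken")  # second create with the same _id conflicts
```

Because the conflict check happens per document on whichever node owns the write, this works identically on one machine and on ten, which is the point of Newson's argument.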


On Sun, Mar 28, 2010 at 1:33 PM, Alexander Uvarov
<> wrote:
> On 28.03.2010, at 14:40, faust 1111 wrote:
>> It sounds like a pair of crutches ;)
> Agree with you. Lack of uniqueness and lack of transactions make CouchDB completely
> useless for most cases. Solutions like multiple docs with the _id as a unique key, along
> with "inventory tickets", sound insane.
> I came up with a simple solution using Redis. Just an idea. You can use the Redis SETNX
> or MSETNX operations to lock the desired documents, or just lock "User" by using that
> string as a key in Redis, locking the whole User type. Then try to lock "User", create
> your User document, and unlock. If there is already a lock, wait and try again. But
> deadlocks are possible when the process that owned the lock dies and no one can release
> the lock.
> Redis commands:
> I am wondering why not introduce locking in CouchDB. CouchDB is designed to be extremely
> fast, but there are also real-world problems. Awesome technology; it saddens me that such
> restrictions take away from it.
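The deadlock Alexander describes (the lock holder dies and no one can release the lock) is usually handled by giving the lock a time-to-live, so a stale lock eventually becomes reacquirable. Below is a sketch of that protocol; an in-memory dict simulates Redis SETNX-with-expiry semantics, and `try_lock`/`unlock` are illustrative names, not Redis commands. A real deployment would use something like `SET key token NX EX ttl` and a check-token-then-DEL release:

```python
import time

# Simulates the SETNX-with-TTL locking protocol described above.
# The dict stands in for Redis: key -> (owner token, expiry timestamp).
locks = {}

def try_lock(key, token, ttl):
    """Acquire key if it is free or its lock has expired.
    Returns True on success, False if someone else holds a live lock."""
    now = time.time()
    held = locks.get(key)
    if held is not None and held[1] > now:
        return False
    locks[key] = (token, now + ttl)
    return True

def unlock(key, token):
    """Release only if we still own the lock, so we never free a lock
    that expired and was since re-acquired by another process."""
    held = locks.get(key)
    if held is not None and held[0] == token:
        del locks[key]

# Type-level lock on "User": hold it while creating the User document.
assert try_lock("User", "proc-1", ttl=5)
assert not try_lock("User", "proc-2", ttl=5)   # blocked while held
unlock("User", "proc-1")
assert try_lock("User", "proc-2", ttl=5)       # free after release

# If proc-2 dies without unlocking, the TTL prevents a deadlock.
locks["User"] = ("proc-2", time.time() - 1)    # simulate an expired lock
assert try_lock("User", "proc-3", ttl=5)       # a later process recovers it
```

Note this only mitigates the deadlock; it trades it for a liveness/safety tension (pick the TTL too short and two processes can believe they hold the lock at once), which is one reason Newson's answer steers away from locking altogether.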
