incubator-couchdb-user mailing list archives

From Robert Newson <robert.new...@gmail.com>
Subject Re: Validate uniqueness field
Date Sun, 28 Mar 2010 13:43:34 GMT
"not every application requires to be extremely distributed" -- for
everything else, there's CouchDB. :)

I completely agree that not every application needs to be distributed.
For applications that are relational in shape, it makes sense to use a
relational database solution, and it makes no sense to force something
very relational into a non-relational one.

If you need multi-document transactions and multiple constraints, and
don't need to be distributed, don't need to shard, and don't need
offline replication, it's not clear that CouchDB is a good fit. On the
other hand, if your application is a natural fit for CouchDB, then
you'll also be able to scale up when needed.

Bulk update used to work in the way you are suggesting, and that
behavior was removed because it cannot work the same way when you
replicate or shard. I suspect that trend will continue rather than
reverse.
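
To make that concrete, here is a rough sketch of how _bulk_docs behaves
today, assuming a local CouchDB at http://localhost:5984 and a
throwaway database called "demo" (both placeholders): each document in
the batch is accepted or rejected on its own, and a conflict on one
document no longer aborts the others.

    # Sketch only: shows that _bulk_docs reports success or failure per
    # document instead of rejecting the whole batch on a conflict.
    # The URL and database name are placeholder assumptions.
    import requests

    BASE = "http://localhost:5984/demo"

    # Seed one document so the second write in the batch below conflicts.
    requests.put(BASE + "/user-alice", json={"name": "alice"})

    # Bulk update: one new doc plus one that clashes with the existing _id
    # (no _rev supplied, so CouchDB treats it as a conflicting write).
    resp = requests.post(BASE + "/_bulk_docs", json={"docs": [
        {"_id": "user-bob", "name": "bob"},
        {"_id": "user-alice", "name": "someone else"},
    ]})

    # The response is a per-document list: "user-bob" gets an ok/rev entry,
    # "user-alice" gets {"error": "conflict", ...}. Nothing is rolled back.
    for result in resp.json():
        print(result)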

B.

On Sun, Mar 28, 2010 at 2:03 PM, Alexander Uvarov
<alexander.uvarov@gmail.com> wrote:
> Not every document requires locking. Also, not every application requires to be extremely
> distributed. Developers can decide what kind of application they are cooking; in other
> words, should it scale to thousands of nodes, or would a single master with many readers
> be pretty good?
> The same goes for transactions; offline replication is not always required. An option to reject
> on conflict during bulk update would be really helpful (just use a single master), but there
> are obvious problems with sharding coming :(.
>
> On 28.03.2010, at 18:40, Robert Newson wrote:
>
>> "I am wondering why not introduce locking in couchdb"
>>
>> It's because locking doesn't scale. The locking strategy you outlined
>> works fine when your database runs on one machine, but fails when it
>> runs on two or more machines. A distributed lock, while possible,
>> would require all machines to take the lock, which requires them all
>> to be up, and, of course, ten machines are then no faster than one. Most
>> distributed locking protocols are blocking (like the usual 2PC
>> protocol); the non-blocking ones have either more overhead (3PC) or
>> more complexity (Paxos).
>>
>> CouchDB doesn't let you do on one machine what won't work when you
>> have ten machines. It's quite deliberately not letting you do
>> something that won't scale.
>>
>> B.
>>
>> On Sun, Mar 28, 2010 at 1:33 PM, Alexander Uvarov
>> <alexander.uvarov@gmail.com> wrote:
>>>
>>> On 28.03.2010, at 14:40, faust 1111 wrote:
>>>
>>>> It sounds like a pair of crutches ;)
>>>
>>> Agree with you. Lack of uniqueness and lack of transactions make couch completely
>>> useless for most cases. Solutions like multiple docs with _id as the unique key, along with
>>> "inventory tickets", sound insane.
>>>
>>> I invented a simple solution with Redis. Just an idea. You can use the Redis setnx and
>>> msetnx operations to lock the desired documents, or just lock "User" by using that string as a
>>> key in Redis to lock the whole User type. Then just try to lock "User", create your User
>>> document, and unlock. If there is already a lock, wait and try again. But deadlocks are
>>> possible when the process that owned the lock dies and no one can release the lock.
>>> Redis commands: http://code.google.com/p/redis/wiki/MsetCommand
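
For completeness, the setnx idea looks roughly like this with the
redis-py client; a TTL on the lock is the usual workaround for the
dead-owner case described above. The key name and the timeout are
arbitrary, and all of the scaling caveats earlier in the thread still
apply:

    # Rough sketch of the SETNX-style lock described above, using redis-py.
    # With a modern Redis, SET NX EX sets the key and its expiry atomically,
    # so a crashed lock owner cannot block everyone forever.
    import time
    import redis

    r = redis.Redis()   # assumes Redis on localhost:6379

    def with_user_lock(fn, ttl=10, retry_delay=0.1):
        # Acquire the coarse "User" lock, run fn(), then release the lock.
        while not r.set("lock:User", "locked", nx=True, ex=ttl):
            time.sleep(retry_delay)      # someone else holds it; retry
        try:
            return fn()                  # e.g. check uniqueness, create the doc
        finally:
            r.delete("lock:User")        # release; the TTL covers a dead owner
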
>>>
>>> I am wondering why not introduce locking in couchdb. Couchdb is designed to be
>>> extremely fast, but there are also real-world problems. Awesome technology; I am crying
>>> that such restrictions take it away.
>
>
